The first true pressure measurement was made by Evangelista Torricelli in the 17th century, when he invented the mercury barometer and measured atmospheric pressure. The mercury barometer is simply a glass tube that is filled with mercury and placed upside down in a dish of mercury without allowing any air to enter (Figure 1). Torricelli found that the force the atmosphere exerts on the mercury in the dish (at sea level) is sufficient to support a column of mercury in the tube 760 mm high. The weight (force) of the column of mercury exactly counterbalances the pressure (force) that the atmosphere exerts on the mercury in the dish. If the pressure on the surface of the mercury in the dish is less than normal atmospheric pressure, the mercury column will have a height less than 760 mm, since the force on the mercury in the dish will be less. If there were no gas pressure on the mercury in the dish (i.e., if it were in a vacuum), the level of the column would be equal to the level of mercury in the dish.

So, vacuum - i.e., pressures lower than atmospheric pressure - has historically been divided into 760 units, based on the mercury height in a manometer. This experiment is the reason that today we use a unit of vacuum pressure measurement named after Torricelli - the Torr - and why one Torr is exactly 1/760th of normal atmospheric pressure at sea level. A derivative of the Torr often used in semiconductor process vacuum measurements is the millitorr, or 1/1000th of a Torr. Pressures lower than 1.0 millitorr are usually expressed in scientific notation (e.g., 1.0 × 10⁻⁶ Torr).

Vacuum and meteorological measurements in the European and Asian systems usually refer to pressures in "atmospheres," where 1 atmosphere (approximately 1 bar) is the normal atmospheric pressure at sea level. Vacuum measurements in this system are usually reported in thousandths of a bar (the millibar).
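Torricelli's 760 mm result can be checked from the hydrostatic relation P = ρgh: the column height is the atmospheric pressure divided by the weight density of mercury. A short worked example:

```python
# Worked check of Torricelli's result: the height h of a mercury column
# that balances atmospheric pressure follows from P = rho * g * h.
RHO_HG = 13_595.1   # density of mercury at 0 degrees C, kg/m^3
G = 9.80665         # standard gravity, m/s^2
P_ATM = 101_325.0   # standard atmospheric pressure at sea level, Pa

h = P_ATM / (RHO_HG * G)      # column height in meters
print(f"{h * 1000:.1f} mm")   # ~760.0 mm, Torricelli's sea-level value
```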
The pascal (Pa), the SI unit of pressure, equal to 1 newton per square meter (N/m²), is also often used for pressure measurement; 1 standard atmosphere is 101,325 Pa.
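The relationships among these units (Torr, millitorr, atmosphere, millibar, pascal) can be captured in a few conversion constants. A minimal sketch:

```python
# Conversions among the pressure units discussed above. All constants
# follow from the exact definitions: 1 atm = 101,325 Pa, 1 Torr = 1/760 atm,
# and 1 bar = 100,000 Pa (so 1 mbar = 100 Pa).
PA_PER_ATM = 101_325.0
PA_PER_TORR = PA_PER_ATM / 760.0
PA_PER_MBAR = 100.0

def torr_to_mbar(p_torr: float) -> float:
    """Convert a pressure in Torr to millibar."""
    return p_torr * PA_PER_TORR / PA_PER_MBAR

print(torr_to_mbar(760))   # 1013.25 mbar: 1 atm is about 1.013 bar, not exactly 1
print(torr_to_mbar(1e-3))  # 1 millitorr expressed in millibar
```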
Since Torricelli's time, liquid-filled manometers have remained in use as a fundamental standard for absolute vacuum pressure measurement, and many customized designs have been developed. Because liquid manometers make a direct and absolute measurement of vacuum and pressure, they are often considered the fundamental measurement standard to which measurements made by other kinds of pressure measurement devices can be referenced. Many kinds of vacuum measurement devices, commonly called vacuum gauges, have been developed since Torricelli's time.
Probably the most common vacuum gauges encountered in technological environments are those that employ mechanical deformation as the underlying principle for measuring pressure. These include diaphragm, bellows, and Bourdon gauges. Since pressure is a measure of force per unit area, pressure deforms different kinds of material elements in a reproducible way. The degree of deformation an element undergoes depends on the material properties of the element and, over its working range, is proportional to the pressure exerted on it. In this way, thin, flexible elements can be used to measure low pressures (thicker, stiffer ones can be similarly used for measuring high pressures). The deflection of these elements can be measured in a variety of ways, including direct mechanical measurement, variation in the electrical properties of a device containing the element, and deflection of optical probes. Figure 2 shows some of the structures that can be employed as deformation elements in a pressure measurement device. Mechanical deformation gauges include capacitance manometers, Bourdon gauges, resonant diaphragm gauges, bellows gauges, piezoelectric gauges, and others.
Capacitance manometers detect the deflection of a diaphragm through changes in capacitance between the diaphragm and a fixed, electrically driven electrode. They are commonly used for pressure/vacuum measurement. Strain-based gauges are a variant on this approach that is commonly encountered in positive-pressure (i.e., greater than atmospheric pressure) applications. Strain gauges employ a thin diaphragm with a strain-sensing electronic circuit mounted on its back side. A change in pressure causes the diaphragm to deflect, producing strain that is detected by the sensor.
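The capacitance readout can be illustrated with an idealized parallel-plate model: diaphragm deflection toward the fixed electrode shrinks the gap, and capacitance rises in inverse proportion. All dimensions below are assumed, illustrative values, not figures from the text:

```python
# Idealized parallel-plate sketch of a capacitance manometer readout.
# C = eps0 * A / d: a smaller diaphragm-to-electrode gap d gives a larger C.
EPS0 = 8.854e-12   # permittivity of free space, F/m

def capacitance(area_m2: float, gap_m: float) -> float:
    """Parallel-plate capacitance, ignoring fringing fields."""
    return EPS0 * area_m2 / gap_m

AREA = 1e-4                                   # 1 cm^2 electrode (assumed)
c_rest = capacitance(AREA, 100e-6)            # 100 um gap, no deflection
c_deflected = capacitance(AREA, 95e-6)        # 5 um deflection under pressure
print(c_deflected / c_rest)                   # ~1.053: a ~5% capacitance rise
```

A real capacitance manometer linearizes and temperature-compensates this response electronically, but the underlying gap-to-capacitance dependence is the same.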
Some vacuum gauge designs are based on a response to gas density and a species-dependent molecular property such as specific heat. For instance, in thermal conductivity gauges, gas pressure is determined by measuring the energy transfer from a hot wire to the surrounding gas. The heat is transferred into the gas through molecular collisions with the wire, and the frequency of these collisions (and therefore the amount of heat transferred) depends on the gas pressure and the molecular weight of the gas molecules. These gauges only exhibit simple proportionality between pressure and heat transfer at relatively low pressures, with typical measurement ranges lying between 10⁻⁴ and 10 Torr. Thermal conductivity gauges include thermocouple, thermistor, and Pirani gauges. They are generally inexpensive and reliable. Figure 3 provides a representation of the physical configuration of a Pirani gauge head and the electrical circuit used for pressure measurement. The measurement uses the thermal element as one arm of a Wheatstone bridge. Using the bridge, changes in the electrical resistance of the element can be measured, and these changes are proportional to its temperature. Using this information and the known electrical characteristics of the element, the heat transfer can be calculated and related to the gas pressure. Thermocouple and thermistor pressure gauges are very similar to Pirani gauges, with the primary difference being that the hot-wire temperature is measured directly using either a thermocouple or a thermistor attached to the element.
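The resistance-to-temperature step in the bridge measurement can be sketched with the linear resistance model for a metal element, R = R₀(1 + αΔT). The values of R₀ and α below are assumed, tungsten-like illustrative figures, not ones given in the text:

```python
# Sketch of how a Pirani readout infers element temperature from resistance,
# using the linear metal-resistance model R = R0 * (1 + alpha * dT).
R0 = 10.0        # element resistance at ambient temperature, ohms (assumed)
ALPHA = 4.5e-3   # temperature coefficient of resistance, 1/K (tungsten-like)

def temp_rise(r_measured: float) -> float:
    """Temperature rise above ambient implied by the measured resistance."""
    return (r_measured / R0 - 1.0) / ALPHA

# A 45% rise in bridge-measured resistance implies ~100 K of heating:
print(temp_rise(14.5))  # 100.0
```

In an actual gauge, this temperature (or the power needed to hold it constant) is then mapped to pressure through a gas-specific calibration curve, since the heat transfer depends on the molecular weight of the gas.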
Another type of density measurement involves ionization of the surrounding gas. The principle of operation for one variety of this type of ionization gauge, the hot cathode gauge, is shown in Figure 4. A filament (the cathode) emits electrons by thermionic emission, and a positive electrical potential on the grid accelerates these electrons away from the filament. The electrons oscillate through the grid until they eventually strike either the grid or a molecule of gas. When an electron impacts a gas molecule, a positive ion is created that is accelerated toward, and collected by, a negative electrode known as the collector. The electrical current created in this manner is directly proportional to the number of ions created in the gas phase which, in turn, is directly proportional to the gas pressure.
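The proportionality just described is conventionally written I_c = S · I_e · P, where I_c is the collector current, I_e the electron emission current, and S the gauge sensitivity. A minimal sketch; the sensitivity of 10 per Torr is a typical assumed value for nitrogen, not a figure from the text:

```python
# Hot cathode ionization gauge relation: collector current is proportional
# to pressure, I_c = S * I_e * P, so P = I_c / (S * I_e).
def pressure_torr(i_collector: float, i_emission: float,
                  sensitivity: float = 10.0) -> float:
    """Pressure in Torr from collector current, emission current (amps),
    and gauge sensitivity (1/Torr, gas dependent)."""
    return i_collector / (sensitivity * i_emission)

# 1 nA of collector current at 1 mA emission with S = 10 /Torr:
print(pressure_torr(1e-9, 1e-3))  # 1e-07 Torr
```

Because S depends on the gas species, readings for gases other than the calibration gas must be corrected by a relative sensitivity factor.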
Residual gas analyzers (RGAs) are often used to measure pressures, especially in the semiconductor industry, where they are commonly available for other process purposes. RGAs are compact mass spectrometers that can detect and analyze residual gases within a vacuum system. Within the RGA, incoming gases are ionized in a manner similar to that used in ionization gauges, and the ions are mass-filtered using an electromagnetic quadrupole. The quadrupole can be controlled to create a variable electromagnetic field that can be swept to detect ions of different mass-to-charge ratio. After exiting the quadrupole, the ions are detected using a Faraday cup or other, more sensitive detectors. The operating principles and characteristics of RGAs are discussed in detail in Section E.
Other specialty methods, generally variations of ionization gauges, have been used to measure vacuum pressures in the UHV to XHV pressure regime.
The selection of a measurement method for a given vacuum application depends largely on the expected vacuum environment and the degree of accuracy required for the measurement. For example, a gauge that is expected to measure process pressures during a deposition or etch must necessarily be exposed to the process gases. This may or may not impact the measurement accuracy, and it may cause physicochemical problems for the components of the gauge, depending on the type of gas employed in the process. If a vacuum gauge is needed to measure base pressure in a system in which the chemical nature of the residual gases is unknown, gauges whose readings depend on the molecular properties of the gas being measured may not be suitable. In a manufacturing environment, it is also important that gauge selection be cost-effective; depending on the design and associated ancillary equipment, vacuum gauge costs can vary by orders of magnitude. Once the specifications for gauge range, accuracy, process compatibility, and cost have been defined, it is important to select the gauge that best matches them. Table 1 compares the relative performance of the different gauge types in several areas, along with their relative cost. This section will provide more detail on the vacuum gauge types commonly found in semiconductor device fabrication applications.
| | Mechanical Gauges | Thermal Gauges | Strain Gauges | Capacitance Gauges | Ionization Gauges | Spinning Rotor Gauges |
|---|---|---|---|---|---|---|
| Pressure/Vacuum Range | 0 to 500 atm | 10⁻⁴ to 760 Torr | 0 to 500 atm | 10⁻⁴ Torr to 500 atm | 10⁻⁹ to 0.01 Torr | 10⁻⁷ to 1 Torr |
| Accuracy | Poor to Good | Fair | Fair to Good | Excellent | Fair | Excellent |
| Physical Size | Good to Poor | Good | Excellent | Excellent | Fair | Fair |
| Overall Safety for Personnel | Fair | Poor | Fair | Good | Fair | Good |
| Relative Cost | Fair to Excellent | Good | Fair to Excellent | Fair | Fair | High |
Table 1. Performance and relative cost of major pressure gauge types.
Because of the almost universal requirement for pressure sensors that can provide signals for process control and automation in semiconductor manufacturing, simple mechanical gauges such as the Bourdon gauge and bellows gauge find only limited use in semiconductor device fabs and will not be discussed further.