Optical resolution
Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged.
An imaging system may have many individual components including a lens and recording and display components. Each of these contributes to the optical resolution of the system, as will the environment in which the imaging is done.
Lateral resolution
When two point sources radiate incoherently, the interaction of their separate images can be described using intensity point spread functions. The objects are resolved when the center of the Airy disk from one source overlaps the first dark ring in the diffraction pattern of the second:

R = 1.22 λ f# = 0.61 λ / NA
where
- R is the resolution,
- λ is the wavelength of light,
- f# is the f-number of the optics,
- and NA is the numerical aperture.
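A minimal worked example may help here. The following Python sketch evaluates the Rayleigh formula above for assumed values (550 nm green light and a 0.95 NA microscope objective, chosen only for illustration):

```python
# Sketch of the Rayleigh resolution formula R = 0.61 * lambda / NA.
# The wavelength and numerical aperture below are illustrative assumptions.

def rayleigh_resolution_nm(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable separation, in nanometres."""
    return 0.61 * wavelength_nm / na

print(f"R = {rayleigh_resolution_nm(550.0, 0.95):.0f} nm")  # about 353 nm
```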
Lens resolution
The ability of a lens to resolve detail is usually determined by the quality of the lens but is ultimately limited by diffraction. Light coming from a point in the object diffracts through the lens aperture
such that it forms a diffraction pattern in the image which has a central spot and surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk. The angular radius of the Airy disk (measured from the center to the first null) is given by
θ ≈ 1.22 λ / D
where
- θ is the angular radius of the Airy disk (from the center to the first null),
- λ is the wavelength of light,
- and D is the diameter of the lens aperture.
Two adjacent points in the object give rise to two diffraction patterns. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh
defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius to first null can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the greater the resolution. Astronomical telescopes have increasingly large lenses so they can 'see' ever finer detail in the stars.
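The aperture dependence can be sketched numerically. Assuming 550 nm light and the small-angle Airy radius θ ≈ 1.22 λ/D (the aperture diameters are arbitrary examples):

```python
import math

# Angular radius of the Airy disk, theta = 1.22 * lambda / D (small angles).
# Larger apertures give smaller Airy disks and therefore finer resolution.

def airy_radius_arcsec(wavelength_m: float, aperture_m: float) -> float:
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

for d in (0.1, 1.0, 2.4):  # assumed aperture diameters in metres
    print(f"D = {d:4.1f} m -> {airy_radius_arcsec(550e-9, d):.3f} arcsec")
```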
Only the very highest quality lenses have diffraction limited resolution, however, and normally the quality of the lens limits its ability to resolve detail. This ability is expressed by the Optical Transfer Function
which describes the spatial (angular) variation of the light signal as a function of spatial (angular) frequency. When the image is projected onto a flat plane, such as photographic film or a solid state detector, spatial frequency is the preferred domain, but when the image is referred to the lens alone, angular frequency is preferred. OTF may be broken down into the magnitude and phase components as follows:
OTF(ξ, η) = MTF(ξ, η) · e^(i·PTF(ξ, η))
where
- ξ and η are spatial frequency in the x- and y-plane, respectively.
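A minimal sketch of this decomposition, using made-up complex OTF samples; the magnitude gives the MTF and the argument gives the PTF:

```python
import numpy as np

# Hypothetical complex OTF samples (illustrative values only).
otf = np.array([1.0 + 0.0j, 0.6 + 0.2j, 0.1 - 0.1j])

mtf = np.abs(otf)     # Modulation Transfer Function
ptf = np.angle(otf)   # Phase Transfer Function, in radians
print(mtf, ptf)
```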
The OTF accounts for aberration
, which the limiting frequency expression above does not. The magnitude is known as the Modulation Transfer Function (MTF) and the phase portion is known as the Phase Transfer Function (PTF).
In imaging systems, the phase component is typically not captured by the sensor. Thus, the important measure with respect to imaging systems is the MTF.
Phase is critically important to adaptive optics
and holographic systems.
Sensor resolution (spatial)
Some optical sensors are designed to detect spatial differences in electromagnetic energy. These include photographic film, solid-state devices (CCD and CMOS detectors, and infrared detectors like PtSi and InSb), tube detectors (vidicon, plumbicon, and photomultiplier tubes used in night-vision devices), scanning detectors (mainly used for IR), pyroelectric detectors, and microbolometer detectors. The ability of such a detector to resolve those differences depends mostly on the size of the detecting elements.
Spatial resolution is typically expressed in line pairs per millimeter (lppmm), lines (of resolution, mostly for analog video), contrast vs. cycles/mm, or MTF (the modulus of OTF). The MTF may be found by taking the two-dimensional Fourier transform
of the spatial sampling function. Smaller pixels result in wider MTF curves and thus better detection of higher frequency energy.
This is analogous to taking the Fourier transform of a signal sampling function; as in that case, the dominant factor is the sampling period, which is analogous to the size of the picture element (pixel).
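This can be sketched for a one-dimensional rect-shaped pixel, whose Fourier transform is a sinc; the 20 µm and 8 µm widths echo the megapixel comparison below, and the frequency grid is an arbitrary assumption:

```python
import numpy as np

# MTF of a rect pixel aperture of width a is |sinc(a * f)|, so smaller pixels
# give wider MTF curves (better response at high spatial frequencies).

freqs = np.linspace(0, 100, 6)           # spatial frequency, cycles/mm
for a_mm in (0.020, 0.008):              # pixel widths: 20 um and 8 um
    mtf = np.abs(np.sinc(a_mm * freqs))  # np.sinc(x) = sin(pi x)/(pi x)
    print(f"a = {a_mm * 1000:.0f} um:", np.round(mtf, 3))
```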
Other factors include pixel noise, pixel cross-talk, substrate penetration, and fill factor.
A common problem among non-technicians is the use of the number of pixels on the detector to describe the resolution. If all sensors were the same size, this would be acceptable. Since they are not, the use of the number of pixels can be misleading. For example, a 2 megapixel camera of 20 micrometre square pixels will have worse resolution than a 1 megapixel camera with 8 micrometre pixels, all else being equal.
For resolution measurement, film manufacturers typically publish a plot of Response (%) vs. Spatial Frequency (cycles per millimeter). The plot is derived experimentally. Solid state sensor and camera manufacturers normally publish specifications from which the user may derive a theoretical MTF according to the procedure outlined below. A few may also publish MTF curves, while others (especially intensifier manufacturers) will publish the response (%) at the Nyquist frequency
, or, alternatively, publish the frequency at which the response is 50%.
To find a theoretical MTF curve for a sensor, it is necessary to know three characteristics of the sensor: the active sensing area, the area comprising the sensing area and the interconnection and support structures ("real estate"), and the total number of those areas (the pixel count). The total pixel count is almost always given. Sometimes the overall sensor dimensions are given, from which the real estate area can be calculated. Whether the real estate area is given or derived, if the active pixel area is not given, it may be derived from the real estate area and the fill factor, where fill factor is the ratio of the active area to the dedicated real estate area.
FF = (a × b) / (c × d)
where
- the active area of the pixel has dimensions a×b
- the pixel real estate has dimensions c×d
In Gaskill's notation, the sensing area is a 2D comb(x, y) function of the distance between pixels (the pitch), convolved with a 2D rect(x, y) function of the active area of the pixel, bounded by a 2D rect(x, y) function of the overall sensor dimension. The Fourier transform of this is a function governed by the distance between pixels, convolved with a function governed by the number of pixels, and multiplied by the function corresponding to the active area. That last function serves as an overall envelope to the MTF function; so long as the number of pixels is much greater than one (1), then the active area size dominates the MTF.
Sampling function:
s(x, y) = [comb(x/c, y/d) ∗∗ rect(x/a, y/b)] · rect(x/(M·c), y/(N·d))
where the sensor has M×N pixels and ∗∗ denotes two-dimensional convolution
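A rough numerical check of this model in one dimension, with assumed pitch, active width, and pixel count; it confirms that the spectral replica at 1/pitch is weighted by the sinc envelope of the active area:

```python
import numpy as np

# Build comb(x/c) convolved with rect(x/a), bounded by the sensor extent,
# and inspect its Fourier transform. All dimensions are assumptions.
c, a, M = 10e-3, 8e-3, 64                  # pitch (mm), active width (mm), pixels
dx = 1e-4                                  # spatial sample spacing (mm)
x = np.arange(0, M * c, dx)
s = np.zeros_like(x)
for center in (np.arange(M) + 0.5) * c:    # one rect pulse per pixel
    s[np.abs(x - center) <= a / 2] = 1.0

S = np.abs(np.fft.rfft(s))
f = np.fft.rfftfreq(len(x), d=dx)          # spatial frequency, cycles/mm
k = int(round((1 / c) / f[1]))             # bin of the first replica, at 1/c
# Both values are ~0.22-0.23 (differing only by discretization): the replica
# weight follows the sinc envelope set by the active area.
print(round(S[k] / S[0], 3), round(abs(np.sinc(a / c)), 3))
```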
Sensor resolution (temporal)
An imaging system running at 24 frames per second is essentially a discrete sampling system that samples a 2D area. The same limitations described by Nyquist
apply to this system as to any signal sampling system.
All sensors have a characteristic time response. Film is limited at both the short-exposure and long-exposure extremes by reciprocity failure. These are typically held to be anything longer than 1 second and shorter than 1/10,000 second. Furthermore, film requires a mechanical system to advance it through the exposure mechanism, or a moving optical system to expose it. These limit the speed at which successive frames may be exposed.
CCD and CMOS are the modern preferences for video sensors. CCD is speed-limited by the rate at which the charge can be moved from one site to another. CMOS has the advantage of having individually addressable cells, and this has led to its advantage in the high speed photography
industry.
Vidicons, Plumbicons, and image intensifiers have specific applications. The speed at which they can be sampled depends upon the decay rate of the phosphor
used. For example, the P46 phosphor has a decay time of less than 2 microseconds, while the P43 decay time is on the order of 2-3 milliseconds. The P43 is therefore unusable at frame rates above 1000 frames per second. See External links for links to phosphor information.
Pyroelectric detectors
respond to changes in temperature. Therefore, a static scene will not be detected, so they require choppers
. They also have a decay time, so the pyroelectric system temporal response will be a bandpass, while the other detectors discussed will be a lowpass.
If objects within the scene are in motion relative to the imaging system, the resulting motion blur
will result in lower spatial resolution. Short integration times will minimize the blur, but integration times are limited by sensor sensitivity. Furthermore, motion between frames in motion pictures will impact digital movie compression schemes (e.g. MPEG-1, MPEG-2). Finally, there are sampling schemes that require real or apparent motion inside the camera (scanning mirrors, rolling shutters) that may result in incorrect rendering of image motion. Therefore, sensor sensitivity and other time-related factors will have a direct impact on spatial resolution.
Analog bandwidth effect on resolution
The spatial resolution of digital systems (e.g. HDTV and VGA) is fixed independently of the analog bandwidth because each pixel is digitized, transmitted, and stored as a discrete value. Digital cameras, recorders, and displays must be selected so that the resolution is identical from camera to display. However, in analog systems, the resolution of the camera, recorder, cabling, amplifiers, transmitters, receivers, and display may all be independent, and the overall system resolution is governed by the bandwidth of the lowest-performing component.

In analog systems, each horizontal line is transmitted as a high-frequency analog signal. Each picture element (pixel) is therefore converted to an analog electrical value (voltage), and changes in values between pixels therefore become changes in voltage. The transmission standards require that the sampling be done in a fixed time (outlined below), so more pixels per line requires more voltage changes per unit time, i.e. higher frequency. Since such signals are typically band-limited by cables, amplifiers, recorders, transmitters, and receivers, the band-limitation on the analog signal acts as an effective low-pass filter
on the spatial resolution. The difference in resolutions between VHS
(240 discernible lines per scanline), Betamax
(280 lines), and the newer ED Beta format (500 lines) is explained primarily by the difference in the recording bandwidth.
In the NTSC
transmission standard, each field contains 262.5 lines, and 59.94 fields are transmitted every second. Each line must therefore take 63 microseconds, 10.7 of which are for reset to the next line (retrace). Thus, the horizontal line rate is 15.734 kHz. For the picture to appear to have approximately the same horizontal and vertical resolution (see Kell factor
), it should be able to display 228 cycles per line, requiring a bandwidth of 4.28 MHz. If the line (sensor) width is known, this may be converted directly into cycles per millimeter, the unit of spatial resolution.
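The arithmetic can be checked directly; the sketch below reproduces these figures (the small difference from 4.28 MHz comes from rounding the line time to 63 microseconds):

```python
# NTSC timing: 262.5 lines/field * 59.94 fields/s gives the line rate;
# subtracting the 10.7 us retrace leaves the active line time; fitting
# 228 cycles into that time sets the required bandwidth.

line_rate_hz = 262.5 * 59.94            # about 15,734 lines per second
line_time_us = 1e6 / line_rate_hz       # about 63.6 us per line
active_us = line_time_us - 10.7         # active (visible) portion of the line
bandwidth_mhz = 228 / active_us         # cycles per microsecond = MHz
print(f"{line_rate_hz:.0f} Hz, {line_time_us:.1f} us, {bandwidth_mhz:.2f} MHz")
```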
B/G/I/K television system signals (usually used with PAL
colour encoding) transmit fields less often (50 Hz), but each frame contains more lines and each line is wider, so bandwidth requirements are similar.
Note that a "discernible line" forms one half of a cycle (a cycle requires a dark and a light line), so "228 cycles" and "456 lines" are equivalent measures.
System resolution
There are two methods by which to determine system resolution. The first is to perform a series of two-dimensional convolutions, first with the image and the lens, then the result of that procedure with the sensor, and so on through all of the components of the system. This is computationally expensive, and must be performed anew for each object to be imaged.
The other method is to transform each of the components of the system into the spatial frequency domain, and then to multiply the 2-D results. A system response may be determined without reference to an object. Although this method is considerably more difficult to comprehend conceptually, it becomes easier to use computationally, especially when different design iterations or imaged objects are to be tested.
The transformation to be used is the Fourier transform.
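A minimal sketch of the frequency-domain method: the system MTF is the pointwise product of the component MTFs. The three component curves below are invented placeholders, not measured data:

```python
import numpy as np

freqs = np.linspace(0, 50, 6)                # spatial frequency, cycles/mm
mtf_lens = np.exp(-freqs / 40.0)             # hypothetical lens MTF
mtf_sensor = np.abs(np.sinc(0.01 * freqs))   # hypothetical 10 um pixel MTF
mtf_display = np.exp(-freqs / 60.0)          # hypothetical display MTF

# The system response is the product of the component responses.
mtf_system = mtf_lens * mtf_sensor * mtf_display
print(np.round(mtf_system, 3))
```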
Ocular resolution
The human eye
is a limiting feature of many systems, when the goal of the system is to present data to humans for processing.
For example, in a security or air traffic control function, the display and work station must be constructed so that average humans can detect problems and direct corrective measures. Other examples are when a human is using eyes to carry out a critical task such as flying (piloting by visual reference), driving a vehicle, and so forth.
The best visual acuity
of the human eye at its optical centre (the fovea) is less than 1 arc minute per line pair, reducing rapidly away from the fovea.
The human brain
requires more than just a line pair to understand what the eye is imaging. Johnson's Criteria
define the number of line pairs of ocular resolution, or sensor resolution, needed to recognize or identify an item.
Atmospheric resolution
Systems looking through long atmospheric paths may be limited by turbulence
. A key measure of the quality of atmospheric turbulence is the seeing diameter
, also known as Fried's seeing diameter
. A path which is temporally coherent is known as an isoplanatic patch.
Large apertures may suffer from aperture averaging, the result of several paths being integrated into one image.
Turbulence scales with wavelength at approximately a 6/5 power. Thus, seeing is better at infrared wavelengths than at visible wavelengths.
Short exposures suffer from turbulence less than longer exposures due to the "inner" and "outer" scale turbulence; short is considered to be much less than 10 ms for visible imaging (typically, anything less than 2 ms). Inner scale turbulence arises due to the eddies in the turbulent flow, while outer scale turbulence arises from large air mass flow. These masses typically move slowly, and so are reduced by decreasing the integration period.
A system limited only by the quality of the optics is said to be diffraction-limited
. However, since atmospheric turbulence is normally the limiting factor for visible systems looking through long atmospheric paths, most systems are turbulence-limited. Corrections can be made by using adaptive optics
or post-processing techniques.
The limiting atmospheric MTF can be modeled as

MTF_atmosphere(ν) = exp[−3.44 (λfν/r0)^(5/3) · (1 − b(λfν/D)^(1/3))]

where
- ν is the spatial frequency
- λ is the wavelength
- f is the focal length
- D is the aperture diameter
- b is a constant (1 for far-field propagation)
- and r0 is Fried's seeing diameter
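A sketch evaluating this expression for assumed values (550 nm light, 1 m focal length, 0.5 m aperture, 0.1 m seeing diameter, and b = 1):

```python
import math

def mtf_atmosphere(nu, lam=550e-9, f=1.0, D=0.5, r0=0.1, b=1.0):
    """Atmospheric MTF from the expression above; nu in cycles per metre."""
    x = lam * f * nu
    return math.exp(-3.44 * (x / r0) ** (5 / 3) * (1 - b * (x / D) ** (1 / 3)))

for nu in (1e4, 5e4, 1e5):   # assumed spatial frequencies, cycles/m
    print(f"nu = {nu:.0e}: MTF = {mtf_atmosphere(nu):.3f}")
```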
Super resolution
Super-resolution refers to techniques that enhance the resolution of an imaging system, including methods that break the diffraction limit, multi-frame blind deconvolution, and other approaches.
Measuring optical resolution
A variety of measurement systems are available, and use may depend upon the system being tested.

Typical test charts for Contrast Transfer Function (CTF) consist of repeated bar patterns (see Discussion below). The limiting resolution is measured by determining the smallest group of bars, both vertically and horizontally, for which the correct number of bars can be seen. By calculating the contrast between the black and white areas at several different frequencies, however, points of the CTF can be determined with the contrast equation.
Contrast = (Cmax − Cmin) / (Cmax + Cmin)
where
- Cmax is the normalized value of the maximum (for example, the voltage or grey value of the white area)
- Cmin is the normalized value of the minimum (for example, the voltage or grey value of the black area)
When the system can no longer resolve the bars, the black and white areas have the same value, so Contrast = 0. At very low spatial frequencies, Cmax = 1 and Cmin = 0, so Contrast = 1. Some modulation may be seen above the limiting resolution; such patterns may be aliased and phase-reversed.
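A minimal sketch of the contrast equation applied to hypothetical normalized grey values read off a bar pattern:

```python
def contrast(c_max: float, c_min: float) -> float:
    """Contrast = (Cmax - Cmin) / (Cmax + Cmin)."""
    return (c_max - c_min) / (c_max + c_min)

print(contrast(1.0, 0.0))  # low frequency: full contrast, 1.0
print(contrast(0.6, 0.4))  # partially resolved bars: 0.2
print(contrast(0.5, 0.5))  # unresolved bars: 0.0
```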
When using other methods, including the interferogram, sinusoid, and the edge in the ISO 12233 target, it is possible to compute the entire MTF curve. The response to the edge is similar to a step response
, and the Fourier Transform of the first difference of the step response yields the MTF.
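A sketch of the edge method, with a synthetic edge standing in for measured ISO 12233 data: differentiating the edge (step) response gives the line spread function, and the Fourier transform magnitude of that gives the MTF:

```python
import numpy as np

x = np.linspace(-1, 1, 256)
esf = 0.5 * (1 + np.tanh(x / 0.05))   # synthetic edge spread function
lsf = np.diff(esf)                    # first difference ~ line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                         # normalize so MTF(0) = 1
print(np.round(mtf[:8], 3))           # MTF falls off with frequency
```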
Interferogram
An interferogram created between two coherent light sources may be used for at least two resolution-related purposes. The first is to determine the quality of a lens system (see LUPI), and the second is to project a pattern onto a sensor (especially photographic film) to measure resolution.
NBS 1010a / ISO #2 target
This 5-bar resolution test chart is often used for evaluation of microfilm systems and scanners. It is convenient for a 1:1 range (typically covering 1-18 cycles/mm) and is marked directly in cycles/mm. Details can be found in ISO-3334.
USAF 1951 target
The USAF 1951 resolution test target
consists of a pattern of 3-bar targets, often covering a range of 0.25 to 228 cycles/mm. Each group consists of six elements. The group is designated by a group number (-2, -1, 0, 1, 2, etc.) which is the power to which 2 should be raised to obtain the spatial frequency of the first element (e.g., group -2 is 0.25 line pairs per millimeter). Each element is the 6th root of 2 smaller than the preceding element in the group (e.g. element 1 is 2^0, element 2 is 2^(-1/6), element 3 is 2^(-1/3), etc.). By reading off the group and element number of the first element which cannot be resolved, the limiting resolution may be determined by inspection. The complex numbering system and use of a look-up chart can be avoided by use of a newer layout chart, which labels the groups directly in cycles/mm and is available in the links below from Applied Image.
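The numbering scheme described above implies a simple closed form, sketched here (a formula derived from that description, not quoted from the standard): element e of group g has spatial frequency 2^(g + (e − 1)/6) line pairs per millimeter:

```python
def usaf_frequency(group: int, element: int) -> float:
    """Spatial frequency of a USAF 1951 element, in line pairs per mm."""
    return 2.0 ** (group + (element - 1) / 6.0)

print(usaf_frequency(-2, 1))  # 0.25 lp/mm, matching the example above
print(usaf_frequency(0, 2))   # about 1.12 lp/mm
```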
NBS 1952 target
The NBS 1952 target is a 3-bar pattern (long bars). The spatial frequency is printed alongside each triple-bar set, so the limiting resolution may be determined by inspection. This frequency is normally only as marked after the chart has been reduced in size (typically 25 times). The original application called for placing the chart at a distance 26 times the focal length of the imaging lens used. The bars above and to the left are in sequence, separated by approximately the square root of two (12, 17, 24, etc.), while the bars below and to the left have the same separation but a different starting point (14, 20, 28, etc.)
EIA 1956 video resolution target
The EIA 1956 resolution target was specifically designed to be used with television systems. The gradually expanding lines near the center are marked with periodic indications of the corresponding spatial frequency. The limiting resolution may be determined by inspection. The most important measure is the limiting horizontal resolution, since the vertical resolution is typically determined by the applicable video standard (I/B/G/K/NTSC/NTSC-J).
IEEE Std 208-1995 target
The IEEE 208-1995 resolution target is similar to the EIA target. Resolution is measured in horizontal and vertical TV lines.
ISO 12233 target
The ISO 12233 target was developed for digital camera applications, since modern digital camera spatial resolution may exceed the limitations of the older targets. It includes several knife-edge targets for the purpose of computing MTF by Fourier transform
. They are offset from the vertical by 5 degrees so that the edges are sampled in many different phases, allowing estimation of the spatial frequency response beyond the Nyquist frequency
of the sampling.
Random test patterns
The idea is analogous to the use of a white noise
pattern in acoustics to determine system frequency response.
Monotonically increasing sinusoid patterns
The interferogram used to measure film resolution can be synthesized on personal computers and used to generate a pattern for measuring optical resolution. See especially Kodak MTF curves.
Multiburst
A multiburst signal is an electronic waveform used to test analog transmission, recording, and display systems. The test pattern consists of several short periods of specific frequencies. The contrast of each may be measured by inspection and recorded, giving a plot of attenuation vs. frequency. The NTSC3.58 multiburst pattern consists of 500 kHz, 1 MHz, 2 MHz, 3 MHz, and 3.58 MHz blocks. 3.58 MHz is important because it is the chrominance
frequency for NTSC video.
Discussion
Whenever a bar target is used, note that the resulting measure is the Contrast Transfer Function (CTF) and not the MTF. The difference arises from the subharmonics of the square waves and can be easily computed.
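One common way to compute that difference is Coltman's series, which expresses the bar-target CTF in terms of the MTF; the Gaussian MTF below is only a placeholder:

```python
import numpy as np

def mtf(f):
    return np.exp(-(f / 30.0) ** 2)   # hypothetical MTF, f in cycles/mm

def ctf(f, terms=10):
    # Coltman: CTF(f) = (4/pi) * [MTF(f) - MTF(3f)/3 + MTF(5f)/5 - ...]
    k = np.arange(terms)
    harmonics = 2 * k + 1
    signs = (-1.0) ** k
    return (4 / np.pi) * np.sum(signs * mtf(harmonics * f) / harmonics)

for f in (5.0, 15.0, 30.0):
    print(f"f = {f:4.1f}: MTF = {mtf(f):.3f}, CTF = {ctf(f):.3f}")
```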
See also
- Image resolution, in computing
- Minimum resolvable contrast
- Siemens star, a pattern used for resolution testing
- Square meters per pixel
- Superlens
External links
- Norman Koren's website includes several downloadable test patterns
- UC Santa Cruz Prof. Claire Max's lectures and notes from Astronomy 289C, Adaptive Optics
- George Ou re-created the EIA 1956 chart from a high-resolution scan.
- Do Sensors “Outresolve” Lenses?; on lens and sensor resolution interaction.