Image fusion
In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image will be more informative than any of the input images.
In remote sensing applications, the increasing availability of spaceborne sensors motivates the development of different image fusion algorithms.
Several situations in image processing require high spatial and high spectral resolution in a single image. Most of the available equipment is not capable of providing such data convincingly. Image fusion techniques allow the integration of different information sources, and the fused image can have complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data while merging.
In satellite imaging, two types of images are available. The panchromatic image is transmitted at the maximum resolution available, while the multispectral data are transmitted at a coarser resolution, usually two or four times lower. At the receiving station, the panchromatic image is merged with the multispectral data to convey more information.
Many methods exist to perform image fusion. The most basic is the high pass filtering technique. Later techniques are based on the discrete wavelet transform (DWT), uniform rational filter banks, and the Laplacian pyramid.
Why Image Fusion
Multisensor data fusion has become a discipline for which increasingly general formal solutions are demanded in a number of application cases. Several situations in image processing simultaneously require high spatial and high spectral information in a single image. This is important in remote sensing. However, the instruments are not capable of providing such information, either by design or because of observational constraints. One possible solution to this is data fusion.
Standard Image Fusion Methods
Image fusion methods can be broadly classified into two classes: spatial domain fusion and transform domain fusion. Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS
based methods fall under spatial domain approaches. Another important spatial domain fusion method is the high pass filtering based technique, in which high-frequency details are injected into an upsampled version of the multispectral (MS) images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. Spectral distortion also becomes a negative factor in further processing, such as classification. Spatial distortion can be handled well by transform domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analysing remote sensing images. The discrete wavelet transform
has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods show better performance in the spatial and spectral quality of the fused image than spatial methods of fusion.
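As an illustration of the transform domain approach, the following is a minimal sketch of single-level wavelet fusion of two registered, equal-size grayscale images. A hand-rolled orthonormal Haar transform in NumPy stands in for a full DWT library; the function names and the max-absolute-coefficient fusion rule are illustrative assumptions, not a standard reference implementation:

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D orthonormal Haar DWT.
    Returns (LL, LH, HL, HH); x must have even height and width."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,   # LL: approximation band
            (a - b + c - d) / 2,   # LH: detail band
            (a + b - c - d) / 2,   # HL: detail band
            (a - b - c + d) / 2)   # HH: detail band

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: interleave the reconstructed pixels."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def wavelet_fuse(img1, img2):
    """Fuse two registered grayscale images of the same (even) size:
    average the approximation bands; for each detail band keep the
    coefficient with the larger magnitude from either source."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

The rationale for the max-magnitude rule is that large wavelet coefficients tend to correspond to salient features such as edges; in practice, deeper decompositions and more elaborate fusion rules are common.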
The images used in image fusion should already be registered
. Misregistration is a major source of error in image fusion. Some well-known image fusion methods are:
- High pass filtering technique
- IHS transform based image fusion
- PCA based image fusion
- Wavelet transform image fusion
- Pair-wise spatial frequency matching
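Of the methods listed above, PCA-based fusion is simple to sketch. A common formulation weights each source image by the components of the dominant eigenvector of the covariance of their pixel values; the minimal version below, for two registered grayscale images, is an illustrative assumption rather than a canonical implementation, and it presumes the sources are positively correlated so the weights are well defined:

```python
import numpy as np

def pca_fuse(img1, img2):
    """PCA-based fusion of two registered grayscale images.
    Each source is weighted by its component of the dominant
    eigenvector of the 2x2 covariance of the pixel pairs."""
    data = np.stack([img1.ravel(), img2.ravel()])  # shape (2, N)
    cov = np.cov(data)                             # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)               # eigenvalues ascending
    pc1 = eigvecs[:, -1]                           # dominant eigenvector
    w = pc1 / pc1.sum()                            # normalise weights to sum 1
    return w[0] * img1 + w[1] * img2
```

Intuitively, the source with the greater variance (and hence, presumably, more information) receives the larger weight; for two identical inputs the weights reduce to one half each.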
Applications
- Image classification
- Aerial and satellite imaging
- Medical imaging
- Robot vision
- Concealed weapon detection
- Multi-focus image fusion
- Digital camera applications
- Battlefield monitoring
Satellite Image Fusion
Several methods exist for merging satellite images. In satellite imagery, two types of images are available:
- Panchromatic images - An image collected in the broad visual wavelength range but rendered in black and white.
- Multispectral images - Images optically acquired in more than one spectral or wavelength interval. Each individual image is usually of the same physical area and scale but of a different spectral band.
The SPOT
PAN satellite provides high-resolution (10 m pixel) panchromatic data, while the Landsat TM satellite provides low-resolution (30 m pixel) multispectral images. Image fusion attempts to merge these images and produce a single high-resolution multispectral image.
The standard merging methods of image fusion are based on Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite image fusion are as follows:
- Resize the low resolution multispectral images to the same size as the panchromatic image.
- Transform the R,G and B bands of the multispectral image into IHS components.
- Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image, with the intensity component of the multispectral image as the reference.
- Replace the intensity component with the panchromatic image and perform the inverse transformation to obtain a high-resolution multispectral image.
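The steps above can be sketched in NumPy. This illustration assumes the multispectral image has already been upsampled to the panchromatic resolution, uses the simple linear intensity I = (R + G + B)/3, and substitutes a mean/standard-deviation match for full histogram matching; with that linear intensity, replacing I by the matched panchromatic image is algebraically equivalent to adding (P - I) to every band. Function names and parameter choices are illustrative assumptions:

```python
import numpy as np

def match_mean_std(src, ref):
    """Match the mean and standard deviation of `src` to `ref`
    (a simple stand-in for full histogram matching)."""
    s = src.std()
    if s == 0:
        return np.full_like(src, ref.mean())
    return (src - src.mean()) / s * ref.std() + ref.mean()

def ihs_pansharpen(ms, pan):
    """IHS-style pan-sharpening of an RGB multispectral image
    `ms` (H x W x 3, already upsampled to pan resolution) with a
    panchromatic image `pan` (H x W)."""
    intensity = ms.mean(axis=2)                # I = (R + G + B) / 3
    pan_matched = match_mean_std(pan, intensity)
    # Substituting the intensity component by the matched pan image
    # amounts to adding (P - I) to each band.
    return ms + (pan_matched - intensity)[..., None]
```

If the panchromatic image already equals the intensity component, the multispectral image passes through unchanged, which is a convenient sanity check for an implementation.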
An explanation of how to do Pan-sharpening in Photoshop.
Medical Image Fusion
Image fusion has become a common term in medical diagnostics and treatment. The term is used when multiple images of a patient are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality, or by combining information from multiple modalities, such as magnetic resonance imaging (MRI), computed tomography
(CT), positron emission tomography
(PET), and single photon emission computed tomography
(SPECT). In radiology
and radiation oncology, these images serve different purposes. For example, CT images are used more often to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors.
For accurate diagnoses, radiologists must integrate information from multiple image formats. Fused, anatomically consistent images are especially beneficial in diagnosing and treating cancer. Companies such as Nicesoft, Velocity Medical Solutions, Mirada Medical, Keosys, MIMvista, IKOE, and BrainLAB have created image fusion software both for improved diagnostic reading and for use in conjunction with radiation treatment planning systems. With these technologies, radiation oncologists can take full advantage of intensity-modulated radiation therapy (IMRT): overlaying diagnostic images onto radiation planning images yields more accurate IMRT target tumor volumes.
External links
- Investigations of Image Fusion, Electrical Engineering and Computer Science Department, Lehigh University
- Image Fusion, Image Fusion Systems Research company
- Image fusion and Pan-sharpening, Geosage
- Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 6, pp. 81-84, Jun. 2005. http://www.math.hcmuns.edu.vn/~ptbao/LVTN/2003/cameras/a161001433035.pdf