MULTI-TEMPORAL IMAGE FUSION BASED ON WAVELET THEORY

The development of new monitoring systems and the growing interest of researchers in obtaining reliable measurements have led to the development of automatic monitoring of moving objects. One way to support such monitoring is multi-temporal image fusion. Image fusion is a sub-area of the more general topic of data fusion; it can be roughly defined as the process of combining multiple input images into a single image that contains the relevant information from the inputs. The aim of image fusion is to integrate complementary and redundant information from multiple images to create a composite that is more informative than any of the individual source images; a common purpose is to increase both the spectral and spatial resolution of images by combining them. In this paper we apply this theory to moving-object tracking: by fusing images acquired at different times, we identify the movement path of a moving object. This result can help in implementing automatic systems that monitor objects without human intervention. We first discuss the principles of fusion and its best-known method (wavelet theory), together with the whole process involved in performing fusion.


INTRODUCTION
Many low- and high-level image processing algorithms must be developed to meet all the requirements of an intelligent monitoring system. The performance of the system depends on reliable recognition of moving objects, their adequate description, and the knowledge of how to combine these parameters with information from other sources to solve the monitoring problem. Here we use image fusion to identify the movement path of a moving object, following the steps below:
-Image acquisition at different times and preparation of the images. In this step images are obtained at different times, so the image sequence shows the movement of the object over time; the images are then prepared for fusion. Sections 1 and 2 discuss the principles of fusion for tracking a moving object, and Section 2 covers the preparation of the input images, including registration, resampling, and histogram matching.
-Applying a suitable method for multi-image fusion. For fusing the images we use wavelet theory, an important approach to multi-image fusion for tracking a moving object. Section 3 presents the principles of fusion for integrating multiple images, and Section 4 describes the best-known fusion methods. Finally, Section 5 shows the fusion result, which reveals the path of the moving object and can thus support the monitoring of moving objects. Figure 1 shows the steps of the multi-image fusion process for tracking a moving object.
Figure 1: The steps of the multi-image fusion process for tracking a moving object.
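The histogram-matching step in the preparation stage can be sketched as follows. This is a minimal, illustrative CDF-based implementation for single-channel images; the function name and details are our own illustration, not the paper's exact procedure:

```python
import numpy as np

def match_histograms(source, reference):
    """CDF-based histogram matching for single-channel images: remap the
    grey levels of `source` so its histogram resembles `reference`."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Normalised cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source grey level, pick the reference level whose CDF is closest.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lookup = dict(zip(src_vals.tolist(), mapped))
    matched = np.vectorize(lookup.get)(source)
    return matched.astype(source.dtype)
```

Matching the grey-level distributions of the two acquisitions in this way reduces illumination differences before fusion, so that coefficient differences reflect scene change rather than exposure change.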

Pan-sharpening
The goal of pan-sharpening is to fuse a low spatial resolution multispectral image with a higher resolution panchromatic image to obtain an image with both high spectral and high spatial resolution. The Intensity-Hue-Saturation (IHS) method is a popular pan-sharpening method, used for its efficiency and high spatial resolution; however, the final image suffers from spectral distortion. The method consists of three steps: first, the low resolution RGB image is upsampled and converted to IHS space; second, the panchromatic band is histogram-matched to and substituted for the intensity band; third, the IHS image is converted back to RGB space.

PC Spectral Sharpening
PC Spectral Sharpening can be used to sharpen spectral image data with high spatial resolution data. A principal component transformation is performed on the multispectral data, and PC band 1 is replaced with the high resolution band, which is scaled to match PC band 1 so that no distortion of the spectral information occurs. An inverse transform is then performed, and the multispectral data is automatically resampled to the high resolution pixel size using a nearest neighbor, bilinear, or cubic convolution technique.
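The three IHS steps can be sketched as below. This is an illustrative simplification, not a production routine: it uses the common approximation I = (R + G + B) / 3, under which substituting the matched pan band for the intensity and inverting the transform reduces to adding (pan − I) to every band:

```python
import numpy as np

def ihs_pansharpen(rgb_low, pan):
    """Fast IHS-substitution sketch. `rgb_low`: multispectral image already
    upsampled to the pan grid, shape (H, W, 3); `pan`: panchromatic band,
    shape (H, W)."""
    rgb = rgb_low.astype(np.float64)
    intensity = rgb.mean(axis=2)  # approximate I component
    # Match pan's mean and std to the intensity band (step 2 of the method),
    # which limits spectral distortion.
    pan = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()
    # Substitute pan for I and invert: add the difference to every band.
    return rgb + (pan - intensity)[..., None]
```

If the pan band already equals the intensity component, the output reduces to the input multispectral image, which is a quick sanity check on the substitution.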
-DWT (Discrete Wavelet Transform): it decomposes an image into low-frequency and high-frequency bands at different levels, and the image can be reconstructed from these levels. When images are merged with this method, the different frequency bands are processed differently, which improves the quality of the fused image and makes the DWT a good method for fusion at the pixel level.
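A minimal sketch of single-level Haar DWT fusion at the pixel level, assuming even image dimensions. The fusion rule shown (average the approximation band, keep the larger-magnitude detail coefficient) is one common choice; the paper itself uses db1 at two levels:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img.astype(np.float64)
    # Rows: average / difference of adjacent columns.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: average / difference of adjacent rows.
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of `haar2d`: reconstruct the image from its sub-bands."""
    h, w = ll.shape
    lo = np.zeros((2 * h, w))
    hi = np.zeros((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.zeros((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse_haar(img1, img2):
    """Pixel-level wavelet fusion: average the approximation (LL) band,
    keep the larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

Because the detail bands carry local intensity changes, taking the larger-magnitude detail coefficient tends to preserve the object's appearance at each of its positions in the two acquisitions, which is what makes the movement path visible in the fused image.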

RESULTS AND DISCUSSION
Here we focused on multi-temporal image fusion, where the input images are captured at different times. The objective of image fusion here is to retain the most desirable characteristics of each image in order to monitor the moving object. We discussed different algorithms for data fusion in this paper, but we concentrated on wavelet analysis for fusing the temporal images. The principle of image fusion using wavelets is to merge the wavelet decompositions of the multi-temporal images, applying fusion rules to the approximation coefficients and the detail coefficients.
-In the first step, we selected a suitable wavelet form for fusion. In our experiment, seven wavelet families were examined: Haar (HW), Daubechies (db), Symlets, Coiflets, Biorthogonal, Reverse Biorthogonal, and Discrete Meyer (dmey). The best wavelet form was selected based on the correlation of the fused result with the original image, and Daubechies (db1) was chosen because it gave the best results.
We then selected the level of decomposition. According to wavelet theory, the maximum level to which the transform can be applied depends on the number of data points in the data set, so we chose the level based on the fusion result: decomposition at two levels gave high fusion quality. The result is shown in Figure 4, where the yellow circles mark the path of the moving object; fusion thus leads to better extraction of the moving object and helps in building an automatic monitoring system.
Figure 2: The first image, acquired at T1.
Figure 3: The second image, acquired at T2.
Figure 4: The fused image based on wavelet theory.
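The correlation score used to rank the wavelet families can be sketched as follows. The scores in the selection loop are illustrative placeholders, not measured values from the experiment:

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation coefficient between two images, used as the
    score for selecting the wavelet family and decomposition level."""
    return float(np.corrcoef(a.ravel().astype(np.float64),
                             b.ravel().astype(np.float64))[0, 1])

# Hypothetical selection over candidate families (placeholder scores):
scores = {"db1": 0.97, "haar": 0.95, "sym2": 0.93}
best = max(scores, key=scores.get)  # family with the highest correlation
```

In practice each score would be `correlation(fused_image, original_image)` computed after fusing with that family, and the family maximising the score is kept.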

CONCLUSION
The main objective of this study was to overcome the present problems of automatic monitoring by means of multi-temporal image fusion. Wavelet theory was used in this study as a good method for fusion at the pixel level: it decomposes an image into low-frequency and high-frequency bands at different levels, which allowed us to integrate multiple images for moving-object tracking. Moreover, during the process we selected the best wavelet form and decomposition level to obtain the best fusion quality. The result of this analysis will help us implement automatic systems that can monitor moving objects without human intervention.
Several general observations hold for multi-sensor inference fusion:
1. Combining data from multiple inaccurate sensors, each with an individual probability of correct inference below 0.5, does not provide a significant overall advantage.
2. Combining data from multiple highly accurate sensors, each with an individual probability of correct inference above 0.95, does not provide a significant increase in inference accuracy.
3. When the number of sensors becomes large, adding further identical sensors does not significantly improve inference accuracy.
4. The greatest marginal improvement from sensor fusion occurs for a moderate number of sensors, each having a reasonable probability of correct identification.
-Different levels of data fusion
1. Pixel-level fusion: at the lowest level, uses the registered pixel data from all image sets to perform detection and discrimination functions.
2. Feature-level fusion: combines the features of objects that are detected and segmented in the individual sensor domains.
3. Decision-level fusion (also called post-decision or post-detection fusion): combines the decisions of independent sensor detection/classification paths by Boolean operators (AND, OR) or by a heuristic score (e.g., M-of-N, maximum vote, or weighted sum).
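The first set of observations can be illustrated with a simple majority-vote model of n independent sensors. This binomial model is an assumption for illustration; real decision-level fusion may use other combination rules:

```python
from math import comb

def majority_correct(n, p):
    """Probability that a majority vote over n independent sensors,
    each correct with probability p, yields the correct decision (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Inaccurate sensors (p < 0.5): voting makes things worse, not better.
# majority_correct(9, 0.4)  ~ 0.27, below the single-sensor 0.4.
# Highly accurate sensors: the absolute gain is small.
# majority_correct(3, 0.95) ~ 0.9928, only ~0.04 above a single sensor.
# Moderate accuracy is where fusion helps the most.
# majority_correct(9, 0.7)  ~ 0.90, well above the single-sensor 0.7.
```

These three cases correspond directly to observations 1, 2, and 4 above: the vote amplifies whatever side of 0.5 the individual sensors sit on, and its gains saturate as p approaches 1 or as n grows large.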