Application of Fusion with SAR and Optical Images in Land Use Classification Based on SVM

With the increasing availability of remote sensing data of multiple spatial resolutions, spectral resolutions, and sources, data fusion technologies have been widely used in geoscience applications. Synthetic Aperture Radar (SAR) and optical cameras are currently the two most common sensors. Multi-spectral optical images express the spectral features of ground objects, while SAR images express backscatter information, so fusing the two kinds of images can effectively improve classification accuracy. In this paper, TerraSAR-X images and ALOS multi-spectral images were fused for land use classification. After preprocessing steps such as geometric rectification, radiometric rectification, and noise suppression, the two kinds of images were fused, and a Support Vector Machine (SVM) model was then used for land use classification. Two different fusion methods were used: one joins the SAR image into the multi-spectral image as an additional band, and the other fuses the two kinds of images directly. The former raises the resolution and preserves texture information, while the latter preserves spectral feature information and improves the ability to distinguish different features. The experimental results show that classification using the fused images is more accurate than classification using the multi-spectral images alone; the accuracy for roads, habitation, and water bodies was significantly improved. Compared with traditional classification methods, classifying the fused images with an SVM classifier achieves better results for complicated land use classes, especially for small ground features.


INTRODUCTION
With the development of remote sensing technology, more and more multi-source remote sensing data of the same area can be obtained, especially multi-spectral images and SAR images. As is well known, optical images reflect the spectral information of surface features, while SAR images reflect the backscattering intensity of different surface features and feature combinations. To take full advantage of both the spectral images and the SAR images, multi-sensor data fusion has been proposed for remote sensing image processing and information classification, which is the subject of this paper. A number of remote sensing image fusion methods already exist, especially for fusing multi-spectral and panchromatic images. The standard fusion methods such as IHS, PCA, and Brovey transformation often cause serious spectral distortion. The emergence of hybrid methods such as the wavelet transform solved this spectral-distortion problem, and the wavelet transform is now widely used in image fusion because of its better fusion effect. Building on this previous work, this paper fuses domestic airborne SAR images and SPOT5 images with several fusion methods, compares the fusion quality, and selects the best fused data for classification based on SVM.

Principal component analysis
In principal component analysis (PCA) fusion, the multi-band image is first transformed by PCA. The single-band image is then enhanced by a grey stretch that makes its mean and variance consistent with those of the first principal component image, and it replaces the first principal component. After the inverse PCA transformation back to the original space, a fused multi-band image is generated. The principal component images are calculated by

$$PC_k = \sum_{i=1}^{n} \varphi_{ik}\, d_i \qquad (1)$$

where $k$ is the ordinal number of the principal component, $PC_k$ is the $k$-th principal component, $i$ is the band index, $n$ is the number of SPOT bands, $d_i$ is the pixel value of the $i$-th band, and $\varphi_{ik}$ is the element in the $i$-th row and $k$-th column of the eigenvector matrix.
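The paper gives no implementation, but the substitution scheme described above can be sketched in a few lines of NumPy. The function name, array layout (bands first), and the small epsilon guarding the stretch are illustrative assumptions, not code from the paper:

```python
import numpy as np

def pca_fusion(ms, sar):
    """PCA substitution fusion: replace the first principal component of a
    multispectral cube (bands, rows, cols) with a grey-stretched SAR image."""
    n_bands, rows, cols = ms.shape
    X = ms.reshape(n_bands, -1).astype(float)      # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = np.cov(Xc)                               # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    V = eigvecs[:, np.argsort(eigvals)[::-1]]      # columns sorted by variance
    pcs = V.T @ Xc                                 # forward PCA transform
    # Stretch the SAR image to match mean/variance of PC1, then substitute.
    s = sar.reshape(-1).astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = s
    fused = V @ pcs + mean                         # inverse PCA transform
    return fused.reshape(n_bands, rows, cols)
```

Because the eigenvector matrix is orthogonal, the inverse transform is simply multiplication by the matrix itself, so the remaining components carry the original spectral information back unchanged.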

Fusion of high-pass filter (HPF)
HPF fusion uses a high-pass filter to suppress the low-frequency spectral information of the high-resolution image and to extract its high-frequency spatial information. The filtered result is then combined with the low-resolution image to increase the low-resolution image's spatial resolution. This method can be used to fuse any kinds of bands. The formula is

$$B_{jk} = B_{ljk} + FBH_{jk} \qquad (2)$$

where $B_{jk}$ is the fused pixel value, $B_{ljk}$ is the low-resolution multi-spectral pixel value, and $FBH_{jk}$ is the high-frequency filtered value of the high-resolution image.
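A minimal sketch of formula (2): extract the high-frequency detail of the high-resolution image with a 3x3 high-pass kernel and add it to the (already resampled) low-resolution band. The specific kernel and the edge-padding choice are illustrative assumptions; the paper does not specify them:

```python
import numpy as np

def hpf_fusion(ms_band, pan):
    """HPF fusion: B = B_l + FBH, i.e. add the high-pass-filtered detail
    of the high-resolution image `pan` to the low-resolution band."""
    # Zero-sum 3x3 high-pass kernel; a constant region filters to zero.
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float) / 9.0
    padded = np.pad(pan.astype(float), 1, mode='edge')
    rows, cols = pan.shape
    high = np.empty((rows, cols), dtype=float)
    for i in range(rows):                      # naive convolution for clarity
        for j in range(cols):
            high[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return ms_band.astype(float) + high
```

Because the kernel sums to zero, a featureless high-resolution image contributes nothing, and the spectral values of the multi-spectral band pass through unchanged.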

Wavelet transform (WT)
The wavelet transform is a global transformation with good localization in both the spatial and frequency domains; by progressively refining the high-frequency components it can focus on any detail of the image to be processed. First, all images involved in the fusion are wavelet-transformed, decomposing each into high-frequency and low-frequency information. Second, the fused image is generated by an inverse wavelet transform that combines the high-frequency components of the high-resolution image with the low-frequency components of the low-resolution image.
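As a sketch of this scheme, a single-level 2-D Haar decomposition (the simplest wavelet) can be written out directly. The paper does not state which wavelet basis or how many decomposition levels it uses, so the Haar basis and single level here are assumptions for illustration:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar decomposition (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0    # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0    # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def wavelet_fusion(ms_band, sar):
    """Keep the low-frequency part of the multispectral band and the
    high-frequency detail of the SAR image, then reconstruct."""
    ll_ms, _, _, _ = haar2d(ms_band.astype(float))
    _, lh, hl, hh = haar2d(sar.astype(float))
    return ihaar2d(ll_ms, lh, hl, hh)
```

Fusing an image with itself reconstructs it exactly, which is a convenient sanity check on the transform pair.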

Study area and data source
The study area is located in Songpan County, in the Hengduan Mountains region. The main features in the region include buildings, roads, green space, farmland, and so on. The data used were airborne SAR-X and SPOT data; their main parameters are listed in Table 1.
Table 1 The Main Parameters Of Data Sources

Data Processing
To ensure accurate analysis, the relevant data were preprocessed before fusion. Preprocessing included geometric correction, cropping, noise reduction, and so on. First, geometric correction of the SPOT image and orthorectification of the SAR image were carried out based on the topographic map. Second, the corrected images were cropped. Third, speckle noise in the airborne SAR image was suppressed with a Gamma adaptive filter, keeping the precision error within 0.5 pixel before data fusion.
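The Gamma adaptive filter is not specified further in the paper. The sketch below shows the general shape of that family of adaptive speckle filters, written as a simplified Lee-style filter: each pixel is blended with its local mean according to the local and expected speckle statistics. The window size, number of looks, and gain clamping are illustrative assumptions, not the paper's exact filter:

```python
import numpy as np

def adaptive_speckle_filter(img, win=3, looks=1.0):
    """Simplified Lee-style adaptive speckle filter: out = m + k*(x - m),
    with gain k driven by local vs. expected (speckle) variation."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    cu2 = 1.0 / looks                          # squared speckle variation coeff.
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            w = padded[i:i+win, j:j+win]
            m, v = w.mean(), w.var()
            ci2 = v / (m * m + 1e-12)          # squared local variation coeff.
            k = max(0.0, (ci2 - cu2) / (ci2 + 1e-12))
            out[i, j] = m + k * (img[i, j] - m)
    return out
```

In homogeneous regions the local variation is below the speckle level, so the gain is clamped to zero and the output is the local mean (strong smoothing); near edges the gain approaches one and detail is preserved.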

Image Fusion
After correlation analysis of the preprocessed data, three visible bands (an RGB combination) were selected to be fused with the SAR data. Fusion was then carried out with the PCA, HPF, and WT methods; the fused data are shown in Figure 1. All the fused images have high spatial resolution and rich detail, but each method has its own characteristics. The PCA-based fusion gives clearer textures and edges, which is more appropriate for distinguishing feature boundaries during information extraction, but there is a gap in its true-colour retention. The HPF fusion greatly improves the resolution compared with the original SPOT multi-spectral image, but its preservation of spectral information is inferior to the other fusions. The wavelet-transform fusion preserves the spectral information of the multi-spectral image well while also keeping the details of the SAR image, and its colour is consistent with the original SPOT image; compared with the other transformations, it achieves the best fusion result.

Evaluation of image fusion

Entropy
Entropy is a measure of the information richness of an image and an important fusion indicator: the entropy reflects the amount of information contained in the image, so under normal circumstances, the greater the entropy of the fused image, the more information it contains and the better the fusion. The image entropy $H$ is defined as

$$H = -\sum_{i=0}^{L-1} p_i \log_2 p_i$$

where $L$ is the number of grey levels of the image and $p_i$ is the ratio of the number of pixels with grey value $i$ to the total number of pixels.

Table 2 Entropy of Image

Definition

Definition (clarity) gives an objective assessment of the texture and spectral information of an image: the higher the definition index, the better the image quality and the better the fusion result. Analysis of the different bands shows that the wavelet-transform fusion has the highest definition, above 26.84, while the principal component analysis and high-pass filtering methods are below 20, significantly lower than the wavelet transform.

Table 3 Definition of Image
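The entropy above can be computed directly from the grey-level histogram. A short sketch for an 8-bit image (the 256-level bit depth is an assumption; the paper does not state it):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy H = -sum(p_i * log2 p_i) of an image's
    grey-level histogram, with p_i the fraction of pixels at grey value i."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                     # terms with p_i = 0 contribute nothing
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0, and an image using all 256 grey levels equally would reach the maximum of 8 bits.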

Support Vector Machine
The Support Vector Machine (SVM) is a pattern recognition method developed on the basis of statistical learning theory; it shows many unique advantages in solving small-sample, nonlinear, and high-dimensional pattern recognition problems. Built on the VC dimension and the structural risk minimization principle, it can handle the small-sample, nonlinear, high-dimensional, and local-minima problems that arise in practical applications, and it has been widely used in pattern recognition, regression estimation, probability density estimation, and other fields. Texture recognition can be regarded as a nonlinear approximation of different texture characteristics, for which the SVM is well suited. The basic idea of the SVM is to find a separating hyperplane such that the training samples of the two classes are separated and lie as far from the plane as possible, as shown in Figure 4, where H is the optimal classification surface, Ha and Hb pass through the samples closest to it, and m is the margin between the classes. The resulting optimization problem has a unique extremum and can be solved by the standard Lagrange multiplier method. Only part of the solution coefficients are non-zero, and the corresponding samples are the support vectors. To solve the optimization problem and compute the classification plane, only kernel function evaluations are needed. Commonly used kernel functions include the polynomial, radial basis function (RBF), and sigmoid kernels, which have been proved suitable for the vast majority of nonlinear classification problems.
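The kernels named above, together with the resulting decision function f(x) = sum_i alpha_i y_i K(x_i, x) + b over the support vectors, can be sketched as follows. The parameter defaults (degree, gamma, etc.) are illustrative assumptions; the paper does not give its kernel parameters:

```python
import numpy as np

def poly_kernel(x, y, degree=3, c=1.0):
    """Polynomial kernel K(x, y) = (x . y + c)^degree."""
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-gamma * np.dot(diff, diff))

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    """Sigmoid kernel K(x, y) = tanh(alpha * x . y + c)."""
    return np.tanh(alpha * np.dot(x, y) + c)

def svm_decision(x, support_vectors, alphas, labels, b, kernel):
    """SVM decision value f(x) = sum_i alpha_i * y_i * K(x_i, x) + b;
    the sample is assigned to the class given by the sign of f(x)."""
    return sum(a * y * kernel(sv, x)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b
```

Only the support vectors (the samples with non-zero alpha_i) enter the sum, which is why the classifier stays cheap to evaluate even after training on many samples.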

Fig.4 Optimum separation plane
Based on the evaluation of the fusion results, the wavelet-transform fusion gave the best data quality, so the cropped experimental data fused by wavelet analysis were classified with the SVM. The original image and the classification results are shown in Figures 5 and 6. The classification accuracy of the fused image is 93.86%, with cultivated land, forest, settlements, and roads well distinguished; this is much higher than the 85.90% accuracy obtained from the data before fusion, providing a better basis for classifying surface features in the western region.

Fig. 5, Fig. 6 The original image and the classification result of the study area

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B1, 2012. XXII ISPRS Congress, 25 August - 01 September 2012, Melbourne, Australia

As shown in Table 2, the entropy of the wavelet-transform fusion is the largest, while the entropies of the principal component analysis and high-pass filter fusions are smaller than that of the wavelet transform.