ENHANCING CONTRAST OF IMAGES TO IMPROVE GEOMETRIC ACCURACY OF A UAV PHOTOGRAMMETRY PROJECT

In recent years, Unmanned Aerial Vehicles (UAVs) have become popular tools in mapping applications. In such applications, image motion, poor lighting, and weak texture all directly affect the quality of the derived tie points, which in turn constrains feature extraction and may lead to a low-accuracy point cloud. This paper proposes a contrast enhancement technique to improve the accuracy of a photogrammetric model created using UAV images. The luminance component (Y) in the YIQ color space is normalized using the sigmoid function, and the low-contrast images are enhanced by applying Contrast-Limited Adaptive Histogram Equalization (CLAHE) to the luminance component. To evaluate the proposed method, three-dimensional models were created using images acquired by the Phantom 4 Pro UAV in three distinct locations and at altitudes of 20, 40, 60, 80, and 90 meters. The results showed that enhancing the contrast of the images increased the number of tie points and reduced the reprojection error by approximately 10%. It also improved the resolution of the digital elevation model by approximately 2 cm/pixel while greatly improving the texture and quality with respect to the model developed using the original images.


INTRODUCTION
Today, given the relevance of three-dimensional models in a wide variety of computer vision applications, a large number of researchers are studying the reconstruction of three-dimensional models from two-dimensional images (Alasal et al., 2018). Without a doubt, image-based approaches to three-dimensional reconstruction have advanced out of the shadow of laser scanning (Luhmann et al., 2020). Thus, by combining automated computer vision algorithms with trustworthy and precise photogrammetric methods, successful solutions for the automatic and accurate three-dimensional reconstruction of image data sets have been created (Demetrescu et al., 2020; Pepe and Costantino, 2020; Qin and Gruen, 2021). Meanwhile, the rapid growth of unmanned aerial vehicles (UAVs) in recent years and their ability to provide high-resolution, high-accuracy information has improved UAV photogrammetry projects. Moreover, their versatility in data acquisition, the possibility of combining different sensors, and the use of three-dimensional model production algorithms such as Structure from Motion (SfM) and Multi-View Stereo (MVS) (Moons et al., 2009; Skarlatos and Kiparissi, 2012) have been exploited in a variety of applications such as surveying and forestry (Chang et al., 2020; Fakhri and Latifi, 2021), archaeology (Jacq et al., 2021), civil engineering (Lv et al., 2021), and documentation (Godinho et al., 2020). These advancements have transformed drone systems into standard platforms for collecting three-dimensional data (Jarzabek-Rychard and Karpina, 2016; Yao et al., 2019). To create a three-dimensional model from images, the scene's objects can be modelled using "active" or "passive" techniques (Alasal et al., 2018). In the active technique, three-dimensional modelling is accomplished by modifying illumination conditions, controlling camera angles, and utilizing the camera's predetermined calibration parameters (Fakhri and Fakhri, 2019).
In passive techniques, however, the quality of the images may be compromised due to the lack of control over lighting conditions and the reliance on the sun as the light source (Arroyo-Mora et al., 2021; Revuelto et al., 2021). As a result, optimizing image acquisition quality prior to performing three-dimensional reconstruction becomes a critical requirement in autonomous three-dimensional reconstruction approaches. Extensive research has been conducted to improve the quality of the 3D model, and it can be classified into two broad categories. In the first approach, pre-processing is applied to the images to remove any negative conditions that may affect the matching results (Ballabeni et al., 2015; Bellavia et al., 2015; Gaiani et al., 2016; Maini and Aggarwal, 2010; Verhoeven et al., 2015). In these methods, researchers typically apply initial adjustments to the images, or employ more complex filters, so that the quality of the 3D model created by automated image-based methods is improved. In comparison to the original images, these methods significantly improve the correlation quality and the exterior orientation accuracy of the images and point clouds. However, this approach lacks control over the distribution and quality of the selected tie points within the image and cannot suppress points extracted from highly repetitive content, leaving other significant aspects uncontrolled. In the second approach, image matching efficiency is enhanced at the feature selection stage by identifying image features with a stricter threshold. Examining the methods available in this approach (Dymczyk et al., 2016; Hartmann et al., 2014; Wu, 2013), classification is based on identifying image features with a more stringent threshold, which increases computational speed because fewer, higher-contrast features are used.
These methods have several limitations, including the elimination of many suitable points as the threshold is increased, the positional uncertainty of lower-resolution features, the requirement for high-quality training data to train classification algorithms, a disregard for the multiplicity of highlighted features in the image, and so forth. Comparing the research conducted in the two approaches above, it can be concluded that in the methods used to improve image matching conditions, additional corrections such as image contrast enhancement, noise removal, image content enhancement, and color conversion to grayscale are discussed, all of which can contribute significantly to improving image matching by creating color balance in the images and enhancing coherence. This study therefore investigates the effect of contrast enhancement applied, prior to producing a three-dimensional model, to images acquired by UAV platforms in low-light conditions. The primary objective of this research is to determine the effect of contrast-enhanced images on the preparation of a three-dimensional model from the perspective of image matching and point cloud density, as well as the effect on triangulation accuracy, check point accuracy, digital elevation model accuracy, and orthophoto mosaic quality. Section 2 details the proposed algorithm's method and flowchart. Section 3 presents the results and evaluations of the proposed algorithm's implementation, followed by the conclusion.

PROPOSED METHOD
Two tasks (Figure 1.) are accomplished in this paper: a) the proposed algorithm for image contrast enhancement and its comparison to existing approaches; b) the construction and evaluation of a three-dimensional model. According to the flowchart, in order to study the influence of image contrast enhancement on the three-dimensional model generated by UAV photogrammetry, the low-contrast images are first enhanced in the pre-processing stage using the proposed technique. The three-dimensional point cloud and the other photogrammetric products, such as the digital elevation model and orthophoto mosaic, are then generated from the contrast-enhanced images. Finally, the relevant assessments are carried out in order to evaluate, against various criteria, the effectiveness of the proposed algorithm and of the three-dimensional model generated with it.

The proposed algorithm for image contrast enhancement
The proposed method for enhancing the contrast of images collected by UAV photogrammetry builds on the work of Lal et al. (2015). To enhance image contrast, that research transformed the RGB color space to YIQ (Equation 1), a technique that requires two processing stages. The luminance component (Y) is normalized in the YIQ color space using the sigmoid function (Equation 2), and the resulting component (Y_P) is processed with the adaptive histogram equalization (AHE) method (Pizer et al., 1987). The images are then subjected to the auto-contrast enhancement method in the following stage.
To develop the study's main idea (Lal et al., 2015), after normalizing the luminance component (Y) with the sigmoid function, and in accordance with the results of Lestari and Luthfi (2019), which indicate that Contrast-Limited Adaptive Histogram Equalization (CLAHE) (Zuiderveld, 1994) outperforms AHE, CLAHE is applied to the luminance component (Y). The images are then enhanced using the auto-contrast enhancement approach (Lal et al., 2015). A Bilateral filter (Tomasi and Manduchi, 1998) is employed to remove the noise produced by CLAHE. The Bilateral filter is a nonlinear filter that reduces noise while preserving the image's main edges, resulting in a smooth image with the edges retained (Paris et al., 2009; Tomasi and Manduchi, 1998).

Equation 1: RGB to YIQ conversion (Recommendation, 2005; Standard, 2003):

[ Y ]   [ 0.299   0.587   0.114 ] [ R ]
[ I ] = [ 0.596  -0.274  -0.322 ] [ G ]        (1)
[ Q ]   [ 0.211  -0.523   0.312 ] [ B ]

Equation 2: sigmoid normalization of the luminance component:

Y_P = 1 / (1 + e^(-Y))        (2)

where Y denotes the luminance component of the input image.
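The pipeline above can be sketched in NumPy. This is a minimal illustration, not the authors' exact implementation: the sigmoid gain and offset, the clip limit, and the use of a single global clipped equalization in place of tiled CLAHE are assumptions, and the bilateral-filter step is omitted (in practice OpenCV's cv2.createCLAHE and cv2.bilateralFilter would be used).

```python
import numpy as np

# NTSC RGB -> YIQ matrix (Equation 1)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def clipped_hist_eq(y, clip=0.01, bins=256):
    """Contrast-limited histogram equalization of one channel in [0, 1].
    (Full CLAHE applies this per tile with bilinear interpolation.)"""
    hist, edges = np.histogram(y, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) / y.size
    excess = np.maximum(p - clip, 0.0).sum()        # clip the histogram...
    p = np.minimum(p, clip) + excess / bins         # ...and redistribute
    return np.interp(y, edges[:-1], np.cumsum(p))   # map through the CDF

def enhance(rgb_u8, gain=5.0):
    """Sigmoid-normalize Y in YIQ space (Equation 2), equalize, invert."""
    rgb = rgb_u8.astype(np.float64) / 255.0
    yiq = rgb @ RGB2YIQ.T
    y = yiq[..., 0]
    y = 1.0 / (1.0 + np.exp(-gain * (y - 0.5)))     # gain/offset are assumed
    yiq[..., 0] = clipped_hist_eq(y)
    out = np.clip(yiq @ np.linalg.inv(RGB2YIQ).T, 0.0, 1.0)
    return (out * 255.0).round().astype(np.uint8)
```

Applied to a low-contrast image, the global spread of the luminance histogram increases while the chrominance (I, Q) channels are left untouched.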

Evaluation of the proposed algorithm for contrast enhancement
To evaluate and compare the proposed contrast-enhancement algorithm's performance, several commonly used algorithms, namely HE (Pizer et al., 1987), CLAHE, and AMCE (Lal et al., 2015), are analyzed using a variety of evaluation criteria, listed below.

Equation 3: Shannon entropy (Shannon, 1948):

H(I) = - Σ_{i=0}^{L-1} p_i log2(p_i)        (3)
where I is the original (reduced) image, p_i denotes the probability that the value i occurs in image I (in other words, p is computed from the image's gray-level histogram, with p_i holding the frequency of level i), and L = 2^q is the number of distinct grayscale values for q bits per pixel.

Equation 4: Standard deviation (SD) (Román et al., 2017):

SD(I) = sqrt( Σ_{k=0}^{L-1} (k - A(I))^2 p(k) )        (4)
where k denotes the numerical value of a pixel in the original (reduced) image I, L - 1 is the maximum grayscale value, A(I) is the average grayscale intensity of the image, and p(k) is the probability of the value k occurring.

Equations 5 and 5-1: Peak signal-to-noise ratio (PSNR) and mean squared error (MSE) (Hore and Ziou, 2010):
PSNR(I, I_E) = 10 log10( (L - 1)^2 / MSE(I, I_E) )        (5)

MSE(I, I_E) = (1 / (M × N)) Σ_u Σ_v ( I(u, v) - I_E(u, v) )^2        (5-1)

where M × N is the size of both the original (reduced) image and the enhanced image.

Equation 6: Absolute Mean Brightness Error (AMBE) (Phanthuna, 2015):

AMBE(I, I_E) = | A(I) - A(I_E) |        (6)
where I and I_E represent the original (reduced) and enhanced images, respectively, and A(I) and A(I_E) represent their mean brightness.

Equations 7 and 7-1: Linear blur (fuzziness) index (Kaufmann, 1975):

γ = (2 / (M × N)) Σ_u Σ_v min( p(u, v), 1 - p(u, v) )        (7)

p(u, v) = I(u, v) / (L - 1)        (7-1)
where M × N are the dimensions of the original (reduced) image, I(u, v) is the grayscale value of pixel (u, v), and L - 1 is the maximum grayscale value of image I.

Equation 8: Colorfulness (CM) (Susstrunk and Winkler, 2003):

CM = sqrt( σα^2 + σβ^2 ) + 0.3 × sqrt( μα^2 + μβ^2 )        (8)

with the opponent channels α = R - G and β = (R + G)/2 - B.
where σα and σβ are the standard deviations of α and β, respectively, and μα and μβ are their averages.
The Colour Enhancement Factor (CEF), the ratio of the enhanced image's CM to that of the original (reduced) image, is used to distinguish the original image from the enhanced image.
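The criteria above can be computed directly for 8-bit images; the NumPy sketch below is ours (the function names are not from the paper) and follows Equations 3 through 8.

```python
import numpy as np

def entropy(gray_u8):
    """Shannon entropy (Equation 3) of an 8-bit grayscale image."""
    p = np.bincount(gray_u8.ravel(), minlength=256) / gray_u8.size
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

def sd(gray_u8):
    """Standard deviation (Equation 4) of the grayscale intensities."""
    return float(gray_u8.astype(np.float64).std())

def psnr(orig, enhanced, L=256):
    """Peak signal-to-noise ratio via the MSE (Equations 5 and 5-1)."""
    mse = np.mean((orig.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    return float(10.0 * np.log10((L - 1) ** 2 / mse))

def ambe(orig, enhanced):
    """Absolute Mean Brightness Error (Equation 6); low values preserve brightness."""
    return float(abs(orig.astype(np.float64).mean()
                     - enhanced.astype(np.float64).mean()))

def blur_index(gray_u8, L=256):
    """Linear blur (fuzziness) index (Equations 7 and 7-1); lower is better."""
    p = gray_u8.astype(np.float64) / (L - 1)
    return float(2.0 * np.mean(np.minimum(p, 1.0 - p)))

def colorfulness(rgb_u8):
    """Colorfulness CM (Equation 8) on the opponent channels alpha, beta."""
    r, g, b = (rgb_u8[..., i].astype(np.float64) for i in range(3))
    alpha, beta = r - g, 0.5 * (r + g) - b
    return float(np.hypot(alpha.std(), beta.std())
                 + 0.3 * np.hypot(alpha.mean(), beta.mean()))
```

As the text notes, the color criteria (CM, and CEF as a ratio of CM values) are only meaningful for RGB inputs, while the remaining criteria operate on the grayscale luminance.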

Construction and evaluation of a three-dimensional model
After applying the proposed contrast-enhancement method to the images, aerial triangulation is performed and the camera's interior orientation parameters are calculated while generating a sparse point cloud. The constructed model is evaluated at this stage by selecting one set of points as control points and another set as check points. The check points are used to compare the accuracy of the models in the vertical and horizontal directions. The error is computed using Equations 10 and 11: the horizontal error is the two-dimensional Euclidean distance between the horizontal coordinates of the check points and the corresponding three-dimensional points produced by the model, and the vertical error is the difference between the heights of the model points and the check points.

E_Pl = sqrt( (x_m - x_ch)^2 + (y_m - y_ch)^2 )        (10)

E_Al = | z_m - z_ch |        (11)
where x_m, y_m, and z_m represent the three-dimensional coordinates of the model's points; x_ch, y_ch, and z_ch represent the three-dimensional coordinates of the check points; and E_Pl and E_Al represent the horizontal and vertical errors, respectively. Additionally, various photogrammetric outputs are analyzed and evaluated by constructing the digital elevation model and orthophoto mosaic and comparing them, quantitatively and qualitatively, with the contrast-reduced mode.
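As an illustration of Equations 10 and 11, the check-point errors can be computed as follows; the function names and the RMSE summary are our additions, not part of the paper's method.

```python
import numpy as np

def checkpoint_errors(model_xyz, check_xyz):
    """Per-point horizontal (2D Euclidean, Equation 10) and vertical
    (Equation 11) errors between model points and surveyed check points.
    Both inputs are (n, 3) arrays of matching x, y, z coordinates."""
    m = np.asarray(model_xyz, dtype=np.float64)
    c = np.asarray(check_xyz, dtype=np.float64)
    e_pl = np.hypot(m[:, 0] - c[:, 0], m[:, 1] - c[:, 1])  # horizontal error
    e_al = np.abs(m[:, 2] - c[:, 2])                       # vertical error
    return e_pl, e_al

def rmse(errors):
    """Root-mean-square error, a common summary over all check points."""
    e = np.asarray(errors, dtype=np.float64)
    return float(np.sqrt(np.mean(e ** 2)))
```

For example, a model point at (3, 4, 10) checked against a surveyed point at (0, 0, 12) has a horizontal error of 5 units and a vertical error of 2 units.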

EXPERIMENTAL RESULTS AND DISCUSSION
To evaluate the effect of the proposed algorithm, images were taken by multi-rotor UAVs in various parts of Iran (Figure 2.). The images were captured using DJI's Phantom 4 Pro UAV (Figure 3.). As stated in Table 1, this UAV is operated by a remote controller and equipped with a non-metric camera. To begin the data analysis, a 3D model of low-contrast images had to be constructed. Thus, considering the high contrast of the images of the Qazvin and Karaj regions, a contrast-reduction method was applied to them; the Halvan data, with its low image contrast, was used directly. In the second stage, a 3D model was constructed after applying the proposed contrast-enhancement algorithm to the aforementioned images. Figure 4 illustrates a sample of the low-contrast, original, and enhanced images taken from a height of 90 m, together with their histograms.

Figure 4. Low-contrast, original, and enhanced images, with their histograms.
As illustrated in Figure 4, the proposed algorithm improves the image histogram and enhances the contrast of the images. Triangulation and the construction of the 3D model were performed using Agisoft Metashape v1.5.5, offered by Agisoft LLC. Additionally, the Python 3.10.1 programming language was used to implement the proposed algorithm and to reduce the contrast of the reference images.

Evaluation of results
The evaluation and interpretation of the obtained results are divided into two stages. The first section compares the proposed method for enhancing the contrast of images to current methods using the assessment criteria. The second section analyses the effect of enhancing the contrast of images on the accuracy of three-dimensional modelling compared to the low-contrast mode.

Evaluating the performance of a method for enhancing the contrast of images
To assess the performance of the image contrast-enhancement method in the first stage, the effect of the proposed method is compared to the original and reduced modes using a variety of evaluation criteria (Table 2). As Table 2 shows, applying the proposed contrast-enhancement method to the data set used resulted in a significant improvement in image quality compared to the reduced and original modes. The proposed method was therefore evaluated against several existing methods in the following step: 50 frames from the various data sets were chosen for this purpose, and the results are presented in Table 3.

Table 3. Comparison of the proposed method's performance to that of other commonly used contrast-enhancement methods.

Based on the evaluations contained in Table 3, the following conclusions are drawn:

▪ Entropy criterion: this criterion quantifies the information content of an image and indicates the degree of uncertainty or unpredictability in it; the more information present, the higher the output value and the greater the quality. The proposed method and AMCE demonstrated superior performance to the alternative methods.
▪ Standard deviation criterion: this criterion also quantifies image information; if its value is higher in the enhanced image than in the low-contrast image, the contrast-enhancement algorithm performed better. Starting from the low-contrast image's value of 14.1731, the proposed method, HE, and AMCE all enhanced the image's contrast.
▪ Peak signal-to-noise ratio criterion: this criterion checks the image's signal-to-noise ratio; the lower the image noise, the higher the value. The proposed method and CLAHE introduce the least distortion into the enhanced images.
▪ Absolute mean brightness error criterion: this criterion quantifies the change in average brightness of the processed image; a low value indicates that the average brightness is preserved. The proposed method and AMCE perform best.
▪ Linear blur index criterion (γ): this criterion evaluates contrast-enhanced images; a lower output value indicates better performance of the contrast-enhancement algorithm. The proposed method, AMCE, and HE exhibit the least blur.
▪ Colorfulness criterion: as noted previously, the CM and CEF criteria apply only to color (RGB) images, so they cannot be computed for the HE and CLAHE approaches, which operate on a single grayscale band. This metric quantifies the amount of color in an image; a larger output value indicates more color detail. The proposed method is the most effective.
▪ Colour Enhancement Factor coefficient: this criterion is a spectral measure of the change in color saturation; the higher the output value, the better. The proposed method is the most effective.
In general, no single one of the methods compared performs best on all evaluation criteria, but the proposed method performs well in the majority of them; it can therefore be concluded that the proposed method extracts more detail from UAV-based images while better preserving the image's brightness.

Evaluating the effect of enhancing image contrast on the accuracy of modeling
Targets were selected to assess the modelling accuracy, and their 3D coordinates were measured prior to the imaging operations in the research locations. Fifty percent of the points, primarily positioned in the project's corners, were designated as check points. Agisoft Metashape software was used to generate the sparse point cloud and perform the other steps associated with the construction of the 3D model, using images taken at 20 m, 40 m, 60 m, 80 m, and 90 m. To generate the sparse point cloud, the software's adjustable parameters, in this case the maximum numbers of tie points and key points, were set, and the search for key points was performed pixel by pixel. Figure 5 illustrates the sparse point clouds generated by the software in both the low-contrast and enhanced modes from heights of 80 and 90 m.
Figure 5. Point clouds produced from the Qazvin area at an altitude of 90 m and from the Halvan area at an altitude of 80 m: (A, C) images with low or reduced contrast; (B, D) images with enhanced contrast.

Comparing the density of the tie and key points extracted from the low-contrast and contrast-enhanced images shows that the point density in the enhanced images is greater than in the low-contrast images. Figure 6 illustrates the tie and key points the algorithms extracted from a contrast-enhanced image taken at an altitude of 90 m.
Figure 6. Tie points extracted from a 90-m-altitude image: (A) reduced contrast; (B) enhanced contrast.
When contrast-enhanced images are used, as seen in the red boxes above, the density of tie points is greater than with low-contrast images. A comparison based on the number of tie points and the reprojection error of the models produced in the two cases showed that, after contrast enhancement, the number of tie points in each model increased by about 6% to 10%, while the average reprojection error decreased by approximately 8% to 11%. As the graphs show, all models built from contrast-enhanced images (orange axis) exhibit an increase of around 10% in the number of tie points and a reduction of about 10% in reprojection error compared with the models built from lower-contrast images (blue axis). Control points and check points were also added to the generated point cloud in order to study the effect of increasing image contrast on modelling accuracy. The orthophoto mosaic generated from the enhanced images has a better texture than in the lower-contrast mode, as shown in Figure 9, which can improve the cartographer's accuracy when sketching numerous features in greater detail. Figure 10 demonstrates, on two samples of Halvan images, the effect of contrast enhancement on the quality and accuracy of the orthophoto mosaic produced for the feature drawings. As shown in Table 4, the number of tie points in all models produced from contrast-enhanced images is approximately 10% higher than in the low-contrast mode, and the reprojection error is reduced by around 10%. The generated digital elevation models also show a resolution improvement of approximately 2 cm/pixel. The horizontal and vertical errors at the check points, however, did not differ significantly between the two cases.

CONCLUSION
Due to the variety of applications for UAV photogrammetry, such as mapping and the construction of three-dimensional models, images must occasionally be acquired in low-light settings. Given the extent of UAV-assisted projects, these conditions may not apply to the entire region: as a result of shadow, one section of the object receives more light while another receives less. This disturbs the point cloud generation process and results in an inadequate texture for the overall model produced. The influence of contrast enhancement on images of dark and low-contrast areas has therefore been investigated in this article. Since many histogram-equalization techniques exist for increasing the grayscale range of an image, the conventional contrast-enhancement algorithms with the best performance on various images were used as baselines in this research, and multiple error criteria were used to demonstrate that the proposed algorithm provides significantly higher accuracy. By applying the proposed technique to low-contrast images from three different study locations and five different flight altitudes, photogrammetric products such as dense and sparse point clouds, digital elevation models, and orthophoto mosaics were generated in both cases.
The results indicated that the number of tie points extracted after applying the proposed contrast-enhancement technique to low-contrast images increased by approximately 10%, increasing the density of the point cloud. Contrast enhancement of the images also yields a relative gain of approximately 2 cm/px in the resolution of the digital elevation model. Additionally, the reprojection error was reduced by approximately 10%, although the calibration parameters and check-point error did not differ significantly between the low-contrast images and the images enhanced by the algorithm.

REFERENCES
Ballabeni, A., Apollonio, F.I., Gaiani, M., Remondino, F., 2015. Advances in image pre-processing to improve automated 3D reconstruction. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.