EELGRASS BED MAPPING WITH MULTISPECTRAL UAV IMAGERY IN ATLANTIC CANADA

Eelgrass (Zostera marina L.) is a marine angiosperm that grows throughout coastal regions of Atlantic Canada. This study assessed the capability of UAV multispectral imagery to map the presence of eelgrass beds within two estuaries in Atlantic Canada (Souris River and Richibucto River). The images were mosaicked with Agisoft Metashape and calibrated to reflectance. The corrected images were classified with a non-parametric supervised classifier (Random Forests). The input features of the classification were the UAV band reflectances and associated bathymetric ratios and vegetation indices. The resulting maps were compared with sonar data. The overall validation accuracy for eelgrass presence/absence was 91.30% with the Souris image and 86.92% with the Richibucto images. The limitations of the study are also discussed.


INTRODUCTION
Eelgrass (Zostera marina L.) is an angiosperm belonging to the seagrass family that grows in brackish and salt waters. Eelgrass beds provide vital ecological functions, including stabilizing sediment, providing fish habitat, influencing current dynamics, and contributing significant amounts of biomass to food webs (Heck et al., 1995). As elsewhere in the world, eelgrass has declined in Atlantic Canada (DFO, 2009). An accurate method to map eelgrass bed distribution is therefore essential for proper monitoring. Sonar or bathymetric lidar data can be used, but their acquisition is challenging and expensive (Kenny et al., 2003; Webster et al., 2015). An alternative is to use aerial photographs or optical satellite imagery. Satellite imagery provides extensive coverage and does not require data interpolation (Forsey et al., 2020). However, it can only be acquired under clear sky conditions and is costly when acquired by commercial satellites. Unmanned aerial vehicle (UAV) imagery is more flexible and cost-effective and has the additional advantage of a higher spatial resolution than airborne or satellite imagery (Ventura et al., 2018). So far, only a few studies have used UAV RGB images for mapping eelgrass beds (Konar, Iken, 2018; Duffy et al., 2018; Nahirnick et al., 2019a; 2019b; Aarts et al., 2020; Svane et al., 2021; Krause et al., 2021).
This study tests UAV multispectral imagery for mapping eelgrass bed distribution in two contrasting estuaries in Atlantic Canada, the Souris River and Richibucto River estuaries. It expands on Gallant et al. (2021), who tested UAV multispectral imagery for mapping eelgrass bed distribution in the Souris River estuary. As in Gallant et al. (2021), the images are classified with Random Forests (RF), a non-parametric supervised classifier (Waske, Braun, 2009) that was shown to outperform the maximum likelihood classifier (MLC) in eelgrass studies (Aarts et al., 2020). As in Clyne et al. (2021), and in contrast to Gallant et al. (2021), the classification considers the single band reflectance images together with associated vegetation indices and bathymetric ratios. As in Gallant et al. (2021), the resulting classified images are compared to sonar data acquired almost at the same time as the UAV imagery. In doing so, we address one of the objectives of this study, which is to assess whether sonar data are suitable for validating eelgrass bed distribution maps derived from UAV imagery.

Study area
This study used sonar and UAV data acquired over two river estuaries in Atlantic Canada. The first site is located just above the mouth of the Souris River, Prince Edward Island (Figure 2). The second is inside a closed bay at the mouth of the Richibucto River, New Brunswick (Figure 2). Both watersheds comprise forested, agricultural, and wetland areas. Both river estuaries have calm water ideal for eelgrass growth because they are separated from the bay by a causeway and a beach. The water turbidity allows eelgrass beds to grow to a maximum depth of approximately 2.5 m. Both estuaries have a seafloor made primarily of sand, with a deep navigation channel having faster water velocity. The sand type differs markedly between the two estuaries: in Souris, the sand is rich in ferric iron oxide, giving it a reddish colour, while in Richibucto, the sand is yellow. The Richibucto River estuary contains oyster cages that are detectable on the UAV imagery and were considered in the classification. Seaweed was also considered in the image classification for the Souris River estuary because it is present in that estuary.

Sonar data
For both estuaries, sonar tracks were collected with a BioSonics MX Echosounder (BioSonics, Inc., Seattle, WA, USA) by the Southern St. Lawrence Coalition on Sustainability in partnership with Fisheries and Oceans Canada (Figure 3). The sonar data were acquired on August 6th, 2019, in Souris and on September 1st and 2nd, 2020, in Richibucto. The sonar was attached to the side of a small boat, approximately 30 cm below the water surface.
The transducer was mounted with a pipe and clamps. During acquisition, the boat travelled at a maximum speed of 4 knots (7.5 km/h). The MX Echosounder collects data using a single beam at a frequency of 204.8 kHz, with an 8.5° conical collection angle. The pulse length for data collection was 0.4 ms, with a ping rate of 5 Hz. The device has a range resolution of 1.7 cm and a general vertical positional accuracy of 1.7 cm +/- 0.2% of depth. The GPS on the device offers a positional accuracy of < 3 m (95% typical) and a GPS update rate of 1 s. It produces "csv" files that were converted to shapefiles within ArcMap to be used to validate the classified image (see Section 2.6). The sonar data were validated against in-situ GoPro pictures only for the Souris River estuary.

UAV data
The band characteristics are given in Table 1 for the MicaSense RedEdge narrowband camera and Table 2 for the MicaSense RedEdge MX Dual Camera Imaging System. In both cases, the images were taken when the eelgrass was fully developed. The camera was calibrated before image acquisition using a Spectralon panel. The camera and UAV were connected to mission planner software to control the flight altitude, given in Table 2 for both estuaries. There was a 70% overlap between adjacent images. Each UAV image has a spatial resolution close to 7 cm. The environmental conditions for each image acquisition are given in Table 3.
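As a quick check of the sampling geometry implied by the echosounder settings above (4-knot maximum speed, 5 Hz ping rate, 8.5° cone), the along-track ping spacing and the beam footprint on the seafloor can be computed directly. This small sketch is illustrative only:

```python
import math

SPEED_MS = 7.5 * 1000 / 3600      # 4 knots ~ 7.5 km/h, in m/s
PING_RATE_HZ = 5.0
CONE_DEG = 8.5

def ping_spacing_m(speed_ms=SPEED_MS, ping_rate=PING_RATE_HZ):
    """Distance travelled between consecutive pings (m)."""
    return speed_ms / ping_rate

def footprint_diameter_m(depth_m, cone_deg=CONE_DEG):
    """Diameter of the ensonified circle on the seafloor at a given depth (m)."""
    return 2.0 * depth_m * math.tan(math.radians(cone_deg / 2.0))

print(round(ping_spacing_m(), 2))            # → 0.42 m between pings
print(round(footprint_diameter_m(2.5), 2))   # → 0.37 m footprint at 2.5 m depth
```

At the maximum eelgrass depth of about 2.5 m, the sonar therefore samples the bottom with sub-metre spacing and a footprint several times larger than a 7 cm UAV pixel.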

Pre-Classification Image Processing
Figure 4 presents a flowchart describing the method used to process the UAV images. The individual UAV images corresponding to the same band and site were first mosaicked with Agisoft Metashape (Agisoft LLC, St. Petersburg, Russia). The resulting mosaics were calibrated to reflectance for each band using the Spectralon reflectance panel images taken on the acquisition day. Following Clyne et al. (2021), additional layers were added to the band reflectance images in the classification to bolster the potential separability between the classes. They included the vegetation indices and bathymetric ratios listed in Table 4 for the Souris image. For the Richibucto image, we computed the vegetation indices and bathymetric ratios listed in Table 5. The bathymetric ratios are based on the band-ratio algorithm for estimating satellite-derived bathymetry of Stumpf et al. (2003).

Figure 4. Flowchart presenting the methodology used for producing and validating all classified images.
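The derived layers described above can be sketched as follows, assuming the per-band reflectance mosaics are held as NumPy arrays in [0, 1]. The band names and the NDVI choice here are our own illustrative picks; the paper's full index lists are in its Tables 4 and 5:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (illustrative index choice)."""
    return (nir - red) / (nir + red + 1e-10)

def stumpf_ratio(band_i, band_j, n=1000.0):
    """Log-ratio of Stumpf et al. (2003), proportional to relative depth
    over a homogeneous bottom; the constant n keeps both logarithms positive."""
    return np.log(n * band_i) / np.log(n * band_j)

# Tiny synthetic reflectance mosaics (2 x 2 pixels)
blue = np.array([[0.05, 0.04], [0.06, 0.03]])
green = np.array([[0.07, 0.06], [0.08, 0.05]])
red = np.array([[0.03, 0.02], [0.04, 0.02]])
nir = np.array([[0.20, 0.15], [0.25, 0.10]])

# Stack original bands and derived layers into one classification input cube
layers = np.dstack([blue, green, red, nir, ndvi(nir, red), stumpf_ratio(blue, green)])
print(layers.shape)  # → (2, 2, 6): four reflectance bands plus two derived layers
```

The stacked cube is then fed to the classifier as the per-pixel feature vector.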

Image classification
All the imagery was classified with RF, a supervised non-parametric classifier that requires delineating training areas for each class over each image. The training areas were delineated by photointerpretation over various RGB composites made with the image. Table 6 lists the number of training polygons per class for each image. For the Souris image, 572 polygons were delineated for the five following classes: Eelgrass, Seaweed in Shallow Water, Seaweed in Deep Water, Sand Floor, and Deep Water (Table 6). For the Richibucto image, the number of polygons varies as a function of the site, and the four following classes were considered: Eelgrass, Sand Floor, Oyster Cages, and Deep Water (Table 6). For both images, each training polygon has a size of 5 by 5 pixels. The training data were only used to train the classifier and not to validate the classification, given that the classified images were validated against sonar data.
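The classification workflow above can be sketched as follows. The study itself used the randomForest package in R; this sketch uses scikit-learn's RandomForestClassifier on synthetic stand-in data, with class names following the Souris image and an illustrative feature count:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the per-pixel feature stack (band reflectances plus
# derived indices and ratios); data and feature count are illustrative only.
rng = np.random.default_rng(42)
classes = ["Eelgrass", "Shallow Seaweed", "Deep Seaweed", "Sand Floor", "Deep Water"]
n_features = 10          # e.g. 5 band reflectances + 5 derived layers
pixels_per_class = 500   # each 5x5-pixel training polygon yields 25 samples

X_train = np.vstack([rng.normal(loc=i, scale=0.5, size=(pixels_per_class, n_features))
                     for i in range(len(classes))])
y_train = np.repeat(classes, pixels_per_class)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

# Variable importance ranking, analogous to the "Mean Decrease Accuracy" plot
ranking = np.argsort(rf.feature_importances_)[::-1]
print(rf.score(X_train, y_train))  # training accuracy (near 1.0 on separable data)
```

In practice the fitted model is applied to every pixel of the feature cube to produce the classified map.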
The training areas were used to compute class spectral signatures to calculate the J-M distance between class pairs (Richards, Jia, 2006). The closer the J-M distance is to 2, the better the spectral separability between the two classes. The training areas were then used in RF, which can handle both Gaussian and non-Gaussian data because it does not consider the data distribution parameters (Breiman, 2001). The algorithm used in this study was the all-polygon version developed with the randomForest package (Liaw, Wiener, 2018) in R x64, version 4.1.0. The all-polygon version has the advantage of taking account of the actual class size and was already shown to outperform the sub-polygon version (Byatt et al., 2019). RF has the additional advantage of producing a "Mean Decrease Accuracy" variable importance plot that ranks the degree of usefulness of the input features in the classification (Byatt et al., 2019; Waske, Braun, 2009; Liaw, Wiener, 2018; Gislason et al., 2006; Strobl et al., 2008).
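Under a Gaussian assumption, the J-M distance between two class signatures can be computed from their means and covariance matrices via the Bhattacharyya distance (Richards, Jia, 2006). A minimal sketch with synthetic two-band signatures:

```python
import numpy as np

def jm_distance(m1, cov1, m2, cov2):
    """Jeffries-Matusita distance between two Gaussian class signatures."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance, then map it to the [0, 2] J-M range
    b = diff @ np.linalg.solve(cov, diff) / 8.0 + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-b))

# Two well-separated two-band class signatures (means in reflectance units)
jm = jm_distance([0.05, 0.20], np.eye(2) * 1e-4, [0.15, 0.05], np.eye(2) * 1e-4)
print(round(jm, 2))  # → 2.0, i.e. excellent separability
```

Identical signatures give a J-M distance of 0, and well-separated ones saturate near 2, matching the interpretation used in this study.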

Accuracy assessment
For each classification, we first computed the average and overall classification accuracies, the Kappa coefficient, and the individual class User's and Producer's accuracies derived from a confusion matrix (expressed in pixel numbers) that compares the training areas with the equivalent class in the imagery, following Congalton (1991). However, the classification accuracy is based on the training areas and does not assess the mapping accuracy.
The resulting classified image therefore needs to be compared with an independent dataset. For such an assessment, we randomly selected 207 sonar points for the Souris image (Figure 7a) and 986 points for the Richibucto image (Figure 7b). We considered only two classes for the validation ("Eelgrass Present" and "Eelgrass Absent") since the study's goal was to map the eelgrass bed extent. The sonar points were likewise classified into these two classes. For the Souris classified image, both the "Eelgrass" and "Eelgrass+Seaweed" classes were categorized as "Eelgrass Present". All the other classes were classified as "Eelgrass Absent". At each sonar point, the class was extracted from the classified image using the "Extract Values to Points" tool of ArcMap (ESRI, 2020). A confusion matrix and associated accuracies were then computed in R (R Development Core Team, 2016).
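The accuracy measures above can be sketched as follows. The 2 x 2 confusion matrix below is a hypothetical one chosen to be consistent with the Souris presence/absence accuracies this study reports (207 points, overall accuracy 91.3%), not the paper's actual Table 12:

```python
import numpy as np

def accuracies(cm):
    """Overall accuracy, Kappa, and per-class User's/Producer's accuracies
    from a confusion matrix (rows = classified, columns = reference),
    following Congalton (1991)."""
    cm = np.asarray(cm, float)
    total = cm.sum()
    po = np.trace(cm) / total                       # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    users = np.diag(cm) / cm.sum(axis=1)            # User's accuracy (commission)
    producers = np.diag(cm) / cm.sum(axis=0)        # Producer's accuracy (omission)
    return po, kappa, users, producers

cm = [[15, 10],    # classified "Eelgrass Present"
      [8, 174]]    # classified "Eelgrass Absent"
oa, kappa, ua, pa = accuracies(cm)
print(round(oa, 3), round(kappa, 2))  # → 0.913 0.58
```

The same arithmetic applies to the multi-class training-area matrices; only the matrix size changes.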

Class Spectral Separability
The J-M distances computed with all the band reflectances between the class pairs are presented in Table 7 for the Souris image and Table 8 for the Richibucto image. For the Souris image, the average J-M distance was 1.96, indicating an excellent spectral separability between the classes. The lowest J-M distance (1.86) occurred between the "Seaweed in Deep Water" and "Deep Water" classes, probably because both classes are related to deep water. The highest J-M distance (1.99) occurred between the "Eelgrass" and "Deep Water" or "Sand Floor" classes. For the Richibucto images, the average J-M distances were higher than 1.93, indicating an excellent spectral separability between the classes. For Sites 4 and 5, the lowest J-M distance occurred between the "Eelgrass" and "Deep Water" classes, while for Site 6, it occurred between the "Eelgrass" and "Sand Floor" classes. The highest J-M distance occurred between the "Oyster Cages" and "Deep Water" classes for Sites 5 and 6 but between the "Sand Floor" and "Eelgrass" or "Deep Water" classes for Site 4.

Classification
When applying the RF classifier to the combination of the original band reflectances, associated vegetation indices, and bathymetric ratios, we achieved an overall classification accuracy (OA) of 99.0% and a Kappa coefficient of 0.99 with the Souris image, indicating an excellent classification accuracy (Table 9). The classification accuracy is even better with the Richibucto image (Table 10), with an OA of 99.5% and a Kappa coefficient of 0.99. For the Souris image (Table 9), the lowest User's accuracy (UA) occurred for the "Eelgrass" class (97.9%), while the lowest Producer's accuracy (PA) occurred for the "Deep Seaweed" class (97.1%). For the Richibucto images (Table 10), both the lowest UA (98.9%) and the lowest PA (97.2%) occurred for the "Oyster Cages" class. The resulting classified images are presented in Figure 5 for Souris and Figure 6 for Richibucto.

Validation
All the images were compared to sonar data categorized into two classes ("Eelgrass Present" and "Eelgrass Absent"). For the Souris image, the classification with the original UAV band reflectances, associated vegetation indices, and bathymetric ratios shows an overall validation accuracy of 91.3% and a Kappa coefficient of 0.57 (Table 12). The highest PA (94.6%) and UA (95.6%) occurred for the "Eelgrass Absent" class. The "Eelgrass Present" class had a PA of 65.2% and a UA of 60.0%. The classified image correlates well with the sonar track (Figure 7). For the Richibucto images, we achieved an overall validation accuracy for the presence/absence of eelgrass of 86.9% and a Kappa coefficient of 0.73 (Table 13). The highest PA (88.1%) was for the "Eelgrass Present" class, while the highest UA (91.3%) was for the "Eelgrass Absent" class. The classified images correlate well with the sonar tracks (Figure 8).
Table 12. Confusion matrix (in GPS sonar points) and associated accuracies when the UAV classified image of Souris is compared to the sonar data (*).

DISCUSSION
This study has shown the potential of applying the RF classifier to UAV multispectral images to produce eelgrass bed distribution maps in the Souris River (Prince Edward Island) and Richibucto River (New Brunswick) estuaries. Following Clyne et al. (2021), several vegetation indices and bathymetric ratios were added to the classification of both images. The training areas for both classifications were created by air photo interpretation, and the subsequent image classification is thus highly dependent on this step. Using only the band reflectance images, we achieved a mean J-M distance of 1.96 for the Souris image and higher than 1.93 for the Richibucto images, indicating a good class spectral separability for the two estuaries. These mean J-M distances were comparable to the value of 1.98 of Forsey et al. (2020), who used a Worldview-2 image to map eelgrass beds in New Brunswick. They were higher than the value of 1.84 obtained by Clyne et al. (2021), who used a Landsat-8 OLI image to map eelgrass beds in James Bay. For the Souris image (Table 7), the lowest J-M distance (1.86) occurred between the "Deep seaweed" and "Deep water" classes, probably because both classes are related to deeper water. The highest J-M distance (1.99) occurred between the "Eelgrass" and "Deep water" or "Sand floor" classes. The mean J-M distances were lower for the Richibucto images (Table 8). The lowest values occurred between the "Eelgrass" and "Deep water" classes in Sites 4 and 5 but between the "Eelgrass" and "Sand floor" classes in Site 6. The highest J-M distance occurred between the "Oyster cages" and "Deep water" classes in Sites 5 and 6 but between the "Sand floor" and the "Eelgrass" or "Deep water" classes in Site 4.
We achieved an overall classification accuracy equal to or higher than 99.0% with both images (Tables 9 and 10). These accuracies are slightly higher than those obtained by Gallant et al. (2021) with the Souris UAV image. The confusion matrix showed that the largest confusion was between the "Eelgrass" and "Shallow seaweed" classes for the Souris image (Table 9), as in Gallant et al. (2021). For the Richibucto images (Table 10), the largest confusion was between the "Eelgrass" and "Deep water" classes in Sites 4 and 5 and between the "Eelgrass" and "Sand floor" classes in Site 6.
The resulting maps were compared with sonar data. The overall validation accuracy for the eelgrass presence/absence obtained with the independent sonar dataset was 91.3% with the Souris image (Table 12) and 86.9% with the Richibucto images (Table 13). These accuracies were comparable to the 90.8% of Gallant et al. (2021), who used a UAV RGB image on the same area. They also agree with other studies which applied a Support Vector Machine classifier to UAV RGB images segmented with an object-based image analysis procedure (Nahirnick et al., 2019a; 2019b). For the Souris image, the most important band reflectance was the red reflectance, as also found by Gallant et al. (2021) with the UAV RGB image. We explain this importance by the reddish colour of the sand floor related to the high content of ferric iron oxide in the surface material of this area. For the Richibucto images, the most important reflectances across all the sites are the Green2 and Green1 reflectances. Given that the Richibucto image was acquired at low tide, the importance of the green bands is probably linked to the presence of emerged eelgrass beds.
For the Souris image, the most important vegetation index is NNIR. For the Richibucto images, NG-2 is very important across all sites, particularly Sites 4 and 6. This result can be explained by the fact that some of the eelgrass beds were emerged at low tide in Richibucto or that the water is shallow enough to detect eelgrass, as in Souris.
Because we did not explicitly apply a water column correction to the reflectance, such as in Leblanc et al. (2020), the addition of bathymetric ratios allows the classification to account for the influence of the water column to some extent. Indeed, the bathymetric ratios play a more significant role in the classification than several vegetation indices. The Blue/Green ratios are among the most important variables for both the Souris and the Richibucto image classifications.

CONCLUSIONS
This study shows the potential of applying the RF classifier to UAV multispectral images for mapping eelgrass beds. Following Clyne et al. (2021), some vegetation indices and bathymetric ratios were added to the image classification. We achieved an overall image classification accuracy of 99.0% or more for the study areas. The confusion matrix showed that the largest confusion is between the "Eelgrass" and the "Shallow seaweed" classes for the Souris image. For the Richibucto images, the largest confusion is between the "Eelgrass" and "Deep water" classes for Sites 4 and 5 and between the "Eelgrass" and "Sand floor" classes for Site 6. All the classified images produced in this study were cross-validated with sonar data. The overall validation accuracy for the presence/absence of eelgrass obtained with the independent sonar dataset was 91.3% with the Souris image and 86.9% with the Richibucto images. The accuracies for both estuaries were comparable to those of previous studies using UAV RGB imagery.

Our study tested UAV imagery for mapping the distribution of eelgrass beds in two contrasting estuaries in Atlantic Canada. Further work is needed to test this methodology in other estuaries of Atlantic Canada. While the results are promising, there is still some confusion between eelgrass beds and seaweed, and further investigation is needed to reduce it. The resulting maps were only presence/absence eelgrass maps, and additional work is necessary to map eelgrass bed coverage or biomass, such as in Konar et al. (2018) and Svane et al. (2021). Also, the study took place in areas with only one seagrass species, and further work is needed to test the method in areas with multiple seagrass species to produce species maps, such as in Traganos and Reinartz (2018) and Kovacs et al. (2018). In this research, we only used UAV images acquired on the same day; further work is needed to test whether the use of multi-temporal UAV imagery will produce better results.
Given the small pixel size of the UAV images and the high number of input features in the classification, applying the method to a large number of estuaries could lead to a high volume of data. The small pixel size of the UAV images is also suitable for applying an OBIA before classification, such as in Nahirnick et al. (2019a; 2019b), although Duffy et al. (2018) showed that an unsupervised classification performs better than OBIA methods. Also, in contrast to Leblanc et al. (2020), and like the other UAV-based eelgrass studies (Duffy et al., 2018; Konar et al., 2018; Nahirnick et al., 2019a; 2019b; Krause et al., 2021; Svane et al., 2021), no water column correction was performed on the images. Additional work is needed to test whether a water column correction method such as the one of Lyzenga (1981) will improve the classification. Finally, our UAV images were acquired under clear sky conditions, and there is also a need to test whether images acquired under cloudy skies will be suitable.
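The Lyzenga (1981) water column correction mentioned above can be sketched as follows. It combines two log-transformed, deep-water-corrected bands into a depth-invariant bottom index, with the attenuation coefficient ratio estimated from pixels over a uniform bottom at varying depth; all arrays below are synthetic and illustrative:

```python
import numpy as np

def attenuation_ratio(x_i, x_j):
    """k_i/k_j estimated from uniform-bottom pixels (Lyzenga, 1981)."""
    a = (np.var(x_i, ddof=1) - np.var(x_j, ddof=1)) / (2.0 * np.cov(x_i, x_j)[0, 1])
    return a + np.sqrt(a * a + 1.0)

def depth_invariant_index(x_i, x_j, k_ratio):
    """Depth-invariant bottom index from two log-transformed bands."""
    return x_i - k_ratio * x_j

# Synthetic uniform-bottom pixels: band i attenuates twice as fast as band j
z = np.linspace(0.5, 2.5, 50)          # depths (m)
x_i = -2.0 * z + 1.0                   # stands in for ln(L_i - L_deep,i)
x_j = -1.0 * z + 0.5                   # stands in for ln(L_j - L_deep,j)
k = attenuation_ratio(x_i, x_j)
index = depth_invariant_index(x_i, x_j, k)
print(round(k, 2))  # → 2.0, recovering the 2:1 attenuation ratio
```

Because the depth term cancels, the resulting index is essentially constant over the uniform bottom, which is exactly what makes it useful as a classification input in optically shallow water.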