INTEGRATION OF SEMANTIC 3D CITY MODELS AND 3D MESH MODELS FOR ACCURACY IMPROVEMENTS OF SOLAR POTENTIAL ANALYSES
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-4/W10, 2018. 13th 3D GeoInfo Conference, 1-2 October 2018, Delft, The Netherlands

High-resolution 3D mesh models are an inexpensive and increasingly available data source for 3D models of cities and landscapes of high visual quality and rich geometric detail. However, because of their simple data structure, their analytic capabilities are limited. Semantic 3D city models contain rich thematic information and are well suited for analytics due to their deeply structured semantic data model. In this work, an approach for the integration of semantic 3D city models with 3D mesh models is presented. The method is based on geometric distance measures between mesh triangles and semantic surfaces and a region growing approach using plane fitting. The resulting semantic segmentation of mesh triangles is stored in a CityGML data set to enrich the semantic model with an additional detailed geometric representation of its surfaces and a broad range of unrepresented features like technical building installations, balconies, dormers, chimneys, and vegetation. The potential of the approach is demonstrated on the example of a solar potential analysis, whose estimation quality is significantly improved by the mesh integration. The impact of the method is quantified in a case study using open data from the city of Helsinki.


INTRODUCTION
Today, 3D city models are available in different representations. Both semantic 3D city models and 3D mesh models are established tools for digitally describing the physical environment. Their characteristics, usage scenarios, and acquisition methods, however, are different.

Semantic 3D city models
Semantic 3D city models on the one hand describe the spatial, visual, and thematic aspects of the most common objects of cities and landscapes by decomposing and classifying them according to a semantic data model. The real-world physical objects are represented in an ontological structure by thematic classes with attributes, including their aggregations and interrelations. The international standard CityGML is an open data model and encoding specification for representing and exchanging semantic 3D city models, adopted by the Open Geospatial Consortium (OGC) in 2012 (Kolbe, 2009, Open Geospatial Consortium, 2012). According to the extensive review of (Biljecki et al., 2015), there are at least 29 different use cases and more than 100 application scenarios known for semantic 3D city models, ranging from pure visualization tasks to complex analytic systems.

3D mesh models
3D mesh models on the other hand contain purely geometric and appearance information on the objects they describe. The physical world is mapped by a mesh structure of polygons (usually triangles) with texture images. Individual objects in the model can easily be recognized by the human eye, but cannot be distinguished by computer systems. Depending on their spatial and texture image resolution, 3D mesh models offer up to photo-realistic visual quality. Hence, an important use case for 3D mesh models is the visualization of the cityscape. High-resolution 3D mesh models are currently being used in Google Maps on a large scale and in several regional projects, like the Helsinki Reality Mesh model (City of Helsinki, 2018). OBJ (Object file) is a ubiquitous 3D format that has achieved wide support in 3D modelling and visualization software since its development in the 1980s by Wavefront Technologies. In fact, OBJ is one of the most popular 3D formats and is commonly used for storing and exchanging 3D mesh models. The OBJ standard defines a geometry definition file format designed for the requirements of 3D modelling and computer graphics. Complex geometries like Bézier, B-spline, Cardinal, and Taylor surfaces are described in the standard, but are rarely supported in practice. Most software products and OBJ datasets available only support triangles or polygons. As (Biljecki and Arroyo Ohori, 2015) presented in their work, the conversion between CityGML and OBJ is not difficult, but generally involves loss of information, as several modelling concepts of CityGML are not supported by OBJ. To mitigate this issue and preserve the semantic information from CityGML in an OBJ file, a concept using OBJ materials is introduced in their work.

Comparing semantic 3D city model and 3D mesh models
As mentioned before, 3D mesh models and semantic 3D city models have different modelling characteristics and usage scenarios. Because of their aforementioned simple data structure, the analytic capabilities of 3D mesh models are limited. The model elements have no stable unique identifier like CityGML's GML-ID and cannot carry attributes, which are an essential model element for analytic tasks. As shown in Figure 1, their visual quality and geometric degree of detail, however, are superior to semantic 3D city models. The creation of semantic 3D city models is still difficult and requires a considerable amount of manual work (Kada and McKinley, 2009, Haala and Kada, 2010, McClune et al., 2016). The generation of LoD2 CityGML building models, however, now works almost fully automatically if building footprints are available. In contrast, the production of 3D mesh models works almost completely automatically. Extensive models can be derived inexpensively as a side product of regular acquisition campaigns of aerial images of cities applying photogrammetric methods (Hirschmüller, 2008). Due to the different characteristics and use cases of both model types, cities have started to use both types of models simultaneously. For instance, the City of Helsinki employs both models and provides the data sets as Open Data. An example scene containing a snapshot from both models is shown in Figure 1. Both models can be viewed online using a WebGL-based 3D web client.
While the so-called "Reality Mesh Model" offers better visual quality and rich geometric details like vegetation and building installations, it allows no user interaction and provides no additional information on the model elements. It can be utilized for discovering the city, design projects, and planning performance stages for city events. The semantic "City Information Model" can be viewed with or without textures and allows the interactive selection of individual buildings to display various thematic attributes. Based on the model, the "Helsinki Energy and Climate Atlas" and the "Solar Energy Potential Model" have been created, which can be explored in separate web clients (City of Helsinki, 2018). However, both model types stand separately next to each other and are not linked by any means.

Idea of this work
The key idea presented in this paper is to integrate both representations in order to a) enrich a CityGML LoD2 model with data from the 3D mesh model to improve the geometric resolution and include model elements like vegetation and building parts on roofs and facades, such as chimneys, dormers, or balconies, that are not explicitly represented in the CityGML data set, and b) map semantic information from the CityGML model onto the polygons of the mesh model to create a semantic 3D mesh model, allowing, for instance, to highlight or interact with specific building parts in the mesh. We have developed a method for matching parts of the 3D mesh to semantically classified building surfaces of a CityGML LoD2 model. The method is based on distance measures of 3D triangles to 3D polygons, fitting planes, and region growing.
An important use case for 3D city models is the estimation of solar irradiation on buildings, which is required for the planning of photovoltaic (PV) and solar thermal (ST) building installations and can be utilized for building energy demand estimation and the dimensioning of cooling systems. The added value of the integration of 3D mesh and semantic 3D city models is demonstrated on the example use case of a solar potential analysis for building roofs and facades of CityGML 3D city models (Willenborg et al., 2018). When carried out solely on a semantic 3D building model, significant impact factors for shadowing like vegetation and building parts such as balconies, dormers, and technical building installations are neglected in the analysis. To overcome this issue and facilitate a more realistic estimation of solar irradiation, we present a method to enrich the semantic 3D city model with such unrepresented model elements. We show how the integration of semantic 3D city models with 3D mesh models can significantly improve the accuracy of solar potential analyses. The influence of the method is demonstrated and quantified using an example area from the Open Data of Helsinki.
Figure 1. Snapshot of a residential area in Helsinki. The geometric structure (triangle mesh) of the 3D mesh model can be observed top right, while the textured mesh is displayed top left. The bottom image shows the same scene in the semantic 3D city model in CityGML LoD2 with surfaces colored according to their thematic classes (wall/roof).

RELATED WORK
2.1 Integration of 3D mesh and semantic 3D city models
At the time of this literature review, only one other similar approach for the integration of semantic 3D city models and 3D mesh models was found. In a recent Master's thesis at TU Delft, the bidirectional enrichment of Multi View Stereo Mesh models (MVSM) and semantic 3D city models was explored. The introduced method relies on distance measures between faces of both models and heuristic rules to perform a semantic segmentation of the mesh triangles into roof, wall, road, terrain, or uncertain. The use case, however, is different from the approach presented in this paper. The method is mainly used to transfer textures of the mesh model to the semantic model (Tryfona, 2017). In this work, we present a different method and put the focus on the integration of unrepresented model elements from the mesh model into the semantic model.

Solar potential analysis
The estimation of solar potential in the urban environment using GIS tools and standards coupled with numerical radiation algorithms has been an active topic of research for several years now. As can be seen from the extensive review on solar potential analysis tools of (Freitas et al., 2015), a large variety of different approaches exists. They use different input data, GIS and radiation models, and interfaces, and provide their results in various representations. Input data for topography vary in their dimension (2D, 2.5D, 3D) and data source, like LiDAR, photogrammetry, or satellite imagery. The meteorological data used in the models originates from different sources as well, e.g. ground or satellite measurements. Depending on the radiation model used, solar irradiance is computed as direct beam, diffuse, or reflected radiation, or a combination of them. The result figures are represented in different dimensions and describe different levels of potential (physical, geographical, technical, economic, social). In their study comparing more than 20 solar analysis tools, (Freitas et al., 2015) found that, besides the quality of topographic and meteorological input data, the accuracy of many tools is limited because they consider flat surfaces and leave out relevant structures like chimneys or air-conditioning units. Vegetation is frequently completely excluded or simplified as a solid shadow caster.
The solar irradiation analysis tool presented by (Willenborg et al., 2018) and used in this work suffered from both of these restrictions before the mesh integration proposed in this work. However, the mesh model is integrated as a solid shadow caster. The error introduced here depends on several factors, like the type of vegetation and its specific transmissivity at certain levels of foliation, and is therefore hard to quantify. According to (Konarska et al., 2014), the average transmissivity for direct solar radiation for foliated urban trees ranges from 1.3 to 5.3% and for defoliated trees from 40.2 to 51.9%. Hence, further research is still needed regarding the representation of trees and light passing through their canopies in 3D models.

INTEGRATION OF SEMANTIC 3D CITY MODELS AND 3D MESH MODELS
In this section, the proposed method for the integration of 3D mesh models and semantic 3D city models is explained in detail.
The following methods have mainly been developed in the course of the Master's thesis by (Pültz, 2018).

General methodology and workflow
The developed method generally consists of two pre-processing steps, the main matching and segmentation process, and finally two post-processing steps. The process has been implemented using a combination of FME and Python libraries, which are called from the workflow for more complex tasks. The input data are CityGML LoD2 building models and an OBJ mesh model of a corresponding area, which have been clipped manually in advance from both data sets. The output of the process is a CityGML dataset containing the LoD2 buildings of the input data and previously unrepresented model elements from the mesh model as GenericCityObjects.
For pre-processing, both models need to be transferred into a common coordinate reference system and, if necessary, the x, y, z offset as well as rotational differences between both models need to be removed. The perfect overlap of both models is a vital precondition for all subsequent steps. Second, topology information for the triangle mesh is generated. In this work, a simple geometric approach testing for overlapping triangle edges was implemented. Each triangle is assigned a unique identifier (UUID) and a list of neighboring triangle UUIDs. However, both the topology model and its generation are currently rather inefficient. In the future, a more powerful topological data structure, like the winged-edge mesh structure, is planned to be included (Baumgart, 1975).
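The edge-overlap topology generation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, rounding vertex coordinates so that numerically identical edges match is an assumption about how "overlapping triangle edges" are detected.

```python
# Sketch: neighbouring triangles are those that share an edge.
import uuid
from collections import defaultdict

def build_topology(triangles, ndigits=6):
    """triangles: list of 3-tuples of (x, y, z) vertex coordinates.
    Returns {triangle_uuid: set of neighbouring triangle_uuids}."""
    ids = [uuid.uuid4() for _ in triangles]
    edge_map = defaultdict(list)  # canonical (order-free) edge -> triangle ids
    for tid, tri in zip(ids, triangles):
        verts = [tuple(round(c, ndigits) for c in v) for v in tri]
        for i in range(3):
            edge = frozenset((verts[i], verts[(i + 1) % 3]))
            edge_map[edge].append(tid)
    neighbours = {tid: set() for tid in ids}
    for tids in edge_map.values():
        for a in tids:
            for b in tids:
                if a != b:
                    neighbours[a].add(b)
    return neighbours
```

As noted above, this brute-force construction is inefficient; a winged-edge structure would make neighbour queries cheaper.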

Pre-selection of candidate triangles
Before applying the subsequent matching and segmentation steps, it is advisable for performance reasons to limit the mesh triangles to a subset of possibly matching candidate triangles for a semantic surface. To obtain this so-called region of interest (ROI) of the mesh model, the semantic surfaces are buffered and extruded to form a volume that limits the space in which candidate triangles are likely to be located, as depicted in Figure 2. As parts of the mesh may be located behind the semantic surface or inside a building, the extrusion needs to be carried out in both the positive and negative direction of the semantic surface's normal. Both the size of the buffer and the extrusion lengths were set as global constants for the whole process, which can lead to over- or under-selection in certain situations, e.g. for very slim building parts. This could be improved in the future by dynamically adapting the parameters for each semantic surface based on heuristics. After this pre-selection step, each ROI consists of one semantic surface and a subset of mesh triangles, which are forwarded to the subsequent matching and segmentation steps.
Figure 2. Example of extruded volumes for ROI selection for a single building.
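The ROI pre-selection can be sketched as follows. This is a deliberate simplification, not the authors' implementation: the buffered, extruded volume is approximated by the surface's axis-aligned bounding box grown by the buffer and extrusion parameters, combined with a distance test along the surface normal in both directions; the parameter defaults are illustrative global constants.

```python
import numpy as np

def roi_candidates(surface_pts, surface_normal, triangles,
                   buffer=1.0, extrusion=2.0):
    """Select candidate triangles whose centroid lies inside a
    simplified extruded volume around a planar semantic surface."""
    n = np.asarray(surface_normal, float)
    n /= np.linalg.norm(n)
    pts = np.asarray(surface_pts, float)
    origin = pts.mean(axis=0)
    lo = pts.min(axis=0) - (buffer + extrusion)
    hi = pts.max(axis=0) + (buffer + extrusion)
    selected = []
    for tri in triangles:
        c = np.mean(np.asarray(tri, float), axis=0)  # triangle centroid
        # extrude in both positive and negative normal direction
        if abs(np.dot(c - origin, n)) > extrusion:
            continue
        if np.all(c >= lo) and np.all(c <= hi):
            selected.append(tri)
    return selected
```

A true polygon buffer (rather than a grown bounding box) would avoid over-selection around non-rectangular surfaces.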

Distance measures between triangles and surfaces
The general approach to match single triangles of a ROI to their corresponding semantic surface is to compute a distance measure between them. If this distance measure falls within a certain threshold, the triangle is registered as a match of the semantic surface. The general geometric situation for a single mesh triangle can be observed in Figure 4. For instance, roof overhangs or building structures which appear in the mesh but not in the semantic model are partially matched, as they are in range of the distance threshold to the surface plane and have a similar orientation as well (see blue circle). The same applies to some regions where vegetation touches the buildings. Conversely, some triangles that should be matched are neglected because their orientation differs too much. The quality of the matching this distance measure produces strongly depends on the selected thresholds for the distance d and the orientation difference θ and requires further assessment.
As this distance measure is based on simple vector calculus, its computational complexity is low. Hence, in a future implementation it could be used as an alternative or additional pre-selection step for the identification of candidate triangles.
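The point-to-plane test with a distance threshold d and an orientation threshold θ can be sketched as follows. Whether all three vertices or only parts of the triangle are tested against d is an assumption here, as are the default threshold values (d = 1 m is the threshold shown in Figure 3; θ = 30° mirrors its color scale).

```python
import numpy as np

def matches_surface(tri, plane_point, plane_normal, d_max=1.0, theta_max=30.0):
    """A triangle matches a semantic surface if all its vertices lie
    within d_max of the surface plane and its normal deviates by less
    than theta_max degrees from the surface normal."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    v = np.asarray(tri, float)
    # perpendicular (point-to-plane) distances of the three vertices
    dists = np.abs((v - np.asarray(plane_point, float)) @ n)
    tn = np.cross(v[1] - v[0], v[2] - v[0])
    tn /= np.linalg.norm(tn)
    # orientation difference theta; abs() makes the test insensitive
    # to the triangle's winding order
    theta = np.degrees(np.arccos(np.clip(abs(tn @ n), 0.0, 1.0)))
    return bool(np.all(dists <= d_max) and theta <= theta_max)
```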
The second distance measure introduced in (Pültz, 2018) is based on the volume between a mesh triangle and its projection onto the surface plane. An illustration of the geometric situation is given in Figure 5. The volume is defined as the sum of the volume of the prism clamped between Plow and Pproj and the pyramid formed by Porig and Plow. As this pyramid does not have a square base surface, its computation is complex. As a simplification, half of the volume of the prism between Phigh and Plow is used. Since the volume of the pyramid only contributes a minor fraction to the total volume, this approximation seems reasonable. If the mesh triangle intersects the surface plane, the volume calculation becomes more difficult. In this case, the current implementation produces inconsistent results and requires further investigation. However, the resulting volume cannot be directly used as a distance measure, as its size depends on the size of the mesh triangle Porig and its orientation relative to the surface plane s. To mitigate this issue, the total volume is scaled by two factors to account for triangle size and orientation. To counteract the increasing volume caused by increasing triangle size, the median of the areas of all triangles in the mesh is computed and used as a reference value for scaling the individual triangles before the volume computation. Hence, the area of large triangles is reduced and the area of small triangles is increased for the computation of the volume distance measure.
The second scaling factor prevents the total volume from decreasing with increasing tilt of the mesh triangle Porig. In this work, the ratio between the area of the mesh triangle Porig and its projected counterpart Pproj was used. This factor evaluates to one if the mesh triangle is parallel to the surface plane. With increasing orientation difference, the factor increases and thereby counteracts the decreasing volume measure. The results of the volume distance measure are displayed in Figure 6 in comparison to the point-to-plane distance measure. It can be observed that the results are generally promising and in many cases more accurate than with the point-to-plane distance measure. Building parts that are missing in the semantic model and roof overhangs are misclassified less often. However, there are some problems with this distance measure as well. For instance, the matched mesh contains many holes, especially at roof tops, at the gutters, and where walls and roof surfaces meet. These errors are partially caused by the inconsistent implementation for triangles that intersect the surface plane.
Other sources of errors may be the proposed scaling factors of the volume measure, an inappropriate selection of the threshold, or the approximation of the complex pyramid volume. In summary, this distance measure must be described as experimental in its current implementation. More research needs to be done regarding both the volume calculation and especially the scaling factors. Currently, this distance measure is not suitable for a semantic mesh segmentation on its own, just like the point-to-plane method. Moreover, the computation of this measure is significantly more costly than the point-to-plane approach due to the more complex geometric situation. As for the point-to-plane method, the output of this measure needs improvement in the subsequent processing.
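The scaled volume measure can be sketched as follows. This is explicitly experimental, matching the caveats above: the prism-plus-half-pyramid volume is further simplified here to the projected area times the mean vertex distance, triangles intersecting the surface plane are not handled specially, and the exact form of the two scaling factors is an assumption.

```python
import numpy as np

def volume_distance(tri, plane_point, plane_normal, median_area):
    """Volume between a triangle Porig and its projection Pproj onto the
    surface plane, scaled by a size factor (median triangle area as
    reference) and a tilt factor (area ratio Porig / Pproj)."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    v = np.asarray(tri, float)
    d = (v - np.asarray(plane_point, float)) @ n   # signed vertex distances
    proj = v - np.outer(d, n)                      # projected triangle Pproj
    area_orig = 0.5 * np.linalg.norm(np.cross(v[1] - v[0], v[2] - v[0]))
    area_proj = 0.5 * np.linalg.norm(np.cross(proj[1] - proj[0], proj[2] - proj[0]))
    volume = area_proj * np.mean(np.abs(d))        # crude prism approximation
    size_factor = median_area / area_orig          # normalise triangle size
    tilt_factor = area_orig / max(area_proj, 1e-12)  # counteract tilt shrinkage
    return volume * size_factor * tilt_factor
```

For a triangle parallel to the plane, the tilt factor is exactly one and the measure reduces to the scaled prism volume.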

Semantic mesh segmentation with region growing
As the two introduced distance measures are not sufficient for a semantic mesh segmentation, a region growing approach has been added. The general idea is to use the output of the two distance measures at conservative thresholds as a seed for a region growing that subsequently heals existing holes or shortcomings at the border of the partially segmented mesh. Thereby, the objective of the region growing is not to completely fill all holes to create a closed mesh surface and maximize the mesh extent at the border, but to incorporate all mesh triangles that have not been successfully classified before. For instance, dormers or chimneys on a roof surface should be excluded from the segmented mesh region, but should be enclosed as closely as possible at the same time. This is required for the given use case of solar potential analysis, where such unrepresented model elements should be integrated as additional shadow casters.
In the first step of the region growing approach, the mesh topology created in the pre-processing is used to identify holes and the borders of the seed mesh. Second, using the neighboring relations, candidate triangles from holes first and the borders second are processed one after another. The mesh subset of candidate triangles is thereby limited by the ROI introduced in section 3.2. As shown in Figure 7, a plane is fitted through the current seed of segmented triangles. Based on this plane, a series of tests is performed to verify whether the triangle represents the semantic surface in the mesh or not. First, the point-to-plane distance of the candidate triangle and the orientation difference between them are tested.
Second, a new seed plane including the candidate triangle is computed. This temporary seed plane is then tested against the seed plane from the last iteration step and the plane of the semantic surface. The candidate triangle is discarded if the orientation difference between a) the temporary seed plane and the plane of the last iteration, or b) the temporary seed plane and the semantic surface plane exceeds a certain threshold. If all tests are passed, the triangle is added to the region of segmented triangles and the next candidate triangle is selected for the next iteration. When no more candidate triangles exist, the algorithm terminates. The biggest challenge of the region growing approach is the determination of appropriate thresholds for the tests discussed above. The thresholds used in the case study have been determined by testing and best experience. Further investigations are required here to develop a fully automatic computation of the thresholds.
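The region growing loop can be sketched as follows. This is a condensed illustration, not the authors' implementation: the two plane-deviation tests are collapsed into a single check of the refitted seed-plane normal against the semantic surface normal, holes and borders are not distinguished, and all thresholds and helper names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Orthogonal least-squares plane through a point set via SVD."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    return vh[-1], centroid  # (unit normal, point on plane)

def grow_region(seed_ids, tri_verts, neighbours, surface_normal,
                d_max=0.5, theta_max=30.0):
    """Grow the segmented region from seed triangles along the mesh
    topology, accepting a candidate only if it stays close to the
    fitted seed plane and does not tilt the plane away from the
    semantic surface orientation."""
    region = set(seed_ids)
    frontier = [t for s in region for t in neighbours[s] if t not in region]
    while frontier:
        cand = frontier.pop()
        if cand in region:
            continue
        normal, point = fit_plane(np.vstack([tri_verts[t] for t in region]))
        v = tri_verts[cand]
        dist = np.max(np.abs((v - point) @ normal))
        tn = np.cross(v[1] - v[0], v[2] - v[0])
        tn /= np.linalg.norm(tn)
        theta = np.degrees(np.arccos(np.clip(abs(tn @ normal), 0.0, 1.0)))
        if dist <= d_max and theta <= theta_max:
            # tentatively accept, then verify that the refitted plane
            # still agrees with the semantic surface orientation
            new_n, _ = fit_plane(np.vstack([tri_verts[t] for t in region | {cand}]))
            dev = np.degrees(np.arccos(np.clip(abs(new_n @ surface_normal), 0.0, 1.0)))
            if dev <= theta_max:
                region.add(cand)
                frontier.extend(t for t in neighbours[cand] if t not in region)
    return region
```

Tightly tilted structures such as dormer triangles fail the orientation tests and stay outside the region, which is the intended behaviour described above.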
The region growing and plane fitting algorithms have been implemented using functions from the SciPy and NumPy Python libraries for linear algebra. The plane fitting algorithm takes all vertices of the seed triangles and assigns them a new z coordinate.
The set of points returned can subsequently be used to compute the plane.The plane is optimized to have the smallest perpendicular offset to the initial point set.
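A plane minimizing the perpendicular offsets, as described above, can be computed with NumPy alone via a singular value decomposition of the centred point set. This is a common approach and a sketch only; the paper's exact implementation may differ in detail.

```python
import numpy as np

def fit_plane(points):
    """Orthogonal least-squares plane fit: the returned (normal,
    centroid) pair minimises the sum of squared perpendicular offsets
    of the points to the plane."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    # the right singular vector belonging to the smallest singular
    # value is the direction of least variance, i.e. the plane normal
    _, _, vh = np.linalg.svd(pts - centroid)
    normal = vh[-1]
    return normal / np.linalg.norm(normal), centroid
```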

City model enrichment with mesh geometries
When the segmentation process has completed, the initial set of mesh triangles is categorized into two groups and integrated into the CityGML semantic 3D city model. Triangles that have been matched and segmented are stored as an additional geometric representation of their corresponding semantic surface as an LoD3 MultiSurface. Hence, the mesh representation is now integrated in the semantic model. It can be used with all benefits of the semantic data model and is available for e.g. analytic tasks or interactive 3D web visualization. The additional geometric representation of the wall and roof surfaces opens up a whole set of possible applications that require further investigation. Moreover, it is planned to extend the approach to more thematic classes in the future, like, for instance, roads. The remaining mesh triangles, which have not been classified as wall or roof surface, are incorporated into the CityGML data set as well. They can either be stored as GenericCityObject or as Relief feature and enrich the semantic model with unrepresented features like building installations and vegetation. The current strategy for integrating the 3D mesh in CityGML is a workaround, as the standard currently does not offer an explicit representation for 3D meshes.

CASE STUDY: SOLAR POTENTIAL ANALYSIS
One possible use case that can benefit from the 3D mesh integration is the estimation of solar energy production potentials in cities based on semantic 3D city models. City model data sets have become increasingly available in recent years, but most datasets only contain LoD2 buildings, even though the representation of building installations or vegetation objects is supported by the data model. Hence, a significant number of shadow-casting objects is missing in the models, which leads to an overestimation of the potential solar energy.
To evaluate the impact of the mesh integration, a case study on a small residential area in Helsinki containing 29 buildings surrounded by vegetation was performed. A solar potential analysis for roofs and facades based on the work of (Willenborg et al., 2018) was performed with and without the introduced mesh integration. For the simulation, the CityGML LoD2 building model and the mesh regions that did not match a building surface were used. Figure 10 shows a snapshot of the results. In the bottom left image, the unsegmented mesh elements that have been added to the semantic model can be observed. In comparison to the textured mesh model (top left), it becomes visible that vegetation, facade elements like balconies, and roof installations like dormers or chimneys have generally been integrated well. However, depending on the geometric situation and the quality of the 3D mesh model, some building installations are not fully captured, like the dormers on building B14. When comparing the solar potential analysis result textures of the simulation runs with (bottom images) and without mesh integration (top right), areas that receive significantly less radiation can be identified.

Quantification of the mesh integration impact
To quantify the impact of the mesh integration on the analysis results, the two simulation runs with and without mesh integration have been compared to each other for roofs, facades, and the entire buildings. The charts in Figure 9 list the absolute yearly global solar irradiation on roofs and facades for each building in the test area with and without mesh integration and the percentage of overestimation of the results when no mesh integration is used. In general, the impact of the mesh integration is much bigger on facades, as they are more shaded by vegetation and building installations than roofs. For instance, the irradiation on the facades of building B17 is overestimated by ∼ 75% due to surrounding vegetation, while building B20 is affected by vegetation and building installations, resulting in ∼ 50% overestimation (see Figure 10). In comparison, the roofs of building B14 are overestimated by 10% because of the dormers. The buildings B1, B2, and B3 are small sheds that are almost fully covered by vegetation when the mesh integration is used. This leads to an extreme overestimation of solar irradiation on both roofs and facades, which can be neglected, as the absolute radiation sum of these buildings is small compared to the residential buildings.
The overestimation of the annual global solar irradiation for the entire test area is ∼ 8% for roofs, ∼ 38% for facades and ∼ 22% for buildings.
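The overestimation percentages reported above follow directly from the two simulation runs; a minimal sketch (function name and example values are illustrative, not figures from the paper):

```python
def overestimation_pct(irr_without_mesh, irr_with_mesh):
    """Percentage overestimation of yearly irradiation when the mesh
    shadow casters are missing, relative to the run with mesh
    integration."""
    return 100.0 * (irr_without_mesh - irr_with_mesh) / irr_with_mesh

# e.g. a facade receiving 1.38 MWh/a without and 1.00 MWh/a with the
# mesh integration is overestimated by 38%
```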
Figure 8 shows the distribution of the percentage of overestimation without mesh integration for individual buildings, wall, and roof surfaces. Compared to the evaluation per building in Figure 9, one can observe that the variations for individual wall and roof surfaces are much bigger. Depending on the adjacent vegetation situation and the amount and extent of building installations, the irradiation can be overestimated by 150% or more for individual walls, and 25% or more for individual roofs. This makes it clear that the results depend strongly on the vegetation structure and architectural features of the buildings. It needs to be taken into account that the presented results were taken from a small test area in a suburban residential area. In order to derive more general figures, more case studies of bigger extent are required.

CONCLUSIONS AND OUTLOOK
This work shows that the integration of 3D mesh models with semantic 3D city models is feasible and opens the door for a variety of new applications. The integration allows existing weaknesses of both models to be mitigated. The mesh model is enriched with the thematic information of the city model, and the city model is complemented by the detailed geometric representation of the mesh model. The introduced approach uses a combination of geometric distance measures between mesh triangles and semantic surfaces and a region growing method using plane fitting for a semantic segmentation of the 3D mesh. Both segmented and non-segmented mesh regions are persistently stored in the semantic model. The segmented mesh elements supplement the corresponding city model surfaces with a detailed geometric representation. The unsegmented regions of the mesh contain a multitude of features that are not yet mapped in semantic city models available today.
The results achieved with the developed approach are promising, but there are also some questions that need to be examined in greater depth.In addition to the quality of the segmentation, which is not always satisfactory, the automatic selection of suitable thresholds for the distance measures and the region growing method must be further improved.The computational performance of the approach must also be improved in order to carry out more extensive case studies.Overall, the current implementation must be described as experimental.
The case study 'solar potential analysis' has shown that the integration of 3D mesh models with semantic 3D city models has the potential to significantly improve existing analysis methods based on semantic 3D city models.In this context, it will be necessary in the future to also investigate how the transparency of vegetation for sunlight can be integrated.

Figure 3. Test area from the Helsinki dataset. The top left image shows the CityGML building model embedded in the mesh model; at the top right, the textured 3D mesh model is displayed. The bottom images show the results of the point-to-plane distance measure at a distance threshold d = 1 m with (bottom left) and without the LoD2 building model (bottom right). The triangle colors indicate the angle difference between the normal vectors of the triangles and the semantic surface in one ROI (green < 30°, yellow 30-60°, red > 60°).

Figure 4. Point-to-plane distance d between triangle Porig and surface plane s.

Figure 5. Geometric situation for the distance measure based on the volume between the mesh triangle Porig and its projection Pproj onto the surface plane s.

Figure 6. Comparison of the results of the point-to-plane (bottom) and volume distance measure (top). The point-to-plane results are colored according to Figure 3. The volume distance measure results are colored as follows: green < 0.4 m³, yellow 0.4-0.6 m³, red > 0.6 m³.

Figure 7. Plane (green) fitted through vertices of the triangle seed (orange) with the corresponding semantic surface (blue) in the back. The current candidate triangle is highlighted in red.

Figure 10. Sample buildings from the case study. Top left: textured mesh model. The other images show the semantic model with solar potential analysis result textures (blue to red → low to high irradiation). Top right: results without mesh integration and building numbers. Bottom left: results with mesh integration, mesh displayed. Bottom right: results with mesh integration, mesh faded out.