Road segmentation from optical images remains challenging due to occlusion, object blockage, and shadows. To address these challenges, integrating LiDAR data, which provides valuable geometric information for extracting roads, is a promising solution. The fusion of 2D and 3D information from optical images and LiDAR offers potential advantages for road segmentation, especially in cases where optical satellite images lack comprehensive road information. In this study, we present a novel approach to enhance the road segmentation performance of a deep learning model. The method fuses high-resolution optical satellite images and LiDAR point clouds through a feature-wise strategy. The satellite images were generated via the Static Map API of the Google Maps platform using a Python-based tool. As the LiDAR point cloud, we used airborne LiDAR data collected by the U.S. Geological Survey over Florida, USA. We calculated eigenvalue-based and geometric 3D property-based features from the LiDAR data and combined them with the optical satellite images to train an end-to-end deep residual U-Net architecture. The combined feature set was used to train the deep residual U-Net model with ResNet backbones of different sizes, so the consistency of the proposed approach was tested across varying architectural configurations of the U-Net model. The proposed strategy concatenates the high-level features generated from the optical images through the U-Net architecture with the LiDAR-based features before the final convolution layer of the model. The results demonstrated that the fusion strategy significantly improved the prediction capacity of the U-Net models regardless of which ResNet backbone was adopted.
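The eigenvalue-based features mentioned above are commonly derived from the covariance matrix of a local point neighborhood. The sketch below shows one standard formulation (linearity, planarity, sphericity); it is a minimal illustration of this family of descriptors, not the paper's exact feature set, and the function name and neighborhood choice are assumptions.

```python
import numpy as np

def eigenvalue_features(points):
    """Eigenvalue-based 3D shape descriptors for one local neighborhood.

    `points` is an (N, 3) array of LiDAR returns.  The eigenvalues of the
    neighborhood covariance, sorted so that l1 >= l2 >= l3, yield the
    classic linearity / planarity / sphericity descriptors.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    # eigvalsh returns eigenvalues in ascending order; reverse to descending
    l1, l2, l3 = np.linalg.eigvalsh(cov)[::-1]
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity
```

For a flat patch such as a road surface, planarity approaches 1 while sphericity stays near 0, which is what makes these descriptors informative for separating roads from vegetation and buildings.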
By combining optical images and LiDAR point cloud data, the deep learning model achieved improved prediction performance and ensured an accurate representation of road geometry in areas where road pixels are blocked by tree cover and the shadows of objects in the scene. The obtained results demonstrate the potential of the fusion strategy and the benefit of combining data from various sources in developing more reliable road segmentation models that can accurately detect road pixels even in challenging areas.
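The feature-wise fusion described above can be sketched as a channel-wise concatenation of the U-Net's high-level optical feature maps with rasterized LiDAR-based feature maps, followed by the final 1x1 convolution (here written as a per-pixel linear map in NumPy). The channel counts and class count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Stand-ins for the two inputs to the fusion step (channels-first layout):
unet_feats = rng.standard_normal((32, H, W))   # high-level features from the optical branch
lidar_feats = rng.standard_normal((4, H, W))   # rasterized LiDAR-derived feature maps

# Feature-wise fusion: concatenate along the channel axis
fused = np.concatenate([unet_feats, lidar_feats], axis=0)   # shape (36, H, W)

# A 1x1 convolution is a per-pixel linear map over channels;
# one output channel corresponds to a binary road/non-road logit.
weights = rng.standard_normal((1, fused.shape[0]))
logits = np.tensordot(weights, fused, axes=([1], [0]))      # shape (1, H, W)
```

Because the concatenation happens just before the final convolution, the LiDAR features can directly influence the per-pixel road decision without requiring changes to the ResNet backbone, which is consistent with the reported robustness across backbone sizes.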