The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLIII-B2-2020
https://doi.org/10.5194/isprs-archives-XLIII-B2-2020-135-2020
12 Aug 2020

IMPROVING DISPARITY ESTIMATION BASED ON RESIDUAL COST VOLUME AND RECONSTRUCTION ERROR VOLUME

J. Kang, L. Chen, F. Deng, and C. Heipke

Keywords: Stereo Matching, Disparity Refinement, Residual Cost Volume, Reconstruction Error

Abstract. Recently, great progress has been made in formulating dense disparity estimation as a pixel-wise learning task to be solved by deep convolutional neural networks. However, most of the resulting pixel-wise disparity maps show little detail for small structures. In this paper, we propose a two-stage architecture: we first learn initial disparities with an initial network and then employ a disparity refinement network which, guided by the initial results, directly learns disparity corrections. Based on the initial disparities, we construct a residual cost volume between shared left and right feature maps within a potential disparity residual interval, which can capture more detailed context information. Then, the right feature map is warped with the initial disparity, and a reconstruction error volume is constructed between the warped right feature map and the original left feature map, which provides a measure of the correctness of the initial disparities. The main contribution of this paper is to combine the residual cost volume and the reconstruction error volume to guide the training of the refinement network. We use a shallow encoder-decoder module in the refinement network and learn in a coarse-to-fine manner, which simplifies the learning problem. We evaluate our method on several challenging stereo datasets. Experimental results demonstrate that our refinement network significantly improves overall accuracy, reducing the estimation error by 30% compared with our initial network. Moreover, our network achieves competitive performance compared with other CNN-based methods.
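
To make the two volumes described above concrete, the following is a minimal PyTorch-style sketch of how a reconstruction error volume and a residual cost volume might be built from left and right feature maps and an initial disparity. It is purely illustrative: the tensor shapes, the residual interval of plus/minus 3 pixels, the L1 matching cost, and all function names are assumptions for this sketch, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def warp_right_to_left(right_feat, disparity):
    """Warp the right feature map to the left view using a disparity map.

    right_feat: (B, C, H, W) feature map of the right image
    disparity:  (B, 1, H, W) disparity in pixels (left-image coordinates)
    """
    b, c, h, w = right_feat.shape
    # Sample the right image at x' = x - d(x, y) for each left-image pixel.
    xs = torch.arange(w, device=right_feat.device).view(1, 1, 1, w).expand(b, 1, h, w)
    ys = torch.arange(h, device=right_feat.device).view(1, 1, h, 1).expand(b, 1, h, w)
    x_src = xs.float() - disparity
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * x_src / (w - 1) - 1.0
    grid_y = 2.0 * ys.float() / (h - 1) - 1.0
    grid = torch.stack((grid_x.squeeze(1), grid_y.squeeze(1)), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(right_feat, grid, align_corners=True)

def reconstruction_error_volume(left_feat, right_feat, init_disp):
    """Per-pixel error between left features and the warped right features;
    large values flag locations where the initial disparity is likely wrong."""
    warped = warp_right_to_left(right_feat, init_disp)
    return torch.abs(left_feat - warped)  # (B, C, H, W)

def residual_cost_volume(left_feat, right_feat, init_disp, max_residual=3):
    """Match left features against right features at disparities d0 + r
    for r in [-max_residual, max_residual] (the disparity residual interval).
    An L1 feature difference is used as a simple matching cost here."""
    costs = []
    for r in range(-max_residual, max_residual + 1):
        warped = warp_right_to_left(right_feat, init_disp + r)
        costs.append(torch.abs(left_feat - warped).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)  # (B, 2*max_residual+1, H, W)

In this sketch, both volumes are computed from the same shared feature maps, and the refinement network would take them as input to predict a per-pixel disparity correction added to the initial disparity.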