CrackUDA: Incremental Unsupervised Domain Adaptation for Improved Crack Segmentation in Civil Structures
Kushagra Srivastava
Damodar Datta Kancharla
Rizvi Tahereen
Pradeep Kumar Ramancharla
Ravi Kiran Sarvadevabhatla
Harikumar Kandath
[Paper]
[Dataset]
[Code]


Abstract

Crack segmentation plays a crucial role in ensuring the structural integrity and seismic safety of civil structures. However, existing crack segmentation algorithms often struggle to maintain accuracy under domain shift across datasets. To address this issue, we propose a novel deep network that employs incremental training with unsupervised domain adaptation (UDA) using adversarial learning, while preserving accuracy on the source domain. Our approach leverages an encoder-decoder architecture consisting of both domain-invariant and domain-specific parameters. The encoder learns shared crack features across all domains, ensuring robustness to domain variations. Simultaneously, the decoder's domain-specific parameters capture features unique to each domain. By combining these components, our model achieves improved crack segmentation performance. Furthermore, we introduce BuildCrack, a new crack dataset comparable to the sub-datasets of the well-established CrackSeg9K dataset in terms of image count and crack percentage. We evaluate our proposed approach against state-of-the-art UDA methods using different sub-datasets of CrackSeg9K and our custom dataset. Our experimental results demonstrate a significant improvement in crack segmentation accuracy and generalization across target domains compared to other UDA methods, specifically an improvement of 0.65 and 2.7 mIoU on the source and target domains, respectively.
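To make the domain-invariant / domain-specific parameter split concrete, here is a minimal PyTorch sketch of one reading of the architecture. The class name `CrackUDANet`, all layer sizes, and the choice of one decoder head per incremental step are our illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CrackUDANet(nn.Module):
    """Sketch of the parameter split: a shared (domain-invariant)
    encoder, plus domain-specific parameter sets (phi_s1, phi_s2)
    and a binary-segmentation decoder per incremental step."""

    def __init__(self, feat_ch: int = 64, num_steps: int = 2):
        super().__init__()
        # Domain-invariant parameters, shared across all domains.
        self.shared = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # Domain-specific parameter sets: phi_s1, phi_s2, ...
        self.domain_specific = nn.ModuleList(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
            for _ in range(num_steps)
        )
        # One segmentation decoder head per step (D1, D2, ...).
        self.decoders = nn.ModuleList(
            nn.Conv2d(feat_ch, 1, 1) for _ in range(num_steps)
        )

    def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
        feats = self.domain_specific[step](self.shared(x))
        return self.decoders[step](feats)  # crack / background logits
```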

The BuildCrack dataset was captured by imaging building facades with a drone-mounted camera from different angles and distances. Its images exhibit low contrast, occlusions, and shadows, which challenge the model's robustness.


Overview

To segment cracks in drone-captured images of buildings:

  • We propose CrackUDA, a novel incremental UDA approach that ensures robust adaptation and effective crack segmentation.
  • We demonstrate the effectiveness of CrackUDA by achieving higher accuracy on the challenging task of building crack segmentation, surpassing state-of-the-art UDA methods. Specifically, CrackUDA yields an improvement of 0.65 and 2.7 mIoU on the source and target domains, respectively.
  • We introduce BuildCrack, a new building crack dataset collected via a drone.


Approach

Overview of our proposed architecture. In step 1, we train our network M1 on the labeled source dataset S for binary segmentation. In step 2, decoder D1 and the domain-specific parameters φs1 are frozen, a new set of domain-specific parameters φs2 is added, and the resulting model is called M2. We then follow an alternating training strategy: we first train for binary segmentation on the source domain, then perform adversarial training on both the source and target domains.
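A hypothetical sketch of the step-2 loop, under the same assumptions as the `CrackUDANet` sketch above: decoder D1 and φs1 are frozen, and each iteration alternates supervised segmentation on the source with adversarial feature alignment. The discriminator `disc` is an assumed module that maps shared encoder features to domain logits; loss weights and schedules are omitted.

```python
import torch
import torch.nn.functional as F

def train_step2(model, disc, src_loader, tgt_loader, epochs=10, lr=1e-4):
    """Step 2: freeze the step-1 parameters, then alternate
    (a) supervised source segmentation and (b) adversarial alignment."""
    # Freeze decoder D1 and the step-1 domain-specific parameters phi_s1.
    for p in list(model.decoders[0].parameters()) + \
             list(model.domain_specific[0].parameters()):
        p.requires_grad = False

    opt_m = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr)

    for _ in range(epochs):
        for (x_s, y_s), (x_t, _) in zip(src_loader, tgt_loader):
            # (a) Binary segmentation on labeled source images
            # (y_s is a float {0,1} mask matching the logits' shape).
            seg_loss = F.binary_cross_entropy_with_logits(
                model(x_s, step=1), y_s)
            opt_m.zero_grad()
            seg_loss.backward()
            opt_m.step()

            # (b) Discriminator learns to tell source (1) from target (0)
            # using the shared encoder's features.
            d_s = disc(model.shared(x_s).detach())
            d_t = disc(model.shared(x_t).detach())
            d_loss = (
                F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s))
                + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t))
            )
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # The encoder is updated to make target features look like source.
            d_fool = disc(model.shared(x_t))
            adv_loss = F.binary_cross_entropy_with_logits(
                d_fool, torch.ones_like(d_fool))
            opt_m.zero_grad()
            adv_loss.backward()
            opt_m.step()
```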


Results

Examples of cracks in building structures and the crack locations predicted by CrackUDA (our approach). Despite the small footprint of cracks, the diversity of their appearance, and the presence of distractors, our incremental unsupervised approach localizes cracks precisely.

Comparison of mIoU scores on the CrackSeg9K validation set and on BuildCrack (target dataset) with state-of-the-art UDA methods. Approaches marked with * did not converge in our setting. Our approach achieves the best generalization performance.
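For reference, the mIoU reported here averages the intersection-over-union of the background and crack classes; a minimal NumPy sketch (the function name and the random example masks are ours):

```python
import numpy as np

def binary_miou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean IoU over the background (0) and crack (1) classes.

    `pred` and `gt` are integer masks of the same shape, values in {0, 1}.
    """
    ious = []
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        if union > 0:  # ignore a class absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Example: a thresholded prediction versus a ground-truth mask.
pred = (np.random.rand(256, 256) > 0.95).astype(np.uint8)
gt = (np.random.rand(256, 256) > 0.95).astype(np.uint8)
print(f"mIoU: {binary_miou(pred, gt):.3f}")
```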

Qualitative results on the CrackSeg9K validation set for CrackUDA and FADA.


Previous Work

Our previous work includes estimating the following building parameters:

  • ROI detection (e.g., windows, storeys) and ROI counts
  • Frontal and top-view layout estimation
  • Scale estimation
  • ROI extraction from global scene context
More details about the above features can be found here.
  • Distance between adjacent buildings
  • Plan shape and roof area estimation
  • Roof layout estimation
More details about the above features can be found here.

Contact

If you have any questions, please reach out to any of the above-mentioned authors.