Deep Convolutional Neural Network for Flood Extent Mapping Using Unmanned Aerial Vehicles Data

  • Authors: Gebrehiwot, Asmamaw; Hashemi-Beni, Leila; Thompson, Gary; Kordjamshidi, Parisa; Langan, Thomas
  • Publication Year: 2019
  • Publication Series: Scientific Journal (JRNL)
  • Source: Sensors
  • DOI: 10.3390/s19071486

Abstract

Flooding is one of the most destructive natural disasters to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and damage assessment in both spatial and temporal terms. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetric data acquisition platform for quickly delivering high-resolution imagery because of its cost-effectiveness, ability to fly at low altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on a dataset containing only one hundred training samples and still produce a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs extracted flooded areas more precisely from UAV images than traditional classifiers such as SVMs.
The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
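The per-class accuracies reported above are derived from a confusion matrix. As a minimal sketch (not the authors' code, and using entirely hypothetical pixel counts and class labels), per-class accuracy and overall accuracy can be computed from a confusion matrix like this:

```python
import numpy as np

# Hypothetical 3x3 confusion matrix for illustration only.
# Rows are ground-truth classes, columns are predicted classes (pixel counts);
# class order here is assumed to be: water, vegetation, built-up.
cm = np.array([
    [970, 20, 10],   # true water
    [30, 940, 30],   # true vegetation
    [15, 25, 960],   # true built-up
])

# Per-class accuracy (producer's accuracy / recall):
# correctly classified pixels of a class divided by all pixels of that class.
per_class_acc = np.diag(cm) / cm.sum(axis=1)

# Overall accuracy: all correctly classified pixels over all pixels.
overall_acc = np.trace(cm) / cm.sum()

print(per_class_acc)  # per-class accuracies, e.g. water = 970 / 1000
print(overall_acc)
```

With this convention, a "water class accuracy of 97.52%" corresponds to the water entry of `per_class_acc`: the fraction of true water pixels the model labeled as water.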

  • Citation: Gebrehiwot, Asmamaw; Hashemi-Beni, Leila; Thompson, Gary; Kordjamshidi, Parisa; Langan, Thomas. 2019. Deep Convolutional Neural Network for Flood Extent Mapping Using Unmanned Aerial Vehicles Data. Sensors. 19(7): 1486. https://doi.org/10.3390/s19071486.
  • Keywords: remote sensing, convolutional neural networks, floodplain mapping, fully convolutional network, unmanned aerial vehicles, geospatial data processing
  • Posted Date: May 8, 2020
  • Modified Date: May 11, 2020

    Publication Notes

    • This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.