Computer vision-based systems for environmental monitoring applications

dc.contributor.author: Porto Marques, Tunai
dc.contributor.supervisor: Branzan Albu, Alexandra
dc.date.accessioned: 2022-04-12T19:21:59Z
dc.date.available: 2022-04-12T19:21:59Z
dc.date.copyright: 2022
dc.date.issued: 2022-04-12
dc.degree.department: Department of Electrical and Computer Engineering
dc.degree.level: Doctor of Philosophy Ph.D.
dc.description.abstract: Environmental monitoring refers to a host of activities involving the sampling or sensing of diverse properties of an environment in an effort to monitor, study and, ultimately, better understand it. While potentially rich and scientifically valuable, these data often create challenging interpretation tasks because of their volume and complexity. This thesis explores the efficiency of Computer Vision-based frameworks for the processing of large amounts of visual environmental monitoring data. While considering every potential type of visual environmental monitoring measurement is not possible, this thesis selects three data streams as representative of diverse monitoring layouts: a visual out-of-water stream, a visual underwater stream and an active acoustic underwater stream. The structure, objectives, challenges, solutions and insights of each stream are presented in detail and used to assess the feasibility of Computer Vision within the environmental monitoring context. The thesis starts by providing an in-depth analysis of the definition and goals of environmental monitoring, as well as of the Computer Vision systems typically used in conjunction with it. The document continues by studying the visual underwater stream through the design of a novel system that employs a contrast-guided approach for the enhancement of low-light underwater images. This enhancement system outperforms multiple state-of-the-art methods, as supported by a group of commonly employed metrics. A pair of detection frameworks capable of identifying schools of herring, salmon and hake, as well as swarms of krill, is also presented in this document. The inputs used in their development, echograms, are visual representations of acoustic backscatter data from echosounder instruments, thus addressing the active acoustic underwater stream. These detectors use different Deep Learning paradigms to account for the unique challenges presented by each pelagic species. Specifically, the detection of krill and finfish is accomplished with a novel semantic segmentation network (U-MSAA-Net) capable of leveraging local and contextual information from feature maps of multiple scales. To explore the visual out-of-water stream, we examine a large dataset composed of years' worth of images from a coastal region with heavy marine vessel traffic, an activity associated with significant anthropogenic footprints on marine environments. A novel system that combines "traditional" Computer Vision and Deep Learning is proposed for the identification of such vessels, under diverse visual appearances, in this monitoring imagery. Thorough experimentation shows that this system is able to efficiently detect vessels of diverse sizes, shapes, colors and levels of visibility. The results and reflections presented in this thesis reinforce the hypothesis that Computer Vision offers an extremely powerful set of methods for the automatic, accurate, time- and space-efficient interpretation of large amounts of visual environmental monitoring data, as detailed in the remainder of this work.
dc.description.scholarlevel: Graduate
dc.identifier.bibliographicCitation: T. P. Marques, A. B. Albu, P. O’Hara, N. Serra, B. Morrow, L. McWhinnie, and R. Canessa, “Size-invariant detection of marine vessels from visual time series,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 443–453, 2021.
dc.identifier.bibliographicCitation: T. P. Marques, A. Rezvanifar, M. Cote, A. B. Albu, K. Ersahin, T. Mudge, and S. Gauthier, “Detecting marine species in echograms via traditional, hybrid, and deep learning frameworks,” in 2020 25th International Conference on Pattern Recognition (ICPR), pp. 5928–5935, IEEE, 2021.
dc.identifier.bibliographicCitation: T. P. Marques, A. B. Albu, and M. Hoeberechts, “Enhancement of low-lighting underwater images using dark channel prior and fast guided filters,” in ICPR 3rd Workshop on Computer Vision for Analysis of Underwater Imagery (CVAUI), IAPR, 2018.
dc.identifier.bibliographicCitation: A. Rezvanifar, T. P. Marques, M. Cote, A. B. Albu, A. Slonimer, T. Tolhurst, K. Ersahin, T. Mudge, and S. Gauthier, “A deep learning-based framework for the detection of schools of herring in echograms,” arXiv preprint arXiv:1910.08215, 2019.
dc.identifier.bibliographicCitation: T. P. Marques and A. B. Albu, “L2UWE: A framework for the efficient enhancement of low-light underwater images using local contrast and multi-scale fusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 538–539, 2020.
dc.identifier.bibliographicCitation: T. Porto Marques, A. Branzan Albu, and M. Hoeberechts, “A contrast-guided approach for the enhancement of low-lighting underwater images,” MDPI Journal of Imaging, vol. 5, no. 10, p. 79, 2019.
dc.identifier.bibliographicCitation: D. McIntosh, T. P. Marques, A. B. Albu, R. Rountree, and F. De Leo, “Movement tracks for the automatic detection of fish behavior in videos,” arXiv preprint arXiv:2011.14070, 2020.
dc.identifier.bibliographicCitation: T. P. Marques, M. Cote, A. Rezvanifar, A. B. Albu, K. Ersahin, T. Mudge, and S. Gauthier, “Instance segmentation-based identification of pelagic species in acoustic backscatter data,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 4378–4387, June 2021.
dc.identifier.bibliographicCitation: A. Slonimer, M. Cote, T. P. Marques, A. Rezvanifar, S. Dosso, A. B. Albu, K. Ersahin, T. Mudge, and S. Gauthier, “Instance segmentation of herring and salmon schools in acoustic echograms using a hybrid U-Net,” in 2022 19th Conference on Robots and Vision (CRV), IEEE, 2022.
dc.identifier.bibliographicCitation: D. McIntosh, T. P. Marques, A. B. Albu, R. Rountree, and F. De Leo, “TempNet: Temporal attention towards the detection of animal behaviour in videos,” in 2022 26th International Conference on Pattern Recognition (ICPR), IEEE, 2022.
dc.identifier.bibliographicCitation: D. McIntosh, T. P. Marques, and A. B. Albu, “Preservation of high frequency content for deep learning-based medical image classification,” in 2021 18th Conference on Robots and Vision (CRV), pp. 41–48, IEEE, 2021.
dc.identifier.uri: http://hdl.handle.net/1828/13856
dc.language: English
dc.language.iso: en
dc.rights: Available to the World Wide Web
dc.subject: Computer Vision
dc.subject: Environmental Monitoring
dc.subject: Deep Learning
dc.subject: Machine Learning
dc.subject: Object Detection
dc.subject: Instance Segmentation
dc.subject: Semantic Segmentation
dc.title: Computer vision-based systems for environmental monitoring applications
dc.type: Thesis

Files

Original bundle
Name: Marques_Tunai_Porto_PhD_2022.pdf
Size: 19.41 MB
Format: Adobe Portable Document Format
Description: PhD thesis

License bundle
Name: license.txt
Size: 2 KB
Format: Item-specific license agreed upon to submission