An adaptive video event mining system for an autonomous underwater vehicle

Raúl Arrabales Moreno, Colin Flanagan, Daniel J.F. Toal

Research output: Contribution to conference › Paper › peer-review

Abstract

This paper presents the results obtained in the development of an adaptive architecture for automated event discovery in sub-sea recorded video footage. The Video Marking System that has been built is the first step in the development of a vision system for an autonomous underwater vehicle. The principal aim of our work is to build an adaptive architecture that provides the vision system with the intelligence and robustness required to deal with the great variability found in underwater video. Different image processing techniques are embedded within the adaptive architecture; all run in parallel, are adaptively parameterized, and are assigned a confidence level by a voting system. Using a simplified world model, the system combines the various outputs from the active image processing approaches into an improved event description.
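The following Python sketch illustrates, in minimal form, how outputs from several image processing approaches running in parallel could be fused by confidence-weighted voting, as the abstract describes. The detector names, data structures, and weighting values are assumptions introduced for illustration only; they do not reproduce the system reported in the paper.

```python
# Illustrative sketch only: detector names, data structures, and weights below
# are hypothetical assumptions, not the authors' implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EventCandidate:
    """A candidate event reported by one image-processing approach."""
    label: str          # e.g. "object_detected" or "no_event"
    confidence: float   # confidence level assigned by the voting scheme


def combine_candidates(candidates: List[EventCandidate]) -> EventCandidate:
    """Fuse parallel detector outputs by confidence-weighted voting."""
    scores: Dict[str, float] = {}
    for c in candidates:
        scores[c.label] = scores.get(c.label, 0.0) + c.confidence
    best_label = max(scores, key=scores.get)
    total = sum(scores.values()) or 1.0
    return EventCandidate(best_label, scores[best_label] / total)


def mine_frame(frame: List[int],
               detectors: Dict[str, Callable[[List[int]], str]],
               weights: Dict[str, float]) -> EventCandidate:
    """Run every detector on one frame and combine their event reports."""
    candidates = []
    for name, detect in detectors.items():
        label = detect(frame)                      # each approach runs independently
        candidates.append(EventCandidate(label, weights.get(name, 0.5)))
    return combine_candidates(candidates)


if __name__ == "__main__":
    # Two toy "detectors" standing in for real image processing techniques.
    detectors = {
        "edge_based": lambda f: "object_detected" if sum(f) > 10 else "no_event",
        "colour_based": lambda f: "object_detected" if max(f) > 5 else "no_event",
    }
    weights = {"edge_based": 0.7, "colour_based": 0.4}  # hypothetical confidence levels
    print(mine_frame([1, 2, 9], detectors, weights))
```

In a full system the weights would themselves be adapted over time and the fused result checked against a simplified world model, as outlined in the abstract; the static weights here stand in for that adaptive parameterization.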

Original language: English
Pages: 585-591
Number of pages: 7
Publication status: Published - 2002
Event: Proceedings of the Artificial Neural Networks in Engineering Conference: Smart Engineering System Design - St. Louis, MO, United States
Duration: 10 Nov 2002 - 13 Nov 2002

Conference

Conference: Proceedings of the Artificial Neural Networks in Engineering Conference: Smart Engineering System Design
Country/Territory: United States
City: St. Louis, MO
Period: 10/11/02 - 13/11/02
