Practical Real-time Hyperspectral Video Stream Processing


dc.contributor.advisor: Andrews, M
dc.contributor.advisor: Watson, C
dc.contributor.author: Dunn, Robert
dc.date.accessioned: 2015-01-09T03:35:24Z
dc.date.issued: 2014
dc.identifier.citation: 2014
dc.identifier.uri: http://hdl.handle.net/2292/24047
dc.description.abstract:
Hyperspectral imaging allows quantitative evaluation of material composition and spatial distribution, and finds numerous applications in areas such as remote sensing and military reconnaissance. Hyperspectral images contain quantitative spectral information for each pixel, and a useful model is to assume that every pixel is a linear combination of so-called “pure” endmembers (an illustrative sketch of this model follows the record below). Traditionally, hyperspectral algorithms have considered only static images, and existing “real-time algorithms” process single frames without regard for sequential similarities or correlations. Recent advances suggest that hyperspectral video streaming hardware will be technically feasible in the near future, which will stimulate applications in novel areas such as real-time crime scene analysis. The difficulty in capturing and processing hyperspectral video sequences in real time can be traced directly to the high dimensionality of the data. To realise these applications and explore their potential benefits, further work on hyperspectral video algorithms is required.

This thesis focuses on hyperspectral video processing using linear mixing and geometric endmember determination, which broadly involves four steps: dimension reduction, endmember determination, abundance map generation and visualisation. An overarching aim of the work has been to exploit inter-frame redundancies to make hyperspectral video a reality. Methods to reduce the computational complexity of dimension reduction and endmember determination are presented. These methods rely on information obtained from previously captured hyperspectral frames and exploit temporal correlations inherent in the data.

The widely used convex combination of endmembers is shown to be ineffective in the presence of non-uniform illumination. An improvement to Abundance Guided Endmember Selection (AGES) is therefore proposed. This improvement, termed Shadow-corrected AGES (SAGES), uses a modified simplex that compensates for illumination variations and is shown to be more effective across a range of scenes.

The problem of visualising changing abundance maps in the video stream is addressed, and a novel method to display abundance information is presented. This method uses a reduced colour space to visualise areas of interest, intelligently updates the display to reduce viewer fatigue, and caters for colour-blind operators.

Finally, a hardware implementation using CUDA-enabled devices is considered. AGES-like algorithms are shown to be effective alternatives to existing real-time algorithms, and recommendations are made for further development of video-streaming hyperspectral imaging systems.
dc.publisher: ResearchSpace@Auckland
dc.relation.ispartof: PhD Thesis - University of Auckland
dc.rights: Items in ResearchSpace are protected by copyright, with all rights reserved, unless otherwise indicated. Previously published items are made available in accordance with the copyright policy of the publisher.
dc.rights.uri: https://researchspace.auckland.ac.nz/docs/uoa-docs/rights.htm
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/nz/
dc.title: Practical Real-time Hyperspectral Video Stream Processing
dc.type: Thesis
thesis.degree.grantor: The University of Auckland
thesis.degree.level: Doctoral
thesis.degree.name: PhD
dc.rights.holder: Copyright: The Author
dc.rights.accessrights: http://purl.org/eprint/accessRights/OpenAccess
pubs.elements-id: 472368
pubs.record-created-at-source-date: 2015-01-09
dc.identifier.wikidata: Q112905080
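
The linear mixing model referred to in the abstract treats every pixel spectrum as a weighted sum of endmember spectra. As a point of reference only, here is a minimal numpy sketch of that model and of a plain least-squares abundance estimate; the function and array names (estimate_abundances, frame, endmembers) are illustrative assumptions, and the unconstrained pseudo-inverse solver stands in for, but does not reproduce, the AGES/SAGES methods developed in the thesis.

import numpy as np

# Linear mixing model: a pixel spectrum x (B bands) is modelled as
#   x ≈ E @ a, where E is the B x P matrix of endmember spectra and
#   a holds the P abundances (ideally non-negative, summing to one).

def estimate_abundances(frame, endmembers):
    """Unconstrained least-squares abundance maps for one frame.

    frame:      (H, W, B) hyperspectral cube, B bands per pixel.
    endmembers: (B, P) matrix whose columns are endmember spectra.
    Returns a (H, W, P) array of abundance estimates.
    """
    h, w, b = frame.shape
    pixels = frame.reshape(-1, b).T                 # (B, N) spectra as columns
    coeffs = np.linalg.pinv(endmembers) @ pixels    # (P, N) least-squares fit
    return coeffs.T.reshape(h, w, -1)

# Synthetic check: 3 endmembers, 50 bands, one 64 x 64 frame.
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 3)))
true_abund = rng.dirichlet(np.ones(3), size=64 * 64)   # convex abundances
cube = (true_abund @ E.T).reshape(64, 64, 50)
print(estimate_abundances(cube, E).shape)              # (64, 64, 3)

The abstract's central idea of reusing information from previously captured frames can likewise be illustrated, without reproducing the thesis's algorithms, by keeping the previous frame's endmember set and triggering a full (expensive) re-extraction only when the new frame is no longer well explained by it. In the sketch below, extract_endmembers is a hypothetical placeholder for any geometric endmember algorithm, the fixed RMSE threshold is simply the most naive trigger, and estimate_abundances is reused from the sketch above.

def reconstruction_rmse(frame, endmembers):
    """RMS error of the linear-mixing reconstruction of one frame."""
    maps = estimate_abundances(frame, endmembers)
    recon = maps @ endmembers.T                     # (H, W, B) reconstruction
    return float(np.sqrt(np.mean((recon - frame) ** 2)))

def process_stream(frames, extract_endmembers, threshold=0.05):
    """Process a stream of frames, reusing endmembers across frames.

    frames:             iterable of (H, W, B) cubes.
    extract_endmembers: callable(frame) -> (B, P) endmember matrix
                        (hypothetical stand-in for a geometric method).
    threshold:          RMSE above which endmembers are recomputed.
    Yields (abundance_maps, endmembers) for each frame.
    """
    endmembers = None
    for frame in frames:
        if endmembers is None or reconstruction_rmse(frame, endmembers) > threshold:
            endmembers = extract_endmembers(frame)  # full geometric update
        yield estimate_abundances(frame, endmembers), endmembers

In a real system the re-extraction decision would presumably come from the abundance-based criteria the thesis develops rather than a fixed residual threshold; the point of the sketch is only to show how per-frame cost can drop when endmembers persist across frames.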

