Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

Alejandro Betancourt, Natalia Díaz Rodríguez, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, Carlo Regazzoni, Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos. Pervasive and Mobile Computing 8, 1–13, 2017.

Abstract:

Wearable cameras stand out as one of the most promising devices for the upcoming years, and as a consequence, the demand for computer algorithms that automatically understand the videos they record is increasing quickly. Automatically understanding these videos is not an easy task: their mobile nature entails important challenges, such as changing light conditions and the unrestricted variety of locations recorded. This paper proposes an unsupervised strategy, based on global features and manifold learning, to endow wearable cameras with contextual information about the light conditions and the location being captured.
Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed strategy is used as a switching mechanism to improve hand detection in egocentric videos.
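A minimal sketch of the idea behind the switching mechanism: each frame is summarised by a cheap global feature (here a colour histogram, a simplified stand-in for the paper's global features), and the frame is assigned to the nearest known illumination/location context, which could then select a context-specific hand detector. The function names, the histogram feature, and the nearest-centroid assignment are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def global_feature(frame, bins=8):
    """Global colour histogram of a frame (H x W x 3, uint8).
    A simplified stand-in for the global features used in the paper."""
    hist, _ = np.histogramdd(
        frame.reshape(-1, 3).astype(float),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)))
    hist = hist.ravel()
    return hist / hist.sum()

def nearest_context(feature, centroids):
    """Switching mechanism: pick the illumination/location context
    whose centroid lies closest to the frame's global feature."""
    distances = np.linalg.norm(centroids - feature, axis=1)
    return int(np.argmin(distances))

# Two synthetic frames standing in for dark and bright scenes.
dark = np.full((32, 32, 3), 30, dtype=np.uint8)
bright = np.full((32, 32, 3), 220, dtype=np.uint8)
centroids = np.stack([global_feature(dark), global_feature(bright)])

print(nearest_context(global_feature(dark), centroids))    # context 0
print(nearest_context(global_feature(bright), centroids))  # context 1
```

In the paper, the contexts themselves are discovered without supervision via non-linear manifold learning on such global features; the sketch above only illustrates the per-frame assignment step once contexts exist.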

BibTeX entry:

@ARTICLE{jBeDxBaMaRaRe17a,
  title = {Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos},
  author = {Betancourt, Alejandro and Díaz Rodríguez, Natalia and Barakova, Emilia and Marcenaro, Lucio and Rauterberg, Matthias and Regazzoni, Carlo},
  journal = {Pervasive and Mobile Computing},
  volume = {8},
  publisher = {Elsevier},
  pages = {1--13},
  year = {2017},
  ISSN = {1574-1192},
}

Belongs to TUCS Research Unit(s): Embedded Systems Laboratory (ESLAB)

Publication Forum rating of this publication: level 2
