With the rise of head-mounted and body-worn (digital) cameras, there is also an opportunity to automatically recognize hand movements and gestures. This is useful not only for physical therapy but can also simplify the control of digital applications. At TU Eindhoven, Alejandro Betancourt earned his PhD cum laude with research that should make this possible.

The images from cameras worn on the head or body (something like Google Glass, although that is not yet on the market) often also show the wearer's hands. Techniques already exist that recognize these hands as such, but they do not recognize what the hands are 'saying'.

To make this possible at all, many hurdles had to be overcome, such as the large changes in light intensity while walking around and the unambiguous identification of the left and right hand (not as easy as it sounds). On top of that, the computing resources of devices such as Google Glass are rather limited. To solve these problems, the PhD candidate used, among other things, self-learning ('machine learning') techniques.
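To give a feel for what such a pipeline involves, here is a minimal sketch in Python, assuming scikit-learn and matplotlib are available. It only illustrates the general idea (learning which pixels belong to a hand, made less sensitive to lighting by ignoring the brightness channel, plus a naive left/right heuristic); it is not the method from the thesis, and all function names are invented for this example.

```python
# Illustrative sketch only, NOT the thesis's actual implementation.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.ensemble import RandomForestClassifier

def illumination_robust_features(frame_rgb):
    """Per-pixel features: hue and saturation only. Changes in light
    intensity mostly move the V (brightness) channel, so dropping it
    makes the features less sensitive to walking in and out of shade."""
    hsv = rgb_to_hsv(frame_rgb.astype(np.float64) / 255.0)  # (H, W, 3)
    return hsv[..., :2].reshape(-1, 2)                      # drop V, flatten

def train_hand_classifier(frames, masks):
    """frames: list of (H, W, 3) uint8 RGB images.
    masks:  list of (H, W) boolean arrays marking hand pixels."""
    X = np.vstack([illumination_robust_features(f) for f in frames])
    y = np.concatenate([m.ravel() for m in masks])
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(X, y)
    return clf

def segment_hands(clf, frame_rgb):
    """Return a boolean hand mask for one frame."""
    X = illumination_robust_features(frame_rgb)
    return clf.predict(X).reshape(frame_rgb.shape[:2]).astype(bool)

def left_or_right(mask):
    """Naive disambiguation: compare the mean column of the hand pixels
    to the image centre. Real systems need far more than this, e.g. the
    wearer's hands can cross, which is why the problem is hard."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return "left" if xs.mean() < mask.shape[1] / 2 else "right"
```

Even this toy version hints at the compute problem mentioned above: classifying every pixel of every frame is expensive, which is why running such methods on a device like Google Glass requires careful engineering.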

The thesis is titled 'EgoHands: A Unified Framework for Hand-Based Methods in First Person Vision Videos'. A copy in PDF format can be downloaded here.