We track the jumping person's face and recognize a characteristic pattern while the person gathers momentum. This enables us to detect when the person is in the ballistic phase, i.e., not touching the ground. From several frames we estimate the ballistic trajectory and predict its highest point. As long as there is enough time, additional measurements are taken to refine the estimate. Finally, the camera is triggered at the right moment, taking the trigger delay into account.
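The section does not give implementation details, but the underlying computation is standard: during the ballistic phase the tracked height follows a parabola in time, so a quadratic least-squares fit over the available frames yields the apex time, and the shutter must be fired that much earlier to compensate for the trigger delay. The following minimal Python sketch illustrates this; the function name, the synthetic measurements, and the delay value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predict_trigger_time(timestamps, heights, trigger_delay):
    """Fit a parabola y(t) = a*t^2 + b*t + c to (time, height) samples
    from the ballistic phase and return the time at which the shutter
    should be fired so the exposure happens at the apex.
    As more frames arrive, simply call this again with the extended
    arrays to refine the estimate."""
    a, b, c = np.polyfit(timestamps, heights, 2)  # least-squares quadratic fit
    t_apex = -b / (2.0 * a)                       # vertex of the parabola
    return t_apex - trigger_delay                 # fire early to absorb the delay

# Illustrative example: noiseless samples of y(t) = -4.9 t^2 + 3.0 t + 1.5
ts = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
ys = -4.9 * ts**2 + 3.0 * ts + 1.5
t_fire = predict_trigger_time(ts, ys, trigger_delay=0.10)
# apex at t = 3.0 / 9.8 ≈ 0.306 s, so the shutter fires at ≈ 0.206 s
```

With noisy face positions, the fit averages out measurement error, which is why taking more measurements while time remains improves the apex estimate.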
Our approach is implemented as an Android app and deployed on multiple smartphones. The user launches the app, frames the shot via the displayed preview, and then simply taps a "start" button. The processing is then performed automatically and in real time on low-resolution images, and a high-resolution picture of the jump is captured. In case face detection fails, e.g., for a person not directly facing the camera, the user can tap on the smartphone screen to manually select the face or another textured part of the jumping person, which is then tracked automatically.
More than 70% of the tested sequences have a temporal error of less than 40 ms. This shows that, in practice, despite potential noise sources, our method can still estimate the triggering time accurately.