MediaDiver: Viewing and Annotating Multi-View Video
Gregor Miller, Sidney Fels, Abir Al Hajri, Michael Ilich, Zoltan Foley-Fisher, Manuel Fernandez and Daesik Jang
We present MediaDiver, a novel rich media interface demonstrating our new interaction techniques for viewing and annotating multi-view video. The demonstration allows attendees to experience novel moving-target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality-of-view analysis that switches viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content, and advanced context-sensitive transport and timeline functions. As users have become increasingly sophisticated in navigating and viewing hyper-documents, they transfer those expectations to new media. Our demonstration shows the technology required to meet these expectations for video: users can click directly on objects in the video to link to more information or other video, easily change camera views, and mark up the video with their own content. Applications of this technology range from home video management to broadcast-quality media production, consumed on both desktop and mobile platforms.

Presented at CHI in Vancouver, May 2011.

@inproceedings{Miller2011MediaDiver,
    author    = {Gregor Miller and Sidney Fels and Abir Al Hajri and Michael Ilich and Zoltan Foley-Fisher and Manuel Fernandez and Daesik Jang},
    title     = {MediaDiver: Viewing and Annotating Multi-View Video},
    booktitle = {Proceedings of the 30th Conference on Human Factors in Computing Systems Extended Abstracts},
    series    = {CHI EA '11},
    pages     = {1141--1146},
    month     = {May},
    year      = {2011},
    publisher = {ACM},
    address   = {New York, NY, U.S.A.},
    isbn      = {978-1-4503-0268-5},
    location  = {Vancouver, British Columbia, Canada},
    doi       = {},
    url       = {}
}