Workshop on Developer-Centred Computer Vision
Gregor Miller and Sidney Fels
The OpenVL team is organising the first Workshop on Developer-Centred Computer Vision.

The majority of research in computer vision focuses on technology and systems that advance the state of the art; however, very little attention is paid to making the state of the art usable by the majority of people. Recently there has been increased interest in "Vision for HCI", i.e. how we use computer vision to interact with the world. We propose a parallel theme of "HCI for Vision" for this workshop, looking at how to provide accessible computer vision targeted towards general software developers. We would like to explore ideas that take existing vision methods and present them in a manner that allows users with varying degrees of vision knowledge to use them.

There has been a relatively recent surge in the number of developer interfaces to computer vision: OpenCV has become much more popular, MathWorks has released a Computer Vision Toolbox for MATLAB, visual interfaces such as Vision-on-Tap are online and working, and interfaces with specific targets such as tracking (OpenTL) and the GPU (CUDA, OpenVIDIA) have working implementations. Additionally, in the last six months Khronos (the not-for-profit industry consortium that creates and maintains open standards) has formed a working group to discuss the creation of a computer vision hardware abstraction layer (CV HAL).

Developing methods to make computer vision accessible poses many interesting questions and will require novel approaches to these problems. This one-day workshop will bring together researchers in the fields of computer vision and HCI to discuss the state of the art and the direction of future research. There will be peer-reviewed demos and papers, with three oral presentation sessions and a poster session. We invite the submission of original, high-quality research papers and demos on accessible computer vision. Areas of interest include (but are not limited to):

  • Higher-level abstractions of vision algorithms
  • Algorithm/Task/User level API design (see the sketch after this list)
  • Automatic/interactive algorithm selection based on human input
  • Automatic/interactive task selection based on human input
  • Interpretation of user input such as descriptions, sketches, images or video
  • Case studies on developer-centred computer vision
  • Visual development environments for vision system construction
  • Evaluation of vision interfaces (e.g. through user studies)
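To make the distinction between API levels concrete, the following is a minimal sketch in Python using OpenCV (which is mentioned above) of what a task-level interface might look like. The function find_faces, its parameters and the example image name are hypothetical illustrations, not part of any existing library or of the workshop itself; the sketch assumes the opencv-python package, where cv2.data.haarcascades points at the bundled cascade files.

    import cv2

    def find_faces(image_path, min_size=30):
        """Hypothetical task-level call: 'find the faces in this image'.

        Algorithm-level details (classifier choice, scale factor,
        neighbour counts) are selected internally rather than exposed,
        which is the kind of abstraction the workshop aims to discuss.
        """
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Algorithm-level API, hidden from the caller: a Haar cascade
        # shipped with the opencv-python package.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                        minSize=(min_size, min_size))

    if __name__ == "__main__":
        # The developer states what they want, not how it should be computed.
        for (x, y, w, h) in find_faces("group_photo.jpg"):
            print("face at (%d, %d), size %dx%d" % (x, y, w, h))

A user-level interface would sit one step higher still, for example letting the developer describe the task interactively rather than call a library function at all.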