Lorentz Center - Collecting, Annotating & Analyzing Video Data from 30 Oct 2017 through 3 Nov 2017



Collecting, annotating and analyzing video data

In this workshop we want to bring together researchers from different disciplines who would all benefit from a tool that allows video recordings of social interaction to be stored, transcribed, and annotated, and that facilitates multi-disciplinary analysis of social interaction. Such a tool does not currently exist. The KNAW (Royal Netherlands Academy of Arts and Sciences) has recently placed this proposed tool on the Dutch national scientific agenda for large research facilities, as one of the facilities that should be in place in the Netherlands by 2025. The proposed tool will partially automate the transcription and annotation of video data, so that larger amounts of data can be analyzed than is currently feasible; at present, social interaction is typically transcribed manually by one or two researchers, and at the audio level only. The tool will also increase the transparency and reliability of social interaction research by supporting annotation of various ‘layers’ of social interaction, such as gestures, gaze, movement, and other behavioral aspects.
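To illustrate what such layered annotation might look like in practice (this is a hypothetical sketch, not the actual design of the proposed tool; all class and field names are assumptions), time-aligned annotations can be grouped into parallel tiers over a single recording, so that speech, gaze, and gesture can be queried together at any moment in time:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float  # seconds from start of recording
    end: float
    value: str

@dataclass
class Tier:
    name: str  # e.g. "speech", "gaze", "gesture"
    annotations: list = field(default_factory=list)

@dataclass
class Recording:
    video_file: str
    tiers: dict = field(default_factory=dict)

    def add(self, tier_name: str, start: float, end: float, value: str) -> None:
        """Add one time-aligned annotation to the named tier, creating it if needed."""
        tier = self.tiers.setdefault(tier_name, Tier(tier_name))
        tier.annotations.append(Annotation(start, end, value))

    def at(self, t: float) -> dict:
        """All annotation values active at time t, grouped per tier."""
        return {
            name: [a.value for a in tier.annotations if a.start <= t < a.end]
            for name, tier in self.tiers.items()
        }

# Hypothetical usage: three parallel tiers over one video.
rec = Recording("session01.mp4")
rec.add("speech", 0.0, 2.5, "hello there")
rec.add("gaze", 0.5, 3.0, "toward-partner")
rec.add("gesture", 1.0, 1.8, "point")
print(rec.at(1.2))
# → {'speech': ['hello there'], 'gaze': ['toward-partner'], 'gesture': ['point']}
```

A tiered structure like this mirrors how existing manual annotation tools (such as ELAN) organize data, and shows why cross-layer queries become straightforward once every layer shares a common timeline.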

In order to make this tool possible, we need to find out what kinds of questions researchers from various fields have that the tool should be able to answer in the near future. We also need to find out which automatic analysis techniques, including speech recognition and computer vision, are becoming usable and sufficiently accurate on real-world data. In this workshop we want to:

1) bring together researchers from the various disciplines to establish common ground and share state-of-the-art insights;

2) list the features needed by the various fields and methodologies represented at the workshop to make the tool useful, workable, and technically feasible;

3) generate a list of paper titles and abstracts from workshop participants that can be brought together in a special issue on the current state and future of recording, storing, transcribing, annotating, and using (digital) video-recorded data;

4) list possible small- and large-scale projects, for which we can apply for funding internationally, that will contribute to the ADVANT tool.