Automated algorithmic description (AAD) uses existing machine-vision techniques to automate specific aspects of description, such as detecting camera motion, scene changes, and faces, and reading printed text. Such events could be identified by computer routines that automatically add annotations to the video. This would allow, for example, the automated announcement of scene changes or the use of text-to-speech to read on-screen text aloud.
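One of these techniques can be sketched concretely. The following is a minimal, illustrative example of scene-change detection: successive frames are compared, and frames that differ sharply from their predecessor produce timestamped annotations. The frames here are small synthetic grayscale images, and the threshold and frame rate are illustrative assumptions, not values from this project.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two equally sized frames."""
    total = sum(abs(a - b)
                for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def detect_scene_changes(frames, fps=30.0, threshold=40.0):
    """Annotate frames that differ sharply from the previous frame.

    Returns a list of {"time": seconds, "event": ...} annotations that
    could feed an automated announcement of scene changes.
    """
    annotations = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            annotations.append({"time": i / fps, "event": "scene change"})
    return annotations

# Synthetic clip: three dark frames, then a hard cut to three bright frames.
dark = [[10] * 4 for _ in range(4)]
bright = [[200] * 4 for _ in range(4)]
clip = [dark, dark, dark, bright, bright, bright]

for note in detect_scene_changes(clip):
    print(f"{note['time']:.2f}s: {note['event']}")  # one cut, at frame 3
```

A production system would of course operate on real decoded video and use more robust cues (histograms, motion vectors), but the output shape, a list of timestamped event tags, is what matters for description.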
Preliminary work in this project involves identifying the types of information most easily extracted from video using these techniques, as well as learning, through focus groups and user feedback, how best to present that information. Despite the speed of modern computers, the visual processing for AAD would likely be done in advance, with tagged information stored in a separate descriptive stream. The DVX server is an ideal repository for the storage and retrieval of descriptive tags, and it will be used for demonstration and evaluation of AAD techniques.
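The separate descriptive stream amounts to a time-indexed store of tags that can be queried during playback. The sketch below illustrates that idea with a generic in-memory store; the actual DVX server interface is not specified in this document, so the class and method names here are hypothetical stand-ins.

```python
import bisect

class DescriptiveStream:
    """Time-indexed store of description tags, kept separate from the video.

    A simplified stand-in for a tag repository such as the DVX server;
    the real server's API is not described here.
    """

    def __init__(self):
        self._times = []  # sorted timestamps, in seconds
        self._tags = []   # tag text, parallel to self._times

    def add(self, time, text):
        """Insert a tag produced in advance by the AAD processing pass."""
        i = bisect.bisect(self._times, time)
        self._times.insert(i, time)
        self._tags.insert(i, text)

    def between(self, start, end):
        """Return (time, text) tags with start <= time < end, in order."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_left(self._times, end)
        return list(zip(self._times[lo:hi], self._tags[lo:hi]))

stream = DescriptiveStream()
stream.add(12.5, "scene change: interior, kitchen")
stream.add(3.0, "on-screen text: 'Chapter One'")
print(stream.between(0.0, 10.0))  # tags for the first ten seconds only
```

Keeping the tags in their own stream means the video itself is untouched: a player fetches the tags for the segment being shown and renders them as speech or braille as the user prefers.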