The opening of the Multimedia Studio and the growing discussion of podcasting and other rich media are raising numerous questions about our future ability to index, search, and recombine those files the way we can remix text today. That capability is still some time off, but there are some interesting experiments that, though crude, are helping to prove that we'll get there.
One interesting experimental project is at Berkeley, where closed captioning is used to link textual searches to multiple webcast lectures. (The beta server seems to be down as I'm writing this, but it worked well when I tried it earlier.) I've been told that preparing the caption file for a 55-minute lecture takes about 10 minutes.
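The underlying idea is simple: each caption carries a timestamp, so a text search over the captions can return jump-to points inside the video. As a minimal sketch (assuming captions in the common SRT format; the Berkeley project's actual caption format and search implementation are not described here):

```python
import re

def parse_srt(srt_text):
    """Parse SRT-style captions into (start_time, text) pairs."""
    cues = []
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.strip().splitlines()
        if len(lines) < 3:
            continue
        # lines[0] is the cue index; lines[1] looks like
        # "00:00:05,000 --> 00:00:09,000"; the rest is caption text
        start = lines[1].split(" --> ")[0]
        cues.append((start, " ".join(lines[2:])))
    return cues

def search_captions(cues, query):
    """Return start timestamps of cues containing the query (case-insensitive)."""
    q = query.lower()
    return [start for start, text in cues if q in text.lower()]

# Hypothetical caption fragment for illustration
sample = """1
00:00:05,000 --> 00:00:09,000
Today we discuss information retrieval.

2
00:01:12,000 --> 00:01:18,000
Closed captioning makes lecture video searchable.
"""

cues = parse_srt(sample)
print(search_captions(cues, "captioning"))  # -> ['00:01:12,000']
```

A real system would index captions from many lectures and map each hit back to a seek position in the webcast player, but the caption file is what makes the search possible at all.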
Automatic Sync Technologies, which is listed as one of Berkeley's partners in this project, raises an interesting issue for those of us who are thinking of jumping into course casting in a big way:
The ADA and Section 508 require captioning for most broadcast and distributed video content. Webcasts fall under Section 508 and are subject to these captioning requirements. If you are making a webcast publicly available, then it should be captioned.
If we have to prepare the text files for closed captioning anyway, then at least primitive searching would appear to be possible fairly soon.