Associate Professor of Film and Media Studies Mark Williams and John Bell, lead applications developer in Dartmouth’s Academic Commons, have been awarded a two-year research and development grant from the National Endowment for the Humanities to build a cross-platform tool to help scholars study historically important films and television programs being preserved and digitized in archives around the world.
Williams is the director of the College’s Media Ecology Project (MEP), whose mission is to make rare and endangered media digitally accessible to scholars. He and Bell, the digital architect of MEP, are collaborating with the University of Maine’s Virtual Environmental and Multimodal Interaction (VEMI) Lab to build a semantic annotation tool (SAT). The tool is part of a suite of complementary, open-source research applications MEP is developing to let scholars create, annotate, save, and share time-based clips of historical media, anything from a few key seconds of footage to an entire film.
Williams says these searchable annotations, which can include rich, user-generated information, will make it possible to “ask new questions that you wouldn’t have been in a position to ask before.”
“The great thing is we both see this as an interdisciplinary project—it’s not a film project that needs tech support or a tech project that’s searching for content to justify its existence,” says Bell. “It’s interdisciplinary work that draws ideas from many fields to create something new.”
The films themselves will remain in their respective archives but be available for streaming, while the annotations will live on MEP’s servers. When users install SAT as a plugin for their media players, they will be able to access the clips and notes other users have generated about whatever they are viewing.
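This split, with films streamed from their home archives and scholars’ notes stored centrally, suggests annotations that point at a remote video by URL and mark a time span within it. The sketch below is a minimal illustration of that idea, loosely modeled on the W3C Web Annotation Data Model; the field names, URL, and author shown here are illustrative assumptions, not MEP’s actual schema.

```python
def make_clip_annotation(video_url, start_sec, end_sec, note, author):
    """Describe a time-based clip of a remotely streamed film plus a scholar's note."""
    return {
        "type": "Annotation",
        # The film stays in its archive; the target points at it by URL,
        # with a media-fragment selector marking the clip's time span.
        "target": {
            "source": video_url,
            "selector": {
                "type": "FragmentSelector",
                "value": f"t={start_sec},{end_sec}",
            },
        },
        # The note itself would live on MEP's servers alongside the target.
        "body": {"type": "TextualBody", "value": note},
        "creator": author,
    }

# Hypothetical example: a five-second clip of an early Biograph film.
clip = make_clip_annotation(
    "https://archive.example.org/biograph/film_1909.mp4",
    12.0, 17.5,
    "Florence Lawrence enters frame left; note the uncredited billing.",
    "scholar@example.edu",
)
```

Because such an annotation carries only a pointer to the film, many scholars can layer searchable notes on the same footage without the archive ever surrendering the file.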
These notes will help researchers search digitized films across many archives. MEP has pilot projects with film and television archives nationally and internationally. Among these are collaborations with the Library of Congress Paper Print Collection, a repository of thousands of paper versions of some of the earliest films ever made, which Williams calls the Rosetta Stone of film history; UCLA’s Film and Television Archive to preserve footage from the historic public television show In the Life; the Films Division of India, which is preserving state-sponsored films produced in India since 1947; and several other partnerships in various stages of development, among them WGBH in Boston, the American Archive of Public Broadcasting, the University of South Carolina, the University of Georgia, and archives in Italy, Sweden, Brazil, and the Netherlands.
As an example of the kind of research SAT might facilitate, Williams cites what he calls the mythology of how Hollywood’s star system formed. Researchers would be able to track the rise of early stars such as Florence Lawrence and Mary Pickford, whose careers began before film actors were routinely given named credits.
“Through the Paper Print Collection we will have access to an extraordinary number of Biograph films from that period,” he says, referring to the American Mutoscope and Biograph motion picture company, which produced films between 1895 and 1928.
“We can look for all of the scenes that feature Florence Lawrence and Mary Pickford and Marion Leonard and all of the people that we don’t necessarily recognize today but who were significant feature performers in Biograph films, and start to understand that mythology in a granular way,” he says. “We could create a research collection that includes films at MoMA or the Motion Picture Academy that the Library of Congress doesn’t have. By creating this capacity for search and playback across collections, you vastly add value both to scholarship and to the collections themselves.”
Another potential application of SAT: making visual media more accessible to people with limited sight. The VEMI Lab, with which Williams and Bell are partnering, specializes in studying and designing adaptive technology interfaces that assist visually impaired people with information access. The long-term goal is to combine SAT with machine-vision tools that allow computers to “see” images, generating annotations that can describe scenes to visually impaired users.
“As closed captioning assists hearing-impaired television watchers, video annotations that are presented in a multimodal interface can also assist users who are visually impaired,” Williams says.
“A machine-vision program could scan a video, grab all of the places where it sees a particular face, or more generic objects in the scene, and then have an annotation that says this person is on camera right now, or there’s an airplane in the background,” says Bell. “The annotation tool itself is a foundational layer that will let us get to this.”
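The pipeline Bell describes can be sketched in a few lines: scan a video’s frames, and whenever a detector recognizes something (a face, an airplane), emit a time-coded annotation. In this sketch `detect_objects` is a stand-in stub, not a real recognizer; an actual system would call a computer-vision library such as OpenCV at that step.

```python
def detect_objects(frame):
    """Stub detector: returns the labels 'seen' in a frame.
    Here each fake frame is simply its own list of labels; a real
    implementation would run face/object recognition on pixel data."""
    return frame

def annotate_video(frames, fps=24):
    """Walk the frames and emit one time-coded annotation per detection."""
    annotations = []
    for i, frame in enumerate(frames):
        timestamp = i / fps  # frame index -> seconds into the video
        for label in detect_objects(frame):
            annotations.append(
                {"time": timestamp, "note": f"{label} is on screen"}
            )
    return annotations

# Fake footage at 1 frame per second: labels per frame, as a detector
# might return them.
frames = [["Mary Pickford"], [], ["Mary Pickford", "airplane"]]
notes = annotate_video(frames, fps=1)
```

The annotations that fall out of this loop are exactly the kind of time-based notes SAT is built to store, which is why Bell calls the annotation tool the foundational layer.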
“You can imagine 21st-century capacities for scholarship and access that we never could have dreamed about in the analog era,” Williams says. “It’s very exciting.”