London, Nov 16 (IANS) Using artificial intelligence techniques, a team of researchers has designed a system that can automatically learn the association between images and the sounds they could possibly make.
Given a picture of a car, for instance, a new system by researchers at Disney Research and ETH Zurich in Switzerland can automatically return the sound of a car engine.
"A system that knows the sound of a car, a splintering dish or a slamming door might be used in a number of applications, such as adding sound effects to films, or giving audio feedback to people with visual disabilities," Jean-Charles Bazin, associate research scientist at Disney Research, said in a statement.
To solve this challenging task, the research team leveraged data from collections of videos.
"Videos with audio tracks provide us with a natural way to learn correlations between sounds and images. Video cameras equipped with microphones capture synchronised audio and visual information. In principle, every video frame is a possible training example," Bazin added.
One of the key challenges is that videos often contain many sounds that have nothing to do with the visual content.
According to Markus Gross, Vice President for Disney Research, sounds associated with a video image can be highly ambiguous.
"By figuring out a way to filter out these extraneous sounds, our research team has taken a big step toward an array of new applications for computer vision," Gross said.
"If we have a video collection of cars, the videos that contain actual car engine sounds will have audio features that recur across multiple videos," Bazin said, adding, "on the other hand, the uncorrelated sounds that some videos might contain generally won't share any redundant features with other videos, and thus can be filtered out."
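The filtering idea Bazin describes can be illustrated with a minimal sketch. The feature vectors, distance metric, and threshold below are all illustrative assumptions, not the researchers' actual method: the sketch simply keeps videos whose audio features have a near neighbor elsewhere in the collection and drops the rest as uncorrelated noise.

```python
import numpy as np

# Hypothetical audio feature vectors, one per video in a "car" collection.
# In practice these might be spectral statistics; the values here are made up.
features = np.array([
    [1.0, 0.9, 1.1],   # engine sound
    [1.1, 1.0, 0.9],   # engine sound
    [0.9, 1.1, 1.0],   # engine sound
    [5.0, 0.1, 7.3],   # unrelated sound (e.g., background speech)
])

def filter_redundant(feats, threshold=1.0):
    """Keep indices of videos whose audio features recur (i.e., have a
    near neighbor in at least one other video); drop the rest."""
    kept = []
    for i, f in enumerate(feats):
        dists = [np.linalg.norm(f - g) for j, g in enumerate(feats) if j != i]
        if min(dists) < threshold:
            kept.append(i)
    return kept

print(filter_redundant(features))  # → [0, 1, 2]: the outlier video is filtered out
```

The three engine-sound vectors lie close together, so each has a neighbor within the threshold, while the unrelated sound shares no redundant features with any other video and is removed.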
The results of the research were recently presented at a European Conference on Computer Vision (ECCV) workshop in Amsterdam.