These AI-Synthesized Sound Effects Are Realistic Enough to Fool Humans

Films are meant to be immersive experiences, designed to draw viewers in with engaging plotlines and dazzling special effects. While some sounds are recorded during filming, movies also rely on convincing sound effects, often created in post-production by someone known as a Foley artist, to fill in all-important background noises such as footsteps, rustling leaves or falling raindrops and give a scene its sense of reality. Not surprisingly, creating and integrating these sound effects is a time-consuming and costly part of film production.
Now, new work from a University of Texas at San Antonio research team shows that the Foley process can be automated using artificial intelligence that analyzes the motion in a given video and then generates matching synthetic sound effects.
A ‘Deep Sound Synthesis Network’
Dubbed AutoFoley, the team’s system uses deep learning AI to create what they call a “deep sound synthesis network,” which can analyze, categorize and recognize what kind of action is happening in a video frame, and then produce the appropriate sound effect to enhance video that may or may not already have some sound.
“Unlike existing sound prediction and generation architectures, our algorithm is capable of precise recognition of actions as well as inter-frame relations in fast-moving video clips,” explained the researchers in their paper, which was recently published in IEEE Transactions on Multimedia.
To achieve this, the AutoFoley system first identifies the actions in a video clip, then selects a suitable sound from a customized database that matches the action, and finally works to ensure that the sound lines up with the timing of the movements in each video frame. The first part of the system analyzes the association between movement and timing in the video frames by extracting features such as color, using a multiscale recurrent neural network (RNN) combined with a convolutional neural network (CNN). For faster-moving actions, where visual information may be missing between consecutive frames, an interpolation technique based on CNNs and a temporal relational network (TRN) fills in those gaps and links the frames smoothly, so that the predicted sound can still be timed accurately to the action.

Diagram of the architecture of AutoFoley, showing the stages of sound prediction and sound generation.
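To give a rough sense of how the recognition stage works, here is a minimal sketch of a CNN-plus-RNN action classifier of the kind described above. It is not the authors' architecture: the layer sizes, the 12-class output, and the module names are illustrative assumptions, and PyTorch is used purely for convenience.

```python
import torch
import torch.nn as nn

class FrameActionClassifier(nn.Module):
    """Toy CNN + RNN pipeline: a small CNN extracts per-frame visual features,
    an LSTM aggregates them over time, and a linear head predicts one of
    N sound classes (e.g. rainfall, crackling fire, hoofbeats, typing)."""
    def __init__(self, num_classes=12, feat_dim=128):
        super().__init__()
        # Per-frame feature extractor (stand-in for the paper's CNN).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over frame features (stand-in for the multiscale RNN).
        self.rnn = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])  # classify from the final time step

# Example: classify a batch of two 16-frame, 64x64 RGB clips.
clips = torch.randn(2, 16, 3, 64, 64)
logits = FrameActionClassifier()(clips)
print(logits.shape)  # torch.Size([2, 12])
```

A production system would also need the interpolation step described above to handle fast motion between frames; this sketch only covers the basic frame-to-class mapping.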
Next, AutoFoley synthesizes a sound to correspond with the action identified from the video in the previous steps. To aid in its training, the team curated their own database of common sound effects, categorized in different “sound classes” that included things like rainfall, crackling fire, galloping horses, breaking objects, and typing.
“Our interest is to enable our Foley generation network to be trained with the exact natural sound produced in a particular movie scene,” said the researchers. “To do so, we need to train the system explicitly with the specific categories of audio-visual scenes that are closely related to manually generated Foley tracks for silent movie clips.”
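AutoFoley generates the audio itself with a neural synthesis network, but the underlying idea of pairing a predicted sound class with audio of the right length can be illustrated with a much simpler sketch. The class names, sample rate, and random stand-in waveforms below are assumptions for illustration only; a real pipeline would draw on recorded Foley sounds and the paper's synthesis model.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; illustrative choice

# Hypothetical per-class sound library: each class maps to a raw waveform.
# Real systems would load curated recordings; here we fake them with noise.
sound_library = {
    "rainfall": np.random.randn(5 * SAMPLE_RATE).astype(np.float32),
    "horse": np.random.randn(3 * SAMPLE_RATE).astype(np.float32),
    "typing": np.random.randn(2 * SAMPLE_RATE).astype(np.float32),
}

def foley_track(predicted_class: str, clip_seconds: float) -> np.ndarray:
    """Loop or trim a library sound so its length matches the video clip."""
    source = sound_library[predicted_class]
    needed = int(clip_seconds * SAMPLE_RATE)
    repeats = -(-needed // len(source))  # ceiling division
    return np.tile(source, repeats)[:needed]

track = foley_track("rainfall", clip_seconds=4.8)
print(track.shape)  # (76800,)
```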
Some of the sounds in the database were created by the team, while others were culled from online videos. All told, the researchers' Automatic Foley Dataset (AFD) contains sounds from 1,000 videos spanning 12 classes, with each video averaging about five seconds in length. As seen and heard below, the resulting AI-synthesized audio, applied to sample video clips, sounds pretty realistic.
https://youtu.be/uTSff5p-v1M
https://youtu.be/QZGqLlsNArg
https://youtu.be/c--LhOG8TRc
To test how convincing the results were, the research team presented the finalized videos with the AI-generated sound effects to 57 volunteers. Surprisingly, 73% of participants believed that the synthesized AutoFoley sounds were actually the original soundtracks — a significant improvement over comparable methods that also generate sound from visual inputs.
To improve their model, the researchers now plan to expand their training dataset to include a wider variety of realistic-sounding audio clips, in addition to further optimizing time synchronization. The team also aims to boost the system's computational efficiency so that it can process and generate sound effects in real time. With AI now able to generate rather convincing pieces of music, literature, informational texts, and even faked videos of politicians or famous works of art that are almost indistinguishable from the real thing, it was only a matter of time before machines fooled humans with artificially created sounds as well.
Read more in the team’s paper.
Images: Eduardo Santos Gonzaga via Pixabay; University of Texas at San Antonio