Earlier this year, researchers from two universities and Google published a new AI-powered technique called “Depth-Aware Video Frame Interpolation,” or DAIN, and it’s simply mind-blowing. The tech can interpolate a 30fps video all the way up to 120fps or even 480fps with almost no visible artifacts.
The team behind this breakthrough was led by Wenbo Bao from Shanghai Jiao Tong University, and included computer scientists from the University of California, Merced, and Google. Together, they used deep convolutional neural networks to significantly improve the quality and capability of video frame interpolation, to the point where you’d be hard-pressed to spot any artifacts.
You can see the technology at work in the stop motion video up top, which has been up-framed from 15fps to 60fps without any visible artifacts whatsoever.
For a more extreme example, check out the video below. The original footage (left) is just 30fps. Using DAIN, it’s been transformed to 120fps (middle) and even 480fps (right), taking normal footage and creating super-slow-motion shots using nothing more than AI to generate the intervening frames de novo.
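To get a sense of how much work the network is doing here, it helps to count the frames being invented. Going from 30fps to 480fps means synthesizing 15 brand-new frames between every pair of original frames, each at a fractional timestamp t between 0 and 1. This tiny sketch (our own illustration, not code from the paper) computes those timestamps for an integer upsampling factor:

```python
def intermediate_timestamps(src_fps: int, dst_fps: int) -> list[float]:
    """Timestamps t in (0, 1) at which new frames must be synthesized
    between each consecutive pair of source frames, assuming dst_fps
    is an integer multiple of src_fps."""
    factor = dst_fps // src_fps
    return [i / factor for i in range(1, factor)]

# 30fps -> 120fps: 3 new frames per original pair
print(intermediate_timestamps(30, 120))  # [0.25, 0.5, 0.75]
# 30fps -> 480fps: 15 new frames per original pair
print(len(intermediate_timestamps(30, 480)))  # 15
```

Every one of those in-between frames is hallucinated by the network from the two real frames around it, which is why artifact-free 480fps output is so impressive.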
The method works by using a “depth-aware flow projection layer,” which estimates both a depth map and optical flow for the video as it decides how to create the intervening frames. This allows the algorithm to more accurately predict the motion of objects based on where they sit in the scene, and to handle occlusions more gracefully as well.
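The core idea behind that layer can be sketched in a few lines: when optical flow vectors from several source pixels project onto the same location in the intermediate frame, the pixel that is closer to the camera (smaller depth) should win, because it occludes the others. Below is a heavily simplified NumPy illustration of inverse-depth-weighted flow projection; it is our own sketch under stated assumptions (nearest-neighbor splatting, a single forward flow field, hypothetical function name), not the authors’ implementation:

```python
import numpy as np

def depth_aware_flow_projection(flow_0to1, depth0, t=0.5):
    """Project the flow field F_{0->1} to intermediate time t,
    weighting each contribution by inverse depth so that closer
    (occluding) pixels dominate where projections collide.
    flow_0to1: (H, W, 2) array of (x, y) flow vectors
    depth0:    (H, W) depth map for frame 0 (smaller = closer)
    Returns an (H, W, 2) estimate of the flow from time t back to frame 0."""
    H, W, _ = flow_0to1.shape
    num = np.zeros((H, W, 2))   # weighted sum of projected flows
    den = np.zeros((H, W))      # sum of weights
    inv_depth = 1.0 / depth0    # closer pixels get larger weight
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Each source pixel lands at x + t * flow at time t (nearest-neighbor)
    tx = np.clip(np.round(xs + t * flow_0to1[..., 0]).astype(int), 0, W - 1)
    ty = np.clip(np.round(ys + t * flow_0to1[..., 1]).astype(int), 0, H - 1)
    for y in range(H):
        for x in range(W):
            w = inv_depth[y, x]
            # Flow from time t back to frame 0 is approximately -t * F_{0->1}
            num[ty[y, x], tx[y, x]] += w * (-t) * flow_0to1[y, x]
            den[ty[y, x], tx[y, x]] += w
    return np.where(den[..., None] > 0,
                    num / np.maximum(den, 1e-8)[..., None],
                    0.0)
```

With the projected flow in hand, the network warps both neighboring frames toward time t and blends them, which is how the intervening frames get synthesized.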
The result, as the researchers put it, “performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.”
Here’s one more sample video posted by Bao himself, where DAIN is compared to other state-of-the-art frame interpolation methods when converting a video from 12fps to 24fps and 48fps. This video includes a combination of camera motion, a fast-moving object, and slower-moving objects as well:
Sure, if you watch closely enough you may see the occasional artifact or spot an imperfection in the interpolation, but they’re shockingly rare, even when the frame rate is being tripled, quadrupled, or more.
Admittedly, this paper was published at the very beginning of 2020, and we’ve actually already shared samples that took advantage of this technique to add frames to classic footage—see here, here, and here. But we’ve never dived into the technique itself or shown the results that are possible when you really crank this up to create super-slow motion video.
Check out the samples above to learn more about exactly how this method works and see the results for yourself. And if you want to dive even deeper, you can read the full research paper or download the latest DAIN build and try it out from this link.