Google unveils feature that lets users swap backgrounds on YouTube
Google has rolled out a new feature that lets users swap backgrounds on YouTube without requiring a green screen. The feature uses artificial intelligence and machine learning to replace the background of a video with pre-specified effects, much as one would apply an Instagram filter to a photo. The feature, only applicable to YouTube Stories, is currently in the beta-testing stage.
Google trained the software using a neural network architecture
Google calls the pro-level video-editing technique 'mobile real-time video segmentation.' The new tool replaces the background in videos, just as a green screen does, through a convolutional neural network architecture. For this, the company trained the software on thousands of labeled images, helping it pick out common features in them, such as heads and shoulders.
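To illustrate the idea, here is a minimal sketch of how a convolutional layer can turn an image into a per-pixel foreground/background mask. This is a toy with a single hand-set kernel, not Google's trained network; the function names and threshold are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image: the core
    operation of a convolutional neural network."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def segment(image, kernel, threshold=0.5):
    """Toy segmentation: convolve, squash the scores with a sigmoid,
    and threshold into a binary foreground (1) / background (0) mask.
    A real network stacks many learned layers instead of one kernel."""
    logits = conv2d(image, kernel)
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > threshold).astype(np.uint8)
```

In the real system, the kernel weights are learned from the thousands of labeled images mentioned above, so the network responds to features like heads and shoulders rather than raw brightness.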
Google will use this technology in augmented reality services
"Our immediate goal is to use the limited roll out to test our technology on this first set of effects. As we improve and expand our segmentation technology to more labels, we plan to integrate it into Google's broader Augmented Reality services," the company said.
The new tool uses machine learning to become faster, smarter
Further, the video segmentation tool uses a series of optimization techniques to lower the amount of data it needs to crunch to demarcate the foreground from the background in a video. It also reuses the previous frame's calculation (say, the cut-out of your head) as raw material for the next frame, reducing its load and making the software faster.
The segmentation engine is fast enough for video use
As a result, the feature runs a relatively accurate segmentation engine fast enough to be used on video: it can compute 40 frames per second on the Pixel 2 smartphone and over 100 frames per second on the iPhone 7.