YouTube uses AI to replace video background without green screen

The company is relying on AI and machine learning to replace the background in videos without the need for a green screen, and the interface looks as easy as slapping a Snapchat or Instagram filter onto a video. If you're familiar with the green screen technique that lets creators insert whatever they want into the background, this feature should work the same way, in real time.

A few months ago, it was revealed that YouTube was testing a feature similar to Instagram Stories with select content creators. The technology has been integrated into Stories, a lightweight video format now in limited beta for YouTube creators, though there are plans to bring it to more of Google's augmented reality services in the future. The Pixel 2 already sports an AI-based portrait mode for still images.

Recognising the background of a still image is already hard, and doing it for every frame of a video is harder still. Once the system has identified the foreground, it separates that data from the background layer.

Google offered more details on the new tech on the company's Research Blog.

According to a report on TechCrunch, changing the background of a video or image requires the device to recognise the foreground and background of the image or video.
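
The article does not describe the model itself, but once a per-pixel foreground mask exists, the compositing step it refers to is simple alpha blending. The sketch below is a minimal illustration in Python and NumPy; the `mask` argument stands in for the output of whatever segmentation model produces it, and the function name and array shapes are assumptions, not YouTube's implementation.

```python
import numpy as np

def replace_background(frame, background, mask):
    """Composite the foreground of `frame` over `background`.

    `frame` and `background` are HxWx3 uint8 images of the same size.
    `mask` is an HxW float array in [0, 1] where 1.0 marks foreground
    (the person) and 0.0 marks background. Producing this mask is the
    hard part the article describes and is not shown here.
    """
    alpha = mask[..., np.newaxis]  # HxWx1 so it broadcasts over the RGB channels
    composite = (alpha * frame.astype(np.float32)
                 + (1.0 - alpha) * background.astype(np.float32))
    return composite.astype(np.uint8)
```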

Separating the background is impressive in itself, but Google took it further by making the program run on the limited hardware of a smartphone rather than a desktop computer.
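
For video, that separation has to happen on every frame fast enough to keep up with the camera. The loop below is a rough sketch of what such a pipeline looks like, assuming a hypothetical `segment_foreground()` model call and the `replace_background()` helper from the earlier sketch; blending each mask with the previous one is a generic trick to keep the cut-out from flickering, not a description of Google's actual method.

```python
import cv2

def run_live(segment_foreground, background, smoothing=0.6):
    """Segment and composite each camera frame in real time.

    `segment_foreground(frame) -> HxW float mask` is a stand-in for a
    lightweight on-device segmentation model. The previous frame's mask
    is blended with the new one so the result stays stable over time.
    """
    cap = cv2.VideoCapture(0)  # default camera
    prev_mask = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        bg = cv2.resize(background, (frame.shape[1], frame.shape[0]))
        mask = segment_foreground(frame)
        if prev_mask is not None:
            mask = smoothing * mask + (1.0 - smoothing) * prev_mask
        prev_mask = mask
        cv2.imshow("preview", replace_background(frame, bg, mask))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```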

Google quietly made the ability to download YouTube videos for offline viewing available in more countries than expected. Now it looks like Google is working to add an advanced video-editing tool that will allow Reel users to quickly change the background of their clips.

Google's new tech, however, does not mean the end of the green screen is anywhere near. In some cases, edges are filled with ugly halos and other anomalies, notes TechTimes. The editing tool is already available for smartphones, but so far only in beta and for a small group of users. Google says it wanted the app to be easy and fast, so it initially set itself the task of getting the feature to work 20-30 times faster than existing photo segmentation models.
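
The halos TechTimes mentions typically appear where a hard-edged mask cuts straight through hair or other soft boundaries. One common mitigation, shown below purely as an illustrative sketch and not something the article attributes to Google, is to feather the mask before compositing so foreground and background blend over a few pixels instead of switching abruptly.

```python
import cv2
import numpy as np

def feather_mask(mask, blur_px=7):
    """Soften hard mask edges to reduce halo artifacts.

    `mask` is an HxW float array in [0, 1]. A small Gaussian blur on the
    mask lets the composite fade between layers at the boundary, which
    hides thin halos around hair and edges. `blur_px` must be odd.
    """
    soft = cv2.GaussianBlur(mask.astype(np.float32), (blur_px, blur_px), 0)
    return np.clip(soft, 0.0, 1.0)
```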
