How TikTok dances trained an AI to see

And remember the Mannequin Challenge? Yep, they used that too.

The quest for computer vision requires lots of data, including real-world images. But those can be hard to find, which has led researchers to look in some pretty creative places.

The above video shows how researchers used TikTok dances and the Mannequin Challenge to train AI. The quest is for “ground truth”: real-world examples that can be used to train an AI or grade it on its guesses. TikTok datasets provide this by showing lots of movement, clothing types, backgrounds, and people. That diversity is key to training a model that can handle the randomness of the real world.
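
As a rough illustration (not the researchers' actual code), here is a minimal sketch of what “grading an AI on its guesses” can look like: compare the model's predicted depth map for a frame against the ground-truth depth map and measure the error. The arrays, values, and error metric below are hypothetical stand-ins.

```python
import numpy as np

def depth_error(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """Mean absolute per-pixel depth error; lower means a better guess."""
    return float(np.mean(np.abs(predicted - ground_truth)))

# Toy example: a 4x4 "frame" where the model's guess is close but not exact.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(1.0, 5.0, size=(4, 4))            # stand-in for real depth data
predicted = ground_truth + rng.normal(0.0, 0.1, size=(4, 4))  # the model's noisy guess

print(f"depth error: {depth_error(predicted, ground_truth):.3f}")
```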

The same thing happened with the Mannequin Challenge: all those people pretending to stand still gave researchers, and their models, more real-world data to train with than they ever could have hoped for.

Watch the above video to learn more.

Further Reading:

Here are the original project pages for each researcher featured in the video:

TikTok-aided depth estimation: https://www.yasamin.page/hdnet_tiktok
Mannequin Challenge: https://google.github.io/mannequincha...
Geofill and Reference-Based Inpainting: https://paperswithcode.com/paper/geof...
Virtual Correspondence: https://virtual-correspondence.github...
DensePose: http://densepose.org/

To make sure you never miss behind-the-scenes content, sign up for the Vox Video newsletter here: http://vox.com/video-newsletter
