February 4, 2025 12:44 PM
Credit: ByteDance / OmniHuman
ByteDance researchers have developed an artificial intelligence system that transforms single photographs into realistic videos of people speaking, singing and moving naturally — a breakthrough that could reshape digital entertainment and communications.
The new system, called OmniHuman, generates full-body videos showing people gesturing and moving in ways that match their speech, surpassing previous AI models that could only animate faces or upper bodies.
How OmniHuman Uses 18,700 Hours of Training Data to Create Realistic Motion
“End-to-end human animation has undergone notable advancements in recent years. However, existing methods still struggle to scale up as large general video generation models, limiting their potential in real applications,” the researchers wrote in a paper published on arXiv.
The team trained OmniHuman on more than 18,700 hours of human video data using a novel approach that combines multiple types of inputs — text, audio, and body movements. This “omni-conditions” training strategy allows the AI to learn from much larger and more diverse datasets than previous methods.
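The practical payoff of that strategy is data efficiency: a model that can train on clips carrying any subset of conditioning signals can use far more of a raw video corpus than one that demands a single fixed input type. The sketch below illustrates this idea in miniature; the clip annotations, function names, and selection rule are illustrative assumptions, not ByteDance's actual implementation.

```python
# Hypothetical sketch of why mixed-condition training scales:
# each clip may carry any subset of conditioning signals (text, audio, pose),
# so clips unusable for a single-condition model still contribute data.

def usable_for(clip_conditions, required):
    """A clip is usable if it carries every condition the model requires."""
    return all(m in clip_conditions for m in required)

def select_training_clips(dataset, required):
    """Keep only clips that provide all required conditioning signals."""
    return [clip for clip in dataset if usable_for(clip["conditions"], required)]

# Toy dataset: clips annotated with whichever conditions they happen to have.
dataset = [
    {"id": 0, "conditions": {"text"}},
    {"id": 1, "conditions": {"text", "audio"}},
    {"id": 2, "conditions": {"text", "audio", "pose"}},
    {"id": 3, "conditions": {"audio"}},
]

# A model that accepts audio alone can train on 3 of the 4 clips ...
audio_only = select_training_clips(dataset, ["audio"])
# ... while one demanding text, audio, AND pose can use only 1.
strict = select_training_clips(dataset, ["text", "audio", "pose"])
print(len(audio_only), len(strict))  # prints "3 1"
```

In this toy example, relaxing the conditioning requirement triples the usable training set, which mirrors the paper's argument that an "omni-conditions" setup lets the model scale to larger, more heterogeneous corpora.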