“I bring characters to life with computer brains.”

Sebastian Starke, A.I. Scientist
Rise of the machines

New EA tech will make future video games come to life like never before

Video game animation is getting more scientific: machine learning is being used to animate in-game movement more convincingly. The technique could 1-up game development.

Some of Sebastian Starke’s fondest childhood memories are swinging from doorways, scaling walls, and brandishing a sword.

Starke first got hooked on video games when he was just five or six, living out those adventures through Prince of Persia.

He tells Inverse he fell headfirst into a lifelong passion for gaming that has followed him to his job as an A.I. Scientist at Electronic Arts (EA).

“I bring characters to life with computer brains,” Starke said.

In Starke’s world, motion capture is king, but the technology he’s developing could bring massive change to how video games are created.

How it works now — To bring a video game’s characters to life, actors dress up in skintight motion capture suits covered in sensors and tediously play out cutscenes and perfect roundhouse kicks. Capture may seem like an early step in making a game, but every punch, friendly gesture, and hug has been organized and tagged by developers much earlier.

It’s tedious work, and it’s becoming less and less feasible: as motion capture technology gains fidelity, file sizes grow. And collecting every possible combination of movements with motion capture would be an impossible task, producing a video game that would dwarf current titles (EA’s FIFA is around 50 GB; Rockstar’s Red Dead Redemption 2 is an epic 150 GB).

“We don’t want to go into the motion capture lab and capture exponential variations of what you could do with the lower body, like walking or running, while doing other certain actions with the upper body,” Starke explains to Inverse.

The more unique moves you try to model, the harder it becomes to pre-program all of their combinations by hand with mocap, as the sketch below shows.
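
A quick back-of-the-envelope sketch shows why those combinations get out of hand. The counts below are hypothetical, chosen purely for illustration:

```python
# Hypothetical action counts, for illustration only.
# Capturing every full-body combination scales multiplicatively,
# while capturing body parts separately scales additively.
lower_body_motions = 10   # walk, run, crouch, strafe, ...
upper_body_actions = 20   # punch, wave, aim, open a door, ...

every_combination = lower_body_motions * upper_body_actions    # 200 sessions
parts_layered_later = lower_body_motions + upper_body_actions  # 30 sessions

print(f"Capture every pairing by hand: {every_combination} sessions")
print(f"Capture parts, layer them later: {parts_layered_later} sessions")
```

Add a third axis, such as carrying a prop, and the multiplicative count explodes again while the additive one barely grows.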

How it could work soon — In a research paper presented in August at the computer graphics conference SIGGRAPH 2021, Starke describes how machine learning could better synthesize these characters’ movements.

Could A.I. make motion capture technology mostly obsolete and consign those skintight, sensor-covered bodysuits to the dustbin of history? Maybe not completely, but Starke’s technique could change how motion capture data is used, making for smaller file sizes but more fluid, natural character movement.

Neural animation layering, in layperson’s terms, squishes two different animations together so that the character performs them as a single movement. This lets game developers recombine or modify a character’s motion after the system has been trained on motion capture data.
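
As a rough intuition only (this is not Starke’s actual implementation, and every name and number below is made up), layering can be pictured as blending two pose streams joint by joint, with a per-joint weight deciding which animation controls which body part:

```python
import numpy as np

def layer_poses(base_pose: np.ndarray, overlay_pose: np.ndarray,
                joint_weights: np.ndarray) -> np.ndarray:
    """Blend two skeleton poses per joint.

    base_pose, overlay_pose: (num_joints, 3) joint rotations (Euler angles
    for simplicity; real pipelines typically interpolate quaternions).
    joint_weights: (num_joints,) values in [0, 1]; 1.0 hands a joint fully
    to the overlay (say, upper-body aiming), 0.0 keeps the base (running legs).
    """
    w = joint_weights[:, None]  # broadcast one weight across each joint's axes
    return (1.0 - w) * base_pose + w * overlay_pose

# Toy skeleton: joints 0-3 are hips/legs, joints 4-7 are spine/arms.
run_pose = np.random.rand(8, 3)   # stand-in for one frame of a run cycle
aim_pose = np.random.rand(8, 3)   # stand-in for one frame of an aiming pose
mask = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

combined = layer_poses(run_pose, aim_pose, mask)  # legs run, torso aims
```

The key difference in the research is that a neural network learns how to combine motions, rather than relying on a hand-authored mask like this one.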

“[Before], if you want to add another action — such as being able to do another action while jumping at the same time, being able to open the door or sit on chairs, you would need to retrain the entire thing,” says Starke. “There [was] no way to incrementally add stuff.”

You can think of it a little like a soft serve machine swirling two flavors into one cone, or a slot machine where pulling the lever produces one of many possible combinations of outcomes (in this case, movements). But unlike a game of chance, the separate actions the team brings together into a new hybrid animation aren’t random, although how the A.I. chooses to combine them may be.

Training the neural network on 20 hours of motion capture data taught the system to anticipate different movements (a punch or a shuffle step, for example) and to blend them into smoother animation.
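
In broad strokes, that kind of training amounts to supervised prediction over capture frames: given the recent pose history and a control signal, predict the next pose. The sketch below (PyTorch, with made-up dimensions and a deliberately simplified architecture) shows the general shape of such a setup, not the paper’s actual model:

```python
import torch
import torch.nn as nn

POSE_DIM, CONTROL_DIM, HISTORY = 72, 16, 10  # hypothetical sizes

# A small network mapping (recent poses + control signal) to the next pose.
model = nn.Sequential(
    nn.Linear(POSE_DIM * HISTORY + CONTROL_DIM, 512),
    nn.ELU(),
    nn.Linear(512, 512),
    nn.ELU(),
    nn.Linear(512, POSE_DIM),  # next-frame pose prediction
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(pose_history, control, next_pose):
    """One supervised step on a batch of motion capture frames."""
    inputs = torch.cat([pose_history.flatten(start_dim=1), control], dim=1)
    prediction = model(inputs)
    loss = loss_fn(prediction, next_pose)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After enough passes over the capture data, a model like this can produce plausible in-between poses, which is what makes transitions (a punch flowing out of a shuffle step) look smooth.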

How to spot a glitch — When animation is assembled by hand rather than by machine learning, Starke says, you’re more likely to see motion “artifacts” in the animation, i.e., glitches. This might look like a character defying physics in its movements or unnaturally contorting its body.

“Shooter games, like Fortnite and Counter-Strike, all have this thing [where] the upper body is aiming [and] moving but the lower body stays the same,” Starke says. “The hips stay at the same location, and the upper body is rotated.”

The technique could also help EA with its other franchises, including its extensive Star Wars adaptations, whether Jedi: Fallen Order or the popular Battlefront games. And of course, EA currently produces some of the most popular sports games out there with its Madden series.

But the developer is no stranger to animation glitches, whether a Sims 4 character seems to have broken its back turning around or you’re speaking with blank-eyed humans in 2017’s Mass Effect: Andromeda from BioWare, a division of EA. These glitches can be entertaining at times, but they take players out of the experience when they occur too often.

This new machine learning approach could help solve some of these glitches by giving developers more control over their characters’ novel movements, Starke says.

A game-changer — In addition to helping players immerse themselves in these games without breaking their suspension of disbelief, Starke says this approach may help games run more smoothly, because animation data will be more compressed and require less computation time.

This may not be much of an issue for games streamed from the cloud, but it could improve performance for consoles running games on their own hardware, making file sizes smaller overall and taking up less space on your machine. It’s a game-changer.

“To solve something meaningful that will also have an impact, you need first to feel the pain.”

“Whenever a game company is shipping a game title, there is a certain amount of resources they can spend on producing a certain amount of animation,” Starke tells me. In practice, a game’s animation can only take up so many megabytes of memory, and that budget cap can mean a lower-quality gaming experience.

“With neural networks, the compression factor is extreme,” Starke says. “You can think about going from 10 gigabytes of [available] data easily to 100 megabytes of data ... So the variety of movements can also be increased.”
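
That compression claim is easy to sanity-check with rough numbers. Everything below except the 20 hours of capture data is an assumption (frame rate, skeleton size, parameter count), so treat it as an order-of-magnitude illustration:

```python
# Rough storage estimate: raw motion capture vs. neural network weights.
hours = 20                   # the dataset size Starke mentions
fps = 60                     # assumed capture frame rate
joints = 26                  # assumed skeleton size
floats_per_joint = 7         # assumed: 3D position + quaternion rotation
bytes_per_float = 4

frames = hours * 3600 * fps
raw_bytes = frames * joints * floats_per_joint * bytes_per_float
print(f"Raw capture data: ~{raw_bytes / 1e9:.1f} GB")  # ~3.1 GB

network_params = 20_000_000  # assumed parameter count for the trained model
net_bytes = network_params * bytes_per_float
print(f"Network weights:  ~{net_bytes / 1e6:.0f} MB")   # ~80 MB
```

Even with these conservative guesses, the network is well over an order of magnitude smaller than the data it was trained on, pointing in the same direction as the 10 GB to 100 MB figure Starke cites.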

Using machine learning, Starke and colleagues were able to more realistically blend different motions together in character animation.


What’s next — Right now, this research is still experimental and hasn’t made its way into games. When it does, Starke says, the technology will augment existing motion matching technology rather than stand on its own. But who knows: it may be fueling lightsaber fights on far-off planets before you know it.

In the meantime, Starke says it’s the experience of encountering these problems himself, during his two to three hours of daily recreational gaming, that keeps him inspired to keep solving them at his day job.

“I think it's important to be a passion player if you research the game,” Starke says. “I often like to say, to solve something meaningful that will also have an impact, you need first to feel the pain that exists [in the gameplay] to find the elegant solution to that problem.”
