
Nvidia may change gaming with two new generative AI technologies

Nvidia’s Ace and Neuralangelo will push video game creation and play to new heights, but Roll.ai will be extremely useful for anyone who wants amazing remote videos.


In the age of artificial intelligence, the newly minted $1 trillion Nvidia is killing it. This is not just the result of its long history of graphics processor development crystallizing in the perfect chips to power the generative AI Big Bang but also, perhaps more importantly, of pure AI research papers that have always felt like magic.

Fact is, the company probably knows AI better than most, thanks to research and development that goes all the way back to 2015. It’s not an exaggeration to say that this academic work has deeply impacted the course of computing history, ever since the 2018 introduction of Deep Learning Super Sampling (DLSS), an AI technology that upsamples frames with the resolution of a Game Boy into 4K images worthy (literally!) of The Mandalorian. Unknown to most, this was generative AI right in front of our eyes, back when we didn’t know what “generative” meant and we all thought AI lived only in the brains of HAL 9000 and Terminators.

These next two items—Ace and Neuralangelo—illustrate this as beautifully as a mirror-shine dark-chocolate glaze on freshly made donuts. The third item—video editing tool Roll.ai—is not from Nvidia, but it left me as astonished as a dog at a squirrel convention.

Ace makes non-player characters into a new AI species

This week, Nvidia unleashed Ace, which, according to the company, “is a new foundry for intelligent in-game characters powered by generative AI.” The technology makes non-player characters (NPCs) smart, giving them their own personalities and the capacity to interact directly with human players, talking with them in real time and responding to their actual voices.

Here’s how it looks in real life:

Ace is a software platform aimed at “developers of middleware, tools, and games” so they can make their assistants and NPCs react naturally, both visually—moving and changing their facial expressions as they talk—and audibly, by generating speech. The demo was made with Convai—a company that develops conversational AI for virtual worlds—and shows a player interacting with an NPC named Jin, all rendered with ray tracing and scaled up with DLSS 3 in real time.

Neuralangelo turns any video into high-definition 3D

Its name is so bad that it’s good, and Neuralangelo’s technology is so incredibly good that it makes the competition seem positively pedestrian. Luma is the current de facto standard for NeRFs, or neural radiance fields: basically, AI models that can take a video and turn it into a full 3D scene, which is exactly what Neuralangelo does.
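
If you’re curious how that works under the hood, here is a rough, purely illustrative Python sketch of the NeRF idea: a learned function maps a 3D point and a viewing direction to a color and a density, and a pixel is rendered by accumulating many such samples along the camera ray behind it. Every name below is a stand-in of mine and the “network” is faked; this is not Nvidia’s or Luma’s code.

```python
# Conceptual sketch of a neural radiance field (NeRF). Hypothetical stand-ins only.
import numpy as np

def radiance_field(position, direction):
    """Stand-in for a trained neural network: (x, y, z) + view direction -> (rgb, density)."""
    rgb = np.clip(np.abs(np.sin(position)), 0, 1)        # fake color
    density = float(np.exp(-np.linalg.norm(position)))   # fake density
    return rgb, density

def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    """Volume-render one pixel by marching along its camera ray and compositing samples."""
    ts = np.linspace(near, far, n_samples)
    step = (far - near) / n_samples
    color = np.zeros(3)
    transmittance = 1.0  # how much light still reaches the camera
    for t in ts:
        rgb, density = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-density * step)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # one rendered pixel; a real NeRF does this for every pixel and trains on video frames
```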

According to Nvidia Research, Neuralangelo not only generates lifelike virtual replicas of buildings, sculptures, and other real-world objects, but also captures unparalleled detail and textures: “Neuralangelo’s ability to translate the textures of complex materials—including roof shingles, panes of glass, and smooth marble—from 2D videos to 3D assets significantly surpasses prior methods. The high fidelity makes its 3D reconstructions easier for developers and creative professionals to rapidly create usable virtual objects for their projects using footage captured by smartphones.”

Indeed, once it makes it to actual apps, this will be extremely useful for everyone doing graphics, from video people to illustrators to game developers. Just watch:

Honestly, the level of three-dimensional detail shown in the video left me flabbergasted. I would guess it makes up the close-up details by imagining what the material might look like up close in the physical world? I have no idea. But this is what I meant by magic.

The company says that this is “just one of nearly 30 projects by Nvidia Research to be presented at the Conference on Computer Vision and Pattern Recognition (CVPR).”

Roll.ai transforms regular video into amazing video

Roll.ai may not be as flashy as Ace and Neuralangelo, but to me, the results are just as stunning. You have heard similar pitches before (“Now anyone can create professional, studio-quality remote videos in minutes, with just an iPhone”), but this is the real thing, not a yawner.

The app uses AI to generate pans, dolly shots, and crane shots without any special prep or equipment, just by manipulating iPhone camera footage. Check it out:

It looks like the actual materialization of the old industry adage: “We’ll fix it in post.” The trick seems to be analyzing the scene, adding depth to it, and then moving a virtual camera to simulate those shots, filling in the missing information (what is behind a subject as this virtual camera moves) with AI.
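
Here is a minimal sketch of that virtual-camera idea, under my own assumptions about how such a tool might work (none of this is Roll.ai’s actual code): estimate per-pixel depth, shift a virtual camera sideways, reproject the pixels with parallax, and mark the disoccluded holes that a generative model would then fill in.

```python
# Illustrative sketch of a depth-based virtual dolly move. Hypothetical names and values.
import numpy as np

def simulate_dolly(frame, depth, camera_shift_x, focal=500.0):
    """Warp one RGB frame as if the camera had moved sideways by camera_shift_x."""
    h, w, _ = frame.shape
    out = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Parallax: nearby pixels (small depth) shift more than distant ones.
            disparity = focal * camera_shift_x / max(depth[y, x], 1e-3)
            new_x = int(round(x + disparity))
            if 0 <= new_x < w:
                out[y, new_x] = frame[y, x]
                filled[y, new_x] = True
    holes = ~filled  # disoccluded pixels that an AI inpainter would have to hallucinate
    return out, holes

frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # stand-in for iPhone footage
depth = np.full((240, 320), 3.0)
depth[:, 160:] = 1.0                                               # right half is "closer" to the camera
warped, holes = simulate_dolly(frame, depth, camera_shift_x=0.05)
print(holes.mean())  # fraction of the frame that would need generative fill
```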

The software can also reframe subjects, relight them, edit automatically, and even let you edit by using text: It captures the dialogue, you edit that text, cutting things at will, and the software does the rest.
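
As pure illustration, here is a tiny sketch of how transcript-driven editing can work in principle, with hypothetical data structures rather than Roll.ai’s API: each transcribed word carries timestamps, so deleting words from the text tells the editor which time ranges of the footage to keep.

```python
# Illustrative sketch of text-based video editing. Hypothetical data, not Roll.ai's API.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the footage
    end: float

transcript = [
    Word("Welcome", 0.0, 0.4), Word("um", 0.4, 0.7), Word("to", 0.7, 0.9),
    Word("the", 0.9, 1.0), Word("demo", 1.0, 1.5),
]

edited_text = "Welcome to the demo"  # the user simply deleted "um" in the text editor

def segments_to_keep(words, edited):
    """Return the (start, end) time ranges whose words survive the text edit."""
    kept_words = edited.split()
    ranges, i = [], 0
    for w in words:
        if i < len(kept_words) and w.text == kept_words[i]:
            ranges.append((w.start, w.end))
            i += 1
    # Merge adjacent ranges so the cut list stays short.
    merged = [ranges[0]]
    for s, e in ranges[1:]:
        if abs(s - merged[-1][1]) < 1e-6:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged

print(segments_to_keep(transcript, edited_text))
# [(0.0, 0.4), (0.7, 1.5)] -- the editor keeps these spans and drops the "um"
```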

It came out today, so I have to confess that I haven’t tried it yet. But here’s the good news: This is not just a cool demo; you can try it right now if you have macOS.


