Twitter is suddenly awash with people showing off their creations using Adobe’s new Firefly art-generation tool, as well as the new generative AI Fill feature in Photoshop. People are using Firefly for all sorts of projects: extending the backgrounds of famous paintings, restoring missing limbs on classical statues, and embellishing famous album covers. (The results aren’t always a success; just leave London Calling alone, it’s already perfect.) The launch of Adobe’s new AI features may have blunted the threat that generative art tools like Midjourney pose to Photoshop’s dominance.
APPLE’S BIG XR ANNOUNCEMENT COULD HAVE AN AI TWIST
It’s looking more and more likely that, by this time next week, Apple will have announced a new mixed reality headset. The invitation to its Worldwide Developers Conference (WWDC) next week includes the slogan “code new worlds,” which seems to refer to the immersive digital environments that can be viewed through a mixed reality headset.
At first glance, it appears as if Apple’s long-anticipated announcement is being somewhat drowned out by the current generative AI hype. But Apple’s prospective new headset may have a lot more to do with generative AI than people think. That’s because the companies that help create virtual- and augmented-reality experiences for the headset, especially gaming experiences, are very interested in generative AI. Imagine a game that creates environments and situations based on real-time decisions made by the player. Unity and Epic Games, which make the gaming engines that developers use to create games, have both been talking about such an approach to storytelling within games. Sources with knowledge of the matter have told me that Apple has been talking to, and working with, these companies for months, if not years, on creating optimal experiences for the “reality” headset.
NVIDIA BECOMES A TRILLION-DOLLAR COMPANY
Researchers at Google and OpenAI get much of the credit for the current surge in large generative AI models that make apps like ChatGPT possible. But the AI boom owes just as much to dramatic increases in compute power over the past decade. Training large generative AI models often requires hundreds of servers working together, running day and night. Nvidia deserves plenty of credit for pushing the compute power of its graphics chips, which have become the go-to hardware for training AI models.
The company is reaping the rewards of the AI boom, as models keep getting bigger and more companies want to run them in-house. On Tuesday, Nvidia joined the ranks of just seven tech companies with a market cap of more than a trillion dollars. Nvidia just announced a new platform that combines its own central processing units and graphics processing units on the same circuit board. The new Grace Hopper superchips bring about 30 times the computing power of Nvidia’s A100 chip, which currently runs a huge portion of the world’s biggest AI models. The company also announced a supercomputer that effectively combines 256 of these chips to act as a single giant GPU. The system delivers an exaflop of AI compute, or a quintillion floating-point operations per second. Nvidia says Microsoft, Meta, and Google will be the first companies to use it.
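For a sense of the scale, a quick back-of-envelope check: dividing the exaflop total by 256 chips implies roughly 4 petaflops of low-precision AI compute per superchip. That per-chip figure is an inference from the numbers above, not an official spec quoted here.

```python
# Back-of-envelope check of the exaflop figure (illustrative only).
# Assumption: each superchip delivers ~4 petaflops of low-precision AI
# compute -- a figure implied by the exaflop total divided by 256 chips,
# not an official per-chip specification cited in this article.
PETAFLOP = 1e15  # 10^15 floating-point operations per second
EXAFLOP = 1e18   # 10^18 floating-point operations per second

chips = 256
flops_per_chip = 4 * PETAFLOP  # assumed per-superchip throughput

total_flops = chips * flops_per_chip
print(f"{total_flops / EXAFLOP:.2f} exaflops")  # prints "1.02 exaflops"
```

The point of the sketch is simply that 256 multi-petaflop chips working as one machine lands you in exaflop territory, the scale at which today's largest generative models are trained.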
HUNDREDS OF SCIENTISTS SIGN STATEMENT SAYING AI IS AN “EXISTENTIAL THREAT”
Another week, another group of scientists and researchers telling us how dangerous AI is to the future of the human race. More than 350 AI researchers, computer scientists, and executives signed a statement issued Tuesday by the Center for AI Safety warning that “the risk of extinction from AI should be a global priority alongside other societal-scale risks.” Interestingly, the signatories included the heads of some of the very companies that are rushing to develop AI systems that might lead to the advanced systems warned of in the statement, including the CEOs of OpenAI, Google DeepMind, and Anthropic, as well as executives from Google and Microsoft.
But the statement raised questions among a number of AI ethicists, safety experts, and skeptics. Some wondered: If these big companies are so worried about AI, why don’t they just suspend development of the systems? Others, including the well-known AI ethicist Meredith Whittaker, argued that by framing AI safety as a future problem caused by the eventual arrival of super-advanced AI systems, companies can distract attention from the privacy, employment, and copyright threats the systems pose right now, threats that regulators could and should act upon.