The Apple AI features that could actually matter
Some of Apple’s new AI features will be immediately useful; others seem like solutions in search of problems.
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
The Apple Intelligence features that will actually matter
I’d been bullish about Apple’s plans to put AI in its devices when the company announced Apple Intelligence last spring, so Monday’s iPhone event in Cupertino felt like a bit of a rug pull. Apple said its new iPhone 16 lineup was “built from the ground up” for Apple Intelligence, but the new phones won’t ship with the new AI perks; they’ll get an initial set of the features via an over-the-air software update next month, Apple says. In fact, the most interesting and game-changing AI features won’t arrive until next year. Some of the initial features feel like solutions in search of problems. Others, however, will be useful in the near term, if in subtle ways:
- Users of newer iPhones, iPads, and Macs will be able to use AI to rewrite, proofread, and summarize text written within Apple apps (such as Messages), third-party apps (such as Gmail and Slack), or online (such as Amazon reviews).
- A new language-model-powered feature in the Phone and Notes apps will let users record, transcribe, and summarize audio.
- Apple added a new software-based hearing-aid function to the AirPods Pro 2. Informed by a hearing test taken on the user’s iPhone, an algorithm running on the earbuds’ H2 chip amplifies frequencies the user has trouble hearing, including those in voices, ambient sound, and media. (As one of the 1.5 billion people on earth with hearing loss, I could find this very helpful, especially in noisy environments where I have trouble picking out individual voices.)
- In the new AirPods 4, an AI algorithm analyzes the background noise on your phone calls, then eliminates it so that the person you’re talking to can’t hear it. I tried it in a demo, and it worked well.
- In the Photos app, a Clean Up tool can be used to remove unwanted objects from photos.
- While Siri won’t immediately get smarter or more personalized, it will be able to answer a user’s questions about their Apple device and help them solve device-related problems. And it’ll be able to understand you even if you stumble over your words.
- Later this year, Apple says, users will be able to hold their iPhone 16 camera up to a business and then tap to get information about it from Apple Maps. They’ll also be able to point the camera at a math equation in a book and have ChatGPT help them work through it.
We won’t see a uniquely Apple version of personal AI—that is, AI that continually learns and acts upon the user’s data and activities—until next year. The core vision of Apple Intelligence is a new Siri that’s more helpful to the user because it can access, analyze, and act on data stored on the phone or within apps. It will be aware of the user’s screen and will be able to take actions within and across both Apple and third-party apps. A user might, for example, ask Siri to locate and play a song sent by a friend last month. This would require the AI to access several apps (was it sent via text or email?) to find the link, then use another app (Music) to play the song.
Such a task requires exposing Siri to a lot of personal information, and Apple has earned users’ trust with its unwavering emphasis on data privacy, which puts the company in a better position than its rivals to pull off this sort of AI personalization.
OpenAI’s new “Strawberry model”—an update
According to The Information, OpenAI’s rumored new model release, internally known as Strawberry (and formerly known as Q*), will be released within the next two weeks. Strawberry is reportedly specialized for deep thinking, reasoning, and solving complex problems step-by-step (for example, developing a detailed marketing plan or working through advanced math problems).
The new model will be text-only and won’t process images or other media types, the report says. It will reportedly take 10 to 20 seconds of “thinking” time before delivering an answer, and the cost of using Strawberry is likely to be significantly higher than that of existing models. The Information reports that OpenAI may limit the number of input messages that can be sent to the new model, a notable development for a company that has historically focused on faster and cheaper access to multimodal models (such as its most recent GPT-4o).
The text-only restriction and usage limits could be intended to control how the new model is used. The model’s ability to work through complex problems in a stepwise fashion raises the risk that bad actors could use it to do serious, even catastrophic, harm. In fact, the examples of catastrophic AI harm we hear about most often are things that require complex, step-by-step processes to develop, such as bioweapons or sophisticated cyberattacks. In November 2023, OpenAI employees sent a letter to the company’s board of directors warning that a new model then in development (Q*) could “threaten humanity,” Reuters reported.
AI workers come out in support of California’s AI bill
California’s AI bill, SB 1047, which requires makers of very large frontier models to establish and report AI safety guidelines, has passed both chambers of the California legislature and is now awaiting Governor Gavin Newsom’s signature. The bill has faced fierce public opposition from AI companies and their VC funders, as well as from their political allies, such as Reps. Nancy Pelosi, Ro Khanna, and Anna Eshoo.
Earlier in the year, the bill got a big boost when AI pioneers Yoshua Bengio and Geoffrey Hinton came out in support, calling SB 1047 the “bare minimum for effective regulation of this technology.” Industry players then argued that the only support for the bill was coming from people who live far from California and wouldn’t have to abide by the rules in SB 1047.
But that argument went out the window this past week when a group of more than 100 employees from high-profile companies came out in support of SB 1047. “As current and former employees of frontier AI companies like OpenAI, Google DeepMind, Anthropic, Meta, and xAI, we are writing in our personal capacities to express support for California Senate Bill 1047,” the group wrote in a letter to Newsom, who has until September 30 to sign the bill.