We just got another glimpse at Apple’s AI ambitions

While the new Watch caught all the buzz at yesterday’s Apple event, it’s the company’s focus on its so-called neural engine that should be causing the most chatter.
[Source photo: Apple; Rawpixel]

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

AI FIGURES HEAVILY INTO APPLE’S NEW PRODUCTS

While the carbon-neutral Watch caught all the buzz at yesterday’s Apple event, the company’s focus on its so-called neural engine deserves just as much chatter. The “neural engine” is Applespeak for the assemblage of specialized compute cores built into Apple Silicon chips; it accelerates machine learning algorithms and, effectively, makes AI more feasible on Apple’s products. So it’s notable that the neural engine came up throughout yesterday’s event, including during talk of the new eco-minded Apple Watch. The neural engine is the centerpiece of Apple’s on-device approach to running AI models, a major divergence from competitors’ server-based approaches. Keeping AI on the device both helps protect private or sensitive information and eliminates the time needed to send data to AI models up in the cloud. If Apple can push its silicon to deliver the compute power needed to run meaningful algorithms, it stands to reap those privacy and low-latency benefits. Viewed from a higher perch, the neural engine highlights how Apple uses AI: as a tool for its products, but one that doesn’t feel as Earth-shattering as its competitors’ offerings.

Yesterday’s Apple event came on the heels of an Information scoop reporting that Apple has in fact been working on large language models for years, and company employees believe its LLM exceeds OpenAI’s GPT-3.5 model. Coupled with all the talk of the neural engine at the Apple event, we have further evidence that Tim Cook and co. are emerging as a dark horse candidate in the ongoing AI arms race.

FAST COMPANY’S INAUGURAL AI 20 LIST CONTINUES TO ROLL OUT ALL MONTH

Throughout September, Fast Company continues to showcase its AI 20 list, spotlighting the most influential people building, designing, regulating, and litigating AI. This morning’s entry finds Mark Sullivan talking to Hugging Face founder Clem Delangue:

“We’ve become this open platform that serves as an enabler, empowering all organizations—starting from individuals, individual researchers, independent researchers, small organizations, nonprofits, all the way up to big organizations—to host models, share models, collaborate on models, and then use models for their use case,” Delangue says. Such an approach allows companies to keep their sensitive data within their own walls instead of sending it out through an API to be processed by models hosted by another company. It’s clearly working as an enticement: Delangue says the platform now hosts more than 300,000 models, including some very influential ones, such as Meta’s Llama and Llama 2, and Stability AI’s Stable Diffusion image generator.

You can read the piece in its entirety here.

NVIDIA, ADOBE, AND OTHER COMPANIES TAKE AN AI PLEDGE

Adobe, Nvidia, IBM, Palantir, and Salesforce are among the companies that have pledged to follow the White House’s voluntary agreement on ethical standards for building and deploying AI. The slate of tech firms follows on the heels of OpenAI, Microsoft, Amazon, Anthropic, Google, and Inflection AI, all of which have already signed the agreement, committing to share safeguarding intel with other AI companies as well as to a broad (and vague) promise to “help address society’s greatest challenges” with their technology.

Yet, as Gizmodo’s Kyle Barr points out, the news comes with a grain (or spoonful) of salt: Many of the companies that have signed the White House’s pledge are already using AI in ways that arguably don’t help address society’s greatest challenges, or worse, exacerbate them. Palantir, for example, helped build the data systems used by U.S. Immigration and Customs Enforcement, an agency not known to shy away from spying on people in the U.S. It’s fair, then, to wonder whether Palantir’s idea of helping society matches up with most people’s.

To AI watchdogs, a pledge is only as good as its signatories; it’ll take honest-to-God regulation to keep companies in line.

ABOUT THE AUTHOR

Max Ufberg is a senior staff editor on Fast Company's technology section.