
Google’s deep well of training data could give Gemini the edge in the AI arms race

Google’s reported new family of LLMs could pose a real threat to OpenAI’s dominance in the chatbot market.


Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

HOW GOOGLE’S NEW GEMINI LLMS COULD HURT OPENAI AND MICROSOFT

Google is reportedly building a family of new large language models that will be hard for competitors, namely OpenAI, to mimic. The new LLMs, collectively called “Gemini,” will reportedly function not only as massively powerful word predictors but also as image generators that can be prompted with plain language.

Gemini could finally bring the full force of Google’s inherent advantages to bear in the AI arms race (the company has arguably been outshined by Microsoft-backed OpenAI so far). Google has a deep bench of research talent (even after all the defections), and years of experience in building and training LLMs. That latter point is especially important because LLMs are only as good as the quantity, quality, and variety of the data they’re trained on; the Gemini models may excel by taking advantage of training data that only Google can access. That data might include annotated YouTube videos, text content from Google Books, and scholarly research in science, medicine, and technology from Google Scholar.

Google is expected to announce the new models next fall. The company could announce a new Gemini-powered chatbot or simply upgrade its existing Bard chatbot to the new models. The move could have major implications for Google Cloud, which will likely be the main avenue by which corporate customers access Gemini. Any suggestion of parity with, or superiority over, OpenAI’s GPT-4 could give Google a new edge in the cloud market, possibly at the expense of Microsoft Azure.

WHY AI IS IN SORE NEED OF BETTER DESIGN

Forty years ago, Steve Jobs launched the personal computing era by marrying cool design with circuit boards and floppy drives. In the ’90s and aughts, veterans of Apple and other firms made San Francisco the de facto hub for design firms creating a newly refined level of hardware and software. Now, in 2023, as Mark Wilson writes in Fast Company today, a new generation of talent in San Francisco is looking to find design’s place in this new AI-powered world. Design, after all, has been the essential partner of every major shift in computing; AI, like all previous tech paradigms, will rely on good design to make its benefits accessible to everyday users.

“Although the need for good design only increases along with technological complexity, right now AI, if anything, has suggested a rather bleak future for design. The predominant way in which both professionals and consumers interact with generative AI is via the “prompt,” a brutish and crude interface that rockets us back to the 1960s. The oft-cited statistic is that OpenAI’s ChatGPT attracted 100 million users in two months, but the smartphone’s synthesis of tiny hardware and flexible software design put a supercomputer in nearly seven billion people’s hands worldwide. If AI is going to have that kind of impact—which all of its proponents believe with religious fervor—is that really going to happen with the user interface being a finicky command-line prompt that was the only way to communicate with a computer before Macintosh’s graphical user interface debuted in 1984? It strains credulity that engineering efficiency will leave designers as just another victim of the jobs that AI might render irrelevant.”

A MEETING IN THE DESERT TO SHORT-CIRCUIT AI CHATBOTS

A bunch of techies flew into Las Vegas over the weekend to take part in the Generative Red Team Challenge, an effort to trick some of the leading large language models into generating content that is harmful, false, or privacy-violating. More than 2,000 people ultimately lined up for a 50-minute go at the LLMs running on 156 computers arranged around a room at the Caesars Forum in Vegas.

The results were enlightening, and a bit disturbing. Participants coaxed the LLMs into giving detailed instructions on how to stalk someone, tricked them into dispensing credit card information, and made them write entirely fake news articles. The event was supported by the Biden White House’s Office of Science and Technology Policy (Biden’s top science and tech adviser, Arati Prabhakar, was in attendance), and some of the lessons from the event may help inform a new executive order on AI security, according to CyberScoop. The results will also be shared with the U.N. in an attempt to get more countries involved in the creation of security guidelines.

The big picture here is that tech companies have a less-than-stellar record of thinking about the potential harms of their products early in the development process. Those thoughts normally occur only after a product is widely distributed and doing harm (see: Facebook). The tech companies developing AI models spend some time and budget locating and fixing security flaws in their products, but they spend far more tuning those products for real applications and for profit. Generative AI could be so transformative, though, that neglecting the risks could be catastrophic. Hopefully, the red-teaming exercise in Las Vegas will lead to even more interest in understanding the vulnerabilities of LLMs.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
