
Google Bard takes a big step toward personalized AI

A Google announcement this week shows that the company has been sprinting to build more personalized AI.

[Source photo: Pawel Czerwinski/Unsplash]

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

GOOGLE BARD TAKES AN IMPORTANT STEP TOWARD USEFULNESS

Google has been hustling to regain its early (pre-ChatGPT) lead in the development of AI models and tools. The company released its Bard chatbot to a limited number of users in March, allowing them to add the chatbot as an experimental feature within its core search product. Now the company is pushing Bard a step further by letting users give the chatbot access to their Gmail and Google Drive accounts. Users can also give Bard permission to pull videos from YouTube, as well as travel information from Google Maps and Google Flights.

Giving an AI tool access to Gmail and document data might give some users pause. But Google requires a deliberate opt-in: users must install an extension that allows Bard to connect to other Google services. The company also assures users that data accessed by the bot will not be seen by the Google workers charged with providing reinforcement feedback to the Bard model, nor will the data be used for advertising purposes, it says.

Done safely, giving Bard access to users’ personal data is an important step for Google. LLM chatbots à la ChatGPT know only what they’ve learned from the compressed version of the internet on which they’re trained. To become a useful personal assistant, a chatbot has to have some knowledge of the user and their plans, projects, and preferences. Personally, I’ve been waiting for this moment since the start of the AI chatbot boom: the first AI chatbots have been disappointing because they’ve not been able to learn much about me. Microsoft is giving its “copilots” the ability to mine data from its various productivity apps, but that’s within the realm of the workplace. By giving Bard access to users’ personal email and documents, Google might gain the inside track in “personal AI.”

ZUCKERBERG’S NONPROFIT WILL BUILD A LARGE GPU CLUSTER TO STUDY CELLS USING AI

The Chan Zuckerberg Initiative (CZI) says it will build a cluster of more than 1,000 Nvidia H100 graphics processing units (GPUs), which will be used to run large AI models for research into cell behavior. The cluster will make Zuckerberg’s research nonprofit one of a small number of organizations with enough ready capital to buy the specialized servers needed to conduct meaningful AI research. In this case, the research will focus on developing ways of predicting the behavior of individual human cells under various conditions and time frames—a huge undertaking, given that a vast array of cellular traits must be quantified and represented as data within the model.

Stephen Quake, head of science at CZI, tells me that large language models (LLMs) may be useful in studying cell behavior, much as they’ve proven valuable for understanding the protein sequences within cells. An LLM might process millions or billions of data points expressing various cell traits or behaviors, surfacing novel patterns in the data that could then guide clinical insights. An LLM might be used to predict a human cell’s reaction to a medication, for example, or to understand what happens within the cells when a child is born with a rare disease. Quake says CZI will use the hardware to power its own cellular research projects, but will also let other nonprofits use it for research that CZI supports.

YANN LECUN TESTIFIES BEFORE MARK WARNER’S SENATE INTEL COMMITTEE 

The Senate Intelligence Committee held a hearing on AI Tuesday, with the goal of understanding how the technology can be used both for and against U.S. intelligence. Related to that is the question of whether AI models should be developed in the open and published via open-source sites (such as Hugging Face) for all to see.

Yann LeCun, Meta’s chief AI scientist and one of the minds behind the deep neural networks that power the current AI boom, was the sole tech industry representative to testify. While LeCun acknowledged that some models should be kept under wraps, he insisted that, generally speaking, the risks of large AI models are best mitigated through total transparency and cooperation. “An open source foundation . . . gives more people and businesses the power to access and test state-of-the-art technology to identify potential vulnerabilities, which can then be mitigated in a transparent way by an open community,” LeCun told the committee. “Rather than having dozens of companies building many different AI models, an open-source model creates an industry standard, much like the model of the Internet in 1992.”

The hearing comes as the White House, which has held a number of AI events itself, has been encouraging Congress to create meaningful regulations around the safe development and use of the technology.


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
