
There’s still time for designers to get AI right

Asking thoughtful design questions can help us cut through the noise of a dystopian future and build the world we want with AI.


With an unprecedented 1.8 billion visitors per month now accessing ChatGPT, it is an understatement to say AI will have incredible reach, and with that reach, an incredible opportunity for us to design our future world. But what kind of world will we create?

Unfortunately, I believe this critical question has gone largely unaddressed, overshadowed by the enormous competitive pressure on companies to build and release new AI technologies as quickly as possible. This race to be “first” can easily become a “race to the bottom,” producing harmful products at a cost to us all.

I’ve talked to a dozen startups building applications on GPT-3 and other natural language processing tools, asking detailed questions about data privacy, content moderation, and harmful bias. I’ve found that they’ve rarely thought through the full implications of their design choices in these critical areas, yet they fully believe they’re ready to deploy their technology, or already have.

The release of GPT has unleashed a wave of enthusiasm to build the future with AI. But before we build that future, shouldn’t we collectively and intentionally design the one we want?

To understand the risks in practical terms, we can look at real-world examples of issues that have arisen with new AI products, and critically appraise them as problems that might have been prevented by asking thoughtful design questions earlier in the development process.

Questions such as: Why are we rushing into building generative AI bots that reinforce harmful eating disorders and replace trained human experts? Do we really value art (and artists) so little that we believe “prompt engineering” can evoke the same introspection? How will our children develop creative and social-emotional learning skills when AI coaches commandeer moments for human connection?

In cases where companies intentionally create exploitative products, these types of problems are a “feature not a bug,” and need to be addressed by new consumer protection laws. However, I believe that there are many people who want to build great products that don’t harm people, but lack the education and tools. That’s a solvable “culture” problem, and it starts with asking ourselves: What does it mean to be a designer in the rapidly evolving landscape of AI? And how can we take this opportunity to create a more just world?

Here are a few perspectives to consider.

Let’s establish that anyone who holds power in building with AI (at any altitude) is a designer and is accountable.

When problems arise in new tech, they typically result from technical and strategic decisions made throughout the development process, and cannot be addressed by simply saying “that’s the legal team’s job.” Whether we design the user interface, create the model, build the back end, or legislate policy, anyone who holds power within this ecosystem is a designer and decision-maker.

It’s critical that companies recruit designers with a growth mindset who can take responsibility, own the prejudices and biases that influence their work, and foresee the implications and consequences of their design choices. Companies also need to model a culture of accountability for their teams at the highest level in order for their designers to make these shifts.

The Tay chatbot, Tesla car crashes, and other AI-based fiascos are the result of harmful training data and a lack of safety features, yet they’re still sold as “glitches,” a euphemistic justification that has been critiqued by UCLA professor and internet studies scholar Safiya Noble. These need to be reframed as holistic problems created by designers, not siloed technical bugs.

Designers can adopt a set of values that helps them build with AI in a way that benefits everyone. 

How can we shift from prioritizing speed to prioritizing what we stand for? Values are critical to informing our behavior and decision-making. Do we believe that data privacy is a fundamental right, and therefore choose not to exploit user data? Do we believe that the web is a public utility, and therefore choose to build it open rather than tiered? Without a value system, designers are lost and will inevitably make choices that harm people, even if unintentionally.

It’s critical that companies communicate their value system to potential candidates when hiring, something that organizations like Mozilla do well. Students and professionals also need to identify and form their own value systems, and this should be an early and foundational exercise in their education. Value-sensitive design and the design justice movement provide helpful frameworks.

Many designers believe they simply create and execute code according to a set of requirements, but in reality, they are building our AI-powered world.

Designers can be social scientists, not just tech visionaries.

The disconnect between the development of AI and the real world has resulted in racial and gender profiling, the dissemination of misinformation, and a national emergency around youth mental health. Despite these high-stakes social crises, computing (and AI) is taught as a highly technical discipline, divorced from the humanities and social sciences. Designers who harness AI (indeed, all designers) need to understand human behavior, how human societies are structured, and how systems of oppression operate before they start building anything.

Because AI replicates real-world systems (and may become foundational to them), AI and computing education should be rooted in sociology, linguistics, political science, and gender studies. The introduction of texts like Automating Inequality by Virginia Eubanks and Race After Technology by Ruha Benjamin could improve students’ ability to examine the structural inequities all around them, understand how the technologies they build might exacerbate those inequities, and anticipate the positive and negative externalities of their design choices. With this reframing, students might enter the real world understanding how to prevent digital redlining or the design of surveillance tech.

We’re at an inflection point where we can build with AI and other emergent technologies to shape our world for the better, but we need to choose to think and do differently. It starts with deep and thoughtful design around the world we truly want, and ensuring our roles, the tools we use, the way we learn, and the cultures we create emulate it.


ABOUT THE AUTHOR

Ariam Mogos is the emerging tech lead at the Stanford d.school.
