NASA is taking generative AI to space. The agency just unveiled a series of spacecraft parts and mission hardware designed with the same kind of artificial intelligence that turns human prompts into images, text, and music. Called Evolved Structures, these specialized parts are being implemented in equipment including astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, and space telescopes.
The components look as if they were extracted from an extraterrestrial ship secretly stored in an Area 51 hangar—appropriate given the engineer who started the project says he got the inspiration from watching sci-fi shows. “It happened during the pandemic. I had a lot of extra time and I was watching shows like The Expanse,” says Ryan McClelland, a research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “They have these huge structures in space, and it got me thinking . . . we are not gonna get there the way we are doing things now.”
Designers have used generative AI in some form for years, thanks to CAD software from companies like Autodesk and nTopology. There are even startups like Divergent3D, whose entire business model is built on generative AI design and manufacturing, used to create parts like the rear frame of the Aston Martin DBR22 design concept vehicle. In an interview with Fast Company, Divergent3D claimed the approach lightened the part by 40% while exceeding the crash performance of traditionally made parts in digital simulations.
The aerospace industry, not surprisingly, has much stricter requirements than the automotive industry. Its parts must be precisely engineered and have a much lower tolerance for error. McClelland discovered that, with a little adapting, commercial-grade tools could be up to the task of designing parts for critical space missions. “I’ve been basically pushing the commercial tools and the commercial processes to their limits,” he says. “Most people simply would not believe this could be made by that process until someone did it.”
WARP SPEED DESIGN
As with most generative AI software, NASA’s design process begins with a prompt. “To get a good result you need a detailed prompt,” McClelland explains. “It’s kind of like prompt engineering.” Except that, in this case, he’s not typing a two-paragraph request hoping the AI will come up with something that doesn’t have five extra limbs. Rather, he uses geometric information and physical specifications as his inputs.
During our interview, McClelland holds up a tangle of metal. “So, for instance, I didn’t design any of this,” he says, moving his hands over the intricate arms and curves. “I gave it these interfaces, which are just simple blocks [pointing at the little cube-like shapes you can see in the part], and said there’s a mass of five kilograms hanging off here, and it’s going to experience an acceleration of 60G.” After that, the generative AI comes up with the design. McClelland says that “getting the right prompt is sort of the skill set.”
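The inputs McClelland describes amount to a set of fixed interfaces plus a load case. A minimal sketch of how such a specification might be encoded is below; the 5 kg mass and 60G acceleration come from his example, but the class names and structure are hypothetical illustrations, not NASA's actual tooling.

```python
from dataclasses import dataclass

G = 9.81  # standard gravity, m/s^2


@dataclass
class Interface:
    """A fixed mounting block the generated structure must connect to."""
    name: str
    position_mm: tuple  # (x, y, z) location of the block


@dataclass
class LoadCase:
    """One structural requirement handed to the generative design tool."""
    mass_kg: float
    acceleration_g: float

    def peak_force_newtons(self) -> float:
        # F = m * a: the force the generated structure must survive
        # at the attachment point.
        return self.mass_kg * self.acceleration_g * G


# McClelland's example: a 5 kg mass experiencing 60G.
interfaces = [Interface("mount_A", (0, 0, 0)), Interface("mount_B", (120, 0, 0))]
case = LoadCase(mass_kg=5.0, acceleration_g=60.0)
print(round(case.peak_force_newtons()), "N")
```

Even this toy version shows why “getting the right prompt is sort of the skill set”: the engineer's job shifts from drawing geometry to stating interfaces and loads precisely.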
McClelland admits that the parts “look somewhat alien and weird, but once you see them in function, it really makes sense.” Using generative AI to design the parts shaved off a third of their typical weight without sacrificing performance.
McClelland views AI as a design consultant. Or rather, an entire team of designers who can compress months of work into a few days or hours under the supervision of human engineers. In traditional mechanical design, McClelland explains, a designer will come up with a CAD model that fulfills some requirement. Then, an analyst will evaluate it, which might take a week. This process iterates until, finally, there’s a drawing. Then another expert inspects the design to make sure it’s ready for manufacturing.
“The generative AI tool compresses that process,” he says. It does all of it internally, on its own, coming up with the design, analyzing it, assessing it for manufacturability, doing 30 or 40 iterations in just an hour. “A human team might get a couple iterations in a week.”
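The generate-analyze-iterate loop McClelland describes can be sketched as a toy optimization. The physics in `analyze` here is a made-up stand-in (a single thickness parameter trading mass against stress), not real finite-element analysis; only the loop structure mirrors his description.

```python
def analyze(thickness_mm: float) -> dict:
    """Toy stand-in for the analysis step: thinner members are lighter
    but more highly stressed (an assumed, illustrative relationship)."""
    return {
        "thickness_mm": thickness_mm,
        "mass_kg": 0.2 * thickness_mm,        # mass grows with thickness
        "stress_ratio": 3.0 / thickness_mm,   # stress vs. allowable shrinks with it
    }


def optimize(iterations: int = 40) -> dict:
    """Sweep candidate designs, keeping the lightest one that passes analysis,
    the way the tool internally runs dozens of design-analyze cycles."""
    best = None
    for i in range(1, iterations + 1):
        candidate = analyze(thickness_mm=i * 0.25)  # 0.25 mm steps
        if candidate["stress_ratio"] <= 1.0:        # passes the (toy) stress check
            if best is None or candidate["mass_kg"] < best["mass_kg"]:
                best = candidate
    return best


print(optimize())
```

Running 40 such iterations takes milliseconds here; the point is that each cycle a human team would spend a week on, the tool folds into one loop pass.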
McClelland stresses that this process cannot work without human input. “You still have to apply human intuition,” he says, noting that generative AI tools tend to make parts too thin. “We can look at a tree branch and think, I’m not going to hang from that tree branch because it doesn’t look strong enough,” he explains. That’s not something AI can do.
The parts have already been put to use across various missions, including Mars Sample Return and EXCITE. And across NASA, generative AI is being used in other proposals, McClelland says. “Somebody also took the generative AI design approach to create a model of a quadcopter that may go to Titan.”
Although it is hard to know exactly to what extent generative AI technology will become part of NASA’s design process, it’s undeniable that this way of designing will fundamentally change the way humans create machines. “You’re starting to see people exploring the design of electronics,” says McClelland. “You tell the AI what a circuit needs to do, and it autonomously comes up with a circuit. There’s even someone at Goddard that’s working on 3D printing circuits and wires, which are like the nervous system to my [Evolved Structure] bones.”
McClelland says it will take some time before these disparate AI-generated parts can be combined into one process. It’s not yet possible to write a prompt that says, “Build me a spaceship to go to Jupiter in 30 days” and have the AI design and print it out. Eventually, though, McClelland believes some version of that is where things are heading. He imagines this moment is in some ways akin to early humans picking up their first tools. Indeed, to me, it feels like we are the hominids who found the monolith in 2001: A Space Odyssey. Generative AI is our new obsidian block, opening a hyper-speed path to a completely new industrial future. It only seems right that the first of its children to go to space look like they’re returning to their alien mothership.