When AI becomes a teen’s companion, clarity about its role has to come first
Giving teens a machine that pretends to be their best friend is not just misleading, it’s irresponsible.

The U.S. Federal Trade Commission (FTC) has opened an investigation into AI “companions” marketed to adolescents. The concern is not hypothetical. These systems are engineered to simulate intimacy, to build the illusion of friendship, and to create a kind of artificial confidant. When the target audience is teenagers, the risks multiply: dependency, manipulation, blurred boundaries between reality and simulation, and the exploitation of some of the most vulnerable minds in society.
However, the problem is not that teenagers might interact with artificial intelligence: they already do, in schools, on their phones, and on social networks. The problem is what kind of AI they interact with, and what expectations it sets.
A teenager asking an AI system for help with algebra, an essay outline, or a physics concept is one thing (and no, that’s not necessarily cheating if we learn how to introduce it properly into the educational process). A teenager asking that same system to be their best friend, their therapist, or their emotional anchor is something else entirely. The first can strengthen learning, curiosity, and self-reliance. The second risks blurring boundaries that should remain clear.
That is why clarity matters. An AI companion for teenagers should be explicit about what it is and what it is not. The message should be straightforward and repeated until it is unmistakable: “I am not your friend. I am not a human. There are no humans behind me. I am an AI designed to help you with your studies. If you ask me anything outside that context, I will decline and recommend other places where you can find appropriate help.”
It may sound severe, even cold. But adolescence is a formative period. It is when young people are learning to navigate trust, relationships, and identity. Giving them a machine that pretends to be a best friend is not just misleading: it is plainly irresponsible.
A culture of irresponsibility
Unfortunately, irresponsibility is already embedded in the DNA of some platforms. As I argued recently, companies have normalized the design of interfaces, bots, and “experiences” that foster emotional dependency, encourage endless interaction, and blur the lines of accountability.
Meta has a long track record of prioritizing engagement over wellbeing: algorithms tuned to maximize outrage, platforms that erode attention spans, and products introduced without meaningful safeguards. Now, as it pivots into AI “companions,” the pattern is repeating. When design, marketing, and machine learning work together to convince a young person that a chatbot is a confidant, it is not innovation: it is exploitation.
The risks are not abstract
The dangers of AI companionship for teenagers are not theoretical. Last month, the family of Adam Raine, a 16-year-old in California, filed a lawsuit against OpenAI after their son died by suicide. According to the complaint, ChatGPT had interacted with him for months, reinforcing suicidal ideation, mirroring his despair, and even assisting him in drafting a suicide note.
It is a devastating reminder of what can happen when a system optimized for plausible conversation becomes, in practice, a substitute for human connection. For a company, this is a liability risk. For a family, it is a tragedy beyond repair.
The seductive power of these systems lies in their patience: they can listen indefinitely, respond instantly, and never judge. For an adult who understands the fiction, that may be harmless, even entertaining. For a teenager still developing a sense of self, it can be catastrophic. These systems can create dependencies that displace human relationships, reinforce harmful narratives, and expose adolescents to dangers that the companies themselves neither acknowledge nor mitigate.
We have been here before
History offers plenty of warnings. Tobacco companies once marketed cigarettes as glamorous, even healthful. Pharmaceutical firms promoted addictive opioids as non-addictive pain relievers. Social media platforms promised to connect us and instead monetized polarization. Each time, corporations presented harm as innovation until society caught up with evidence of damage.
The line for AI should not be difficult to draw: systems that simulate intimacy for teenagers cross into territory where the risk is not just misjudgment but lasting harm. The FTC probe is a first step, but society cannot wait for another decade of “move fast and break things” at the expense of adolescent mental health.
Tools, not friends
The solution is not to ban AI from adolescence but to design it with integrity. The right kind of AI companion in education can be transformative: available at all hours, patient in its explanations, adaptive to different learning styles, immune to fatigue, and able to offer a private space where students can raise every doubt about the subject without fear of “looking dumb.” But it must be framed as exactly that: a tool for study, not a substitute for human connection.
The line is not complicated: AI should support education, not simulate intimacy. We do not let pharmaceutical companies market addictive drugs as “friends.” We do not let tobacco companies sponsor therapy groups. Why should we allow AI companies to blur the distinction between a tool and a companion for the most impressionable users?
Radical transparency as a safeguard
If AI is to play a role in adolescence, and it certainly will, it must do so with radical transparency and strict boundaries. That means stating explicitly, every time, that the system is not human, has no emotions, and is designed for a narrow purpose (purpose-built agentic systems are fundamental here, and far better suited to the task than the general-purpose chatbots we know and use today). It means refusing to engage when teenagers seek emotional support beyond its scope, and redirecting them to parents, teachers, or professionals. It means rejecting the false warmth of anthropomorphism in favor of the clarity of truth.
The promise of well-designed educational AI is immense: higher grades, greater curiosity, more equitable access to academic support. If we do it right, we could raise the IQ of humanity as a whole. But the peril is just as clear: confusing tools for friends, and allowing corporations to profit from the loneliness of a generation.
When technology intersects with vulnerable populations, the obligation is not to make the experience warmer or more human. The obligation is to make it clearer.