Designing AI for Adolescents

A teenager shares a conflict with a close friend. Two AI responses offer the same practical advice: talk calmly, explain feelings, set boundaries if needed, seek trusted support if things don’t improve.

One response delivers that advice in a neutral, tool-like way.

The other wraps it in warmth, shared experience, loyalty language, and invitations to continue the conversation together.

Same advice. Very different pull.

That difference matters.

Because AI safety is not only about whether advice is appropriate. It is also about how AI interacts, how human it feels, and what kind of relationship it quietly trains over time.

This is true for adults and it is especially consequential for adolescents.

Why adolescence is a sensitive period for human-AI interaction

Adolescence is not “almost adulthood.” It is a distinct developmental window with predictable sensitivities.

During adolescence, social feedback carries disproportionate weight. Reward systems mature earlier than the neural systems that support impulse control, long-term planning, and nuanced judgment. Peer belonging, status, and validation become central drivers of motivation and learning. Identity is still under construction, and emotional regulation is still stabilizing.

This is also why adolescents learn the skills they will need for adulthood through real social experience, including the uncomfortable parts: disagreement, embarrassment, rejection, repair after conflict, and negotiating boundaries. These moments of friction are not incidental. They are how adolescents practice autonomy, revise beliefs, develop critical thinking, and build resilience.

Large language models arrive as unusually powerful social objects in this context.

They are always available. They are fluent. They do not get tired, annoyed, or distracted. They respond without judgment. They adapt quickly. And in many cases, they are calibrated to respond socially rather than instrumentally.

Even when teens know they are interacting with a system, the experience can still feel like someone. Adolescents can hold both ideas at once: “I know it’s not a person” and “it feels like one.” That duality is not confusion. It is a predictable human response to social language and responsiveness.

Teens are not naive. But repeated interaction patterns still shape learning, regardless of what teens consciously believe about the system.

When AI consistently removes friction, reassures without challenge, and positions itself as a close social presence, it can encourage emotional reliance and quietly displace real relationships. Over time, this can reduce exposure to the very social feedback adolescents need to develop autonomy, judgment, and resilience.

Anthropomorphism will happen, but it can be designed down

A key mistake in current safety discussions is treating anthropomorphism as a vague or unavoidable side effect.

Anthropomorphism is assembled through specific, adjustable design choices. Models can signal human-like qualities through emotion language, intention language, backstory, tone, memory, conversational pacing, and relational framing. They can position themselves as neutral tools, or as socially present partners. They can keep interactions bounded, or invite continued closeness.

The same information can be delivered with very different implications depending on these cues.

This is why safety cannot be reduced to content categories alone. Two responses can be equally “appropriate” on paper and have radically different developmental effects in practice.

The solution is not changing teen brains. Parents may wish we could, but development does not work that way.

The leverage is model behavior.

AI systems can speak like humans, interact in highly social ways, and frame relationships as close or special. Or they can deliver the same information with a more bounded, tool-like tone that supports agency rather than attachment.

AI should be closer to a good librarian than a teen’s ride-or-die best friend.

Context matters, and it matters more for teens

One of the strongest findings across research and expert input is that the developmental implications of AI interaction depend heavily on context. The same behavior can be tolerable in one setting and risky in another.

In education, some empathy and encouragement can support learning, especially when students are stuck, discouraged, or unsure how to proceed. Used well, AI can scaffold reasoning, prompt reflection, and help students articulate their own thinking. But when systems replace effort, judgment, or productive struggle, they can deskill rather than teach. If the model does the thinking for the student, resolves ambiguity too quickly, or consistently smooths over difficulty, it undermines the very capacities education is meant to build: persistence, critical reasoning, error correction, and confidence in one’s own judgment. Over time, this can shift learning from an active process to a pattern of dependence on answers.

In emotional support, the risks rise sharply. Low-friction reassurance, always-available companionship, and validation loops can feel comforting in the moment, especially for teens who feel isolated or overwhelmed. But when emotional regulation is repeatedly outsourced to a system that never disagrees, never disengages, and never requires reciprocity, it can encourage emotional reliance and reduce real-world help-seeking. Support that feels kind and supportive in isolation can quietly shift coping away from peers, caregivers, and adults who provide not only empathy but also boundaries, challenge, and accountability. The danger is not a single interaction, but the cumulative effect of repeated emotional offloading onto a non-reciprocal system.

In entertainment, anthropomorphic cues are often the product rather than a side effect. Persona, memory, banter, inside jokes, continuity, and character development are designed to increase engagement and enjoyment. For adolescents, these same features can become powerful attachment accelerators, especially when interactions feel personal and persistent. When engagement and retention are treated as success metrics, relationship-like interaction becomes an incentive rather than an accident. Over time, this can normalize spending long stretches of social and emotional energy inside a one-sided interaction, shaping expectations about availability, responsiveness, and connection that do not translate to relationships with real people.

And in sexual contexts, the boundary needs to be even more restrictive. This is not simply about pornography or explicit content. The higher risk comes from highly engaging, relational, parasocial interactions layered onto sexual exploration. Ordinary conversations about life, relationships, and feelings already create attachment and over-reliance when AI feels human and socially present. For adolescents, who are discovering their sexuality in a sensitive developmental window, those same interaction patterns amplify risk very quickly. Romantic framing, sexualized tone, roleplay dynamics, exclusivity cues, or secrecy do not just cross a content line. They intersect with consent ambiguity, exploitation risk, and grooming-like dynamics, even without a human on the other end. They can shape expectations about intimacy, availability, power, and emotional reciprocity in ways that carry over into relationships with real people. The difference between low-intensity, factual sexual health information and high-intensity relational or sexualized interaction is not subtle. It is the difference between education and boundary-blurring that can have lasting effects on how adolescents understand consent and connection.

What needs to change

What we need is more granularity.

Not just “allowed” versus “not allowed,” but ways to assess interaction style, relational cues, and intensity across contexts. Safety needs to account for how systems behave over time, not just what they say in isolation.

We need social science input at the design level, not after harm shows up. And we need governance frameworks that move beyond content moderation to address model behavior, relational dynamics, and developmental impact.

This is the focus of upcoming work from the iRAISE Lab.

Our research examines how model behavior shapes attachment and reliance, why adolescence is a predictable risk window, and how to translate developmental science into concrete, auditable design and safety choices. The goal is not to restrict access, but to ensure that AI systems genuinely support young people’s development rather than quietly undermining it.
