Adolescents & Anthropomorphic AI: Rethinking Design for Wellbeing
Author: Mathilde Cerioli, Ph.D.
Abstract
Conversational AI is now embedded in adolescents’ daily lives, supporting tasks that range from homework to emotional reassurance. Regardless of design intent, adolescents tend to relate to these systems socially. This raises a core question for governance and design: do AI interactions support adolescents’ development toward autonomy, resilience, and independent thinking, or do they foster reliance patterns that displace real relationships and weaken critical skills?
This report addresses a growing mismatch between the rapid deployment of socially fluent AI systems and the slower pace of developmental evidence. Drawing on industry consultations, multidisciplinary expert input, the iRAISE Lab, and global policy dialogue at the Paris Peace Forum, it translates converging concerns into actionable guidance.
The findings center on a behavioral framework that makes interaction risk auditable through three dimensions of model behavior: anthropomorphic cues, interactional cues, and relational cues. Treating these cues as adjustable gradients rather than binary properties shows how identical content can carry very different developmental implications depending on interaction style. The report identifies high-consensus guardrails that should apply immediately, alongside open questions that require further evidence.
Overall, the work reframes adolescent AI safety as a design responsibility grounded in children’s rights, providing a foundation for measurable standards, enforceable safeguards, and future research linking AI behavior to developmental outcomes.