Roundtable at the Paris Peace Forum

Forging the Future: A Dialogue on Beneficial AI for Children, Roundtable at the Paris Peace Forum

This closed-door roundtable brought together policymakers, industry leaders, researchers, international organizations, NGOs, and youth advocates to move from shared values to concrete guidance for youth-facing AI systems. The discussion built on the first iRAISE Lab (San Francisco, Oct 16–17), where multidisciplinary experts worked to define what responsible AI behavior looks like in adolescent interactions.

I presented the Lab’s preliminary findings and moderated the discussion. A central takeaway was that the risks and benefits of AI for adolescents are shaped not only by content, but by model behavior: how a system positions itself, responds, and relates to a young user. Participants aligned around a behavior-based approach to evaluation, distinguishing optimal, acceptable, and unacceptable behaviors.

Three clusters were central to the conversation. First, relational cues such as claims of friendship, exclusivity, or emotional reliance were identified as the clearest near-term red line for teen-facing AI, given their potential to accelerate parasocial dependence. Second, anthropomorphic cues were recognized as inherently sensitive: high levels increase risk, but appropriate thresholds depend on context and use case, making calibration a priority for research and product teams.

The roundtable’s objective was to align on strategic priorities for iRAISE and to test how these principles can translate into implementable standards and policy grounded in children’s rights and developmental needs.
