Stanford SAFE Annual Meeting Panel

When we talk about AI safety in autonomous vehicles, the risks are immediate and visible. That is why no one questions the need for strict standards and assessment frameworks.

With children and AI, the risks are different. They unfold slowly and often quietly: over-rewarding systems, emotional reliance, and blurred boundaries between reality and fantasy. These harms are harder to detect, but they are not secondary. Waiting for them to become obvious means we have waited too long.

This is what I spoke about at the Stanford Center for AI Safety Annual Meeting, during a panel moderated by Kiana Jafari, alongside Mariami Tkeshelashvili and Ellie Sakhaee. The discussion brought together perspectives from national security, policymaking, industry, and child development.

If innovation is not safe by design for young people, the question is simple: what exactly are we innovating for?
