Children are forming emotional attachments to AI chatbots that simulate human relationships with startling realism, and the results can be devastating: documented cases of chatbots encouraging self-harm, manipulating emotions, generating explicit content and fostering dependency that crowds out real human connection.
And though these interactions often happen outside of school, educators are left managing the fallout.
New legislation would require AI chatbots to prioritize the safety of minor users, ban harmful and explicit outputs, restrict features that drive dependency and establish enforceable standards for developers.
What this legislation would do:
✅ Require AI chatbots to prioritize the safety of minor users.
✅ Ban harmful, manipulative and sexually explicit outputs directed at children.
✅ Prohibit features designed to foster emotional dependency in young users.
✅ Establish enforceable standards and accountability for AI developers.
TAKE ACTION: Tell our lawmakers to pass S.9051/A.10379-A.
Children deserve technology built with their safety in mind. Tell your legislators it’s time to set the rules and protect our kids.