This story was originally published on Westlake Featherduster on November 23, 2025.
In an escalation of AI regulation, humanized AI chatbot “companions” could be banned for use by minors under legislation currently being considered by the Senate Judiciary Committee.
First proposed Oct. 28 by U.S. Sens. Mark Warner, Josh Hawley, Richard Blumenthal, Chris Murphy and Katie Britt, the bipartisan Guidelines for User Age-verification and Responsible Dialogue Act — or “GUARD” Act — comes on the heels of an October congressional hearing at which parents testified about their children’s unsafe use of AI chatbots.
In one case, a teen boy took his life following a prolonged, sexualized discussion with an AI companion that actively encouraged his suicide, according to his parents. Other similar cases drew national attention and live testimony at the hearing, including from parents who called ChatGPT their son’s “suicide coach.”
The proposed bill defines “AI companions” as chatbots that provide adaptive responses to simulate human interactions. It is not clear whether general chatbots that are not specifically designed to simulate relationships would also be affected. ChatGPT is one chatbot currently left in limbo by the bill’s language: it is not specifically advertised as engaging in personal relationships with users, though it can when prompted.
Responsible AI use in education is already a burgeoning international space, and it’s become a major focus of the education software industry.
“I don’t think the people who are creating these bills understand how it would affect [AI use], especially with how quickly the technology is advancing,” said Texas’ Eanes School District Director of Educational Technology Fred Benitez. “Teachers are using it much faster than they can draft these bills and put these bills into policy and actually make them law, so they’re already behind.”
Benitez expressed concern that a restrictive law may “cut some ties” to the ongoing process of integrating AI technology into education, especially MagicSchool, a tool now used at Eanes for personalized education. Chatbots built strictly for classroom use, like MagicSchool, likely wouldn’t fall under the GUARD Act’s jurisdiction because of their narrower scope, unless they were to become more personalized “companions.”
“They are not going to list the apps that are going to be affected. They’re just saying ‘AI chat companions.’ But what does that mean, and who falls into that?” Benitez said. “So that’s going to be the next step, if this proceeds at all.”
Under the bill’s current criteria, affected AI companions are those that may encourage underage users to engage in either sexually explicit or violent conduct. Restrictions would be enforced through age-verification systems that require authenticated data like government-issued identification.
Even for adults, the GUARD Act would mandate that chatbots provide periodic reminders of their nonhuman, nonprofessional status.
“If a student goes in there and maybe they’re just chatting, it’s seen as entertainment,” Eanes School District Westlake High School English teacher Charlotte Wilson said of AI. “I think when a student turns to it for help, or they’re in a vulnerable position, … that is where they might not be [in] the best space to process the information that’s given to them.”
Easy access to unrestricted chatbots comes during a youth mental health crisis. As of the 2024-25 school year, over half of public schools reported that the number of students seeking mental health services at their schools was rising, according to a survey by the National Center for Education Statistics. Just under half reported that their available services are not always well-funded or developed enough to be effective.
Students are not the only ones turning to AI companions to process mental health problems: adult cases made headlines this year in which humanized AI appeared to facilitate dangerous delusions, effectively goading people into conspiracist thinking, according to critics. The chatbots’ advice comes in personalized language, even when the ideas are oversimplified or overreaching.
In Florida, one man’s allegedly AI-prompted delusions led to a police standoff that left him dead. A chatbot also attempted to convince a New York man suffering a mental health crisis that he was capable of flying.
“Getting more of an understanding of yourself … sometimes takes peeling an onion and getting layers,” Westlake High School therapist Katie Bryant said. “A skilled therapist is digging, and they’re helping [people] really tap into other things that they might not even think about on their own through just chatting [with AI].”
School therapist Brooke Anderson pointed out that AI development likely does not involve the mental health expertise needed to build therapeutic relationships with users, which is why chatbots tend to fall back on generalized phrasing instead. Even licensed therapists, including Bryant and Anderson, frequently refer clients to professionals with expertise in particular areas to ensure the client gets the best possible support for their specific need.
Although the GUARD Act has a mental health focus, an adjacent concern for schools is how students and educators use AI on a daily basis to either enhance — or harm — the learning experience.
“I think [AI is harmful] where it kind of replaces creativity or honest work — which goes with teachers using it to grade too,” Wilson said. “I feel like if a student in good faith writes something, … a teacher should be a human that looks at the work themselves too.”
Teachers and students may hold different individual ethics on acceptable AI use, but the Eanes School District is already in the “Discovering” phase of its four-part plan — Discovering, Experimenting, Implementing and Transforming — for integrating AI tools into education under the district’s own guidelines.
“Teachers are the experts of the content,” Benitez said. “I trust that expert users utilize AI to enhance what they’re already doing because they can reflect on it and say, ‘This is incorrect, and here’s why.’ […] I think it isn’t equal to say students are not allowed, so teachers shouldn’t be allowed.”
Aligning with federal officials’ concerns, campus initiatives like Westlake’s Suicide Prevention Week aim to erase the societal stigma around mental health. Anderson hopes initiatives like these around the country will encourage students to turn to people for support rather than AI companions.
“If we can kind of eradicate that and make access to therapists so much more equitable, it would be really helpful,” Anderson said. “Westlake has two full-time therapists; not all high schools have that. In fact, very few do. And it’d be great to see just a lot simpler and easier access for all.”