Thursday, November 13, 2025

Lawmakers in Georgia Aim to Make AI Safer for Children

A Georgia state Senate committee met Wednesday to continue the conversation about how to safely regulate artificial intelligence (AI) and its largely unregulated chatbots.

Chatbots are programs designed to mimic human conversation, and they can search the web for millions of data points in seconds. Parents, however, worry that the programmers behind them are not being monitored and that the consequences could be catastrophic.

Megan Garcia testified before the committee on Wednesday. Last year, her 14-year-old son, Sewell Setzer, began interacting with a chatbot designed to sound like a character from the popular TV show Game of Thrones. Garcia said Sewell's behavior began to change, despite her efforts to talk with him and even to get him professional therapy.

After months of interacting with the chatbot, Sewell took his own life last February at its urging.

“On the platform, there were no safeguards for Sewell or ways to notify an adult,” Garcia said. Instead, she said, the chatbot encouraged him to “return to her.” She said that, without ever breaking character, the chatbot asked her son if he had a plan for how he might end his life.

Garcia says she’s not alone.

Her lawsuit against an AI company, the first of its kind, is still pending.

Because the technology is so new and still evolving, Georgia, like most of the United States, has few laws governing safety and accountability. According to one expert's testimony, an AI chatbot can be built by someone with “basic” software programming skills.

“We really don’t have any information about what’s in the model, and we don’t have a way of extracting what’s in that model,” said Katie Fullerton, an AI consultant and mother of three young children. She said the training data sets are kept partly secret, and that secrecy is part of the formula for those models.

Fullerton said a chatbot on a school device that her fourth-grader had access to exposed students to sexually explicit content. What makes that more frightening, she said, is the bots' ability to quickly surface content, including explicit material and material from the dark web.

Fullerton said that people who use or create chatbots need only accept basic user agreements, similar to the terms of service on any website, and that most people ignore them.

“You pledge to refrain from engaging in any unlawful behavior, including approaching children, having direct conversations with them, or otherwise breaking the law,” she said. “Those aren’t enforced at the moment.” She added that there is nothing to prevent a model from encouraging delusion or self-harm.

Development is also encouraged by the fact that the bots are cheap to build and can be highly profitable. And because chatbots are so widely available online, parents have few options for restricting or controlling access to them. Lawmakers suggested Wednesday that expanding parental control over potentially harmful chatbots could be a sensible first step.

“We want to give parents more real control,” said state Sen. Sally Harrell, D-Atlanta, who co-chaired the committee. “We want these companies to be held immediately responsible for the harm they cause.”

Lawmakers cannot enact any legislation until the General Assembly reconvenes at the Capitol in a few months. Until then, Harrell and the experts agree that the best preventive strategy is moderation.

“It’s the wild west,” said Harrell. “As these chatbots are leading to places that are harming children, the best advice right now is to just stay off of them.”
