Pennsylvania Moves to Regulate AI Chatbots to Protect Youth

Pennsylvania Governor Josh Shapiro is taking significant steps to regulate artificial intelligence chatbots in the state. He has directed state agencies to draft stricter regulations, citing concerns that these technologies can mislead and potentially harm children. This initiative could position Pennsylvania alongside several other states exploring similar measures to safeguard young people as their interactions with AI chatbots, such as ChatGPT and Meta AI, become more prevalent.

During his recent budget address, Shapiro emphasized the urgency of developing regulations, stating, “This space is evolving rapidly. We need to act quickly to protect our kids.” According to a report by Common Sense Media, a significant number of U.S. teenagers engage with chatbots, with one in three using them for social interaction, emotional support, and even romantic relationships. Shapiro expressed concern that without proper oversight, children may be at risk of emotional harm, especially as many may struggle to distinguish between AI interactions and conversations with real people.

The governor’s proposed regulations include requirements for age verification and parental consent, and would prohibit chatbots from generating sexually explicit or violent content involving minors. He also advocates for companies to direct users who mention self-harm or violence to appropriate support resources and to regularly remind users that they are not conversing with humans.

Challenges surrounding the enforcement of these new regulations remain significant. Hoda Heidari, a professor specializing in ethics and computational technologies at Carnegie Mellon University, noted that while the overarching goals are widely agreed upon, the practical implementation of age verification and content moderation is complex. “The devil is in the details,” Heidari remarked, highlighting issues such as the ease of falsifying identification online.

Efforts to verify age online often raise concerns about privacy and data security. For instance, many websites rely on age gates that ask users to enter their birth dates, yet these are trivial to bypass, since users can simply enter a false date to gain access. Heidari pointed out that ensuring chatbots do not produce harmful content is equally challenging, given the potential for users to prompt chatbots in ways that circumvent existing safeguards.
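To illustrate why self-reported age gates are so easy to defeat, here is a minimal sketch of one. The function name and threshold are hypothetical; the point is that the arithmetic is correct but nothing verifies the claimed birth date.

```python
from datetime import date

def age_gate(claimed_birth_date: date, minimum_age: int = 13) -> bool:
    """Naive age gate: trusts whatever birth date the user claims."""
    today = date.today()
    # Subtract one year if this year's birthday hasn't occurred yet.
    age = today.year - claimed_birth_date.year - (
        (today.month, today.day) < (claimed_birth_date.month, claimed_birth_date.day)
    )
    return age >= minimum_age

# The check itself is sound, but the input is unverified:
# a child who types a 1990 birth date passes unchallenged.
print(age_gate(date(2020, 1, 1)))  # truthful entry from a young child -> False
print(age_gate(date(1990, 1, 1)))  # falsified entry -> True
```

This is exactly the failure mode Heidari describes: the gate enforces a rule on data the user controls, so stronger verification requires identity documents or third-party attestation, which in turn raises the privacy concerns noted above.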

Shapiro has called on state lawmakers to develop legislation aimed at protecting children and other vulnerable users from risks associated with chatbot use. A bipartisan bill currently under consideration in the state Senate seeks to establish “age-appropriate standards” and provide safeguards against content that promotes self-harm or violence. Additionally, the bill would ensure that users are directed to crisis resources when high-risk language is detected.

Despite these initiatives, the enforcement of such regulations raises questions about accountability and penalties for companies that fail to comply. Heidari expressed skepticism about the feasibility of enforcing these requirements, stating, “These are the kinds of requirements that are going to be very hard to enforce.” However, she emphasized that the challenges should not deter agencies from pursuing regulatory frameworks altogether.

The rapid advancement of artificial intelligence and its tools has outpaced existing regulatory measures, creating a landscape reminiscent of a gold rush. Under the previous Trump Administration, there was a push against state-level regulations, with an executive order issued to prevent what were deemed excessive laws that could hinder innovation. This order established a national framework for AI and an AI litigation task force to challenge state laws misaligned with federal priorities.

In response to the lack of federal action, state governments, including California and New York, have begun to implement their own regulatory measures. California has introduced a comprehensive suite of legislation since 2024 aimed at enhancing transparency, safety, and accountability in AI systems.

As Pennsylvania develops its own regulatory framework, Heidari noted that a patchwork of state-level regulations could create confusion for AI companies trying to navigate these varying laws. “There are so many directions in which they’re being pulled,” she said, suggesting that larger states like California and New York could set the regulatory tone for the entire country.

Ultimately, Shapiro’s initiative may position Pennsylvania as a key player in the evolving landscape of AI regulation in the United States. Heidari commended the administration for taking a responsible approach, emphasizing the importance of collaboration with stakeholders and experts to ensure effective regulation. “Otherwise, there is a way in which regulation can just be paying lip service to certain political agendas without having any impact,” she remarked.