The social media dilemma: Why Utah is creating a 'kill switch' for the AI era
Imagine a world where technology not only helps us but also safeguards our well-being. This is the ambitious vision driving Utah's innovative stance on artificial intelligence (AI) governance. In a surprising twist for a state usually wary of excessive regulation, Utah is emerging as a national leader in shaping AI policy. But what prompted this dramatic shift? To find out, we must delve into the legal challenges facing Margaret Woolley Busse, the executive director of the Utah Department of Commerce.
For the past four years, Busse has embarked on a comprehensive "social media journey," taking a stand against major technology corporations like TikTok, Meta, and Snap. However, her efforts extend beyond mere legal disputes; they serve as crucial lessons for the future. "We haven't really addressed the issues with social media, and now we're confronted with this new challenge," Busse explains, referring to the rapid proliferation of generative AI. "The same harmful business models that plagued social media—collecting data for profit—are resurfacing. We resolved not to repeat those mistakes."
The crisis of confidence
Busse characterizes the situation surrounding AI as a "political crisis." While tech giants in Silicon Valley prioritize speed and innovation, public sentiment is marked by growing unease. Disturbing incidents involving AI-driven "companions" and rising rates of self-harm among teenagers have shifted perspectives; many families now view the allure of an advanced technological landscape with suspicion rather than excitement.
"Anxiety has overtaken enthusiasm in how Americans perceive AI," Busse noted. "Without trust in the technology, we risk facing a significant backlash. Our approach must foster confidence—this cannot hinge on voluntary assurances from tech companies but necessitates substantial transparency."
Concrete legislative measures in 2026
In response to these challenges, Utah is developing a regulatory framework anchored by foundational pillars spanning regulatory policy, public protection, education and workforce development, academia, and state governance. The 2026 legislative session is advancing HB286, known as the "Artificial Intelligence Transparency Act," sponsored by Representative Doug Fiefia of Herriman. Unlike Utah's earlier social media laws, the bill classifies sophisticated AI models as product features rather than forms of "free speech." It mandates that developers of so-called "frontier models" must:
- Publish child protection strategies: clearly outlining how they prevent harmful targeting or emotional manipulation of minors.
- Offer whistleblower protections: safeguarding employees at AI firms who report safety concerns or potential "catastrophic risks" from retaliation.
- Implement strict enforcement measures: Busse emphasized that the state will not merely issue gentle warnings. Offenses could result in civil penalties of up to $1 million for the first infraction and $3 million for subsequent violations. Should a company operating in Utah's "learning lab" fail to meet safety standards, the state can swiftly revoke its regulatory advantages, exposing it to full legal liability.
A doctor, not just a device
Utah's commitment to a "trust-first" model is vividly illustrated in its recent collaboration with Doctronic, the inaugural state-sanctioned program that permits AI to engage in medical decision-making for prescription renewals. While the prospect of an automated system approving heart medication might raise eyebrows, Busse asserts it could enhance safety compared to current practices. "This process may actually be more thorough than what some doctors provide... it raises questions that a physician might overlook," she stated.
Stringent safeguards apply: the AI is prohibited from managing controlled substances such as opioids and operates under a carefully regulated "phased review." For the initial 250 patients in the pilot program, a human doctor must approve each prescription before the system gains greater autonomy. This approach aims to complement physicians' work without diminishing their importance.
A future that enhances humanity
Ultimately, Busse envisions AI as a "human-enhancing" technology—tools designed to tackle pressing challenges, such as cancer, rather than simply keeping us passive. This vision extends into educational settings, where partnerships with organizations like SchoolAI aim to ensure that technology acts as a supportive tutor for students and an aid for teachers, rather than replacing the invaluable mentorship that educators provide.
As Utah spearheads the national discourse on AI, Busse's message to the industry is unmistakable: in the Beehive State, prioritizing quality and safety is not a barrier to innovation—it is essential for sustaining public trust and fostering continued progress.
Stay tuned for the next part of this three-part series, which will delve deeper into the SchoolAI initiative and its application of AI in educational environments.