
Trump Unveils New AI Regulatory Plan: What’s In It and What’s Missing

The White House’s new policy outline for regulating artificial intelligence, released Friday, covers a lot of ground, but one thing is clear: President Donald Trump wants the federal government to set the rules. And the proposed rules seem to fall far short of what consumer and privacy advocates argue is needed.

The generative AI revolution has been underway for years, yet US regulation has been slow to become reality. That’s despite increasing awareness of the harms and challenges of AI: the damaging effects of chatbots on mental health and child development, widespread legal conflict over copyright protection, and the dangerous spread of deepfakes and AI scams, to name a few.

Sen. Marsha Blackburn introduced a new policy package, called the Trump America AI Act, in Congress on Thursday. The Tennessee Republican’s bill is an attempt to codify the vision laid out in Trump’s 2025 AI Action Plan, while working through other legal details and providing guidance on implementing new laws (or amending existing ones).


Trump has maintained that the federal government should be responsible for regulating the AI industry, arguing that requiring AI companies to comply with 50 sets of state laws would prevent the US from “winning” the global AI race. However, a proposal to temporarily ban states from regulating AI failed in July, when it was removed at the last minute from a major budget bill, known as the “Big Beautiful Bill Act.”

Now, the White House is doubling down on its insistence that it should be in charge, with few exceptions. The plan addresses the biggest concerns people have about AI: job losses, conflicts over creators’ rights, rapidly expanding infrastructure like data centers, and the protection of vulnerable groups like children. But critics say it doesn’t go far enough to rein in the fast-growing AI industry.

“It’s light on protecting people and heavy on promoting harmful AI practices,” Alan Butler, president and executive director of the Electronic Privacy Information Center, said in a statement. “The American people deserve better, and Congress must do better than this.”

The White House’s proposed new AI rules

The White House’s 2026 AI proposal says Congress should not create a new regulatory body to oversee AI, but should instead let existing agencies and subject-matter experts govern as they see fit.

Protecting children: This is one area where the framework would not prevent states from creating their own laws. And many state governments are already leading the charge, especially in regulating romantic or companion chatbots.

The plan focuses on protecting children from AI-powered deepfakes, a major problem highlighted by AI-generated child sexual abuse material. Protecting young people from the negative effects of AI is an ongoing battle, with several high-profile cases of chatbots being linked to self-harm and suicide among young people.

Blackburn’s policy plan includes general language related to children’s online safety. Existing bills like the Kids Online Safety Act and the Children’s Online Privacy Protection Act are, in theory, designed to protect children, but legal and technology experts say they could create a chilling effect on free speech and lead to censorship.

While Trump’s AI framework mentions censorship, it is limited to preventing AI companies from building ideological bias into their products. Trump has previously railed against what he calls “woke AI,” a term the president and his supporters have used to attack concepts like diversity, equity and inclusion.

Job loss: It’s not just translators and data-entry workers who are worried about losing their jobs to AI; tech workers like coders and engineers are, too. There have been many concerns about AI disrupting the workforce, with giants like Amazon laying off thousands of workers in the name of AI-driven efficiency. The White House says it should use “non-regulatory” methods focused on workforce development and AI training.

Infrastructure: Consistent with Trump’s previous AI Action Plan, the draft calls for state and local governments to facilitate data center construction and operation. These facilities are becoming increasingly controversial, as nearby residents report environmental damage and strain on their existing electricity grids, leading to higher electricity bills.

Several technology companies have recently pledged to cover those higher electricity costs, but there is no mechanism to enforce the voluntary commitments.

Copyright: Whether the use of copyrighted material in AI training is fair use or copyright infringement is one of the biggest legal issues of the AI age. The plan underscores the administration’s position that AI training qualifies as fair use, meaning companies won’t have to get permission or pay for copyrighted content used to build their models.

But given the growing number of cases putting that same question before judges, the plan says the federal government should let those cases play out. So far, the limited rulings in cases involving Anthropic and Meta have yielded small victories for the tech companies, not the authors.

The draft document indicates that the federal government may be a future licensing partner for AI companies, saying it should “provide resources to make federal data sets accessible to industry and academia in AI-friendly formats for use in training AI models and systems.”

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’ copyrights in training and using its AI programs.)

Is the White House program doing enough?

Technology industry groups praised the administration’s proposals, while consumer advocacy groups responded with skepticism at best.

In a statement supporting the plan, the Consumer Technology Association endorsed a single nationwide set of rules rather than a state-by-state patchwork.

“AI can and will make us better, and we agree that children need special protections, First Amendment rights are very important, deepfakes are dangerous and must be controlled, and Congress should not act to limit AI platforms from relying on fair use protections,” the tech industry trade group said.

But according to Samir Jain, vice president of policy at the Center for Democracy and Technology, the government’s playbook is riddled with internal conflicts. Although it calls for the federal government to preempt state AI laws and regulations, it also says the federal government should not undermine state authority.

“The White House’s high-level AI draft contains some sound policy statements, but its usefulness to policymakers is limited by its internal contradictions and failure to address fundamental differences between different approaches to important topics like child online safety,” Jain said in a statement.

Ben Winters, director of AI and data privacy at the Consumer Federation of America, said the proposal puts Big Tech ahead of consumers.

“It’s encouraging to see some expressed desire to protect people from AI-generated scams and the exploitation of children’s data, but it’s not enough,” Winters said in a statement. “We need them to put their money where their mouth is on protections: serious funding for consumer protection organizations at both the federal and state levels. So far, they’ve done nothing but cut.”


