Dive Brief:
- U.S. artificial intelligence developers would be subject to data privacy obligations enforceable in federal court under a broad legislative proposal unveiled Friday by U.S. Sen. Marsha Blackburn, R-Tenn.
- Among other provisions, the proposed legislation would create a federal right for individuals to sue companies for using their personal data to train AI models without explicit consent, according to a section-by-section summary. It would allow for statutory and punitive damages, injunctions and attorney fees.
- Blackburn plans to formally introduce the bill in the new year to codify President Donald Trump’s push for “one federal rule book” for AI, the senator said in a press release.
Dive Insight:
The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.
The order called for the administration to work with Congress “to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.” The president directed David Sacks, White House special adviser for AI and crypto, and Michael Kratsios, science and technology adviser to the president, to jointly recommend federal AI legislation that would preempt any state laws in conflict with administration policy.
“Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that have hindered AI innovation,” Blackburn said in the Friday release.
Besides giving individuals the right to sue over data privacy claims, Blackburn’s proposed bill would also:
- require the Federal Trade Commission to craft rules establishing “minimum reasonable” AI safeguards;
- empower the U.S. attorney general, state attorneys general and private actors to file suit holding AI system developers liable for harms caused by their systems through “unreasonably dangerous or defective product claims”;
- require large, cutting-edge AI developers to implement protocols to manage and mitigate “catastrophic” risks related to their systems and file regular reports with the Department of Homeland Security;
- hold platforms liable for hosting an unauthorized digital replica of an individual if the platform has actual knowledge that the replica was not authorized by the person depicted; and
- mandate the reporting of AI-related job effects — including layoffs and job displacement — to the Department of Labor on a quarterly basis.
The legislation would preempt state laws governing the management of catastrophic AI risks, according to the summary. It would also “largely preempt” state laws addressing digital replicas to “create a workable national standard.”
However, the proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI.
The bill would become effective 180 days after enactment.