A U.S. company’s chatbot accessible to EU users could soon trigger major compliance hurdles and costs for finance chiefs.
As of Aug. 2, the EU AI Act enters a new enforcement phase, requiring that newly deployed AI systems, such as chatbots, and AI-generated content be clearly labeled and disclosed when placed on the EU market.
For CFOs of U.S.-based firms, this brings fresh obligations, a compliance lift and potential new costs. Like the Colorado AI Act, the EU regulation applies to both deployers — companies that use AI systems — and providers, companies that develop AI models.
CFOs need to be alert to two key risks: their companies could be fined either 35 million euros per incident or 7% of global annual revenue — whichever is higher. In some cases, their AI platforms could even be shut down, said Rohan Massey, a leader in Ropes & Gray’s data, privacy and cybersecurity practice and a managing partner in the law firm’s London office.
“If you have a system that you’ve built your business around that suddenly you’re told you’re not allowed to use — that doesn’t make for a bad day or bad week. That stops your business,” Massey said in an interview.
Providers of general-purpose AI models — broad AI systems, including large language models that operate in the EU — must begin complying with requirements starting Aug. 2. This includes submitting technical documentation, publishing summaries of training data, providing information to downstream users and adopting copyright and risk mitigation policies if their models pose systemic risk. Models already on the EU market before Aug. 2 benefit from a two-year grandfathering period, giving providers until 2027 to comply.
For U.S. finance teams, the implications are wide-ranging: clarifying their role, budgeting for training and legal review, and documenting decisions under a law that applies extraterritorially. Training requirements for staff handling AI systems took effect on Feb. 2, 2025.
Many companies without in-house capacity are likely to turn to outside counsel — a cost that CFOs should factor into compliance planning, Massey said.
Deployer or provider? It’s murky.
The challenge many companies will face is determining whether they are deployers or providers under the law. While EU guidance suggests that a substantial modification, such as altering more than 33% of a model’s training compute, may make a company a provider, the distinction can be murky. Companies will also have to evaluate whether they’ve shifted from deployer to provider as they customize foundation models, which could be a major issue.
Massey noted that the law does not clearly define when the threshold is crossed, though this ambiguity will likely be resolved through enforcement over the next five years.
Certain so-called high-risk AI systems, defined broadly by the regulation, could drive up costs if companies are forced to temporarily shut down systems to meet strict additional documentation and risk-management requirements, Massey said. These systems include those used in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration and justice. The related obligations won’t take effect until mid-2026.
“It may be that a lot of organizations find themselves deemed high risk, which I think — especially from a U.S. perspective — kind of sets people back a bit,” he said.
Meanwhile, "unacceptable risk” AI systems were banned outright under the legislation in 2024, including applications for such uses as social scoring and emotion recognition in schools or workplaces and manipulative AI targeting vulnerable populations. Companies found using or offering these tools in the EU can face immediate enforcement action, including removal from the market.
Early preparatory steps, including inventorying AI systems and assessing provider vs. deployer roles, can help contain costs and reduce risk, but compliance could be a moving target.
“We’re likely to see more guidance coming out throughout the course of the end of this year, beginning of next year — and it may actually be that the guidance comes out rather close to the sort of implementation deadlines, which doesn’t give you good enough time,” Massey said.