
The Five Most Important Enacted AI Regulations Affecting US and European Organizations

The world moved swiftly to adopt enterprise AI. Here come the regulations. In this story, we cover what security and risk teams need to know to navigate the new regulatory waters.


AI systems are now capable of generating convincing misinformation, automating cyberattacks, and making high-stakes decisions about credit, healthcare, and public safety, and they are being deployed at scale. For a time, they were deployed without a legal accountability framework. Now governments worldwide are playing regulatory catch-up as they realize just how powerful these systems have become.

This regulatory shift reflects a growing global consensus that AI has become too consequential for society to remain unregulated. The EU, US states, and Asian governments have independently concluded that legal frameworks are necessary to ensure AI systems are transparent, auditable, and subject to human oversight before harms become widespread and irreversible.

As a result, enterprises face a fragmented compliance landscape in which identical AI systems in different regions trigger different—and sometimes conflicting—requirements: a credit-scoring AI might require EU conformity assessment and database registration, California incident reporting, Colorado impact assessments, and South Korea transparency labeling, with no harmonization across frameworks. Organizations must either build to the strictest global standard and absorb unnecessary costs in lighter-regulated markets or maintain jurisdiction-specific variants that multiply engineering complexity, compliance overhead, and the risk of costly mistakes.


Benjamin Hori, now chief strategy officer at Spotlite, a South Korea-based online marketplace for booking models and creatives, has been dedicated to combating AI misuse. As an international model earlier in his career, Hori especially dislikes deepfakes. Beyond fighting deepfakes, however, his current business must also manage digital identity protection and consent frameworks while adapting to new AI and data transparency requirements. If AI isn't already central to each of those efforts, it soon will be.

"The world moved incredibly fast to adopt AI solutions. We're only now starting to see the impact regulation can have on this sector. I expect stricter rules to trickle down that make it harder to collect data and use that data, which means anything you build today might need to be rebuilt tomorrow," Hori said. 

Elad Schulman, CEO and co-founder of Lasso Security, said this AI regulatory landscape is confusing because it is highly fragmented, with varying state and federal requirements and clear differences between the US, Europe, and other regions. "Organizations are facing a mix of regulatory frameworks, often with inconsistent definitions and expectations. In parallel, multiple international consortia are working to draft new regulations and standards based on real-world AI incidents, emerging attack techniques, and operational failures we are already seeing," Schulman said.


The Five Most Important Enacted AI Regulations Affecting US and European Organizations

The European Union AI Act. In force since August 2024, with obligations phasing in through 2027, the EU AI Act categorizes AI systems into four risk tiers: prohibited (social scoring, real-time biometric surveillance), high-risk (healthcare, law enforcement, employment, requiring third-party audits and EU database registration), limited-risk (chatbots requiring disclosure), and minimal-risk (no restrictions). Penalties reach €35 million or 7 percent of global turnover, whichever is higher. The law applies to any AI affecting EU individuals, making it mandatory for US companies serving European customers.

"The EU AI Act creates the heaviest operational lift," said Michael Bell, founder and CEO of Suzu Labs. Bell added that high-risk AI systems now require continuous logging, human oversight, and documentation that ties model outputs to specific training data and decision logic. And for organizations running production AI, that means retrofitting systems that were never built for auditability, he added. "Most enterprise AI deployments over the last three years prioritized speed to production, not regulatory compliance infrastructure," Bell said. 

The Transparency in Frontier Artificial Intelligence Act (California SB 53). Signed in September 2025 and now in effect, SB 53 targets frontier models trained using more than 10^26 FLOPs. Frontier developers must publish transparency reports and report critical incidents to California authorities within 24 hours (for imminent danger) or within 15 days otherwise. Large developers (revenue above $500 million) must publish frontier AI frameworks and conduct quarterly catastrophic risk assessments. Violations trigger penalties of up to $1 million.

 "California's frontier model reporting hits a smaller number of companies, but those companies feel it hard," Bell explained. "If you're training models above certain compute thresholds, you now have incident reporting obligations and safety evaluation requirements that didn't exist 18 months ago," Bell said

Singapore's AI Verify Framework. AI Verify is a voluntary, open-source testing framework that allows organizations to self-assess their AI systems against 11 governance principles using technical tools such as SHAP and AIF360. Currently limited to traditional supervised-learning models rather than generative AI, it produces reports organizations can share with stakeholders. Effectiveness depends entirely on market adoption; there are no penalties for non-participation.

 While AI Verify is voluntary, organizations aren't treating it that way. "Singapore's AI Verify is technically voluntary, but any company selling AI services to Singapore government agencies treats it as mandatory. The testing requirements are specific and technical. You need to demonstrate algorithmic transparency and fairness metrics with actual test results, not just policy statements," Bell explained.
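To give a sense of what "actual test results" can look like in practice, the sketch below shows the two kinds of evidence such assessments tend to ask for: per-feature attributions (here via the SHAP library) and a simple group fairness metric (a disparate impact ratio), computed on synthetic data. It is illustrative only; the column names, model, and protected-attribute encoding are invented for the example and are not the AI Verify toolkit's own schema.

```python
# Illustrative only: transparency and fairness evidence of the kind an
# AI Verify-style self-assessment asks for, computed on synthetic data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "debt_ratio": rng.uniform(0.0, 1.0, n),
    "group": rng.integers(0, 2, n),  # stand-in protected attribute (0/1)
})
y = ((X["income"] > 45_000) & (X["debt_ratio"] < 0.6)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Transparency evidence: mean absolute SHAP attribution per feature,
# using the model-agnostic explainer on a sample of rows.
explainer = shap.Explainer(model.predict, X)
attributions = explainer(X.iloc[:200])
mean_abs = np.abs(attributions.values).mean(axis=0)
print("Mean |SHAP| per feature:", dict(zip(X.columns, np.round(mean_abs, 4))))

# Fairness evidence: disparate impact ratio, i.e. the favorable-outcome
# rate of the unprivileged group divided by that of the privileged group.
pred = model.predict(X)
rate_unpriv = pred[X["group"] == 0].mean()
rate_priv = pred[X["group"] == 1].mean()
print("Disparate impact ratio:", round(rate_unpriv / rate_priv, 3))
```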

The South Korea AI Framework Act. Enforced since Jan. 20, 2026, the law targets high-impact AI in critical sectors, including healthcare, energy, and public services, and requires safety measures, risk management, and document preservation. Providers must notify users of AI use and label AI-generated content. Enforcement is light-touch, with fines capped at approximately US$21,000 and a one-year grace period.

Japan's AI Promotion Act represents an innovation-first, soft-law approach prioritizing voluntary cooperation over mandatory compliance. The framework emphasizes transparency and risk-based controls but relies on guidance, industry self-regulation, and reputation mechanisms rather than penalties or enforcement actions. 

Japan positions AI governance as a collaborative partnership between government and industry designed to foster innovation without imposing the compliance burdens that might deter market entry, experimentation, or rapid deployment. This deliberate choice reflects Japan's economic strategy to compete globally in AI development by minimizing regulatory friction while encouraging responsible practices through voluntary adoption of international standards and best practices.

 "Japan and Korea's soft law approaches create a different burden. Compliance is technically voluntary, but reputation and market access depend on demonstrating alignment. Companies operating in those markets have to maintain compliance documentation even without formal legal mandates because customers and partners expect it," added Bell.

The global regulatory range is considerable

"The EU wants hard obligations and penalties. Japan runs on cooperation and reputation. Singapore has voluntary verification. California mandates incident reporting. These aren't just different rules; they're completely different philosophies on how regulation should be applied," Hori said.

Hori added that, as a result of that wide range in legal expectations, large multinationals face difficult choices. "You build separate compliance tracks for each jurisdiction and staff accordingly," added Hori. "If you're an early-stage company trying to go global, you're essentially picking which regulation will allow you to scale efficiently without downstream conflict."

For organizations running AI in production, Lasso Security's Schulman said the most substantial new operational burden is discovery: identifying, assessing, and maintaining oversight of where AI is used across the organization, which models are in use, and which databases they have access to. "Without this visibility, it is impossible to secure AI and models or meet emerging compliance requirements. As AI adoption accelerates across teams and tools, controlled AI usage (continuously inventoried, secured, and compliant) has become a foundational operational challenge, not a one-time exercise," he said.
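What a discovery exercise produces, concretely, is an inventory. A minimal sketch of what one record in that inventory might capture is below; the field names, risk tiers, and example system are illustrative assumptions rather than any regulator's required schema.

```python
# Minimal sketch of an AI-system inventory record; fields, tiers, and the
# example system are illustrative, not any framework's mandated schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    name: str                                        # internal system name
    owner: str                                       # accountable team
    model: str                                       # underlying model or provider
    data_sources: List[str] = field(default_factory=list)   # databases it can reach
    jurisdictions: List[str] = field(default_factory=list)  # where users or data reside
    risk_tier: str = "unclassified"                  # e.g. an EU AI Act-style tier
    last_reviewed: Optional[date] = None

inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="customer-success",
        model="third-party hosted LLM",
        data_sources=["crm_prod", "kb_articles"],
        jurisdictions=["EU", "US-CA"],
        risk_tier="limited",
        last_reviewed=date(2025, 11, 1),
    ),
]

# The point of the exercise: "where is AI used?" becomes an answerable query.
never_reviewed = [r.name for r in inventory if r.last_reviewed is None]
print(f"{len(inventory)} systems inventoried; {len(never_reviewed)} never reviewed")
```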

When asked how organizations are building the "traceable, accountable AI stack" many of these rules expect, Steven Swift, managing director at Suzu Labs, said a lot of organizations aren't. "They're instead choosing to accept the risk of inaction. Many of these organizations are choosing to invest the minimum into checkbox compliance, which minimally meets the letter of the requirement, without providing the meaningful improvement in risk posture that the framework was intended to provide."

On the other end of the spectrum, some organizations are building systems that capture full prompt histories so that logs exist and are mappable, Swift added. "Tools such as LangSmith help create these audit trails for apps built within LangChain environments, for example. Even with detailed prompt logs, though, companies need to build in use-case-specific solutions to comply fully," Swift said.
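For teams not on LangChain, the underlying idea is simple enough to sketch: every model call gets appended to a log with enough context to map it back to a user and a use case. The wrapper below is a tool-agnostic, hypothetical example; `call_model`, the log path, and the field names are assumptions, and a production system would add access controls, retention policies, and redaction of sensitive prompt content.

```python
# Tool-agnostic sketch of a prompt/response audit log. LangSmith and similar
# tracing tools provide this automatically for LangChain apps; the wrapper
# below, including call_model and the log path, is hypothetical.
import hashlib
import json
import time
import uuid
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")

def logged_completion(call_model, prompt: str, *, user_id: str, use_case: str) -> str:
    """Call the model, then append one JSON record describing the exchange."""
    response = call_model(prompt)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "use_case": use_case,  # maps the call back to a business purpose
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

if __name__ == "__main__":
    # Stand-in model function; a real deployment would call its LLM provider here.
    stub_model = lambda p: f"(stubbed response to: {p[:40]}...)"
    logged_completion(
        stub_model,
        "Summarize the indemnification clause in this contract...",
        user_id="analyst-42",
        use_case="contract-review",
    )
```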

With his model and creative talent marketplace going global, how does Hori plan for Spotlite to meet global AI regulations? "It's a challenge," he said. "The toughest operational burden right now isn't the compliance itself — it's deciding what to build and how, when the regulatory ground is still shifting. This is genuinely one of the hardest parts."

"There's no clean answer," Hori concluded. "You default to the strictest standard where you can and accept that you'll be adjusting constantly as these frameworks mature. The honest reality is that most companies our size are operating with incomplete information and course-correcting as we go."

As AI technology evolves, implementations mature, and the regulations change, that's certainly the most likely path for many companies.

