The 2025 AI-regulation reset: What cross-border operators need to know
- Gvantsa Baidoshvili
Artificial intelligence is no longer simply a "tech feature": it is now infrastructure subject to legal regimes across multiple jurisdictions simultaneously. For companies building or deploying AI systems internationally, the compliance challenge is real, urgent, and unavoidable. Below is a contextualised overview of four key jurisdictions: the European Union, the United States, the United Kingdom, and Canada.
The European Union: Binding regulation, lifecycle obligations
The European Union has enacted the world's first comprehensive horizontal AI regulation, the AI Act, which entered into force on 1 August 2024. Its provisions apply on a staggered basis: according to European Commission documentation, the bans on "unacceptable risk" applications and the AI-literacy requirement apply from 2 February 2025; the rules governing general-purpose AI (GPAI) providers, along with many governance and transparency obligations, apply from 2 August 2025; and full application of most high-risk provisions is expected by 2 August 2026 or later.
EU compliance requires that AI systems be treated like regulated products: they must be subject to documented risk assessment, human oversight, logging, data-governance processes, robustness controls, post-market monitoring, and external audit or conformity mechanisms. In practical terms, an organisation that develops or deploys AI in the EU market must build its product-development cycle to generate compliance evidence, not simply add a legal clause.
United States: No federal AI statute—but enforcement and state regulation bite
The United States does not currently have a dedicated federal "AI Act." Instead, regulation of AI systems is emerging through existing consumer-protection law (enforced, for example, by the Federal Trade Commission), anti-discrimination law, sectoral regulation (financial services, employment, housing), and, increasingly, state legislation. One notable example is Colorado's Artificial Intelligence Act governing high-risk AI systems, which was passed in May 2024 and, after legislative amendment, takes full effect on 30 June 2026.
California, for its part, has adopted regulations on automated decision-making and on the use of AI in employment that take effect in the near term. For cross-border risk, the lesson is that US-exposed products cannot rely on the absence of a federal AI law. Companies must instead map which states' laws apply, what "high-risk" means in those contexts, and how liability may arise under existing laws rather than bespoke AI obligations.
United Kingdom: Sector regulators and existing law, not a dedicated statute
In the UK, the regulatory strategy is to apply existing powers under the data-protection regime, equality and discrimination law, and competition and consumer-protection law, with oversight by bodies such as the ICO, FCA and CMA. A dedicated UK AI Act has been discussed, but it has been delayed and is not yet in force. Existing law nonetheless retains real force: AI systems deployed in contexts such as employment, finance, housing or public services will still trigger obligations under frameworks such as the UK GDPR and the Equality Act 2010. For operators in the UK, the key takeaway is that "no AI Act" does not mean "no regulation." AI risk must instead be managed as part of general regulatory compliance and oversight functions.
Canada: Pending AI statute—but expectations already set
Canada's federal Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27 in 2022, has not been enacted: the bill died on the Order Paper when Parliament was prorogued in January 2025, so the statute remains pending rather than operational. Nonetheless, the government has published voluntary codes of conduct and risk-management guidance that signal how future enforcement is likely to look. Organisations operating in Canada would be wise to behave as if the obligations of a future statute already apply: internal risk assessment, documentation, transparency of automated decision-making, human oversight and impact monitoring are now best practice.
What this means for your business
Across these four jurisdictions, a clear pattern emerges: AI systems are being treated as regulated infrastructure—subject to oversight, traceability, audit-ready documentation and cross-border risk. The days when one could deploy “an algorithm” and adopt a standard licensing clause are over. Now the question is not simply “Is our contract boilerplate compliant?” but “Have we built the system, documentation and governance workflows that will stand up in multiple legal systems?”
From a practical perspective, the key steps include: mapping where your model is developed, trained and deployed; classifying whether it might fall under "high-risk" or "consequential decision-making" regimes; building human-review or override workflows; maintaining logs of training, updates and performance; updating procurement and vendor contracts to allocate AI-specific liabilities; and recognising that enforcement is already beginning, not merely looming.
Any company serious about scaling globally needs to treat AI compliance as part of product architecture and operations—not a legal appendix.
Seeking assistance? GB and Partners Law Office has lawyers experienced in this area. For support and guidance, please contact us at info@gbplo.com.
General Information: The information provided in this article is intended solely for general informational purposes and should not be construed as legal advice. The content is based on the author's understanding of information and relevant laws as of the publication date. It is important to note that laws and regulations are dynamic and can change over time; they may also vary based on location and specific circumstances.
No Legal Advice or Attorney-Client Relationship: The contents of this article do not constitute legal advice and should not be relied upon as such. The transmission and receipt of the information in this article do not constitute or create an attorney-client relationship between the reader and GB and Partners Law Office or its attorney partners.
Consultation with Legal Professionals: We strongly advise readers to seek the advice of a qualified legal professional for legal counsel tailored to their specific situation. Laws and regulations related to any area are complex and vary based on numerous factors.
Disclaimer of Liability: The author and publisher of this article expressly disclaim all liability in respect of actions taken or not taken based on any contents of this article. We do not assume any responsibility for the accuracy or completeness of the information provided.