The EU AI Act: The Brussels Effect
The EU AI Act has set the global standard, categorising AI systems by risk. Crucially, its reach is extraterritorial: under the ‘Brussels Effect’, a US company placing AI systems on the EU market, selling to EU clients, or whose system outputs are used in the EU must comply.

Risk Categorisation: The Act prohibits ‘unacceptable risk’ systems (e.g., social scoring) and places strict obligations on ‘high-risk’ systems. In finance, credit scoring and creditworthiness assessment systems are explicitly listed as high-risk, as is risk assessment and pricing in life and health insurance.

Obligations: Providers of high-risk systems must ensure high-quality data governance (to prevent bias), maintain detailed technical documentation, and ensure effective human oversight.

The Cost: Non-compliance carries fines of up to 7% of global annual turnover or €35 million, whichever is higher. For a global bank or a scaling Fintech, this is a risk factor that directly impacts valuation models and requires immediate mitigation.
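To make that exposure concrete, here is a back-of-the-envelope sketch in Python of the headline ‘whichever is higher’ fine ceiling cited above; the turnover figure is purely illustrative.

```python
def ai_act_max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Headline EU AI Act fine ceiling: the higher of EUR 35 million
    or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a bank with EUR 50bn in global annual turnover (illustrative figure),
# the percentage term dominates: exposure is EUR 3.5bn, not the EUR 35m floor.
print(f"EUR {ai_act_max_fine_eur(50e9):,.0f}")  # EUR 3,500,000,000
```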
Compliance is now a technical constraint that must be engineered, not documented.
US Banking Regulations: SR 11-7 and E-23
In North America, regulators are using existing, powerful frameworks to police AI. The Federal Reserve’s SR 11-7 (Supervisory Guidance on Model Risk Management) and Canada’s OSFI Guideline E-23 are being applied aggressively to generative AI.

Defining ‘Model’: Regulators have clarified that AI and GenAI systems are ‘models’, which subjects them to the same rigorous validation standards as traditional credit risk models. Banks must perform independent validation, stress testing, and ongoing monitoring of their AI agents (illustrated in the sketch below).

Third-Party Risk: The guidance explicitly holds banks responsible for the AI models of their vendors; a bank cannot outsource liability. This ‘vendor risk’ creates a massive market opportunity for startups that can provide ‘auditable’ AI, and sounds a death knell for those that cannot demonstrate transparency.
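As an illustration of what ‘ongoing monitoring’ can look like in practice, here is a minimal Python sketch of a Population Stability Index (PSI) check comparing recent production scores against the validation-time distribution. PSI is a common industry technique, not something SR 11-7 or E-23 prescribes, and the 0.10/0.25 thresholds are rules of thumb rather than regulatory requirements.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               current: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a reference (validation-time) score distribution and
    current production scores. Rules of thumb: < 0.10 stable,
    0.10-0.25 monitor, > 0.25 investigate / revalidate the model."""
    # Bin edges taken from the reference distribution (decile cut points).
    edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
    # Clip production scores so out-of-range values land in the end bins.
    current = np.clip(current, edges[0], edges[-1])

    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert to proportions; epsilon avoids division by zero and log(0).
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference_scores = rng.beta(2.0, 5.0, size=50_000)    # scores at validation
    production_scores = rng.beta(2.4, 5.0, size=10_000)   # recent production scores

    psi = population_stability_index(reference_scores, production_scores)
    status = "stable" if psi < 0.10 else "monitor" if psi < 0.25 else "investigate"
    print(f"PSI = {psi:.3f} -> {status}")
```

In a production model-risk workflow, breaching the upper threshold would typically trigger the independent revalidation and documentation steps the guidance calls for.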
Compliance-by-Design turns regulatory burden into a competitive product feature.
Case Studies in Failure: The Cost of Bias
The regulatory crackdown is not theoretical. The 2024 enforcement action against Apple and Goldman Sachs ($89 million in penalties and redress) over failures in the Apple Card dispute-handling system demonstrated that regulators will punish operational failures rooted in algorithmic opacity; the inability to explain or manage the system was a key factor in the severity of the fines. Similarly, recent ‘chatbot failures’ such as the Air Canada case (decided on contract grounds, but with clear implications for AI liability) carry an overarching lesson: companies are liable for the output of their agents. If an AI agent hallucinates a refund policy or a trade execution, the company pays the price.
The Strategic Response: Compliance-by-Design
To navigate this landscape, AltaBlack advises a strategy of ‘Compliance-by-Design’. This involves:

Regulatory Mapping: Mapping every AI use case to its specific regulatory obligations (e.g., ‘Credit Agent’ → high-risk under the EU AI Act → SR 11-7 validation).

Auditable Architecture: Building systems that generate their own compliance artifacts. For example, an agent that logs its ‘Chain of Thought’ reasoning for every credit decision automatically creates the audit trail required by regulators (see the sketch after this list).

Human-in-the-Loop Protocols: Hardcoding requirements for human review of high-stakes decisions. This is not just good practice; it is often a legal defence against claims of negligence.
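Here is a minimal Python sketch of how these three elements might fit together. The REGULATORY_MAP entries, the DecisionRecord fields, and the finalise() gate are illustrative assumptions, not a prescribed schema or any regulator’s required format.

```python
"""Compliance-by-Design sketch: regulatory mapping, an auditable decision
record, and a human-in-the-loop gate for high-stakes credit decisions."""
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import hashlib
import json

# 1. Regulatory mapping: each AI use case is tied to its obligations up front.
REGULATORY_MAP = {
    "credit_agent": {
        "eu_ai_act": "high-risk (creditworthiness assessment / credit scoring)",
        "us": "SR 11-7 model validation and ongoing monitoring",
        "canada": "OSFI E-23 model risk management",
        "human_review_required": True,
    },
    "marketing_copy_agent": {
        "eu_ai_act": "limited risk (transparency obligations)",
        "human_review_required": False,
    },
}

# 2. Auditable architecture: every decision emits its own compliance artifact.
@dataclass
class DecisionRecord:
    use_case: str
    model_version: str
    inputs: dict
    reasoning_trace: list[str]      # the agent's logged step-by-step rationale
    decision: str
    reviewer: Optional[str] = None  # filled in when a human signs off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_log(self) -> str:
        """Serialise the record with a content hash for tamper evidence."""
        payload = json.dumps(asdict(self), sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        return json.dumps({"record": json.loads(payload), "sha256": digest})

# 3. Human-in-the-loop protocol: high-risk use cases cannot auto-finalise.
def finalise(record: DecisionRecord, reviewer: Optional[str] = None) -> str:
    obligations = REGULATORY_MAP[record.use_case]
    if obligations["human_review_required"] and reviewer is None:
        raise PermissionError(
            f"{record.use_case} is mapped to {obligations['eu_ai_act']}; "
            "a named human reviewer must sign off before the decision is issued."
        )
    record.reviewer = reviewer
    return record.to_audit_log()


if __name__ == "__main__":
    record = DecisionRecord(
        use_case="credit_agent",
        model_version="credit-scorer-v2.3",
        inputs={"applicant_id": "A-1024", "requested_amount": 25_000},
        reasoning_trace=[
            "Debt-to-income ratio 0.41 exceeds the 0.36 policy threshold.",
            "No delinquencies in the past 24 months.",
            "Recommend approval with a reduced limit.",
        ],
        decision="approve_with_conditions",
    )
    print(finalise(record, reviewer="j.doe@bank.example"))  # succeeds
    # finalise(record)  # would raise PermissionError: human sign-off required
```

The design choice that matters is that the audit artifact (reasoning trace, reviewer, content hash) is produced as a by-product of the decision itself, rather than reconstructed after the fact.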
Conclusion: Compliance is a Moat
In a regulated industry, demonstrable compliance is a product feature. Startups and institutions that can show a robust, ‘White-Box’ approach to AI governance will win the trust of risk-averse buyers and regulators alike. In the Liability Cycle, compliance is the ultimate moat.