As Artificial Intelligence (AI) increasingly transforms financial services, the EU AI Act introduces the first comprehensive regulatory framework for AI across the European Union. Its aim is to ensure AI is developed and used safely, responsibly and with appropriate oversight.
Understanding and preparing for this Act is essential for regulated financial services institutions aiming to leverage AI effectively.
Key messages:
- The EU AI Act establishes a clear risk-based regulatory framework aimed at fostering safe, transparent, and ethical AI innovation.
- Financial institutions will recognise strong alignment between the Act’s requirements and their existing governance and compliance frameworks.
- Proactive planning is essential to meet phased deadlines, manage compliance risks and leverage AI benefits safely, without disruption.
What the EU AI Act means for regulated firms
The EU AI Act introduces a regulatory framework that will feel familiar to financial institutions accustomed to managing operational and conduct risk. Rather than imposing an entirely new model of oversight, the Act builds on core regulatory expectations (transparency, accountability, and risk governance) and applies them to AI systems.
The Act is designed to ensure that AI is deployed in a way that is not only innovative, but also responsible. The following core principles underpin the regulation:
- Trustworthy AI: ensuring AI systems are transparent, reliable, and safe, building consumer and market confidence.
- Protection of fundamental rights and safety: preventing harm and protecting privacy and non-discrimination.
- Promotion of innovation: encouraging AI development within clear ethical guidelines.
- Risk-based approach: scaling regulatory intensity with potential impact, prioritising controls for high-risk AI.
- EU as a global leader: establishing Europe’s influential role in global AI governance.
Key compliance milestones under the EU AI Act
2 February 2025: prohibitions on certain AI practices and AI literacy requirements begin
- From this date, firms must cease the development or deployment of AI systems that fall under the Act’s prohibited use cases, including:
  - social scoring by public authorities;
  - exploiting vulnerabilities of specific groups; and
  - subliminal manipulation likely to cause harm.
- To comply, firms should conduct an AI inventory to identify whether any existing or planned systems may be non-compliant. Any system falling into the unacceptable-risk category should be remediated or withdrawn.
- Firms should ensure that staff and third parties involved in the provision or deployment of AI systems meet appropriate AI literacy requirements.
2 August 2025: general-purpose AI obligations and governance provisions begin
- From this date, obligations for general-purpose AI (GPAI) models apply, alongside the Act’s governance and penalties provisions. For financial services firms, this is also the point to accelerate preparation for the high-risk AI obligations (e.g. in credit scoring, AML, recruitment, biometric identification) that take effect from 2 August 2026. Key obligations for high-risk systems include:
  - performing conformity assessments to ensure high-risk systems meet regulatory requirements;
  - implementing risk management and quality assurance frameworks tailored to AI-specific risks;
  - establishing human oversight mechanisms and maintaining clear, auditable documentation; and
  - registering high-risk AI systems in the official EU database.
- Firms should classify AI systems using the Act’s risk-based framework, and ensure documentation, governance, and controls are in place; a minimal classification sketch follows below.
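To make the classification step concrete, the sketch below shows one way a compliance team might record inventoried AI systems and assign a provisional risk tier for triage. This is a minimal illustration, not a compliance tool: the tier names mirror the Act’s risk categories, but the use-case mappings and the AISystem fields (name, use_case, supplier, deployed_in_eu) are hypothetical, and any real classification must rest on legal analysis of the Act’s prohibited and high-risk categories.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # e.g. credit scoring, recruitment, biometrics
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no specific obligations under the Act

# Hypothetical mappings for triage only; real classification requires
# legal analysis against the Act's prohibited and high-risk use cases.
PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"credit_scoring", "aml_screening", "recruitment", "biometric_id"}
TRANSPARENCY = {"customer_chatbot", "content_generation"}

@dataclass
class AISystem:
    name: str
    use_case: str
    supplier: str         # "internal" or a third-party vendor
    deployed_in_eu: bool  # relevant to the Act's extraterritorial scope

def provisional_tier(system: AISystem) -> RiskTier:
    """Assign a first-pass tier so compliance teams can prioritise review."""
    if system.use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if system.use_case in HIGH_RISK:
        return RiskTier.HIGH
    if system.use_case in TRANSPARENCY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Triage a small inventory, listing the systems needing urgent attention first.
inventory = [
    AISystem("CreditScorer", "credit_scoring", "VendorX", deployed_in_eu=True),
    AISystem("ChatAssist", "customer_chatbot", "internal", deployed_in_eu=True),
]
for s in sorted(inventory, key=lambda s: s.use_case in HIGH_RISK, reverse=True):
    print(f"{s.name}: provisional tier = {provisional_tier(s).value}")
```

The triage ordering simply surfaces high-risk systems first, since those attract the heaviest documentation, oversight, and registration obligations.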
2 August 2026: full application of the EU AI Act
- All remaining provisions of the EU AI Act apply from this date, including enforcement mechanisms, supervisory oversight, and fines for non-compliance.
- Firms should ensure all internal teams (compliance, risk, data science, etc.) are aligned on ongoing compliance requirements and monitoring.
- Continuous monitoring, documentation, and internal audits should be embedded to demonstrate sustained adherence to the Act.
Practical steps for compliance
To prepare effectively, financial institutions should integrate AI governance and risk management into existing frameworks:
- Define your AI strategy in alignment with EU AI Act provisions
- Develop or update AI policies detailing accepted uses and alignment with regulatory principles
- Clarify risk appetite in relation to AI risks
- Conduct comprehensive risk assessments of all current and planned AI systems and maintain ongoing monitoring
- Implement controls and assurance to ensure compliance and operational integrity
- Embed AI considerations into due diligence for new technologies and vendor assessments
- Update internal policies and procedures to reflect the integration of AI
- Conduct regular staff training on AI risks, regulatory obligations and ethical use to ensure compliance awareness and operational safety
Cross-border impact: the EU AI Act and UK businesses
Firms should be mindful that, while the EU AI Act does not directly apply to UK-based companies, it may still apply indirectly in certain situations, for example:
- Placing AI systems on the EU market: if a UK firm places an AI system on the EU market, whether through sale, distribution, or otherwise, the Act applies, regardless of the provider’s location.
- Using AI systems within the EU or targeting EU users: if a UK firm uses AI systems within the EU or provides services that use AI targeting individuals in the EU, it may fall within scope.
- Impacting individuals located in the EU: if the output of an AI system developed by a UK firm affects individuals in the EU, the firm may also be subject to certain provisions of the Act.
In short, the Act has extraterritorial reach: it applies to providers who place AI systems on the EU market, or whose systems are used in ways that affect individuals within the EU, regardless of where the provider is based. UK firms should therefore carefully assess their AI activities for potential EU exposure.
Conclusion
While the EU AI Act introduces a new layer of regulatory obligations, its overarching aim is to support the safe, ethical, and effective use of AI. By integrating AI into familiar compliance frameworks and preparing early for upcoming deadlines, firms can not only reduce regulatory risk but also responsibly maximise the benefits of AI to enhance decision-making, customer experience, and operational efficiency.
Given the complexity and scope of the requirements, proactive preparation is essential. Key actions firms should already be taking include:
- Conducting AI inventories to map internal and third-party use of AI
- Classifying AI systems according to the Act’s risk-based framework
- Reviewing contracts and supplier relationships to ensure compliance alignment
- Implementing training and governance structures to build internal capability
If you would like to discuss any aspects of this article, please feel free to reach out to Stuart Smith or your own fscom advisor.
This post contains a general summary of advice and is not a complete or definitive statement of the law. Specific advice should be obtained where appropriate.