Compliance

EU Artificial Intelligence Act for Financial Services: Duties, Dates, and a Practical Playbook

What banks and insurers must do under the EU AI Act—roles, evidence, timelines, and a two-week readiness plan you can start today.

By Lewis Cross·
EU Artificial Intelligence Act for Financial Services: Duties, Dates, and a Practical Playbook

Where we are now (and why this matters)

The EU Artificial Intelligence Act is law. It entered into force on 1 August 2024 and is rolling out in phases rather than a single big-bang day. The first bite was the ban on "unacceptable-risk" practices, which has applied since 2 February 2025. Transparency and governance duties for general-purpose models follow from August 2025. The broader operational requirements ramp up through 2026, with remaining high-risk hooks landing by 2027. In short: the calendar is real, and Brussels has said there's no pause coming. (Digital Strategy)


What actually changes for a bank or an insurer

The Act doesn't replace your governance; it hardens it. Two roles are central. Providers (you build or place an AI system on the market) owe technical documentation, a risk-management system, data governance notes, evaluations, and clear instructions for use. Deployers (you use an AI system in production) must run the system as intended, ensure human oversight, keep records and logs, and stop or adapt use if risks materialise. In a group structure, you can be both: a central platform team often looks like a provider internally, while a business unit using that platform is a deployer. The legal text is explicit about these roles and their obligations. (EUR-Lex)

General-purpose models (foundation models) add a wrinkle. Even when you buy them, you still own how they're used. That means verifying your upstream model provider's transparency and copyright summaries, then layering your own controls—retrieval, redaction, evaluation, and logging—around the way your people and processes actually use the model. The Commission has also introduced a voluntary Code of Practice for general-purpose model providers to evidence compliance on the way to the hard obligations. (European Parliament)


Dates you should plan around (plain English)

Think of the next two years in three steps:

  • From 2 February 2025: the ban on prohibited practices applies. If a pilot or vendor demo wanders into social scoring, broad emotion inference at work, or similar, redesign it or stop it. This is already live. (DLA Piper)
  • From August 2025: transparency and governance obligations for general-purpose models begin; Member States finalise enforcement setups; guidance and codes of practice kick in. Treat this as the point where buyers and auditors will start asking for papers, not promises. (European Parliament)
  • From August 2026 through 2027: most high-risk system requirements apply in practice by 2026, with the remaining classification hook and legacy general-purpose model backfill completed by 2027. Build evidence now so you're not recreating history later. (Software Improvement Group)

What "good" looks like in a live use case

Picture a bank that already runs a large-language-model assistant for collections. It summarises a customer's situation, drafts hardship replies, and suggests next steps by pulling from policy manuals and account notes. Under the Act, that bank writes a short system card explaining the intended purpose and limits ("draft only; must be approved by an agent"), adds data notes describing sources and masking, and turns on privacy-preserving prompt/output logs so decisions can be reconstructed for audit. It also stands up a small evaluation suite—grounding tests for accuracy, refusal tests for "off-limits" questions, and a privacy leak check. None of this slows the product team; it simply makes the operating model legible to risk and audit. (EUR-Lex)

An insurer can do something similar at First Notice of Loss. Document intake and classification already exists; what's new is traceability from document → chunk → answer so an adjuster can see sources, plus routine "red-team" probes for prompt injection via uploaded PDFs. If the claims desk accepts photos or video, add basic deepfake awareness checks at ingestion and record any mitigations in an incident log. Again, this is ordinary engineering discipline presented in a way supervisors understand. (EUR-Lex)


The smallest evidence pack that works

You do not need a hundred-page manual. You do need five artefacts you keep up to date:

  • A system card in plain language (what it's for; where it must not be used; known limitations).
  • Data notes (source, legal basis, retention, masking/tokenisation, and data-quality checks).
  • Evaluation results that live with the pipeline: accuracy/grounding, safety/refusal, privacy and basic adversarial tests; include pass/fail and the mitigation plan.
  • A one-page oversight playbook (who checks what; when to stop or roll back).
  • A logging approach that lets you reconstruct decisions without stockpiling personal data forever.

Those five map neatly to the provider/deployer obligations in the Regulation and will answer 80% of the first questions a committee—or a national AI authority—will ask. (EUR-Lex)

How to start without pausing delivery

Run a two-week sprint. In week one, list your live and near-live use cases and draft the system card and data notes for the top three. In week two, add a basic evaluation suite to the pipeline, write the one-page oversight playbook, and switch on privacy-preserving logs. If you rely on an external foundation model, file their transparency pack next to yours and record the due-diligence you performed. By the end, you can show compliance in motion rather than promising it later. (European Parliament)

Common traps (and how to avoid them)

Two patterns cause trouble. First, waiting for every last standard or code of practice before doing anything. Helpful as those documents are, they do not change the legal dates, and the Commission has publicly said there is no pause. Second, assuming general-purpose model duties sit only with the model maker. They do not. As a deployer in the Union you own how the model is used in your processes, so you still need retrieval controls, redaction, evaluation, and logs around your usage. (Reuters)

Need help with EU AI Act compliance?

If you want a quick, low-friction start, we run a two-week readiness sprint with your product, risk, and data teams. You'll leave with three system cards and data notes, a working evaluation suite, an oversight playbook, and a logging pattern that satisfies auditors without slowing delivery.

Book a free consultation


Sources

  • EUR-Lex — Regulation (EU) 2024/1689 (AI Act): definitions and obligations for providers and deployers; consolidated legal text. (EUR-Lex)
  • European Parliament explainer and dates: bans from 2 Feb 2025; general-purpose transparency after 12 months; high-risk obligations on a longer clock. (European Parliament)
  • DLA Piper brief on first deadline: prohibited practices in force from 2 Feb 2025. (DLA Piper)
  • European Commission — regulatory framework and GPAI Code of Practice: application timeline, transparency/copyright summaries, and voluntary code to evidence compliance. (Digital Strategy)
  • Reuters coverage: Commission confirming no delay to statutory dates. (Reuters)

Ready to implement this in your organization?

We help financial services companies build compliant AI systems with governance built in.