European Insurance and Occupational Pensions Authority
  • Speech
  • 21 March 2025

Supervising Digital Transformation in the Age of AI

Keynote speech delivered by Petra Hielkema, EIOPA Chairperson, at the Insurtech Insights Conference in London, UK on 20 March 2025 // CHECK AGAINST DELIVERY

Dear ladies and gentlemen,

It’s a pleasure to be here with you today at the Insurtech Insights conference. It’s a fantastic opportunity for all of us who are passionate about insurance and its future to learn more about each other’s perspective, share insights and hopefully even come to a common understanding of where our industry is headed and how we can navigate the path forward together.

I was invited to speak to you today about how EIOPA regulates and supervises insurers in a time of rapid technological change. But before diving into that, I would like to first offer some context on how regulators tend to view the world.

The pace of technological development in the digital realm is nothing short of extraordinary. Comparisons with the industrial revolutions in terms of how profoundly digitalisation and artificial intelligence are transforming our societies are therefore also not without merit. We are witnessing swift and far-reaching changes. That much, I believe, we can all agree on. Our perspectives on how to approach them, however, may differ.

To illustrate how regulators tend to view rapid innovation, let me draw an analogy:

When Carl Benz invented the first car with an internal combustion engine at the end of the 19th century, his vehicle was little more than a questionably viable alternative to horses. And yet, it went on to become one of the most emblematic inventions of the second industrial revolution. These early cars had a top speed of 16 km/h (about 10 mph) and were both unreliable and unsafe – especially by today’s standards. Many people were outright afraid of them.

They were so weak they struggled even on slight inclines. They had no suspension, no brakes other than a simple hand lever, and their steering was more akin to a boat’s rudder than an actual steering wheel. Drivers were untrained, and, unlike horses, cars had no instincts to rely on in sketchy situations. They had none of the warning systems that we can count on today, such as horns and headlights, not to mention modern brake assistance systems or airbags.

Fast forward more than a century, and we now have cars that travel 10, even 20, times as fast and weigh many times as much as Benz’s original construction. While accidents do happen – as many of you active in the motor insurance business will know – driving today is vastly safer than it once was. But that was not a given from the start.

It took smart engineering, rigorous safety standards, clear traffic codes – and yes, regulatory oversight and the threat of penalties for those who misbehave – to get us where we are today.

Did innovation take place? Very much so. Was it unchecked innovation? No. Regulation provided a necessary framework. Was it still a success story? Without a doubt.

How was this possible? What are the lessons? In my view, the answer is that progress is at its best when innovation and regulation are in balance.

We are currently living through another revolution that is perhaps no less consequential than the industrial revolutions before it. The world of finance has long been digital but other areas of the world are rapidly catching up – and that is opening up new horizons and challenges.

We increasingly use wearable devices that measure our health, drive cars with GPS-tracking, live in homes with smart features and use social media to log every aspect of our lives. All of this generates data that can potentially be leveraged by insurers, among others. AI is the latest addition to this mix, and it has the potential to fundamentally shake up the way our businesses and markets operate.

Artificial intelligence can lead to faster claims handling and fraud detection, the development of more accurate risk assessments and more personalised products. The efficiency gains can be significant. It is therefore no surprise that we are seeing an increasing adoption of this technology across the insurance value chain. The launch of ChatGPT in late 2022 and the consequent spread of large language models that respond remarkably well to human queries have supercharged the adoption of AI technology. EIOPA’s market survey in 2024 found that about half of non-life insurers and a quarter of life insurers were already leveraging AI throughout the value chain, and there is much more in the pipeline.

I don’t need to further underline the possibilities that digitalisation and artificial intelligence hold in store for us. At an Insurtech conference, that would be preaching to the converted. Rather, I’m here to underline something equally important: namely, that unrestricted innovation can come at a cost – one that risks eroding trust.

That is exactly why European policymakers and regulators are working to find the right balance between innovation and regulation – also in this realm. We are aiming to strike a balance between allowing novel ideas and ensuring that they do not have a harmful impact on the way we live and work. 

Europe needs innovation. Europe wants innovation. However, our objective is for innovation to be responsible and respectful of the core principles and values that we have all collectively built: with enough space for businesses to flourish and sufficient safeguards for consumers and financial stability to be protected.

In the field of digitalisation and AI, the EU has recently introduced two landmark pieces of legislation to this end.

The first one, DORA, strengthens the digital resilience of financial institutions across the EU by requiring them to better guard against IT disruptions like cyberattacks or technical failures. The financial sector is increasingly dependent on technology and tech firms, including via outsourcing – making insurers, banks and financial market participants vulnerable to IT incidents. DORA establishes important risk management, resilience testing, incident reporting and information sharing requirements for financial actors to make sure that they can continue to provide vital services to the EU’s economy even when digital crises arise, like we all witnessed last year with the major CrowdStrike IT outage, which, unfortunately, will likely not be the last. 

It is important to note that DORA applies both to insurers and other financial institutions and to critical third-party providers. While DORA is new for the latter, for the former it is very much aligned with the regulation on IT risk management that was already in place under Solvency II. In fact, with DORA coming into force, EIOPA has decided to withdraw most of its guidelines on IT and operational risk management to avoid duplication.

For critical third-party providers, which still need to be identified, DORA is new. Yet it contributes to better protection against deliberate and unintended ICT blackouts. We will start overseeing these critical providers later this year. DORA is therefore an important step in future-proofing the EU’s financial sector.

But it’s not the only one. The second big project that the EU undertook is the Artificial Intelligence Act. 

So what does the AI Act bring to the table?

The AI Act is the world’s first horizontal legislation that governs the development, introduction and use of AI systems in a standalone legal act. We say it is horizontal because it concerns all AI use cases, regardless of whether they involve financial undertakings or other institutions – be it aviation companies, hotels, car manufacturers or public institutions such as law enforcement, the judiciary, or indeed EIOPA.

The AI Act introduces a risk-based approach to all AI applications across the economy, balancing innovation with trust. The goal? To create a human-centric environment where new technologies can thrive—safely, responsibly, and with the confidence of businesses and consumers alike.

It defines four risk levels for AI systems, those posing:

  • Unacceptable risk
  • High risk
  • Limited risk and
  • Minimal or no risk.

Systems posing unacceptable risks are essentially prohibited. These include social scoring and exploitative or manipulative systems.

For high-risk systems, the AI Act establishes robust risk management requirements, such as high data quality standards as well as strong data governance and record-keeping practices. Among others, the AI Act classifies as high risk the use of AI systems for risk assessment and pricing in relation to natural persons in life and health insurance. Companies using high-risk AI systems also need to retain human oversight, be able to meaningfully explain outcomes to users and inform their users upfront when they are subject to the use of high-risk systems.

However, recent clarifications from the European Commission suggest that mathematical optimisation methods and traditional statistical models that insurers have been using for a long time may be excluded from the scope of the AI Act. This indicates that the AI Act’s application may be more proportionate than originally anticipated.

As for AI systems in insurance that are neither outright prohibited nor high risk, these continue to operate subject to existing sectoral legislation without new requirements. Still, AI users must ensure AI literacy among their staff and inform customers when they are interacting with AI systems.

Indeed, even before the AI Act was adopted the use of AI in insurance did not take place in an unregulated space. Due to its horizontal nature, the AI Act is to be applied in conjunction with existing sectoral legislation. For insurers, this means that the relevant provisions under Solvency II and IDD remain fully in force, including the requirements to act in the best interest of customers and to put in place an effective system of governance which provides for a sound and prudent management of the business. The principle of proportionality, which is core to the European insurance legislative framework, also applies to the use of AI by insurance undertakings. EIOPA has recently published an Opinion on AI Governance and risk management highlighting these aspects.

With its differentiated and targeted approach, the AI Act sets the foundation for a responsible uptake of AI in Europe. It prescribes robust standards for applications that carry high risks while easing the adoption of novel technologies for tasks that are least likely to harm consumers.

But the AI Act is about more than regulating risks, and Europe is much more than the AI Act.

The AI Act creates a reliable environment with a light-touch approach for non-high-risk AI systems, and this in itself is a boon for innovation. Combined with the resilience features of DORA, it lays the groundwork for a successful, responsible and ethical deployment of AI in Europe.

The opportunities this arrangement creates must now be translated into innovation and business growth.

To help companies test out new ideas, the AI Act requires Member States to set up AI sandboxes. While these sandboxes are to be created at the national level, their implementation, operation and supervision should be uniform across the EU to avoid unnecessary fragmentation. 

These sandboxes will coexist with the 41 innovation hubs and 14 regulatory sandboxes that already operate in the financial sector across the EEA countries, with several jurisdictions running different schemes for the securities, banking and insurance markets.

Europe is stepping up. Just last month, the Commission announced 20 billion euros of financing to establish AI factories that bring supercomputing capacity, reliable data and talent together to foster collaboration across universities, computing centres, industry representatives and financial actors.

Europe has also acknowledged the crucial role of data in today’s digital economy, particularly for new technologies such as AI. For this reason, it has recently adopted the Data Act and the Data Governance Act, which aim to facilitate and promote the exchange and use of data within the European Economic Area.

The digital revolution is underway, and Europe is determined to pioneer a secure, sustainable and equitable version of it for business and citizens alike.

The broader European approach to digitalisation closely aligns with EIOPA’s own thinking around digital finance. Our digital strategy has the twin core motto of “technologically neutral and people first” and “flexible, yet firmly rooted”.

These principles are fundamental to any attempt to manage change in a way that is both safe and lasting. The ultimate goal is a mutually beneficial relationship between stability, new ideas and consumers who can trust the products and services they are buying.

This can only happen if AI solutions are a force for good. This means that rather than driving a wedge between people, they should bring inclusiveness and economic opportunity. Rather than perpetuating biases, they should treat people fairly, regardless of their sexual orientation, race, religion, age or socioeconomic status. Rather than concentrating power and wealth in the hands of a few, AI should help bridge economic divides by fostering innovation that benefits small businesses and local communities. Rather than fuelling misinformation and eroding trust, AI should enhance transparency and democratise access to reliable information and just processes.

In short, AI should serve as a tool for empowerment, ensuring that technological progress in the 21st century does not recreate the social divisions that the industrial age gave rise to.

The rules we have put in place set us on the right path to achieve responsible, consumer-centric innovation. For if consumers do not trust AI-driven products - and the companies behind them - then all progress will have been in vain.

It is my sincere hope that through constructive dialogue and collaboration, we can turn AI and the digital revolution into the successes they deserve to be. With regulation where necessary and ample space for innovation where possible.

Ladies and gentlemen, thank you very much for your attention.


Useful links:

Cross-border services | EU Digital Finance Platform

European Blockchain Regulatory Sandbox | EU Digital Finance Platform
