Why is a GDPR 2.0 being talked about? I think three factors have been influencing events…
- the evolution of the digital world, in particular around artificial intelligence and the internet of things;
- the pandemic-related changes in work and social practices;
- the pull factor on data protection from the EU AI Act.
And to this can be added the five years or so of experience of living with the GDPR:
- where has it worked and where not?
- where is it too strong and where too weak?
- what did it not address, or where does it seem out of date?
All this and more has led to a renewed appetite to move to a new generation of data protection legislation, more in tune with the digital trajectories individuals and firms are now on. This will of course lead to a lot of debate and negotiation, yet at the same time, I’m minded that out-of-date legislation has a very short half-life of usefulness. Better to bite the bullet and deal with it.
So what’s the ethical side to all this? It is that many of the issues that make up the gap between today’s GDPR and the forthcoming 2.0 version are ethical in origin. We’re talking here about ethical issues like fairness and autonomy in particular, but also about dimensions of privacy not covered in 1.0 (like non-personal data) or which have not had the expected impact (like consent).
This, then, is ethics driving legislation: behind the move to GDPR 1.0 back in the mid 2010s, and again behind the move to GDPR 2.0 in the mid 2020s. Or to put it another way: this is society seeking a further strengthening of ethical values and social justice in relation to its data and analytics concerns.
What I’m going to do now is look at seven themes that will be influencing the shape and depth of GDPR 2.0, and under each one, outline the implications for insurers.
Beyond Privacy
The ICO here in the UK has signalled that it considers its remit under data protection legislation to encompass more than just privacy. It specifically mentions fairness and discrimination. I would expect this to become more evident and made explicit in legislation for GDPR 2.0.
The regulators in other European countries are thinking along similar lines and finding, in the scope of issues addressed in the EU AI Act, plenty of opportunities to take those steps. Indeed, it’s been impressed upon me on a number of occasions that the EU AI Act will act as a driver and key influence on GDPR 2.0.
For UK insurers, the implication is that they should no longer focus largely on the FCA for regulatory scrutiny. The ICO will be an additional and significant player to pay attention to. The two signed a memorandum of understanding back in 2019, and so long as the leadership and culture of the two regulators align well, insurers should expect to see more coordinated activity from them. This is simply a reflection of the modern insurance market, with its blend of data, analytics, decisions and behaviours.
More Than Personal
One of the first criticisms I heard of the GDPR, at an EU privacy conference I was speaking at in 2017, was that it was too focussed on personal data and paid little to no attention to group-level data. The latter was behind the big rise in the use of inferential decision strategies, where decisions are made not upon your data, but upon the data of people like you.
This ‘work around’ to the compliance costs associated with personal data became very popular, and data brokers and software houses exploited it. That door is going to be closed, and you can see the first steps towards this in the wording of the EU AI Act.
For UK insurers, the implications could be very significant. If a UK version of GDPR 2.0 ends up covering group data, then all their inferential analytics, and the processes dependent upon it, will become redundant. Protests that the cost of complying with such an extension to GDPR 2.0 is too great could well fall on deaf ears, given that the cost would be that of removing the ‘work around’ to GDPR 1.0.
Firmness around Consent
The tightening of the consent components of data protection legislation will be a cornerstone of GDPR 2.0. All too often, consent has been worded in very generic terms, hindering consumers from making genuine choices and informed decisions. More specific versions of consent may well be legislated for, thereby providing knock-on support for things like data collection and data minimisation.
The automated decision making facilitated by artificial intelligence is very likely to attract particularly explicit forms of consent, possibly with some form of boilerplate wording. That in itself could result in a recalibration of how it is deployed, on the basis that not everyone will consent to being subject to it.
For insurers, the implication of a hardening attitude around consent is that the days of a ‘give us everything and we will do what we want with it’ form of consent will be numbered (more here). While the sector will argue, particularly in relation to new forms of fraud, that it needs to gather all sorts of data about consumers, I believe that such concerns will not be allowed to trump fundamental rights.
Inferences and Predictions
The focus of GDPR 1.0 on personal data led to a rise in the use of inferential analytics to exploit group level data. That gap, as outlined above, will soon be closed. This will almost certainly spill over into the use of various forms of predictive analytics as well. The EU AI Act makes very clear its concerns about predictive practices.
For insurers, the implications of this will have to be watched closely. Risk is by its very nature only realised on an occasional basis, so thinking predictively is a common practice across the sector. A hardening attitude to the digital version of such thinking could be very impactful.
Contestability
Giving consumers rights to their personal data introduces by its very nature situations whereby choices are present and ready to be acted upon. As more and more secondary data and the output of inferential analytics influence the digital record held by firms about individual consumers, so rises the contestability of what firms are using to make decisions.
One outcome of this in GDPR 2.0 is likely to be the reinvigorated enforcement of the right to challenge data held about you and to have it removed. This will then influence the associated right not to be subject to automated decision making, and to be left no worse off for exercising it.
This is likely to speed up experimentation with new ways of holding and using consumer data – data trusts for example. And this could then influence the adoption of new forms of data relationships, in which data is no longer pulled off consumers by firms, but selectively pushed towards firms by consumers.
For insurers, the implications from all this are clear. Can they offer the same product at the same price both with and without automated decision making? Is their ‘pull’ form of digital strategy sustainable? What would be the sustainability of the ‘push’ alternative? Can their digital strategy accommodate these new forms of data relationships, or to put it another way, was it the right one to have chosen in the first place?
Punishments
Large headline fines may generate news stories, but the real impact of punishments under GDPR 2.0 will be much more varied. We’ve seen the beginnings of algorithm destruction in the US, centred around consent issues. It is likely to become established under GDPR 2.0. A less fatal measure to be adopted will be some form of forced transparency, whereby the data and analytics at issue are scrutinised in ways that are much more public.
For insurers, the implications of forced transparency should already be under review (more here). Algorithm destruction should be treated as a real possibility, should the issue be deemed significant enough. The insurer’s ethical risk assessment should be calibrating this.
Governance
Recent surveys have pointed to low levels of AI governance (here and here), including a really surprising lack of effort to make sure systems are safe, secure and robust. This will be of great concern to policy makers and is likely to result in much higher expectations around governance being embedded into GDPR 2.0. It’s one thing to ‘learn quickly and fail fast’; it’s another to just not address obvious risks. Such omissions will make that failure not just fast but painful as well.
If the findings of a recent US study hold true across other insurance regions, insurers have a lot of progress still to make on governance (and that’s a kind way of putting it). Yet if the sector now makes a sudden surge up the learning curve on AI governance, it will hit an obvious problem: too much demand for AI governance expertise and not enough supply. It’s not a topic to go ‘fully DIY’ on.
To Sum Up
The EU AI Act will result in a renewed GDPR, referred to here as 2.0. This will not just be a European phenomenon; similar developments in other regions will be energised as well. Insurers will very much be caught up in this, as the seven themes outlined above show. Digital strategies will have to be revisited in a host of different ways. This will result not just in some movement of the dials and levers of those digital strategies, but in the stripping down and rebuilding of decision engines. GDPR 2.0 should not be taken lightly.