The challenge being made to insurers and the regulator over discriminatory pricing is largely framed around non-discrimination legislation like the Equality Act. Yet given that the problem seems to lie in how insurer decision systems collect and process data, the Data Protection Act (DPA) is also a highly relevant piece of legislation.
The DPA applies because race and ethnicity are a special category of personal data, and the DPA is full of multi-layered and intertwined conditions and exceptions about how such data can be handled. One such set is the 'substantial public interest conditions', which give exemptions for processing in relation to insurance and preventing fraud.
This means that insurers can process ethnicity data under certain conditions, but cannot then (as per the Equality Act) use it directly in decision making. There are four points worth bearing in mind about this.
Permission with Conditions
The first is that as insurers are allowed to process special category data, they then have to implement controls relating to that processing, and document those controls.
The second is that the various levels of governance within an insurer are then expected to audit and judge both how well that processing of ethnicity data falls within the set controls, and how suitable those controls are for addressing the obvious risks associated with that processing.
The third is that the DPA’s ‘substantial public interest conditions’ come with conditions themselves, one of which is that all this needs to be brought together in an ‘appropriate policy document’, the efficacy of which the Information Commissioner’s Office has the powers to assess.
The fourth is that pretty much the same applies to the Insurance Fraud Bureau, in its role as an 'anti-fraud organisation' under the Serious Crime Act.
So what do these four points add up to? It is that insurers could well find themselves having to respond on discriminatory pricing not just to the FCA and EHRC, but to the ICO as well. And that the ICO’s powers in relation to special category data could be brought into play if the steps being taken by individual insurers on discriminatory pricing are judged not to be working well enough.
In short, the ICO could raise the lid on the opaque world of algorithmic pricing. If you then bring in Citizens Advice’s outcome data, the sector could become caught between a rock and a hard place.
The Paper Trail
As indicated above, any firm relying on the insurance and fraud substantial public interest conditions for using special category data must document how that data is being used and how control arrangements around it are being managed. If need be, that document can then be pulled in by the ICO for review. So what will the ICO be looking for?
It will clearly be judging the efficacy of those processes and controls relative to the risk posed by the insurer’s processing. And part of that judgement will be the extent to which the governance arrangements at that insurer have been doing the same. In other words, have the people at operational, compliance, audit and board level already been asking these questions?
Is this perhaps being rather 'glass half empty'? Not really, for I've had doubts about the three lines of defence model for a while (more here). The sector would not have faced the pricing super complaint if it had been delivering on the promises so often built around it (the problem lies chiefly in unaddressed conflicts of interest).
Most regulators expect to see evidence of the extent to which the questions they themselves want to ask have already been asked within the firm. If they have been, then the matter is largely one of judgement. If they haven't, then more systemic failings are being signalled. So my question to an insurer is: how well can you evidence that the questions a regulator like the ICO would want to ask about special category data have already been asked internally?
Auditing an AI System
One scenario that an insurer’s governance people will then have to face up to is this: how well can we actually audit an AI system for discrimination? The problem for insurers is that best practice for auditing AI systems is sometimes described as ‘still in its infancy’. So the question then evolves into: how aligned is our processing of special category data with our capacity to audit that it is being used appropriately? To what extent is a difference permissible? How do our judgements on that align with those of the regulator?
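To make that governance question more concrete, here is a minimal sketch (in Python, using pandas) of the kind of outcome-level check an audit might start from: comparing average quoted premiums across groups. The column names and figures are purely hypothetical, and a real audit of an AI pricing system would need to go well beyond a simple mean comparison.

```python
# A minimal, illustrative sketch only -- not a full fairness audit.
# Assumes a hypothetical table of quotes with an 'ethnicity' column (held for
# audit purposes, not used in pricing) and a 'premium' column.

import pandas as pd


def premium_disparity(quotes: pd.DataFrame,
                      group_col: str = "ethnicity",
                      outcome_col: str = "premium") -> pd.DataFrame:
    """Compare average quoted premiums per group against the overall mean."""
    overall_mean = quotes[outcome_col].mean()
    summary = (
        quotes.groupby(group_col)[outcome_col]
        .agg(["count", "mean"])
        .assign(ratio_to_overall=lambda df: df["mean"] / overall_mean)
    )
    return summary.sort_values("ratio_to_overall", ascending=False)


if __name__ == "__main__":
    # Hypothetical sample data, purely for illustration.
    quotes = pd.DataFrame({
        "ethnicity": ["A", "A", "B", "B", "B", "C"],
        "premium":   [320, 310, 365, 380, 372, 330],
    })
    print(premium_disparity(quotes))
```

Even a basic check like this presumes that outcome data can be joined to ethnicity data under controlled conditions, which is exactly where the documentation and controls described above come into play.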
Some people might think along the lines of 'hey, that is not our problem'. That would be a significant misjudgement, given that the data being processed is ethnicity data, and given the outcomes that Citizens Advice can already point to.
The Risk
The risk that the ICO would be looking for is this: that a firm could misuse the substantial public interest conditions in the DPA to collect large amounts of special category data, on the basis that they needed such data for some 'wider purpose', for example, to fight fraud. As these researchers point out, the possibility that debiasing provisions in data protection legislation could lead to over-surveillance of marginalised populations is a very serious concern.
Have I gone too far there? Not really, given what reliable sources have told me about certain practices.
Looking Forward
My key point to insurers is to recognise the full landscape of risk in relation to the issue of discriminatory pricing. It sends out ripples in all sorts of directions. Equally, should things not turn out as insurers would like them to, then the consequences could be very significant. Key provisions in equalities legislation and in data protection legislation could be put at risk, triggering repercussions for insurance practices that would dwarf those caused by the ban on lifetime value modelling. This makes the matter of a scale that the enterprise risk management people need to take an interest in.