
Katie Alpin
Head of Strategic Insight

Using AI to make decisions about consumers: what protections do we need?

Summary 

 

  • The government is moving forwards with its proposals to reform data protection policy, including rules around how businesses can make use of Artificial Intelligence (AI) when processing personal data.

  • In autumn 2021, we explored how consumers felt about the use of AI, and the steps they wanted to see put in place to manage risks. 

  • Consumers see the benefits of AI, but worry about potential bias and inaccurate assumptions, particularly where AI is used to make a decision that could have a significant impact. 

  • Research participants felt very strongly that they should retain the right to challenge decisions made about them by AI. 

  • The government has committed to maintaining this right, but clarifying when it applies. In this process, consumer views must continue to be taken into account.

Introduction

 

Artificial Intelligence (AI) already plays a substantial role in our lives as consumers. When you sit down of an evening to watch some well-earned TV, your streaming service is likely to be using AI to decide what to recommend you watch next. When you browse an online retailer, AI may choose the items it thinks will catch your eye, based on your previous browsing behaviour, and put them at the top of the list. And when you’re completing financial chores like organising insurance, AI probably plays a role in deciding how much you should pay. 

 

With this technology already in active use in many markets, the rules of play are evolving. Last week the government published a draft Data Protection and Digital Information Bill which includes plans for when and how decisions made using AI can be subject to human moderation. At Which?, we’ll be working over the next few months to ensure that the regulations the government puts in place are fair for consumers and protect them from the harms that AI decisions might give rise to. 


In evaluating this draft Bill, we’re returning to the research we conducted last autumn to inform our response to the government’s Data: A New Direction consultation, to remind ourselves what mattered most to consumers.

Methodology 

 

22 consumers took part in a six-day online community from 18 to 23 October 2021, hosted on our research platform Recollective. Participants were selected to reflect a balanced cross-section of the online population, with a spread of demographics and a range of internet behaviour and usage. 

 

During the online community, participants were able to examine stimulus materials, learn about the topic areas and proposed reforms, share their views and engage with other participants’ opinions. Participants completed a series of tasks, with each day of the community focused on a specific research question.

Consumers see the benefits of AI, but are wary of the risks 

 

Our research showed that consumers understood how AI could be beneficial, for example in improving efficiency for companies and in supporting decision making. Many were fans of the algorithms that suggest new music or TV shows to watch. 

 

However, many were also concerned about the potential for bias, and that AI could make decisions based on inaccurate information, particularly where decisions are made solely by machines without human input. Participants in our research reflected on the complexity of human lives, and questioned whether it was possible for AI to make fair judgements based on limited information. 

 

“Everybody’s circumstances are different, one size doesn’t fit all! There are sometimes things that happen in people’s lives which you need to be able to have the opportunity to explain, when it comes to the company making a decision about something. That’s when it is good to have the human involvement.”

Retaining the right to challenge decisions  

 

One important way to manage some of the risks of AI is making sure consumers have the right to challenge decisions that are made about them by computers alone. This right is particularly important because it forces transparency and accountability into these systems, ensuring companies can be held responsible if things go wrong. 

 

“Losing this right would mean that no-one would be held accountable for anything, companies can just blame mistakes on ‘system issues’.”

 

In its initial consultation, the government suggested removing this right, which met with strong condemnation from Which? and other organisations. In response, the government has dropped this specific proposal, but the draft legislation still seeks to relax the rules about when AI tools can be used to make ‘significant’ decisions about people.

Conclusion

 

The use of AI to automate decision making is likely to increase significantly in years to come, as the technology matures. We need to ensure the right protections are in place so consumers can benefit from the efficiencies, personalisation and cost savings these technologies could bring, without being exposed to unnecessary risks. We look forward to working with the government and other experts over the months ahead to make sure that the new Data Protection and Digital Information Bill offers the protections consumers need to build trust in these new technologies.
