On 14 December 2020, the EU Agency for Fundamental Rights (FRA) published a new report, ‘Getting the future right – Artificial intelligence and fundamental rights in the EU’.

The report identifies pitfalls in the use of AI, for example in predictive policing, medical diagnosis, social services, and targeted advertising. 

It calls on the EU and EU countries to:

Make sure that AI respects ALL fundamental rights – AI can affect many rights, not just privacy and data protection. It can also enable discrimination or impede access to justice. Any future AI legislation has to consider all relevant fundamental rights and create effective safeguards.

Guarantee that people can challenge decisions taken by AI – people need to know when and how AI is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems make decisions.

Require that organisations carry out fundamental rights impact assessments of AI before and during its use – organisations should assess how AI could harm fundamental rights before rollout and then regularly during deployment. These assessments should go beyond current data protection impact assessments and cover all relevant fundamental rights.

Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. In particular, the FRA sees a high degree of uncertainty about how the rules on automated decision-making and the right to human review apply in the context of AI. The FRA recommends that the data protection bodies of EU member states provide practical guidance, recommendations and checklists on the use of AI.

Assess whether AI discriminates – awareness of the potential for AI to discriminate, and of the impact of such discrimination, is relatively low. The FRA calls for more research funding to examine the potentially discriminatory effects of AI so that the EU can guard against them.

Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when they use AI. Oversight bodies need adequate resources and skills to do the job.