Despite what law enforcement officials describe as positive results, the use of algorithmic surveillance during the Paris Olympics has raised concerns among human rights groups, who warn of the potential for abuse of the technology at future events.
Last week in France, police prefect Laurent Nuñez, a former interior minister and a loyalist of French President Emmanuel Macron, told the French parliament's law committee, in the wake of the third Olympic Games to be held in the French capital, that algorithmic surveillance had “proved its usefulness”. In response, human rights organisations have sounded the alarm about the risks the technology poses to privacy.
According to law enforcement officials, surveillance trials during the Paris Olympics and Paralympics showed promising results, paving the way for possible post-Olympic implementation.
Nuñez described the results of the Olympic security experiment as positive and expressed his desire to see AI surveillance extended to other sporting and cultural events in France.
A law passed in 2023 for the Paris Games already authorises the use of AI surveillance until 31 March 2025.
The technology uses artificial intelligence programs to analyse images recorded by surveillance cameras. As it processes the footage, the system automatically identifies “abnormal” events, such as a person falling in the street or movements in a crowd that suggest panic. It does not rely on facial recognition.
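As an illustration only, the sketch below shows the general pattern such event-based video analysis can take: scoring motion between successive frames and flagging sudden spikes, with no attempt to identify individuals. Everything in it, from the function names to the threshold and the simulated camera feed, is a hypothetical assumption made for the example, not a description of the system actually deployed in Paris.

```python
# Hypothetical sketch of event-based video analysis: score motion between
# frames and flag sudden spikes (e.g. a crowd scattering in panic) without
# identifying anyone. Names and thresholds are illustrative assumptions,
# not the system used at the Paris Games.
import numpy as np


def motion_energy(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute pixel change between two greyscale frames."""
    return float(np.mean(np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))))


def flag_abnormal_events(frames, window: int = 25, threshold: float = 4.0):
    """Yield frame indices where motion jumps well above the recent average.

    `threshold` is a multiple of the rolling mean motion energy; a jump
    past it is treated as an "abnormal" event such as sudden crowd movement.
    """
    energies = []
    for i in range(1, len(frames)):
        e = motion_energy(frames[i - 1], frames[i])
        recent = energies[-window:]
        if recent and e > threshold * (sum(recent) / len(recent) + 1e-6):
            yield i, e  # alert concerns an event, not a person
        energies.append(e)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 64x64 greyscale feed: calm noise, then a sudden burst of motion.
    calm = [rng.integers(100, 110, (64, 64), dtype=np.uint8) for _ in range(50)]
    burst = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(5)]
    for idx, energy in flag_abnormal_events(calm + burst):
        print(f"abnormal event at frame {idx}: motion energy {energy:.1f}")
```

Real systems classify far richer event types, such as the falls and panic movements the officials describe, but the design point the sketch captures is the one the authorities emphasise: the alert concerns an event, not an identity.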
Although law enforcement officials say its use will be limited, human rights activists fear that AI surveillance could lead to serious abuses.
“When people know they are being watched, they tend to change their behaviour, censor themselves and perhaps refrain from exercising certain rights,” said Katia Roux, technology and rights researcher at Amnesty International. “Any surveillance in a public space is an interference with the right to privacy.”
“Under international law, it must be necessary and proportionate to a legitimate aim,” she added. “The onus is on the authorities to show that there is no less intrusive way of ensuring security. This has not been demonstrated.”
Another criticism concerns the artificial intelligence that underpins algorithmic surveillance. The technology has been developed using data that may contain discriminatory biases, which it could in turn reinforce.
“In other countries that have developed this kind of surveillance in public spaces, we see it being used disproportionately to target certain marginalised groups,” Roux said.
In 2012, the London Olympics saw a massive deployment of CCTV cameras on the streets of the British capital. Six years later, the FIFA World Cup in Russia provided an opportunity to experiment with facial recognition, a technology that is still in use there today.
The French government is expected to present a report on the use of AI surveillance to parliament by the end of this year, after which a debate will be held on whether to extend its use beyond the current deadline of 31 March 2025.