London: Algorithms used to detect crime in Underground stations

Outside view of the Underground station
Privacy experts warn that it is easy to add more features to surveillance systems once they have already been set up. (Source: Sunil060902 – CC BY-SA 3.0 Deed)

In a London Underground station, thousands of people were monitored by algorithms designed to detect fare evasion and crime. This was revealed by research from the US magazine Wired. Passengers were not informed of the tests.

The operator of the London Underground, Transport for London (TfL), tested the algorithms from October 2022 until the end of September 2023 at Willesden Green station in the north-west of the British capital. This is revealed in documents that TfL provided in response to the magazine’s Freedom of Information request; Wired has published them in part.

According to the transport operator, more than 110,000 people used Willesden Green station each week in 2022.

Until now, it was only known that the algorithms were to be tested at the station to detect whether passengers were slipping past the entry barriers without a ticket.

Searching for weapons and smokers

As parts of the redacted documents reveal, the algorithms were meant to detect several additional occurrences and situations: from “carrying or using a weapon” to smoking, people and animals on the platform, and unattended luggage. Even littering at the station could be detected.

Staff were informed in real time of conduct or situations deemed problematic – weapons or people on the platforms, for example. The Underground operator explained to Wired that the existing surveillance cameras at the station were combined with algorithms and “numerous detection models” to identify patterns of behaviour.
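To make the described setup concrete, here is a minimal sketch in Python of how camera frames might be passed through several detection models, with any detection above a threshold triggering a real-time notification to staff. All names, thresholds and the dummy luggage model are assumptions for illustration only; TfL has not published its actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Placeholder frame type: in a real deployment this would be an image
# array from a CCTV feed; here it is just a dictionary for illustration.
Frame = dict

@dataclass
class Detection:
    category: str      # e.g. "weapon", "fare_evasion", "unattended_luggage"
    confidence: float  # model score between 0 and 1

# A "detection model" is anything that maps a frame to zero or more detections.
DetectionModel = Callable[[Frame], list[Detection]]

def run_pipeline(frames: Iterable[Frame],
                 models: list[DetectionModel],
                 alert_threshold: float = 0.8) -> None:
    """Feed every CCTV frame through all detection models and raise a
    real-time alert for any detection above the (hypothetical) threshold."""
    for frame in frames:
        for model in models:
            for det in model(frame):
                if det.confidence >= alert_threshold:
                    notify_staff(frame, det)

def notify_staff(frame: Frame, det: Detection) -> None:
    # Stand-in for the real-time notification channel mentioned in the report.
    print(f"ALERT [{det.category}] confidence={det.confidence:.2f} "
          f"camera={frame.get('camera_id')}")

# Toy model that "detects" unattended luggage from a pre-labelled frame.
def dummy_luggage_model(frame: Frame) -> list[Detection]:
    if frame.get("luggage_without_owner"):
        return [Detection("unattended_luggage", 0.9)]
    return []

if __name__ == "__main__":
    frames = [{"camera_id": 1, "luggage_without_owner": True},
              {"camera_id": 2}]
    run_pipeline(frames, [dummy_luggage_model])
```

The point of the sketch is only the architecture the report describes: one camera feed, several independent detection models, and a single alerting channel to staff.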

According to the report, the surveillance system issued 44,000 alerts during the test period; in 19,000 cases, staff were notified in real time. The system most frequently sent alerts when a person potentially tried to reach the platform without a valid ticket.

However, the system had problems recognising situations correctly – for example, when children followed their parents through the entry barriers and were reported as potential fare evaders. The software also failed to distinguish folding bicycles from standard ones – the latter may only be taken on the train during certain hours.

According to the transport authority, the system was not used to monitor staff. Furthermore, no audio was recorded and facial recognition was not available.

Michael Birtwistle researches the regulation of artificial intelligence (AI) at the independent Ada Lovelace Institute. He told Wired that the use of AI in public spaces to identify behaviour patterns “raises many of the same scientific, ethical, legal, and societal questions raised by facial recognition technologies”.

Warning of further upgrades

Privacy experts additionally warned Wired that such surveillance systems could easily be expanded in the future – for example, with facial recognition.

Daniel Leufer from the NGO Access Now explained that the first thing he checks is whether such systems attempt to recognise aggression – because he is sceptical that this is even possible.

According to the report, the London system also attempted to recognise aggression – but could not do so reliably. It was therefore adjusted to issue a warning whenever someone raised their arms, because this is a “common behaviour linked to acts of aggression”.
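The “raised arms” rule can be illustrated with a minimal sketch, assuming the system works on pose keypoints – an assumption, as the report does not say how raised arms are detected: a frame is flagged when both wrists sit above the shoulders.

```python
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float
    y: float  # image coordinates: smaller y means higher in the frame

@dataclass
class Pose:
    left_wrist: Keypoint
    right_wrist: Keypoint
    left_shoulder: Keypoint
    right_shoulder: Keypoint

def arms_raised(pose: Pose) -> bool:
    """Flag a pose if both wrists are above the shoulders -- a crude proxy
    of the kind the report describes, not TfL's actual rule."""
    return (pose.left_wrist.y < pose.left_shoulder.y and
            pose.right_wrist.y < pose.right_shoulder.y)

if __name__ == "__main__":
    waving = Pose(Keypoint(0.4, 0.2), Keypoint(0.6, 0.2),
                  Keypoint(0.4, 0.5), Keypoint(0.6, 0.5))
    standing = Pose(Keypoint(0.4, 0.7), Keypoint(0.6, 0.7),
                    Keypoint(0.4, 0.5), Keypoint(0.6, 0.5))
    print(arms_raised(waving))    # True  -> would trigger a warning
    print(arms_raised(standing))  # False
```

The toy example also shows why such a proxy is crude: someone waving to a friend would trigger the same warning as an act of aggression.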

Madeleine Stone from the British NGO Big Brother Watch told Wired that algorithms for detecting aggressive behaviour are “deeply flawed”; the British data regulator has also warned against the use of emotion-analysis technologies.

Stone also says that many passengers would be disturbed to find out that they were under surveillance. The Underground operator indeed confirmed to Wired that the station displayed no notice about the surveillance trial.

At first, people’s faces were blurred on the recordings and the data was stored for a maximum of 14 days. After six months, however, TfL began retaining images of the faces of people suspected of fare evasion – and stored that data for longer.
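A minimal sketch of such a retention rule, assuming a per-recording flag for suspected fare evasion and an extended retention period of 90 days – both assumptions for illustration; the report only says the data was kept “longer”:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Recording:
    captured_on: date
    faces_blurred: bool
    suspected_fare_evasion: bool  # hypothetical flag, for illustration only

DEFAULT_RETENTION = timedelta(days=14)  # maximum reported for the first phase

def should_delete(rec: Recording, today: date,
                  extended_retention: timedelta = timedelta(days=90)) -> bool:
    """Return True once a recording has exceeded its retention period.
    The 90-day extended period is an assumption, not a reported figure."""
    limit = extended_retention if rec.suspected_fare_evasion else DEFAULT_RETENTION
    return today - rec.captured_on > limit

if __name__ == "__main__":
    rec = Recording(date(2023, 1, 1), faces_blurred=True,
                    suspected_fare_evasion=False)
    print(should_delete(rec, date(2023, 1, 20)))  # True: past the 14-day limit
```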

Nothing has apparently been decided about the future of the system: TfL told Wired it had been advised to run a second phase of the trial. Any wider deployment would be coordinated with “relevant stakeholders”, the company assured.

Last year, the operator of the New York City subway announced that it would use algorithms in some stations to gather data on how many passengers get past the barriers without a ticket. Civil rights activists criticised the system as part of a growing surveillance apparatus in the city. (js)