The European Commission is considering a temporary ban on live facial recognition software, to give regulators and policymakers the time and space necessary to catch up with the technology's rapid development.

Concerns have been building for some time over the use of such software in public spaces, with some high-profile cases raising awareness. The King's Cross estate in London was found to be using the technology without informing the data subjects whose personal data it was processing, and only last weekend two vans fitted with facial recognition technology were deployed at a football match at Cardiff City's stadium. Three UK police forces are trialling the software to identify suspected criminals, but the Information Commissioner's Office (ICO) has urged caution on its use, describing the technology as "intrusive".

The use of the technology pushes the boundaries of lawful processing of personal data under the GDPR, raising issues over a lack of transparency, the potential for racial bias in the software, and the prolonged retention of the personal data collected, including so-called "false positive" matches.

The technology also necessarily collects a high volume of sensitive, special category data, such as personal data revealing racial or ethnic origin or religious beliefs and biometric data, as well as data relating to criminal convictions.

So what might the future look like for the regulation of facial recognition technology? According to the leaked report, the EU is considering a variety of options, ranging from a voluntary ethical code to legally binding EU instruments. With Brexit ever looming, it is unclear at this stage whether the UK would even participate in the ban or adopt its eventual outcome, but what is clear is that the proliferation of facial recognition into public life and public spaces needs to be carefully considered and balanced against the rights and freedoms of data subjects.