By Alix Leboulanger, Senior Analyst, Aerospace, Defence & Security, Frost & Sullivan
On Monday, Google launched a campaign to promote its new smart glasses, "Glass Enterprise Edition 2", for professional applications only (e.g. manufacturing, healthcare, mobility). Competitors have also recently launched new series of enhanced smart glasses, such as Epson and its biometric glasses developed in partnership with Redrock Biometrics. Nevertheless, as soon as smart glasses become a professional tool rather than a recreational device for consumers, the conversation swiftly shifts from the addictive Pokémon Go game to the Big Brother nightmare. This revives the traditional fears about Augmented Reality (AR) devices being deployed in security, especially when facial recognition is part of the package.
As a matter of fact, wearables and augmented reality have brought facial recognition to a wider span of applications in policing. In China, police biometric glasses made by LLVision are connected to a tablet linked to a database of persons of interest and people sought by the police. The system is said to enable police officers to recognize 100,000 different faces and to identify a person in 100 milliseconds. It can also provide the person's age, gender, ethnicity and address. This is technologically very impressive and potentially very useful in tracking terrorists or preventing lone wolf attacks. The rise of non-state violent actors, the constant threat of terror attacks and strong urbanisation trends (it is easier to disappear in a crowd than in the woods) are pushing security forces to consider such technology to reinforce their mission efficiency.
However, in Western countries facial recognition is a far more sensitive business. Strong concerns about data privacy and human rights are being vehemently voiced. This is not news; the debate was already quite acrimonious when police forces started to deploy body-worn cameras, and the adoption of surveillance cameras with facial recognition functionalities has only widened the discussion. While improved police accountability, the deterrence of violence in police altercations, reduced crime rates in sensitive city zones and strict regulatory frameworks for deployment have made those technologies almost acceptable, the same is definitely not true for biometric glasses.
Indeed, there are valid apprehensions: fear of arbitrary arrests and detention, violations of human rights, wrongly populated databases, and the lack of security around data entry, database protection and the smart glasses themselves (do they have any endpoint protection as of now?), all of which could lead to wrongful pursuits in some cases.
The social acceptance of biometric glasses varies from country to country, city to city, and people to people. Recently, employees of key Western suppliers of smart glasses (Microsoft, Google) have again strongly condemned the use of smart glasses in military or policing applications, underscoring that AR devices were intended only to enhance consumer experiences and improve commercial businesses, nothing else.
So is facial recognition a dirty word to be banned from the policing world?
Certainly not. The technology is not yet resilient and mature enough (bandwidth, memory, equipment footprint, cloud), which explains why uptake of smart glasses remains extremely low. Furthermore, applications in security remain experimental, and where deployed, the technological features mainly include recording, hands-free voice commands for officers and location information (China excluded). Integration of facial recognition will come progressively as the general public grows familiar with smart glasses. A strict regulatory framework governing usage in specifically defined missions (i.e. first responders, search and rescue, disaster relief) should slowly pave the way to greater confidence in this technology.
For more information, or to speak to the analyst please contact Jacqui Holmes on firstname.lastname@example.org