Key considerations for the ethical use of facial recognition technology
- Posted on August 13, 2019
- Estimated reading time 4 minutes
This article was originally written by Avanade alum Jeff Vilimek.
This article was originally published in MoneyInc.
Facial recognition technology has grown tremendously in both capability and availability over the last few years. It can help us solve difficult problems, increase security and improve customer or workplace experiences. Last year, nearly 3,000 missing children were identified in New Delhi, India, through the use of facial recognition technology. We are seeing equally important applications in healthcare diagnosis and assistance for the blind.
However, the technology is also easy to misuse, creating risks ranging from bad decisions to invasion of privacy or even violations of liberty. Recognition accuracy across genders and skin tones is improving rapidly, but has had deficiencies in the past, and a lack of parity and accuracy across identifying characteristics can lead to inherent bias and bad decisions. The use of facial recognition in border protection, police surveillance, and even government mass surveillance are problematic areas where, without careful controls and human oversight, serious problems can arise. To address these concerns, the city of San Francisco recently banned the use of facial recognition by police and city agencies.
While a ban does mitigate the risk, the opportunity cost can be high: outright bans eliminate any possible realization of value from new technologies. So how can we take advantage of the power of facial recognition while protecting against its perils? The solution lies in adopting a strong digital ethics framework that allows an organization to capture the benefits while helping to ensure it avoids the potential problems and risks. There are four key components to a strong digital ethics framework: fair and inclusive, human accountability, trustworthiness and adaptability.
Fair and Inclusive
First, it’s crucial to focus on ensuring parity across all categories and eliminating bias. Microsoft has been working to improve accuracy and ensure balance across genders and skin tones, and improvements are coming rapidly. Even more important is an “ethics by design” mindset, which prioritizes diverse teams. Having diverse teams involved in understanding inputs, outputs, use cases and downstream implications helps both to identify biases and to ensure the outcomes are useful for all categories.
Human Accountability
There’s an amplifying effect when humans and AI come together to accomplish tasks. If law enforcement were to unleash facial recognition and make enforcement decisions based solely on hits from the algorithm, there would be a clear problem given the varying accuracy rates of these algorithms. However, when the technology is used with proper oversight and human accountability for the ultimate decisions, it can be a powerful tool for solving difficult problems.
Trustworthiness
Transparency in use is a necessary starting point for ensuring that we can place trust in where and how facial recognition might be used. Further, we must look to standards bodies and watchdog groups to help verify that trust. The National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce, now publishes an increasingly important evaluation of facial recognition accuracy through its Face Recognition Vendor Test (FRVT).
Adaptability
With better understanding of the technology, including understanding its flaws and ongoing assessments of its accuracy, we can continue to drive improvements. Adapting and improving AI systems over time is a key design principle, and the improvement can be rapid when appropriately planned for. One factor helping to drive this is third-party oversight and measurement of bias and accuracy.
Beyond “ethics by design” and the voluntary, proactive adoption of a digital ethics framework for development and use, there is a place for government regulation. Both Brad Smith, Microsoft’s President and Chief Legal Officer, and Satya Nadella, Microsoft’s CEO, have invited government regulation in this space, in a series of open letters and in Nadella’s recent presentations at Davos. In addition to helping limit risky and problematic uses of the technology, regulation has the counterintuitive potential to accelerate its use and adoption. Absent regulation, uncertainty about what should and should not be allowed, and what will and will not be legal in the future, has a chilling effect. Reasonable legislation governing appropriate and ethical use can reduce that uncertainty.
The Next Thing Now
There is great opportunity for better experiences, security and outcomes of all kinds through facial recognition and related technologies. The corollary to brainstorming “What can we do?” is asking “What should we, and should we not, do?” The right approach is adopting an “ethics by design” mindset and implementing these solutions in the context of a digital ethics framework. This includes supporting oversight, measurement and reasonable regulation to drive improvements, confidence and trust in order to accelerate adoption. In this way, we can both realize the value of facial recognition and avoid the perils of its misuse.