Ethical Considerations for Deploying Face Authentication

After decades of use in governmental and commercial environments, face recognition technology has recently found its way into mainstream consumer devices.  From Apple’s Face ID to Microsoft’s Windows Hello, face recognition provides a method of authentication that is convenient and difficult to replicate.  While the technology has matured enough for general use, some argue that it is far from universally accessible.  Individuals who wear garments that partially cover the face, for example, may find that it simply does not work for them.  Because such garments can be an expression of cultural or religious identity, critics point out that deploying face recognition as a primary method of authentication forces certain minority groups to conform in a way that is insensitive to that identity.  Studies also suggest that darker skin tones and the faces of certain minority groups are more prone to misidentification.

If face authentication is required to gain access to basic services such as banking or public transportation in the future, will it leave certain groups of people at a disadvantage?  Technologies and tools often cannot be designed to work for everyone, but that is precisely why the exclusive use of a technology like face authentication in some settings should be carefully evaluated to ensure it upholds basic ideals of equality and diversity.

Risks and Issues

Image by Gerd Altmann from Pixabay

A disruptive technology such as face authentication can affect the way an entire population lives and works, including how goods and services are acquired and how financial transactions are carried out.  For most individuals, assuming the technology is secure and privacy friendly, the effect is primarily positive: it provides a method of authentication that is convenient and difficult to replicate.  There are specific groups of people, however, who will have to make personal tradeoffs if they want to participate in a market that depends on face authentication to carry out transactions.  Muslim women who wear certain types of hijab, for example, could be forced to choose between fully participating in a marketplace where face authentication is pervasive and preserving part of their cultural identity.  Recent studies also suggest that today’s face authentication technology does not work as well for women, Asians, and people with darker skin tones, potentially exposing these groups to a greater risk of identity fraud.

In most cases, technology and innovation improve efficiency and productivity, creating more of something so that more people can benefit from it.  In a recent issue of MIT Technology Review, Bill Gates gives the example of the plow: a tool that enabled ‘more seeds to be planted, more crops harvested, more food to go around’ [1].  At the same time, a technology that benefits one group of individuals can disenfranchise or oppress another, or amplify wealth inequality.  The use of big data and algorithmic models by hedge funds for short-term gains, described in Weapons of Math Destruction by Cathy O’Neil, is a recent example of an application of technology that benefited a small group of people but harmed many others.  One risk with face authentication, if it becomes the primary method for identification and authorizing transactions in places with diverse populations, is that it could marginalize the specific groups most likely to be shortchanged by the technology’s shortcomings.  The social and economic consequences can be significantly more severe when an entire community is affected by the large-scale deployment of a technology that is knowingly or unknowingly biased.

Possible actions

Photo by CoWomen on Unsplash

Much as a recent state law in New Jersey requires businesses to accept cash as well as credit cards, we can take steps to ensure that basic access to goods, services, and information remains available to all groups of people.  Until face authentication works for everyone without placing an undue burden on particular groups, it is important that private and public organizations continue to provide alternative methods of identification with comparable speed, security, and convenience, such as fingerprint scanning, iris scanning, or entering a password.

Organizations that plan to deploy face authentication as the primary way of authorizing access to their products and services could also reach out to existing and potential customers to understand how they may be affected. 

Managers and engineers at technology companies also have a responsibility to promote an ethical approach to delivering technology to their customers, balancing the interests of the firm with the public interest.  As their products and services have become essential to the way people live and work all over the world, the immensely diverse customer base they serve is one of their primary stakeholders.  Given how dependent people have become on these products and services, transparency about the limitations of a deployed face recognition system, along with the availability of alternative options for authentication, serves the broadest interest of their customers and the general public.
