Microsoft Bans Police Use of AI Service for Facial Recognition
The company added language to its code of conduct Thursday (May 2) saying that its Azure OpenAI Service may not “be used for facial recognition purposes by or for police departments in the United States,” a Microsoft spokesperson told PYMNTS Thursday in an email.
An earlier update to the code of conduct inadvertently left out “for facial recognition purposes” and has since been corrected to be consistent with Microsoft’s policy on facial recognition capabilities, the spokesperson said.
The Azure OpenAI Service enables users to build their own copilots and generative AI applications, according to the service’s webpage.
In other recent news around facial recognition, the White House unveiled a policy on AI that includes provisions saying that federal agencies must provide clear opt-out options for technologies like facial recognition. This opt-out option empowers individuals to choose an alternative identity verification process that doesn’t rely on potentially biased technology, PYMNTS reported in March.
In May 2023, the Federal Trade Commission (FTC) said the increased use of biometrics — data that depicts or describes physical, biological or behavioral traits, characteristics or measurements of or relating to a person’s body — raises “significant” concerns about security, privacy and discrimination.
“In recent years, biometric surveillance has grown more sophisticated and pervasive, posing new threats to privacy and civil rights,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said May 18 when the agency published a policy statement on this topic. “Today’s policy statement makes clear that companies must comply with the law regardless of the technology they are using.”