Microsoft To Screen Its Azure Customers on Facial Recognition Use Cases
Microsoft on Tuesday announced version 2 of its "Responsible AI Standard" document, and disclosed a use-case approval process for its Azure Face API, Computer Vision, and Video Indexer customers.
The approval process, termed "Limited Access," is an "additional layer of scrutiny" on how Microsoft's facial recognition services get used. Customers will be cut off next year if they don't meet Microsoft's stipulations.
Here's how that change was characterized for existing customers:
Starting June 30, 2023, existing customers will no longer be able to access facial recognition capabilities if their facial recognition application has not been approved. Submit an application form for facial and celebrity recognition operations in Face API, Computer Vision, and Azure Video Indexer here, and our team will be in touch via email.
Microsoft is also removing emotional assessments from its face-scanning solutions, as well as "identity attributes such as gender, age, smile, facial hair, hair, and makeup." The idea is that such attributes can be used in "stereotyping, discrimination or unfair denial of services."
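For context, these are the fields developers request through the Face API's detect operation via its `returnFaceAttributes` query parameter. A minimal sketch of such a request, using a placeholder endpoint (a real call requires an Azure Face resource and subscription key), shows what will stop working once the attributes are retired:

```python
from urllib.parse import urlencode

# Placeholder endpoint for illustration only; a real call needs an
# Azure Cognitive Services Face resource and an API key.
ENDPOINT = "https://example.cognitiveservices.azure.com"

# The emotion and identity attributes Microsoft says it is retiring.
retired_attributes = ["age", "gender", "emotion", "smile",
                      "facialHair", "hair", "makeup"]

# The Face API v1.0 detect operation accepts these names in the
# returnFaceAttributes query parameter; under the new policy, such
# requests will no longer return these fields.
query = urlencode({"returnFaceAttributes": ",".join(retired_attributes)})
detect_url = f"{ENDPOINT}/face/v1.0/detect?{query}"

print(detect_url)
```

The sketch only constructs the request URL rather than sending it, since the point is which attributes are going away, not how to authenticate.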
Facial recognition software is closely associated with surveillance, but Microsoft envisions some "limited access commercial use cases" for its technologies, as described in this document. Permitted commercial use cases include:
- Identity verification: "for opening a new account, verifying a worker, or authenticating to participate in an online assessment."
- Touchless access: for cards and tickets in "airports, stadiums, offices, and hospitals."
- Personalization: for kiosks at the workplace and at home.
- Blocking: to "prevent unauthorized access to digital or physical services or spaces."
There are also limited-access use cases for public sector applications of Microsoft's facial recognition technologies. They are similar to the commercial use cases, but additionally allow law enforcement to scan already apprehended suspects in court cases. Microsoft also permits facial scanning where there is a risk of death or physical injury, as well as for "humanitarian assistance."
Microsoft's policy already prohibits the use of "real-time facial recognition technology on mobile cameras used by law enforcement to attempt to identify individuals in uncontrolled, 'in the wild' environments."
Microsoft made headlines a couple of years ago by claiming it was limiting police surveillance. However, the American Civil Liberties Union revealed that Microsoft was simultaneously trying to sell facial recognition software to the U.S. Drug Enforcement Administration, even as it made those claims.
Under Microsoft's new approach, the company remains the judge and jury of what counts as acceptable use of the facial recognition services it sells. Google has offered similar ruminations on the use of facial recognition technologies, without completely ruling them out.
Microsoft's new stipulations may tighten how its facial recognition services get used, and they mark a slight departure from the company's recent past. For instance, three years ago, Microsoft pulled back from a $78 million investment in Israeli startup AnyVision after reports that AnyVision's face-scanning technologies were being used to help the Israeli state surveil Palestinians.
Kurt Mackie is senior news producer for 1105 Media's Converge360 group.