News

Microsoft Floats AI Customer Commitments

Microsoft on Thursday announced three "customer commitments" to organizations that may use its budding artificial intelligence (AI) technologies.

The commitments appear to be general statements of principle regarding AI use, in which Microsoft promised the following to customers:

  • Sharing our learnings about developing and deploying AI responsibly
  • Creating an AI Assurance Program
  • Supporting you as you implement your own AI systems responsibly

The responsible use of AI, with proper governance, is a concern for industry, governments and organizations alike, opined Anthony Cook, Microsoft's corporate vice president and deputy general counsel:

Ensuring the right guardrails for the responsible use of AI will not be limited to technology companies and governments: every organization that creates or uses AI systems will need to develop and implement its own governance systems. That's why today we are announcing three AI Customer Commitments to assist our customers on their responsible AI journey.

The expertise sharing part of Microsoft's AI commitments is based on the documents that Microsoft uses, including its "Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on the implementation of our responsible AI by design approach," Cook explained. Microsoft also plans to share its employee training curriculum and invest in "dedicated resources and expertise in regions around the world."

The AI Assurance Program part of Microsoft's AI commitments is still being developed. The exact details of the program weren't described, but Microsoft seems to be adapting for it some of the identity verification practices it uses with its financial services industry customers. Microsoft also is incorporating guidelines from its "Governing AI" document (PDF). Additionally, Microsoft promised to "attest to how we are implementing the AI Risk Management Framework recently published by the U.S. National Institute of Standards and Technology (NIST)." Microsoft will seek views from "customer councils," too.

As for the customer support part of Microsoft's AI commitments, Microsoft is promising to create a "dedicated team of AI legal and regulatory experts" as a resource for organizations. It's also promising to leverage partner support in helping customers implement AI systems.

"Today we can announce that PwC and EY are our launch partners for this exciting program," Cook indicated.

Other Moves
Microsoft's AI commitments are being publicized not long after it cut its pioneering Ethics and Society team, which had been involved in early work with Microsoft software development teams using AI. Microsoft also had suggested last month that it was aiming to hire more talent for its responsible AI program, which seemed at odds with that cut.

Microsoft uses OpenAI's large language models for its Azure OpenAI service. However, the secure use of OpenAI's service hasn't been altogether clear. For instance, Samsung last month banned the direct use of OpenAI's ChatGPT service over security concerns. That action came after press accounts suggested that some Samsung employees had put proprietary code into their ChatGPT prompts, where it could be viewed by others.

Azure OpenAI for Government Users
In related news this week, Microsoft indicated that governments can now use the Microsoft Azure OpenAI service, provided that they take certain measures.

Oddly, Azure Government users tap into the Azure OpenAI service via the "Azure Commercial service." However, governments will need to "request to modify content filters and data logging" to authorize such access. Moreover, government users of the Azure OpenAI service should "only utilize prompts for inferencing -- do not leverage fine-tuning with Controlled Unclassified Information (CUI) data," the announcement indicated.

One example proposed for government use of AI is battlefield decision making, as promoted by software company Palantir. Under this scenario, a military operator would use a "ChatGPT-style chatbot" to do battlefield assessments and reconnaissance, as well as suggest attack plans, per a Vice.com story description.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
