Microsoft announced yesterday it will begin offering an updated version of GPT-3, the AI natural language processing (NLP) program, to business customers as part of its Azure cloud platform.
Why it matters: The move puts what is likely the most powerful AI writing and reading algorithm at the fingertips of large businesses that will be able to use it to automatically analyze and generate new written content.
Driving the news: While OpenAI — the artificial general intelligence research company that created GPT-3 — has sold, and will continue to sell, access to the model through its own API, Microsoft will offer a version for corporate clients that emphasizes "safety and security," says Eric Boyd, corporate vice president of Azure AI at Microsoft.
- Flashback: In 2019, Microsoft invested $1 billion in OpenAI, a partnership that made it the exclusive provider of cloud computing services for the company.
How it works: GPT-3 is a transformer-based natural language model trained on roughly half a trillion words from the internet, making it the largest such model in the world when it was released last summer.
- Boyd says Azure customers could use GPT-3 to summarize vast amounts of customer feedback or analyze transcripts of live sports broadcasts to generate running commentary.
- While companies are already using computer vision and other AI tools, “the number of natural language use cases [for businesses] dwarfs everything else,” he adds.
The catch: Like all NLP models, GPT-3 can absorb bias found in its training data, producing text marred by sexism, Islamophobia and other very human ills that could expose corporate users to legal and reputational risk.
- OpenAI considered GPT-3 — which could also be used to generate disinformation — too dangerous to be publicly released without access restrictions.
- “To deploy [GPT-3] into production, they need things like privacy protections and built-in responsible AI controls,” Boyd says. “And those are the promises that Azure really brings.”
- Microsoft will try to avoid some of those problems with safeguards that include vetting customer use cases and providing filtering and monitoring tools, though it remains to be seen how effective those defenses will be at scale.