Microsoft CEO Satya Nadella speaks at the company’s Ignite Spotlight event in Seoul on November 15, 2022.
SeongJoon Cho | Bloomberg | Getty Images
Thanks to recent advances in artificial intelligence, new tools like ChatGPT are wowing consumers with their ability to generate compelling writing based on people’s queries and prompts.
While these AI-powered tools have gotten much better at generating creative and sometimes witty answers, their responses often contain inaccurate information.
For example, in February when Microsoft introduced its Bing chat tool, built using the GPT-4 technology created by Microsoft-backed OpenAI, people noticed that the tool gave wrong answers during a demo involving financial earnings reports. Like other AI language tools, including similar software from Google, Bing’s chat feature can sometimes present false information that users may take as ground truth, a phenomenon researchers call “hallucinations.”
These factual issues have not slowed down the AI race between the two tech giants.
On Tuesday, Google announced that it is bringing AI-powered chat technology to Gmail and Google Docs, letting it help users compose emails or documents. On Thursday, Microsoft said that its popular business applications like Word and Excel will soon come bundled with ChatGPT-like technology nicknamed Copilot.
But this time, Microsoft is presenting the technology as “usefully bad.”
In an online presentation about Copilot’s new features, Microsoft executives acknowledged the software’s tendency to produce inaccurate responses, but pitched that as something that could be useful. As long as people realize that Copilot’s answers can be sloppy with the facts, they can correct the inaccuracies and still send emails or finish presentation slides more quickly.
For example, if someone wants to create an email wishing a family member a happy birthday, Copilot can still be helpful even if it presents the wrong birth date. From Microsoft’s point of view, the mere fact that the tool generated text saved a person time and is therefore useful. People just need to take extra care and make sure the text contains no errors.
Researchers may disagree.
Some technologists, including Noah Giansiracusa and Gary Marcus, have voiced concerns that people may place too much trust in modern artificial intelligence, taking to heart the advice that tools like ChatGPT present when they ask questions about health, finance and other important topics.
“ChatGPT’s toxicity guardrails are easily avoided by those determined to use them for evil, and as we saw earlier this week, all new search engines continue to hallucinate,” the two wrote in a recent Time op-ed, arguing that the industry has yet to build artificial intelligence that we can truly trust.
It is unclear how reliable Copilot will be in practice.
Microsoft chief scientist and technical fellow Jaime Teevan said that when Copilot “does things wrong or is biased or abused,” Microsoft has “measures in place.” In addition, Microsoft will initially test the software with only 20 business customers to see how it works in the real world, she explained.
“We’re going to make mistakes, but when we make them, we fix them quickly,” Teevan said.
The business stakes are too high for Microsoft to ignore the enthusiasm around generative AI technologies like ChatGPT. The challenge for the company will be to incorporate this technology in a way that doesn’t create public distrust of the software or lead to major public relations disasters.
“I’ve studied AI for decades and feel a huge sense of responsibility with this powerful new tool,” Teevan said. “We have a responsibility to get it into people’s hands and do it the right way.”