
Why Microsoft's Copilot AI falsely accused court reporter of crimes he covered

Copilot's results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser and a conman preying on widowers. For years, Bernklau had served as a court reporter, and the artificial intelligence (AI) chatbot falsely blamed him for the very crimes he had covered.

The accusations against Bernklau are not true, of course, and are examples of generative AI "hallucinations": inaccurate or nonsensical responses to a user's prompt that are alarmingly common with this technology. Anyone using AI should proceed with great caution, because information from such systems needs to be validated and verified by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

Copilot and other generative AI systems like ChatGPT and Google Gemini are large language models (LLMs). The underlying information processing system in LLMs is known as a "deep learning neural network," which uses a large amount of human language to "train" its algorithm.
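To make the idea of "training on human language" concrete, here is a minimal, purely illustrative sketch in Python. It is not how Copilot or any real LLM is built: real systems use neural networks with billions of parameters, whereas this toy uses simple word-pair counts over a few made-up sentences. It only shows the core principle that the model learns which words tend to follow which, then generates text by repeatedly predicting a likely next word.

```python
from collections import defaultdict, Counter
import random

# A tiny made-up corpus standing in for the vast amount of text an LLM trains on
corpus = (
    "the court reporter covered the trial . "
    "the reporter wrote about the case . "
    "the trial ended and the reporter covered the verdict ."
).split()

# Count which word tends to follow which -- a crude stand-in for the
# statistical patterns a deep learning neural network learns at scale
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Even this toy version shows why hallucinations can happen: the model strings together words that are statistically plausible given its training data, with no built-in notion of whether the resulting sentence is factually true.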
