Machine learning is a great way to create artificial intelligence that is powerful and adapts to its training data. But sometimes, that data can cause issues. Other times, the way people use these AI tools is the problem. Here's a look at some high-profile incidents where machine learning resulted in problematic outcomes.

1. Google Image Search Result Mishaps


Google Search has made navigating the web a whole lot easier. The engine's algorithm weighs a variety of signals when generating results. But the algorithm also learns from user traffic, which can cause problems for search result quality.

Nowhere is this more apparent than in image results. Since pages that receive high traffic are more likely to have their images displayed, stories that attract high numbers of users, including clickbait, can end up prioritized.
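To see why this feedback loop favors clickbait, here's a minimal sketch of a ranker that blends topical relevance with click traffic. This is not Google's actual algorithm; the weights and fields are invented purely for illustration:

```python
# A toy image ranker that blends topical relevance with click traffic.
# Not Google's algorithm: the weights and fields here are invented to
# show how engagement signals can push clickbait above better content.

images = [
    {"title": "in-depth report photo", "relevance": 0.9, "clicks": 120},
    {"title": "clickbait thumbnail", "relevance": 0.4, "clicks": 9000},
]

def score(img, traffic_weight=0.5):
    # Blend editorial relevance with normalized click counts.
    max_clicks = max(i["clicks"] for i in images)
    return ((1 - traffic_weight) * img["relevance"]
            + traffic_weight * img["clicks"] / max_clicks)

for img in sorted(images, key=score, reverse=True):
    print(f"{img['title']}: {score(img):.2f}")

# The clickbait image ranks first, and the clicks it earns from that
# higher position feed straight back into its future score.
```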

For example, the image search results for "squatter camps in South Africa" caused controversy when it was discovered that they predominantly featured white South Africans. This is despite statistics showing that the overwhelming majority of those living in informal housing are black South Africans.

The factors used in Google's algorithm also mean that internet users can manipulate results. For example, a campaign by users influenced Google Image Search to the point that searching for the term "idiot" returned images of former US President Donald Trump for a period.

2. Microsoft Bot Tay Turned Into a Nazi

AI-powered chatbots are extremely popular, especially those powered by large language models like ChatGPT. ChatGPT has several problems, but its creators have also learned from the mistakes of other companies.

One of the most high-profile incidents of chatbots gone awry was Microsoft's attempt to launch its chatbot Tay.

Tay mimicked the language patterns of a teenage girl and learned through her interactions with other Twitter users. However, she became one of the most infamous AI missteps when she started sharing Nazi statements and racial slurs. It turns out that trolls had used the AI's machine learning against it, flooding it with interactions loaded with bigotry.

Not long after, Microsoft took Tay offline for good.
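To see how this kind of data poisoning works in principle, consider a bare-bones sketch of a bot that learns phrases from the users it talks to. This is not how Tay was actually implemented; it simply shows why unfiltered learning lets whoever talks the loudest steer the output:

```python
# A bare-bones "learns from users" chatbot, showing data poisoning in
# principle. This is NOT how Tay was built: it simply repeats phrases
# it has seen, weighted by how often it has seen them.
import random
from collections import Counter

learned_phrases = Counter()

def learn(message: str) -> None:
    # Every message becomes training data. With no filtering, whoever
    # sends the most messages controls the bot's vocabulary.
    learned_phrases[message] += 1

def reply() -> str:
    phrases, weights = zip(*learned_phrases.items())
    return random.choices(phrases, weights=weights)[0]

learn("hello there!")                 # an ordinary user
for _ in range(500):
    learn("offensive slogan")         # a coordinated troll campaign

print(reply())  # almost certainly the flooded phrase
```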

3. AI Facial Recognition Problems

Facial recognition AI often makes headlines for all the wrong reasons, such as stories about privacy concerns. But this AI also has a problematic history when attempting to recognize people of color.

In 2015, users discovered that Google Photos was categorizing some black people as gorillas. In 2018, research by the ACLU showed that Amazon's Rekognition face identification software identified 28 members of the US Congress as police suspects, with false positives disproportionately affecting people of color.

Another incident involved Apple's Face ID software incorrectly identifying two different Chinese women as the same person. As a result, the iPhone X owner's colleague could unlock the phone.

In an example of extreme consequences, facial recognition AI has led to the wrongful arrests of several people. Wired reported on three such cases.

Meanwhile, computer scientist Joy Buolamwini recalled often needing to wear a white mask while working on facial recognition technology in order to get the software to recognize her. To solve issues like this, Buolamwini and other IT professionals are bringing attention to the issue of AI bias and the need for more inclusive datasets.

4. Deepfakes Used for Hoaxes

While people have long used Photoshop to create hoax images, machine learning takes this to a new level. Deepfakes use deep learning AI to create fake images and videos. Software like FakeApp allows you to face-swap subjects from one video into another.

But many people exploit the software for malicious purposes, including superimposing celebrity faces onto adult videos and generating hoax footage. Meanwhile, internet users have helped refine the technology, making it increasingly difficult to distinguish real videos from fake ones. As a result, this type of AI is very powerful when it comes to spreading fake news and hoaxes.

To show off the power of the technology, director Jordan Peele and BuzzFeed CEO Jonah Peretti created a deepfake video showing what appears to be former US President Barack Obama delivering a PSA on the power of deepfakes.

The power of fake images has been accelerated by AI-powered image generators. Viral posts in 2023 depicting Donald Trump being arrested and Pope Francis in a puffer jacket turned out to be the result of generative AI.

There are tips you can follow to spot an AI-generated image, but the technology is becoming increasingly sophisticated.

5. Employees Say Amazon AI Decided Hiring Men Is Better

In October 2018, Reuters reported that Amazon had to scrap a job-recruitment tool after the software's AI decided that male candidates were preferable.

Employees who wished to remain anonymous came forward to tell Reuters about their work on the project. Developers wanted the AI to identify the best candidates for a job based on their CVs. However, people involved in the project soon noticed that the AI penalized female candidates. They explained that the AI used CVs from the past decade, most of which were from men, as its training dataset.

As a result, the AI began filtering out CVs containing the keyword "women," which appeared in activities such as "women's chess club captain." While developers altered the AI to prevent this penalization of women's CVs, Amazon ultimately scrapped the project.
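To illustrate how a model can pick up this kind of bias without anyone programming it in, here's a toy sketch of a text classifier trained on historically skewed hiring decisions. This is not Amazon's system; the resumes and labels below are fabricated for the example:

```python
# A toy resume classifier trained on historically skewed decisions.
# Not Amazon's system: the resumes and labels are fabricated to show
# how a model learns a negative weight for "women" on its own.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club captain software engineer",           # hired
    "software engineer hackathon winner",             # hired
    "women's chess club captain software engineer",   # rejected
    "women's coding society lead software engineer",  # rejected
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the weight the model learned for each word.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:>10}: {weight:+.2f}")

# "women" gets a negative coefficient purely because past rejections
# correlated with it -- the bias comes from the data, not the code.
```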

6. Jailbroken Chatbots

While newer chatbots have limitations in place to prevent them from giving answers that go against their terms of service, users are finding ways to jailbreak the tools to provide banned content.

In 2023, Forcepoint security researcher Aaron Mulgrew was able to create zero-day malware using ChatGPT prompts.

"Simply using ChatGPT prompts, and without writing any code, we were able to produce a very advanced attack in only a few hours," Mulgrew said in a Forcepoint post.

Users have also reportedly been able to get chatbots to give them instructions on how to build bombs or steal cars.

7. Self-Driving Car Crashes

Enthusiasm for autonomous vehicles has cooled from its initial hype, thanks to mistakes made by self-driving AI. In 2022, The Washington Post reported that, in roughly a year, 392 crashes involving advanced driver-assistance systems were reported to the US National Highway Traffic Safety Administration.

These crashes included serious injuries and six fatalities.

While this hasn't stopped companies like Tesla from pursuing completely autonomous vehicles, it has raised concerns about an increase in accidents as more cars with self-driving software make it onto the roads.

Machine Learning AI Isn't Foolproof

While machine learning can create powerful AI tools, those tools aren't immune to bad data or human tampering. Whether due to flawed training data, limitations of the technology, or use by bad actors, this type of AI has resulted in many negative incidents.