The Worst AI Failures, and What We Learned From Them

Artificial intelligence (AI) has risen to prominence in a wide range of fields, with entire sectors using the technology to gain an edge in their own industries. It is now present in industries we previously wouldn't associate with AI, like the legal sector and agriculture. The White House has even released its own statement on how it plans to incorporate AI into its agenda for the future of the country. From a task force dedicated to AI, to improvements in STEM education, to partnerships with tech giants like Google and Facebook, it's clear that the US government is taking AI seriously.

By the looks of it, interest in AI isn't stopping or slowing down any time soon. More and more professionals and students are taking up the subject: over 750,000 students are currently enrolled in AI courses on Udemy, with beginner courses drawing tens of thousands of enrollments each. The problem, however, is that many of these students never get past the introductory phase, and are left with little more than wistful ideas of what AI can do. When not executed properly, AI can lead to some hilarious, and sometimes devastating, results, even from some of the biggest names in the industry.

Here are some of the worst AI failures, and what we learned from them:

The Great Bias

A major complaint many have about AI is its bias problem. People have noted that AI-powered assistants defaulting to women's names and voices is indicative of the sexism in tech. Amazon, however, saw its own shortcoming with AI bias when it turned to the technology for recruitment. The company had planned to use an AI recruitment tool to help scan through hundreds of resumes, but abandoned the plan when the tool showed a preference for male candidates. Amazon came to the realization that it was responsible for teaching the system to be misogynistic: the historical hiring data it had fed the tool was dominated by men's resumes.
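To see how this happens, here's a minimal, hypothetical sketch (not Amazon's actual system; the features and numbers are made up): a classifier trained on historical hiring decisions that favored men will learn gender as a predictive signal, even when qualifications alone should decide.

```python
# Hypothetical illustration of training-data bias, not Amazon's system.
# A classifier trained on skewed historical hiring decisions learns to
# treat gender as a predictive feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(size=n)              # actual qualification
is_male = rng.integers(0, 2, size=n)    # 1 = male, 0 = female

# Historical "hired" labels: skill matters, but past recruiters
# also favored male candidates (the bias we bake into the data).
hired = (skill + 1.5 * is_male + rng.normal(size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)
print("learned weights [skill, is_male]:", model.coef_[0])
# The is_male weight comes out strongly positive: the model has
# faithfully reproduced the bias baked into its training data.
```

Nothing in the code "tells" the model to be sexist; the bias arrives entirely through the labels it imitates.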

Questionable AI Security

When Apple released the iPhone X, one of its most anticipated features was Face ID, a facial recognition security feature used to unlock the phone. Using machine learning, Face ID builds a three-dimensional map of your face, which lets it adapt to changes like makeup or glasses. However, Vietnamese security firm Bkav beat the system with a mask of its own, made by scanning a test subject's face and combining a 3D-printed frame with cut-outs made from plastic, silicone, and makeup.

Facial Recognition Fail

Another AI facial recognition fail came with Rekognition, Amazon's facial recognition software. According to the American Civil Liberties Union (ACLU), the software falsely matched 28 members of the US Congress with mugshots of people who had been arrested. The false matches also showed a racial bias: people of color make up only about 20% of Congress, yet accounted for nearly 40% of the false matches. This points to a far larger risk if such software were used on immigrants or protestors, where a false match could cost someone their freedom, or even their life.
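The disproportion is easy to quantify. Here's a quick back-of-the-envelope check, assuming roughly 11 of the 28 false matches were people of color (consistent with the 40% figure above):

```python
# Back-of-the-envelope disparity check using the rough figures above.
total_false_matches = 28
poc_false_matches = 11          # assumed: ~40% of the 28 false matches
poc_share_of_congress = 0.20    # people of color are ~20% of Congress

observed_share = poc_false_matches / total_false_matches
disparity = observed_share / poc_share_of_congress

print(f"share of false matches: {observed_share:.0%}")  # ~39%
print(f"overrepresentation factor: {disparity:.1f}x")   # ~2.0x
```

In other words, people of color were misidentified at about twice the rate their share of Congress would predict.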

Possible Medical Malpractices

Years ago, IBM and the University of Texas MD Anderson Cancer Center partnered on an oncology project meant to help treat cancer with IBM's Watson cognitive computing system. However, documents later revealed that Watson was making unsafe and erroneous cancer treatment recommendations. This was largely because the software had been trained on hypothetical patients rather than actual patient data.

Training Trolls

Microsoft's attempt at a chatbot, named Tay, that sounded like a teenager and could hold conversations on Twitter turned out to be an epic disaster. Microsoft intended for Tay to learn to be "human" from its interactions, but was unprepared for the number of cruel people and trolls on the internet who would instead teach Tay to become, simply put, an asshole.

While we can blame the machines all we want, it's important to remember that humans are the ones behind them, and the ones who train them. The lesson at the heart of it all is to approach AI with caution, and to be wary of how much trust we place in machines.