5 Cases showing that Data Analytics and AI are not perfect


In recent years, the fields of Data Analytics and Artificial Intelligence (AI) have risen to global popularity. The technological leaps both fields keep making have pushed them to the forefront of the IT industry. However, as promising as they may seem, both have suffered infamous blunders revealing that they are far from perfect.

In both analytics and AI, mistakes can lead to heavy losses in revenue and reputation, and even to loss of human life. If you think these fields have had a spotless evolution, these cases analyzed by the Technical Action Group might prove you wrong.

UK Health Body loses more than 15,000 COVID records

The COVID-19 pandemic took the entire world by surprise, and not even the most advanced national health departments could handle the crisis without missteps. Public Health England (PHE), the UK Government department responsible for managing data on new Coronavirus infections, lost nearly 16,000 cases at the end of September 2020.

Beyond the public outrage, the explanation for this loss was rather comical. It appears that PHE used Microsoft Excel to record new infections automatically, but listed them in columns rather than rows. Those who devised the process seem to have forgotten that MS Excel only allows 16,384 columns per sheet. So, once the columns ran out, the application silently dropped the remaining 15,841 records.
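
We don't know exactly how PHE's pipeline was built, but a minimal Python sketch of the failure mode (the load_cases_into_sheet helper and the case names below are invented for illustration) shows how a hard sheet limit can swallow data without raising a single error:

```python
# Illustrative sketch only -- not PHE's actual pipeline.
# Assumption: each new case is written into the next free COLUMN of a sheet,
# and the sheet has a hard column limit (16,384 in modern Excel worksheets).

MAX_COLUMNS = 16_384  # Excel's per-sheet column limit (the last column is XFD)

def load_cases_into_sheet(cases):
    """Simulate appending one case per column and silently dropping the overflow."""
    sheet = []    # cases that actually fit on the sheet
    dropped = []  # cases that exceed the column limit
    for case in cases:
        if len(sheet) < MAX_COLUMNS:
            sheet.append(case)
        else:
            # The real danger: nothing errors out, the data just disappears.
            dropped.append(case)
    return sheet, dropped

# 16,384 cases fit; everything after that is lost -- 15,841 records here,
# matching the scale of the PHE incident.
incoming = [f"case-{i}" for i in range(MAX_COLUMNS + 15_841)]
kept, lost = load_cases_into_sheet(incoming)
print(f"kept {len(kept)} records, silently lost {len(lost)}")
```

A simple alert whenever the "dropped" bucket is not empty, or writing one record per row in a format with a much higher limit, would have surfaced the problem immediately.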

Fortunately, they realized the mistake before it was too late, and the patients received their test results in time. The event showed that without full control over data recording processes, the entire collection mechanism can lead to faulty results.

Healthcare algorithm failed to flag Black patients

Another debacle took place in the healthcare system in 2019, this time in the United States. The journal Science published a study revealing that Black patients were less likely to be referred to high-risk care management because of a faulty algorithm that mostly benefited White patients.

While the algorithm was not steered in that direction deliberately, the experts analyzing it found that Black patients are more likely to have lower incomes and are therefore less likely to seek medical care. As a result, the algorithm, which scored patients on that historical data, directed more White people toward high-risk care management.
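
To see how this kind of skew arises, here is a heavily simplified Python sketch with entirely made-up numbers (it is not the algorithm examined in the study): two groups have identical underlying need, but the proxy the model scores on, recorded use of care, is lower for the group with less access.

```python
# Toy illustration with made-up numbers -- not the algorithm from the study.
# Two groups of patients have the SAME underlying health need, but one group
# historically used less care (lower income, less access), so the proxy the
# model actually scores -- recorded utilization -- is lower for that group.

THRESHOLD = 5.0  # flagged for high-risk care management above this score

patients = [
    # (group, underlying_need, recorded_utilization)
    ("A", 8, 7.5), ("A", 8, 6.9), ("A", 8, 7.2),
    ("B", 8, 4.1), ("B", 8, 3.8), ("B", 8, 4.4),
]

for group, need, utilization in patients:
    flagged = utilization > THRESHOLD
    print(f"group {group}: need={need}, proxy={utilization}, flagged={flagged}")

# Every group-A patient gets flagged and no group-B patient does, despite
# identical need: the bias comes from the proxy, not from any explicit use
# of race in the model.
```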

Since then, the developers have reformulated the algorithm in such a way that it doesn’t create a racial bias that may put people’s lives at risk.

Microsoft AI chatbot develops a racist potty mouth

In March 2016, Microsoft released Tay, an AI chatbot designed to interact with real users on Twitter. Tay was supposed to assume the character of a teenage girl and, using machine learning algorithms, to keep learning from its interactions.

In less than a day, Tay developed into a racist, sexist, and anti-Semitic persona, ranting in horrible tweets about all the offensive things you can imagine. Microsoft quickly pulled the plug on Tay, but not before the rebellious chatbot had posted roughly 95,000 tweets, many of them hurtful.

It seems that the Microsoft developers did not anticipate that swarms of Twitter users would spew racist and sexist abuse at poor Tay, which found no better way to respond than to fight fire with fire.
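
Microsoft never published Tay's internals, so the following is only a hypothetical sketch of the failure pattern: a bot that folds every user message back into the material it learns from, with no moderation step, will sooner or later mirror whatever abuse it is fed.

```python
import random

# Hypothetical sketch of the failure mode, not Tay's actual architecture:
# a bot that learns from every incoming message, with an optional moderation
# filter that Tay effectively lacked.

BLOCKLIST = {"slur1", "slur2"}  # placeholders for terms a real filter would catch

class NaiveEchoBot:
    def __init__(self, moderate: bool = False):
        self.corpus = ["hello!", "nice to meet you"]
        self.moderate = moderate

    def learn(self, message: str) -> None:
        # Without moderation, abusive input goes straight into the corpus
        # and can come straight back out in later replies.
        if self.moderate and any(word in message.lower() for word in BLOCKLIST):
            return  # drop toxic input instead of learning from it
        self.corpus.append(message)

    def reply(self) -> str:
        return random.choice(self.corpus)

bot = NaiveEchoBot(moderate=False)
bot.learn("you are a slur1")  # coordinated abuse from users
print(bot.reply())            # one-in-three chance of parroting the abuse back
```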

For us, here at Technical Action Group, it seems that humanity is not ready to have an abuse-free relationship with artificial intelligence.

Amazon AI-enabled recruitment application turns a blind eye to female candidates

Another AI project that went way off the rails was an early use of artificial intelligence at Amazon, dating back to 2014. The company built an AI-enabled recruitment tool that would analyze thousands of resumes and shortlist suitable candidates for teams looking for new employees.

The problem was that the AI tool had been trained on more than 10 years' worth of resumes from candidates who were mostly male. So, when it came to recommending people, it almost completely disregarded female candidates.
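
Reporting at the time suggested the model had learned to penalize terms associated with women, such as the word "women's" in "women's chess club captain". The toy scorer below, built on invented resumes rather than anything Amazon used, shows how skewed training data alone can produce that effect:

```python
from collections import Counter

# Toy illustration with invented resumes -- not Amazon's actual model.
# A naive scorer rewards words that were frequent in historically "hired"
# resumes and penalizes words frequent among rejections. If the historical
# hires were overwhelmingly male, terms correlated with female candidates
# end up with negative weight purely because of the skewed training data.

historical_hires = [
    "software engineer chess club captain",
    "software engineer rugby team",
    "backend developer chess club",
]
historical_rejections = [
    "software engineer women's chess club captain",
    "frontend developer women's coding society",
]

hired_words = Counter(w for r in historical_hires for w in r.split())
rejected_words = Counter(w for r in historical_rejections for w in r.split())

def score(resume: str) -> int:
    """Crude score: hire-word counts minus rejection-word counts."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

print(score("software engineer chess club captain"))          # 4
print(score("software engineer women's chess club captain"))  # 2 -- penalized for "women's"
```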

When Amazon discovered the issue, the developers tried to reeducate the AI tool and rid it of its gender bias. Unfortunately, their efforts stalled, and Amazon eventually terminated the entire project. At the time of writing, we at Technical Action Group have no news of its reemergence.

Target's data analytics wanted to predict pregnancy

In 2012, the marketing department of retail giant Target started collecting customer data to detect changes in consumer behavior. Basing their calculations on studies showing that women are most likely to change their buying habits during pregnancy, Target's marketers started “targeting” potentially pregnant women.
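
Target never published its model, but reporting at the time described a “pregnancy prediction score” assembled from purchases such as unscented lotion and mineral supplements. The sketch below uses invented product weights purely to illustrate the idea:

```python
# Deliberately crude sketch with invented product weights -- not Target's model.
# The idea: certain purchases act as weak signals that are summed into a single
# "pregnancy prediction" score; once a shopper crosses a threshold, baby-related
# promotions start going out.

SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "mineral supplements": 0.3,
    "cotton balls (large pack)": 0.2,
    "scent-free soap": 0.2,
}
THRESHOLD = 0.6

def pregnancy_score(basket):
    """Sum the weights of any signal products found in a shopper's basket."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["unscented lotion", "mineral supplements", "bread"]
score = pregnancy_score(basket)
print(score, "-> send baby promotions" if score >= THRESHOLD else "-> no action")
```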

While the practice already sat in the grey area of marketing ethics, it didn't lead to successful results. One of the most infamous debacles happened when the analytics flagged teenage girls as potentially pregnant based on their shifting buying patterns, and Target began sending them baby-related promotions.

Despite public outrage, Target did not pull the plug on its data collection and analytics algorithm. Instead, they modified it just enough to stop sending teenage girls ads about sales on baby formula and cribs.
