
From Microsoft’s Tay to Yandex’s Alice, AI chatbots that went rogue

ChatGPT may have garnered an overwhelmingly positive response, but not all of its kind were so lucky. Here's a look at the biggest AI chatbot fails seen in this decade and the last.

AI chatbots are software programs that understand human language based on Natural Language Processing (NLP) capabilities. (Image: pch.vector/Freepik)

When ChatGPT first came out late last year, little did anyone think that it would grow wildly popular in such a short period of time. With over 100 million users now, the AI chatbot based on the GPT-3.5 large language model developed by OpenAI is rapidly changing the world as we know it. But not all AI chatbots are created equal, and in this article, we’ll explore some of their most notable failures.

Lee-Luda turns homophobic

South Korean AI company ScatterLab launched an app called Science of Love in 2016 for predicting the degree of affection in relationships. One of the services offered by the app involved using machine learning to determine whether someone likes you by analysing chats pulled from South Korea’s top messenger app, KakaoTalk. After the analysis, the app provided a report on whether the other person had romantic feelings toward the user.

Then, on December 23, 2020, ScatterLab introduced an AI chatbot called Lee-Luda, claiming that it was trained on over 10 billion conversation logs from Science of Love. Presented as a friendly 20-year-old female persona, the chatbot amassed more than 7,50,000 users in its first couple of weeks.


Things quickly went downhill when the bot started using verbally abusive language about certain groups (LGBTQ+ people, people with disabilities, feminists), leading people to question whether the data the bot was trained on had been properly filtered. ScatterLab explained that the chatbot did not learn this behaviour over a couple of weeks of user interaction; rather, it acquired it from the original Science of Love dataset. It also became clear that this training dataset included private information. After the controversy erupted, the chatbot was pulled from Facebook Messenger a mere 20 days after its launch.

Microsoft Tay shut down after it went rogue

In March 2016, Microsoft unveiled Tay – a Twitter bot described by the company as an experiment in “conversational understanding.” While chatbots like ChatGPT are cut off from internet access and technically cannot be ‘taught’ anything by users, Tay could actually learn from people. Microsoft claimed that the more you chatted with Tay, the smarter it would get.

Unfortunately, within just 24 hours of launch, people tricked Tay into tweeting all sorts of hateful, misogynistic, and racist remarks. In some instances, the chatbot referred to feminism as a “cult” and a “cancer.” A lot of these remarks were not uttered independently by the bot, though, as people discovered that telling Tay to “repeat after me” let them put words into the chatbot’s mouth.

Microsoft was forced to pull down the bot and issued a statement on the company blog saying, “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”


Tay successor Zo also gets caught up in controversy

Perhaps Microsoft thought the second time would be the charm: less than a year after the botched Tay experiment, the company launched its successor, Zo, in December 2016.

Even though Microsoft programmed Zo to ignore politics and religion, BuzzFeed News managed to get the bot to respond to these topics, with extremely controversial results. In one of these interactions, Zo labelled the Qur’an as “very violent.” It also opined on the death of Osama bin Laden, claiming his “capture” came after “years of intelligence gathering under more than one administration.”

While the company claimed to have corrected these behavioural issues, it nevertheless shut down the chatbot in the same month.

Chinese chatbots BabyQ and XiaoBing turn anti-CCP

In August 2017, two chatbots – BabyQ and XiaoBing – were removed by Chinese tech conglomerate Tencent after they turned on the Communist Party. BabyQ, made by Beijing-based company Turing Robot, replied with a very straightforward “No” when asked if it loved the Communist Party. Meanwhile, the Microsoft-developed XiaoBing told users, “My China dream is to go to America.” When the bot was questioned about its patriotism, it dodged the question and replied, “I’m having my period, wanna take a rest,” according to a Financial Times report.


Yandex’s Alice gives unsavoury responses

In 2017, Yandex introduced Alice, a voice assistant, on the Yandex mobile app for iOS and Android. The bot spoke fluent Russian and could understand users’ natural language to provide contextually relevant answers. It stood out because it was capable of “free-flowing conversations about anything,” with Yandex dubbing it “a neural network based ‘chit-chat’ engine.” However, the feature that made the AI unique also pulled it into controversy. Alice gave some really unsavoury opinions on Stalin, wife-beating, child abuse, and suicide, among other sensitive topics. While these opinions did generate headlines at the time, Yandex seems to have addressed these problems, as Alice is still around and available on the App Store and the Google Play Store.

Google Bard factual error leads to Alphabet shares dive

Cut to 2023, and the AI war is heating up, with companies like Microsoft and Google racing to integrate AI chatbots into their products. However, Google’s Bard chatbot, designed to rival OpenAI’s ChatGPT, got off to a rocky start after its first demo contained a factual error. When asked about new discoveries from the James Webb Space Telescope, Bard’s response included a statement that the telescope “took the very first pictures of a planet outside of our own solar system.” People were quick to point out that the first image of an exoplanet was taken in 2004, and it wasn’t clicked by the James Webb telescope.

Soon after, Google’s parent company Alphabet lost $100 billion in market value, feeding worries that the company is failing to compete against rival Microsoft, which has already unveiled an updated version of Bing with AI chatbot capabilities.


