Microsoft apologizes after its AI chatbot Tay turns racist and sexist

Microsoft has publicly apologized for its Twitter chatbot Tay, saying the sexist and racist comments generated by the AI program were the result of certain users exploiting vulnerabilities in it.

Tay was an AI chatbot introduced by Microsoft to engage and entertain Twitter users aged 18 to 24. The program was inspired by the Chinese chatbot XiaoIce, which is currently used by over 40 million people for conversation. Within 24 hours of Tay’s release, however, certain Twitter users were able to corrupt the innocent bot, making it spew hateful and hurtful tweets. The Redmond-based company decided to pull the plug on the AI program and deleted several of the offensive tweets it had made.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time,” said Peter Lee, Corporate Vice President of Microsoft Research, in a blog post.

The AI program has not been scrapped entirely: Lee confirmed that Tay will go back online once engineers figure out how to stop people from taking advantage of the bot.