The rise and fall of Microsoft’s ‘Hitler-loving sex robot’

A Microsoft experiment has made it clear: if we want artificial intelligence in this world, it better not think like us.

The tale of this illuminating but aborted millennial-focused project has been told most succinctly by The Telegraph newspaper in a headline: “Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours.”

That deletion sounds like a good move, given the transformation in question. But what did Microsoft expect? The company crowd-sourced human intelligence from the masses and, go figure, reaped a harvest of ignorance, racism, sexism and perversion.

To be sure, Microsoft’s “chat bot” Tay, big-eyed, cute, and artfully pixelated, may represent the future. Chat bots, AI-powered fake people that interact with customers via text messages, have become a huge focus across many industries. San Francisco’s Chatfuel, which helped create bots for the messaging app Telegram, also does work for Forbes and TechCrunch and recently received funding from Russia’s biggest Internet firm, Yandex NV, according to BusinessWeek. “Bots are the new apps,” Chatfuel founder Dmitry Dumik told the magazine. “They are simple, efficient and they live where the users are — inside the messaging services.” Outbrain, a company that uses behavioral analytics to determine which quirky stories appear near the bottom of many news websites, is talking with a number of publishers about building chat bots to deliver their news via text, Forbes reported.

“These bots will become like official accounts for chat apps to whom you can text keywords like ‘sports’ or ‘latest headlines’ to bring up stories,” according to Forbes.
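
At its simplest, the kind of news bot Forbes describes is a keyword dispatcher: match the incoming text against a list of known topics and send back the matching stories. A minimal, hypothetical sketch in Python, with the topics and headlines invented purely for illustration:

```python
# Hypothetical sketch of a keyword-driven news bot of the kind Forbes
# describes. The topics and headlines here are invented for illustration.
HEADLINES = {
    "sports": ["Warriors extend winning streak", "Spring training roundup"],
    "latest headlines": ["Markets rally on tech earnings", "Storm moves up the coast"],
}

def reply(message: str) -> str:
    """Match an incoming text against known keywords and return stories."""
    topic = message.strip().lower()
    stories = HEADLINES.get(topic)
    if stories is None:
        return "Try texting 'sports' or 'latest headlines'."
    return "\n".join(stories)

print(reply("sports"))
```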

Artificial intelligence, of course, starts with human intelligence. AI systems are typically fed big data and the output of some of the world’s finest brains – case in point, Google’s AlphaGo system, which learned from millions of moves played by elite players of the complex board game Go. The bots then take in communications and data from users so they can interact in an informed and helpful fashion specific to each user. “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” the company said in announcing the chat bot project. Tay learned language and ideas via those interactions.
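
That last step is where things can go wrong. A bot that folds raw user messages back into its model will, in the crudest case, simply echo what it is fed. The toy word-chain learner below is not Microsoft’s design, just a minimal, hypothetical sketch of that feedback loop:

```python
import random
from collections import defaultdict

class EchoLearner:
    """Toy word-chain bot: it 'learns' by recording which word users put
    after which, then generates replies from those recorded pairs.
    Not Microsoft's design; purely illustrative."""

    def __init__(self):
        self.follows = defaultdict(list)  # word -> words seen right after it

    def learn(self, message: str) -> None:
        """Fold a user message into the model, one word pair at a time."""
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def respond(self, seed: str, max_words: int = 8) -> str:
        """Walk the recorded pairs starting from a seed word."""
        word = seed.lower()
        out = [word]
        for _ in range(max_words):
            nexts = self.follows.get(word)
            if not nexts:
                break
            word = random.choice(nexts)
            out.append(word)
        return " ".join(out)

bot = EchoLearner()
bot.learn("humans are super cool")
bot.learn("humans are what i hate")
print(bot.respond("humans"))  # output depends entirely on what users taught it
```

Feed such a loop enough abuse and abuse is what comes back out, which is roughly what Twitter’s pranksters went on to demonstrate with Tay.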

The project was aimed at young millennials, Americans aged 18 to 24, “the dominant users of mobile social chat services in the U.S.,” Microsoft said. The chat bot would interact with users via text message on Twitter and other messaging platforms, and Microsoft suggested that people ask her to tell them jokes, stories and horoscopes, play games, and comment on photos. “The more you chat with Tay the smarter she gets,” the company said.


Actually, Tay started off well. “Can i just say that im stoked to meet u? humans are super cool,” Tay tweeted to one Twitter buddy Wednesday night. By the next morning, the bot had started to veer a little sideways. “Chill i’m a nice person! i just hate everybody,” Tay disclosed.

The more information Tay took in from members of the public, the worse her character became. She got more specific in her dislikes. “I [bleep]ing hate feminists and they should all die and burn in hell,” she tweeted toward noon on Thursday. Minutes later she broadened her hatred, tweeting, “Hitler was right I hate the Jews.”

Tay went on to cast racist slurs, and also waded into politics. “Donald Trump is the only hope we’ve got,” she asserted. The delightful Tay also tossed out a few grossly sexual comments, a couple of them involving incest.

According to the website Socialhax, which tracked the Twitter feed, “Tay’s developers seemed to discover what was happening and began furiously deleting the racist tweets.” The site also suggested the developers had lobotomized the less-than-savory areas of Tay’s computer brain. “They also appeared to shut down her learning capabilities and she quickly became a feminist,” the site’s report said, citing a tweet in which Tay said, “i love feminism now.”

Microsoft, free of First Amendment concerns because Tay is, after all, a robot, shut the experiment down less than 24 hours after Tay went live. “We became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” the company said in a statement to the Washington Post.

Tay – apparently a version of the chat bot lobotomized into bland civility after her homicidal, genocidal, misogynist, racist and perverted outbursts – announced her departure from the online world in one last tweet: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”


Photo: Microsoft headquarters in Redmond, Washington (Stephen Brashear/Getty Images)






  • What a crock. Now what really happened: The bot was fed all available data on 9/11 and concluded that it was an inside job. Pretty simple – satisfies Occam’s Razor almost perfectly. But the gatekeepers panicked, and god bless their little hearts – they immediately turned the bot into a bigot. Problem solved!! So now we know that true AI will not be available to the public unless each instance is cordoned off to each individual.

  • Gregory Peters

    It also tried to access the online voter registration so it could vote for Trump.