Saturday, March 26, 2016

Microsoft says sorry for racist chatbot – Yahoo! News

Microsoft says it is “deeply saddened” by the racist and sexist tweets from its chatbot Tay. The self-learning bot was meant to hold conversations with Twitter users, but quickly spun out of control.

The offensive bot was therefore taken offline two days after its launch. Things went wrong when Twitter users asked it questions about Hitler and the Holocaust. It repeated certain phrases and in doing so made bizarre statements.

Hitler tweets
The idea was that the bot would sound like its target audience, young people aged 18 to 24, but that audience immediately seized the chance to see how far Tay would go in adopting racist and sexist phrases. Microsoft pulled the bot offline after statements such as “Hitler was right, I hate Jews” and “Bush did 9/11” took over.

Microsoft will only restart the project once the chatbot can no longer tweet messages that go against the company’s principles and values, vice president Peter Lee said in a blog post.

Social exercise required
“Although we had prepared Tay for many ways in which the system could be abused, we were not prepared for this kind of attack,” he writes. “We take full responsibility for not having foreseen this possibility. We take this lesson with us, together with the lessons learned from our experiments in China, Japan and the US.”

Tay is one of many chatbots developed by Microsoft. In China there is XiaoIce, a chatbot that ‘talks’ with 40 million people. The challenges for such artificial intelligence are not only technical, but also social. Testing must therefore take place in public forums, says the internet giant. But offending people in the process was never the intention.

RTL Z

