A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.
The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds.
Just 24 hours after Tay was unleashed, Microsoft appeared to be editing some of the AI's more inflammatory comments.
The software firm said it was "making some adjustments".
"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay," the firm said in a statement.
After hours of unfettered tweeting from Tay, Microsoft appeared to be less chilled than its teenage AI.