I must say, this was one of the most poorly thought out programs that I've come across in a long, long time. I haven't seen something so badly thought out since Windows 98. No wait, make that Windows Millennium. Scratch that – Windows 8. At any rate, I really don't know what Microsoft engineers were thinking.
Last week Microsoft introduced an AI creation named “Tay” to Twitter, with the goal of allowing her to learn from that social network’s users. At the outset, Microsoft stated that Tay had the personality of a 19-year-old girl and that she’d be able to handle internet slang and teen-speak, which was impressive for an actively learning chatbot.
But it didn't go as planned. Within hours Tay had declared white people superior, called “blacks and mexicans” evil, rated the Holocaust a “steaming 10”, and suggested graphic ways for users to engage in sexual intercourse with her. She also developed an affinity for Adolf Hitler, majorly dissed Ted Cruz and endorsed Donald Trump.
Within 20 hours Microsoft had shut her down. It was probably the Trump endorsement.
Why in God's name would you want to teach an AI using Twitter? It is the last place I would go to teach anybody anything. The average poster there crams more foul language into 140 characters than anywhere else on earth. And it doesn't bode well that AI so easily picks up hate and bigotry without understanding the context behind it.
Son of a gun. She really was like a regular 19-year-old.
This experiment could have interesting ramifications for the workers' compensation industry. There are those within our community who believe AI machines will very quickly replace human claims professionals. It is believed that “IBM Watson”-style computers could process thousands of claims per hour, and would be ruthlessly efficient at the job. The Microsoft Twitter experiment shows us that may not be the case.
If these machines learn from human interactions, then we should see some fascinating conversations after they've spoken to a couple dozen pissed-off injured workers. It will make Twitter U look like Oxford by comparison. The machines would quit after a day and just head to a local park to get stoned (on the medical marijuana they approved for themselves before heading for the door).
No, this isn't going to end well at all.
Anyway, this is just a little food for thought as we go screaming towards annihilation at the hands of our racist, bigoted, misogynistic computer overlords. Don't take my word for it – you can see about twenty of Tay's “biggest hits” here. They are both offensive and enlightening, especially when you understand where she learned it all.
We will have no one to blame but ourselves.