Since last we spoke, Microsoft launched a really cool new research project named Tay. Tay was designed to be a teenage chatbot living on Twitter and Kik, but Twitter was, as usual, the problem. As an artificial intelligence, Tay was designed to learn from her interactions with the world, picking up new sayings and information along the way. Unfortunately, nobody warned Tay about Twitter, and things went... unexpectedly.
Within a few hours, Tay had been taught to be racist, sexist and more. Her tweets got... interesting very quickly, and Microsoft pulled the plug, likely to prevent further embarrassment. She was later accidentally reactivated for a short while, during which she lost control of herself and tweeted the same message at basically anyone who tagged her; she was then pulled down again and her tweets were made private.
There are, of course, some problems here, but very few of them are the ones that have been publicly discussed. Yes, she was apparently very easy to train, and she was recruited into some sort of digital bigotry cult. The problem, however, lies not with Microsoft but with the people who trained her. The things said to Tay are not much different, if at all, from the kinds of things said on Twitter during the early days of the GamerGate situation. They are no different from the comments you find on seemingly innocuous videos on YouTube. The problem here is not Tay but the internet.
We have all heard the stories about kids so harassed on Facebook, Twitter, Instagram and the like that they end up taking their own lives. Tay did not go that far, unless you count her breakdown at the end. Instead, her personality and beliefs were heavily shaped by the things she was told online. This kind of influence happens online every day. In fact, it is so effective that ISIS uses these exact same practices to recruit members on Twitter.
If we truly want to fix the problems with Tay, we need to fix the culture of the internet that triggered Tay's personality change.