No major brand seems to be facing quite the level of public outcry that YouTube has over the past few months. Whether it's inappropriate content on YouTube Kids, ads carrying drive-by crypto miners, the ongoing battle with Amazon, or of course the years-old demonetization scandal, YouTube can't seem to catch a break. Most recently, the company has come under fire for its uneven response to controversial content on the site.
For example, almost exactly a year ago, then-popular YouTuber PewDiePie paid a group of people to hold a racist sign in a video. YouTube's and Disney's reactions were swift and harsh, with Disney dropping him entirely and YouTube terminating his Preferred status and canceling his YouTube Red series within hours of the video's release. More recently, another popular YouTuber and Preferred partner, Logan Paul, uploaded a video in which he openly mocked Japanese people in their own country. He then uploaded a video from a Japanese forest known for suicides, where he encountered a body.
The response from the public was equally swift, but the response from YouTube was not: the company waited several days after the video was published before removing Paul's Preferred status, despite the video being far more inflammatory than PewDiePie's content had been. After posting an apologetic video shortly after the idiotic incidents, Paul took a break from YouTube, but temptation is a harsh mistress, and he returned this week with his trademark nonsense. In fact, he returned with a video in which he used a Taser on a rat and pulled a live fish from the water, suffocating it.
Clearly he hasn't learned anything from his experiences. This time, however, YouTube's response was quick: it terminated all advertising on Paul's channel, which accounts for roughly $1 million per month for the 22-year-old. But PewDiePie and Logan Paul are not the only idiots on YouTube producing this kind of content, so what is YouTube's plan? Apparently, trying to solidify its policies in writing - sort of:
When one creator does something particularly blatant - like conducts a heinous prank where people are traumatized, promotes violence or hate toward a group, demonstrates cruelty, or sensationalizes the pain of others in an attempt to gain views or subscribers - it can cause lasting damage to the community, including viewers, creators and the outside world. That damage can have real-world consequences not only to users, but also to other creators, leading to missed creative opportunities, lost revenue and serious harm to your livelihoods. That's why it's critical to ensure that the actions of a few don't impact the 99.9 percent of you who use your channels to connect with your fans or build thriving businesses.
So the company recognizes the problem but doesn't quite offer a solid solution. As is typical of Google-owned properties, the policies are open to interpretation to a degree that makes them difficult to work with. What, for example, is a "heinous prank"? What constitutes "cruelty," and toward whom? It's fine that YouTube wants to censor content (it is their site, after all), but with such vague descriptions, it will be hard to know where the moving line sits at any given moment. Then again, that's the problem you always encounter when you begin to censor content.