Online Censorship Through Selective Demonetization
By Robert H. Porter
YouTube recently clarified its policy concerning the “advertiser friendliness” of content creators’ videos, which has led to an apparent increase in the demonetization of online content. Demonetization is essentially the removal of content creators’ ability to collect ad-based revenue from their content. While demonetization has been occurring since 2012, it was only recently that YouTube changed how it communicates the practice to its content creators. Thus, while this policy of demonetizing “non-advertiser friendly” content has been in place for years – seemingly without the knowledge of a large proportion of content creators – this is the first that many have heard of this “new” policy.
So what does “advertiser friendliness” actually mean? According to YouTube’s policy, the content uploader must ensure that the video content, metadata, and thumbnail image contain “little to no inappropriate or mature content.” Furthermore, material that is not considered appropriate for advertising includes: “controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters, and tragedies, even if graphic imagery is not shown.”
A large number of content creators on YouTube cover current events, news stories, and political developments, all of which can be deemed “controversial or sensitive” depending on one’s perspective. One of the main joys of YouTube is that it originally served as a place where one could upload pretty much anything and share it with the world (provided it violated no laws, such as copyright legislation). With increasing censorship, this freedom is slowly being eroded.
Essentially, YouTube is simply adjusting and enforcing its own terms of service, which it is perfectly entitled to do, and while the recent clarification of its policy has spawned protest hashtags such as #YouTubeisDead, there is little doubt that YouTube will weather this online snafu. The real issue, however, is that enforcement of the demonetization policy is incredibly uneven. While independent content creators seem to be targeted, large corporations such as CNN do not appear to be held to the same standard with regard to what content is permitted on monetized videos.
From where I sit, however, there remains the glaring (un)intentional sending of a rather pointed message: this looks like blatant censorship of smaller, independent content creators. Demonetizing an individual’s content equates to censorship. Full stop. Furthermore, demonetizing individuals while allowing large corporations to continue unimpeded reeks of the de-democratization of the online sphere. Again, the (un)intentional message is disturbing: “If we don’t like your message, we can shut you down.”
Another major problem is that, whatever algorithm YouTube is using, it seems to be both incredibly broad and oddly specific. As the CBC reported, videos demonstrating educational science experiments such as Mentos-and-Coke explosions have been demonetized under the policy, as have several videos on suicide prevention, because “suicide” is one of YouTube’s “problematic” tags. There is no room for nuance, and, as a result, a significant number of creators producing educational or otherwise beneficial content are at risk of being swept under YouTube’s digital rug.
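To see why this kind of tag matching leaves no room for nuance, consider a minimal sketch of a blunt, context-blind blocklist filter. The tag list, function name, and behavior here are entirely hypothetical illustrations – YouTube’s actual system is not public – but the sketch shows how a suicide-prevention video gets flagged exactly like harmful content:

```python
# Hypothetical sketch of a naive tag-based demonetization filter.
# The blocklist and function are invented for illustration only;
# they do not represent YouTube's actual implementation.

PROBLEMATIC_TAGS = {"suicide", "war", "tragedy"}  # assumed example tags

def is_demonetized(video_tags):
    """Flag a video if ANY tag matches the blocklist, ignoring context."""
    return any(tag.lower() in PROBLEMATIC_TAGS for tag in video_tags)

# A suicide-prevention video is flagged the same as harmful content:
print(is_demonetized(["suicide", "prevention", "mental health"]))  # True
print(is_demonetized(["science", "experiment"]))                   # False
```

A filter like this cannot distinguish a video that glorifies a topic from one that educates about it, because it never looks past the keyword itself.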
Of course, the number of content creators who rely solely on advertising revenue from YouTube is incredibly small. However, many users are only able to continue producing content because of the advertising revenue they receive based on their viewers and subscribers. When a company has little or no competition, it is much easier to yield to the demands of other large corporations (and funders) than to the “little guys” who produce a vast range of its content.
So why does this matter? Why should we be concerned with whether a massive online corporation limits the perspectives of its users through the reduction of advertising revenues of video producers? It matters because this issue touches on the prevailing use of big data and algorithms to monitor and collect information concerning youth and their online lives.
The amount of data that is funnelled through big data algorithms is increasingly worrying, especially since it tends to trap individuals in discriminatory categories. In this instance, the algorithms are contributing towards the censorship of ideas and opinions online, but algorithms like this could very well be used to reduce individuals to data-figures in discriminatory data-driven systems.
Complications such as this are bound to arise when the public sphere, personal communication, and social media are dragged into corporate models of commercialization. The big question that remains is how we will respond to an increasingly algorithm-driven world.