Over the past few years YouTube has relied on a combination of human intervention and technology to “flag” content that is considered inappropriate in light of YouTube’s Community Guidelines. In particular, content can be flagged by YouTube’s automated flagging systems, by members of the Trusted Flagger programme (which includes NGOs, government agencies and individuals), or by ordinary users within the YouTube community.
The report concerns flagging of content that is sexual, spam or misleading, hateful or abusive, or violent or repulsive (it excludes requests based on copyright).
The report specifies that about 80% of videos that violated the site’s guidelines in 2017 had first been detected by artificial intelligence (AI) systems. Furthermore, of the approximately 8,000,000 videos removed between October and December 2017, around 6,600,000 (roughly 82.5%) were first flagged through automated flagging systems.
Compared with human “flaggers”, automated systems enable YouTube to act more quickly and accurately in enforcing its policies. Google
states that: “These systems focus on the most egregious forms of abuse, such as
child exploitation and violent extremism. Once potentially problematic content
is flagged by our automated systems, human review of that content verifies that
the content does indeed violate our policies and allows the content to be used
to train our machines for better coverage in the future. For example, with
respect to the automated systems that detect extremist content, our teams have
manually reviewed over two million videos to provide large volumes of training
examples, which improve the machine learning flagging technology.”
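The workflow Google describes can be pictured as a simple feedback loop: an automated classifier flags likely violations, human reviewers confirm or reject each flag, and confirmed cases are fed back as training data. The sketch below is purely illustrative and uses hypothetical names (Video, ModerationPipeline, auto_flag, human_review); it is not YouTube’s actual system, merely a minimal Python rendering of the process described in the quote.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Video:
    # Hypothetical record for a piece of uploaded content
    video_id: str
    flagged_by_model: bool = False
    confirmed_violation: bool = False

@dataclass
class ModerationPipeline:
    """Toy moderation loop: model flags -> human review -> confirmed cases
    are collected as labelled examples for future retraining."""
    review_queue: List[Video] = field(default_factory=list)
    training_examples: List[Video] = field(default_factory=list)

    def auto_flag(self, video: Video, model_score: float, threshold: float = 0.9) -> None:
        # An automated classifier scores the video; high-scoring items go to human review.
        if model_score >= threshold:
            video.flagged_by_model = True
            self.review_queue.append(video)

    def human_review(self, decisions: Dict[str, bool]) -> None:
        # Human reviewers confirm or reject each flag; confirmed violations are
        # kept as labelled examples that could be used to retrain the classifier.
        for video in self.review_queue:
            if decisions.get(video.video_id, False):
                video.confirmed_violation = True
                self.training_examples.append(video)
        self.review_queue.clear()

# Example usage
pipeline = ModerationPipeline()
pipeline.auto_flag(Video("abc123"), model_score=0.97)
pipeline.human_review({"abc123": True})
print(len(pipeline.training_examples))  # 1 confirmed example available for retraining
```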
At a time when discussion around notice and takedown, filtering and related mechanisms has been heating up, also in light of the EU value gap proposal (see the latest developments here; IPKat posts here) and of the question of how to handle the vast amounts of content made available and shared online every day, the current Transparency Report is particularly interesting for the central role it gives to bots, both in relation to takedown requests and in the handling of such requests. It also raises the question of how much of the material taken down actually stays down, but that may be a new chapter in the never-ending story of online rights enforcement …
A question for clarity:
If an item is "taken down" before ANY views (and this can only be done by the provider, given that no one has viewed the item to initiate a takedown process), is this really the same as a "take down"?
I think a clearer approach would be to distinguish and use different names for different actions. Items being scrutinized and NOT put up at all (that is, "taken down before any views") are merely being processed within a vetting system before something is actually "put up." It is ONLY those items with views, evidencing that something HAS BEEN posted, that have passed the vetting process.
I think it is legally significant to distinguish an original vetting process from any TRUE take-down process, for a number of reasons, including, but not limited to, copyright compliance.