
In the digital age, the role of internet platforms has evolved dramatically, bringing with it complex legal challenges. One of the most contentious issues is the liability of these platforms for the content they host. This blog post explores the intricate landscape of platform liability, focusing on copyright infringement and hate speech, and examines the potential role of automated systems—robots—as the new judges in this domain.
The Evolution of Platform Liability
The internet has transformed from a network for researchers to a global platform for communication, commerce, and content sharing. Early on, internet service providers (ISPs) argued that they were mere intermediaries, not responsible for the content their users posted. This led to the creation of "Safe Harbour" protections, shielding ISPs from liability under certain conditions. In Europe, the E-Commerce Directive of 2000 established these protections, precluding monetary damages against ISPs for unlawful content they host, provided they act expeditiously to remove it once they become aware of its existence.
However, the rapid evolution of technology and business models has outpaced legislation. The definition of "hosting" as a legal concept has been strained, and the ecosystem of intermediaries has become increasingly complex, including search engines, social media, e-commerce platforms, and more. This complexity has prompted international organizations and legislators to revisit the issue of platform liability.
Copyright Infringement vs. Hate Speech
The legal landscape for platform liability varies significantly between copyright infringement and hate speech. Copyright law has seen substantial developments, particularly with the introduction of the EU Copyright Directive in 2019. This directive requires platforms to obtain permission from rights holders to allow access to copyrighted content or face liability for infringing content uploaded by users. Platforms must also act expeditiously to remove infringing content upon receiving notice from rights holders.
In contrast, the regulation of hate speech is less clear-cut. While the EU has made efforts to address online hate speech through voluntary codes of conduct with major platforms, the legal framework remains fragmented. The challenge lies in balancing the removal of harmful content with the protection of freedom of expression.
The Role of Automated Systems
Given the volume of content uploaded to platforms every day, manual moderation alone is impractical. Automated systems, or "robots," have been proposed as a way to filter and remove illegal content at scale. These systems can identify and take down infringing material quickly, but their implementation raises several concerns (a minimal sketch of such a pipeline follows the three concerns outlined below).
Effectiveness and Accuracy: Automated systems must be sophisticated enough to distinguish between infringing and non-infringing uses of content, such as fair use or parody. This requires advanced machine learning algorithms capable of nuanced analysis.
Bias and Fairness: There is a risk that automated systems may exhibit biases, leading to the disproportionate removal of content from certain groups. Ensuring fairness and transparency in these systems is crucial.
Legal and Ethical Implications: The use of automated systems to moderate content raises questions about accountability and due process. Who is responsible when an automated system makes a mistake? How can users appeal wrongful takedowns?
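To make these concerns concrete, here is a minimal sketch, in Python, of the kind of matching-and-thresholds pipeline an automated filter might use. The function names, thresholds, and toy similarity measure are hypothetical illustrations rather than the method of any real platform or of the underlying paper; the point is that the legally interesting cases sit at the borderline scores and in the appeal path.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Illustrative thresholds only; real platforms do not publish exact values.
BLOCK_THRESHOLD = 0.95   # near-certain match: remove automatically
REVIEW_THRESHOLD = 0.70  # plausible match: route to a human moderator

@dataclass
class Decision:
    action: str          # "block", "human_review", or "allow"
    match_score: float
    appealable: bool     # automated removals should remain open to user appeal

def moderate(upload_fp: str,
             reference_fps: Iterable[str],
             similarity: Callable[[str, str], float]) -> Decision:
    """Score an upload's fingerprint against a catalogue of protected works.

    The `similarity` callable stands in for whatever matching model a platform
    uses (audio hashing, perceptual hashing, text embeddings); borderline scores
    are exactly where fair use, quotation and parody have to be judged.
    """
    score = max((similarity(upload_fp, ref) for ref in reference_fps), default=0.0)
    if score >= BLOCK_THRESHOLD:
        return Decision("block", score, appealable=True)
    if score >= REVIEW_THRESHOLD:
        return Decision("human_review", score, appealable=True)
    return Decision("allow", score, appealable=False)

# Example run with a toy similarity measure (shared-character ratio).
sim = lambda a, b: len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
print(moderate("abc123xyz", ["abc123xyw", "qqqqq"], sim))
```

In this sketch the borderline case is handed to a human and every automated block is marked appealable, which is where the accountability and due-process questions above would have to be answered in practice.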
Case Studies and Legal Precedents
Several legal cases highlight the challenges and complexities of platform liability. In the landmark case of L'Oreal v. eBay, the Court of Justice of the European Union (CJEU) ruled that platforms can lose their Safe Harbour protection if they play an active role in, or become aware of, infringing activities and fail to act expeditiously. The case underscored that platforms cannot remain passive once they know illegal content is on their services.
In the realm of hate speech, the case of Glawischnig-Piesczek v. Facebook demonstrated the difficulty of balancing content removal with freedom of expression. The CJEU ruled that platforms can be ordered to remove not only the specific content found to be illegal but also identical content and equivalent content conveying essentially the same message, provided that identifying the equivalent content does not require the platform to carry out an independent assessment.
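As a rough illustration of the distinction the Court drew, the sketch below separates "identical" reposts (an exact match against text already found unlawful) from "equivalent" ones (near-duplicates conveying essentially the same message). The threshold and functions are hypothetical assumptions for illustration only; deciding what counts as "equivalent" without an independent assessment is precisely the hard part in practice.

```python
import hashlib
from difflib import SequenceMatcher

def classify_repost(new_text: str, removed_text: str,
                    equivalence_threshold: float = 0.9) -> str:
    """Compare a new post against text already found unlawful by a court.

    "identical": exactly the same text, removable without fresh assessment;
    "equivalent": essentially the same message with minor edits, which the
        ruling covers only where the host needs no independent assessment;
    "different": outside the scope of the original order.
    """
    # Platforms typically keep hashes of removed content rather than the text itself.
    if hashlib.sha256(new_text.encode()).hexdigest() == \
            hashlib.sha256(removed_text.encode()).hexdigest():
        return "identical"
    if SequenceMatcher(None, new_text.lower(), removed_text.lower()).ratio() >= equivalence_threshold:
        return "equivalent"
    return "different"

# A lightly reworded repost is flagged as "equivalent", not "identical".
print(classify_repost("X is a corrupt liar!", "X is a corrupt liar."))
```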
The Future of Platform Liability
The EU's proposed Digital Services Act (DSA) and Digital Markets Act (DMA) aim to create a more predictable and trusted online environment. These regulations seek to harmonize the rules for removing illegal content and protecting users' rights across different types of content, including copyright infringement and hate speech.
The DSA, for example, proposes a system of "trusted flaggers" to expedite the removal of illegal content and requires platforms to implement internal complaint-handling systems. It also reaffirms the prohibition of general monitoring obligations, ensuring that platforms are not required to monitor all user activity indiscriminately.
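The following sketch, again with hypothetical names and a deliberately simple priority scheme, shows one way a platform might give notices from trusted flaggers priority over ordinary reports while keeping every notice in a reviewable queue. The DSA leaves the exact mechanics to platforms, so this is only an illustration of the idea, not the regulation's prescribed design.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Hypothetical priority scheme: trusted-flagger notices jump the queue.
TRUSTED_PRIORITY = 0
ORDINARY_PRIORITY = 1

@dataclass(order=True)
class Notice:
    priority: int
    sequence: int
    content_id: str = field(compare=False)
    reason: str = field(compare=False)

class NoticeQueue:
    """A review queue in which trusted-flagger notices are processed first."""

    def __init__(self) -> None:
        self._heap: list[Notice] = []
        self._seq = count()  # preserves arrival order within a priority level

    def submit(self, content_id: str, reason: str, trusted_flagger: bool) -> None:
        priority = TRUSTED_PRIORITY if trusted_flagger else ORDINARY_PRIORITY
        heapq.heappush(self._heap, Notice(priority, next(self._seq), content_id, reason))

    def next_for_review(self) -> Notice | None:
        return heapq.heappop(self._heap) if self._heap else None

# Usage: the trusted flagger's notice is reviewed first even though it arrived later.
q = NoticeQueue()
q.submit("post-1", "alleged copyright infringement", trusted_flagger=False)
q.submit("post-2", "illegal hate speech", trusted_flagger=True)
print(q.next_for_review().content_id)  # -> "post-2"
```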
The intersection of copyright, hate speech, and platform liability presents a complex legal landscape. While automated systems offer a promising solution for moderating content, their implementation must be carefully managed to ensure effectiveness, fairness, and accountability. As legislators continue to refine the legal framework, the role of robots as the new judges will undoubtedly evolve, shaping the future of digital content moderation.
From a paper by M. Favale, ‘Robots as the New Judges: Copyright, Hate Speech and Platforms’ (2022) 44(8) European Intellectual Property Review 461–471.