
Minor creators simultaneously hold copyright in their content, lack the legal capacity to licence it, and remain theoretically liable for infringing the rights of others. Banning them from platforms resolves none of this.
Kid creators are neither properly protected as authors nor adequately shielded as potential infringers, while their creative output is commercially exploited on the basis of licences whose legal validity is, at best, uncertain.
The wave of social media bans that has swept from Australia through Norway and into a growing number of EU member states since 2024 reflects a genuine and understandable frustration with the pace of platform reform. Australia's Online Safety Amendment (Social Media Minimum Age) Act 2024 imposed a categorical ban on under-16s; France established a "digital majority" at 15; Spain, Portugal, Denmark, and others have initiated parallel legislative processes. These instruments are not illegitimate: they apply regulatory pressure and signal that the status quo is no longer tolerable.
But they are structurally incomplete. They address the symptom — children's presence on platforms — without addressing the underlying legal failure: the complete absence of a coherent framework for children's intellectual property rights and platform liability in relation to minor creators.
Three gaps, one paradox
The paradox rests on three simultaneous legal facts that no jurisdiction has yet brought into coherent relationship with one another.
First, copyright law in virtually all jurisdictions vests authorship in the natural person who creates an original work, with no minimum age requirement. The Berne Convention imposes none; nor do the Copyright, Designs and Patents Act 1988, the US Copyright Act, or the French Code de la propriété intellectuelle. A ten-year-old who films and edits an original video is, from the moment of creation, the author of that work. This is not a contested point — it follows directly from the originality threshold applied in each system.
Second, minors generally lack the contractual capacity to exercise the rights that authorship confers. Under English law, contracts entered into by a minor are voidable at the minor's election unless they fall within the category of contracts for "necessaries" or for the minor's benefit. US law is broadly similar. The standard platform End User Licence Agreement — which typically grants the platform a perpetual, transferable, sublicensable, royalty-free, worldwide licence to exploit user content commercially — is not readily characterised as a contract for the minor's benefit taken as a whole. The licence embedded in such agreements is of uncertain legal validity when the signatory is under the age of majority. And yet no platform has put in place a systematic mechanism for obtaining verified parental ratification of that licence grant, despite COPPA having required verifiable parental consent for data processing since 1998.
Third, copyright infringement is, in most jurisdictions, a strict liability tort: intention and knowledge are not required. A minor who uploads a video using a few seconds of a popular song is, technically, an infringer — and the law provides no age-adjusted framework for that liability. The minor who cannot lawfully sign a licence agreement is simultaneously exposed, in principle, to the full force of infringement law.
This three-way incoherence is then compounded by the automated content recognition systems that platforms deploy under Article 17 of the EU's Digital Single Market Directive, which routinely block legitimate creative content — remixes, parodies, short quotations — that would be protected under copyright exceptions if assessed by a human. Minor creators, lacking the legal literacy to invoke those exceptions formally, bear a disproportionate share of the chilling effect.
What platforms already know — and how they use it
The standard industry response to safety-by-design proposals has been a familiar one: reliable identification of minor users, without invasive identity verification, is technically infeasible. The research literature, and more importantly the platforms' own operational conduct, have made this position debatable.
The evidence
The field of author profiling — inferring demographic characteristics from patterns of written expression — has demonstrated for over a decade that age estimation from text is achievable without any identity verification. HaCohen-Kerner's 2022 survey documents that machine learning models trained on linguistic features (syntactic complexity, vocabulary range, spelling patterns, slang and emoji usage) can reliably identify age-group membership.[1] More recent deep learning approaches achieve age-group prediction accuracy of 84.2% from raw text alone, without reference to biographical data.[2]
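To make the feature categories named in the survey literature concrete, here is a minimal, purely illustrative sketch: it extracts the kinds of signals that literature describes (vocabulary range, average word length, slang and emoji usage) and applies a toy threshold rule. The slang lexicon, thresholds, and decision rule are invented for illustration; production systems train statistical models on large labelled corpora rather than using hand-set cut-offs.

```python
# Illustrative sketch only. Feature categories follow the author-profiling
# literature; the lexicon, thresholds, and scoring rule are invented.
import re

SLANG = {"lol", "omg", "bruh", "fr", "ngl", "idk"}  # hypothetical slang lexicon

def extract_features(text: str) -> dict:
    """Compute simple linguistic signals of the kind used in age profiling."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    emoji_count = len(re.findall(r"[\U0001F300-\U0001FAFF]", text))
    if not words:
        return {"ttr": 0.0, "avg_word_len": 0.0, "slang_ratio": 0.0, "emoji_rate": 0.0}
    return {
        "ttr": len(set(words)) / len(words),              # type-token ratio (vocabulary range)
        "avg_word_len": sum(map(len, words)) / len(words),
        "slang_ratio": sum(w in SLANG for w in words) / len(words),
        "emoji_rate": emoji_count / len(words),
    }

def likely_minor(features: dict) -> bool:
    """Toy decision rule standing in for a trained classifier."""
    score = 0
    score += features["slang_ratio"] > 0.05
    score += features["emoji_rate"] > 0.05
    score += features["avg_word_len"] < 4.0
    return score >= 2
```

The point of the sketch is not the toy rule itself but that every input it needs is already observable by the platform at upload time, with no identity document involved.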
More significant for the legal argument is the empirical evidence on what platforms are already doing. Hilbert et al.'s "#BigTech @Minors" (2025) demonstrated through systematic behavioural experimentation that the recommendation algorithms of YouTube, Instagram, and TikTok rapidly adapt content delivery to accounts displaying behavioural signals associated with underage users — typically within a single session.[3] This adaptation is driven by the same algorithmic infrastructure used for advertising personalisation.
The legal implication is direct. If a platform can infer that a user is likely a minor from their behavioural signature and deploy that inference to shape their commercial experience — as the Hilbert et al. evidence demonstrates they do — it can equally deploy that inference to apply protective legal defaults: minor-specific licence conditions, age-sensitive content recognition calibration, the prohibition on behavioural profiling mandated by GDPR Recital 38 and DSA Article 28.
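The redirection argued for here can be stated almost mechanically. The sketch below is hypothetical (the names and the 0.7 threshold are invented, and no platform API is implied): once a platform's own inference pipeline flags an account as probably belonging to a minor, the very same signal flips a set of protective defaults instead of, or alongside, a personalisation decision.

```python
# Hypothetical sketch of the section's argument: the same minor-probability
# signal a platform uses for personalisation could trigger protective defaults.
# AccountDefaults, MINOR_THRESHOLD, and defaults_for are invented names.
from dataclasses import dataclass

MINOR_THRESHOLD = 0.7  # assumed confidence cut-off, chosen for illustration

@dataclass
class AccountDefaults:
    behavioural_profiling: bool    # GDPR Recital 38 / DSA Art. 28 prohibition
    commercial_sublicensing: bool  # minor-specific licence condition
    lenient_content_id: bool       # age-sensitive content-recognition calibration

def defaults_for(p_minor: float) -> AccountDefaults:
    """Apply protective defaults once inferred minor probability crosses the threshold."""
    if p_minor >= MINOR_THRESHOLD:
        return AccountDefaults(
            behavioural_profiling=False,
            commercial_sublicensing=False,
            lenient_content_id=True,
        )
    return AccountDefaults(
        behavioural_profiling=True,
        commercial_sublicensing=True,
        lenient_content_id=False,
    )
```

Nothing in this sketch requires new inference capability: the branch condition consumes a probability the platforms, on the Hilbert et al. evidence, already compute.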
The obstacle, in other words, is not technical. It is commercial. Platforms have chosen to use their algorithmic knowledge of minor users to maximise engagement rather than to fulfil the obligations that existing law already imposes. This is precisely the "systemic risk" that Articles 34 and 35 of the Digital Services Act require very large online platforms to assess and mitigate.
Existing rights, absent enforcement
One of the more counterintuitive arguments, in my analysis, is that the problem is not primarily an absence of substantive legal rights. Moral rights — the right of attribution, the right of integrity — attach automatically from the moment of creation under both civil law (Articles L121-1 ff. of the French CPI; §§ 12–14 UrhG) and, in more limited form, under common law systems. They are not displaced by the minor's lack of contractual capacity. The GDPR right of erasure under Article 17 is similarly available to minor creators whose consent to processing was given without parental authorisation, and survives into adulthood.
What does not exist is an enforcement architecture that does not depend entirely on the legal representative. Every one of these rights must, in practice, be exercised through a parent or guardian — or deferred until the child reaches majority. In the kidfluencer context, where the parent or guardian is often the party commercially exploiting the child's content, this dependency contains an obvious and unresolved conflict of interest that neither the French nor the Californian legislative models (both of which mandate earnings ring-fencing along labour law lines) have yet addressed at the level of intellectual property enforcement.
What a coherent framework would require
My policy recommendations follow identifiable legislative pathways and do not require the creation of new substantive rights. Three measures are central.
First, at the level of copyright and contract law: legislatures should codify what existing common law and civil law already imply — that minors hold copyright in their original digital creations, but that the exercise of economic rights requires parental authorisation or judicial approval. Platforms should be required to apply minor-specific licence terms automatically to accounts identified as belonging to users under 18, limiting the licence to non-commercial uses and prohibiting sublicensing of the minor's content without verified parental consent.
Second, at the level of the DSM Directive: the Liability Paradox generated by Article 17's automated filtering obligation requires a targeted legislative fix. The most instructive model is Germany's UrhDaG, § 9, which establishes a presumption that certain short transformative uploads — short extracts invoking recognised exceptions such as parody, quotation, or pastiche — are permitted uses. Extending a version of this presumption to verified minor accounts at EU level would neutralise the chilling effect of automated blocking on children's expressive output before any rights-holder complaint is lodged, without requiring minors to invoke or understand the exceptions that technically protect them.
Third, at the level of platform architecture: the safety-by-design obligation already imposed by DSA Articles 34 and 35 should be operationalised, through Commission implementing act or guidance, to require platforms to redirect their existing age-inference capability toward protective rather than commercial ends. This includes age-sensitive interface design, copyright literacy prompts at the point of upload, and the automatic application of GDPR Recital 38's prohibition on behavioural profiling to accounts identified as probable minor users. The EU Age Verification Blueprint, currently in pilot testing using zero-knowledge proof architecture, represents the most technically promising infrastructure for this purpose — but should be extended to include behavioural and linguistic modalities alongside document-based verification.
On bans
None of this is to suggest that the age-ban legislation emerging across multiple jurisdictions is simply misconceived. I argue that such measures are defensible as emergency instruments, applying pressure on platforms and signalling regulatory urgency, but they are transitional rather than structural. A Q1 2026 audit by the Australian eSafety Commissioner found that VPN usage among 13-to-15-year-olds rose by 40% following the commencement of the Australian ban — driving the most affected group toward less regulated environments, where protections are even thinner. A banned minor remains a copyright author; her creative output does not cease to exist because her access to a distribution channel is revoked.
The sustainable response is one that makes the digital environment safe by design — that redirects existing platform capabilities toward compliance with obligations the law already imposes, and that ensures the rights minor creators already hold are finally made enforceable. Access restriction is not a substitute for that framework. It is, at best, a reason to build it more quickly.
This post is based entirely on: Marcella Favale, "The Liability Paradox: Copyright Ownership, Contractual Incapacity, and Platform Exploitation of Minor Creators under the DSM Directive and DSA" (2025) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6501918. Dr. Favale is Copyright Lecturer at Sciences Po Paris (School of Management and Impact; School of Law) and Visiting Research Fellow at Bournemouth University, Centre for IP Policy and Management (CIPPM).
Notes
[1] Y. HaCohen-Kerner, "Survey on profiling age and gender of text authors," Expert Systems with Applications 199 (2022), 117140.
[2] V. Thakur and A. Tickoo, "Text2Gender: A Deep Learning Architecture for Analysis of Blogger's Age and Gender," arXiv:2305.08633 (2023). The 84.2% figure refers to age-group classification accuracy from raw blog text, without biographical data.
[3] M. Hilbert et al., "#BigTech @Minors: Social Media Algorithms Have Actionable Knowledge about Child Users and At-Risk Teens," Telematics and Informatics 103 (2025), 102341.
