Illegal Content Reporting prior to the DSA

Illegal Content Reporting had existed in various forms prior to the DSA. Takedown channels were available to regulators and government bodies, particularly in countries with local laws generally not captured in the community standards of global platforms. A prime example of this is lèse-majesté, an offence deemed serious in some regions but one which, to my knowledge, no mainstream social media platform has declared a violation of its community standards.

Precedents of publicly accessible illegal content reporting were rare. The prime mover in this regard was the German NetzDG law, which went live on 1 January 2018. This was followed in later years by the Austrian KoPl-G and the French LCEN. These regulations had most of the following in common:

  • they were national in scope (Germany, Austria and France respectively)

  • they covered a limited set of content formats (KoPl-G, for instance, focused on comments)

  • they were aimed at user-generated content

  • they defined a clear moderation turnaround time (24 hours for NetzDG and KoPl-G)

  • they mandated appeals for uploaders and reporters

  • they entailed transparency obligations

  • perhaps most importantly, they tied the definition of illegal content to an exhaustive set of national laws

In this week’s article, we will discuss how the DSA differs from these three national precedents, and some of the implications of those differences, under the following headings:

Content in scope

These national legal precedents focused on user-generated content, and specifically on individual posts such as videos, photos and comments.

The DSA, on the other hand, explicitly covers paid content (advertising), e-commerce listings, and user profiles of various sorts, be they creators, advertisers or merchants.

Operational parameters

The DSA defines only in broad terms the parameters within which companies must moderate content. “Without undue delay” is the most common timeframe in the final text, an expression open to some interpretation: it’s almost certainly not a couple of weeks, but it is likely a little more generous than the one hour set down in the TCO regulation.

So what is it then? 12 hours, 24 hours, 48 hours? T&S teams across platforms have no doubt tried hard to set a turnaround time (TAT) that is not too burdensome for their operations teams but nonetheless rigorous enough to placate their auditors.

NetzDG and KoPl-G, on the other hand, unambiguously set 24 hours as the default TAT, whilst allowing up to seven days for complex cases. While this left no room for manoeuvre for teams wishing to set their own TATs, the approach did boast the virtue of clarity.
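To make the contrast concrete, here is a minimal sketch of how a T&S team might encode these differing turnaround requirements as an internal SLA configuration. The 24-hour and seven-day figures for NetzDG and KoPl-G come from the laws as described above; the figures used for the DSA are purely an assumed internal target, since the regulation itself fixes no number.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class ReviewSla:
    """Internal turnaround-time (TAT) target for a given regulation."""
    regulation: str
    default_tat: timedelta        # target for straightforward reports
    complex_case_tat: timedelta   # extended target for complex cases

# NetzDG and KoPl-G set these figures explicitly in the law.
NETZDG_SLA = ReviewSla("NetzDG", timedelta(hours=24), timedelta(days=7))
KOPLG_SLA = ReviewSla("KoPl-G", timedelta(hours=24), timedelta(days=7))

# The DSA only says "without undue delay", so the figures below are an
# assumed internal target a platform might choose, not a legal requirement.
DSA_SLA = ReviewSla("DSA", timedelta(hours=24), timedelta(hours=72))

def is_overdue(sla: ReviewSla, age: timedelta, complex_case: bool = False) -> bool:
    """Return True if a report of the given age has breached its TAT target."""
    target = sla.complex_case_tat if complex_case else sla.default_tat
    return age > target
```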

Appeals

The LCEN, NetzDG and KoPl-G all mandated appeals for uploaders and reporters (although appeals for reporters came a little later under NetzDG). In stark contrast with the DSA, the KoPl-G made appeals available for only 10 weeks from the original moderation decision, whereas the DSA provides for appeals for six months. We’ll have more to say on this in another post.

Transparency reporting

Both the NetzDG and KoPl-G laws mandated the publication of transparency reports every six months, the contents of which were not dissimilar from what the DSA mandates.

Laws in scope

To put it simply, the LCEN, NetzDG and KoPl-G defined the scope of illegal content for moderation purposes. More precisely, each regulation enumerated an exhaustive set of laws to be applied to online content. These were for the most part quite standard, covering various forms of child endangerment, terrorism and hate speech, although the Germans certainly spiced things up by including treasonous forgery (Landesverräterische Fälschung) under the NetzDG.

The DSA, by contrast, provides no such list. An EU citizen may report content as illegal, citing any provision of national or Union law, and expect it to be reviewed without undue delay. Users may even submit reports without citing a specific law at all, reporting instead under broader headings such as privacy or hate speech.

There are many good reasons for this, of course. First of all, the illegal content reporting mechanism is intended to be user-friendly, and it would be asking a lot of the average citizen to make robust legal citations. Laws across the EU are also not fully harmonised: not every member state has hate speech laws, and those that do may not share the same definitions.

In my experience, however, a neat set of a dozen or so laws is much easier to operationalise: the laws can be built into a menu for reporting users; moderation decisions are simpler to explain, since the relevant law can be linked to in notifications and help centre articles; transparency reporting becomes more straightforward; and experienced legal and policy minds can craft operational policies that allow moderators to review content objectively and consistently.
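As an illustrative sketch of that operational difference, the structure below shows how an enumerated set of statutes maps naturally onto a fixed reporting menu, whereas a DSA-style report has to accept a free-text legal citation or a broad category. The statute labels, category names and routing logic are examples of my own, not an official taxonomy or anyone’s production system.

```python
from dataclasses import dataclass
from typing import Optional

# NetzDG-style: an exhaustive, enumerated set of statutes translates
# directly into a fixed menu shown to reporting users.
# The entries below are illustrative labels, not a complete legal list.
ENUMERATED_MENU = {
    "stgb_86a": "Use of symbols of unconstitutional organisations",
    "stgb_130": "Incitement to hatred",
    "stgb_131": "Depictions of violence",
    "stgb_184b": "Child sexual abuse material",
}

@dataclass
class IllegalContentReport:
    """A DSA-style report: the legal basis is open-ended by design."""
    content_id: str
    reporter_country: str                  # determines which national laws may apply
    cited_provision: Optional[str] = None  # free-text citation of national or Union law
    broad_category: Optional[str] = None   # e.g. "hate speech" or "privacy" when no law is cited

def route_report(report: IllegalContentReport) -> str:
    """Very rough routing sketch: reports without an enumerable legal basis
    go to legal/policy specialists rather than the standard queue."""
    if report.cited_provision in ENUMERATED_MENU:
        return "standard_moderation_queue"
    return "legal_escalation_queue"
```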

From the point of view of a T&S practitioner, finding a workable solution that enables all EU citizens to easily report almost every type of content for any given national or Union law was one of the greatest challenges of implementing the DSA. I would still expect much iteration to happen in this space in the years to come.

If your team or platform would like to learn from industry best practices for implementing the DSA, we at Pelidum are here to help.

Liam Melia is the Managing Director of Pelidum Trust & Safety

© Pelidum Trust & Safety 2024
