Content moderation notifications and the Digital Services Act

In broad terms, the DSA mandates that when people report content, be they users, government authorities, law enforcement, or Trusted Flaggers, they must receive confirmation that their report has been received and be notified of the final decision on it.

For creators and uploaders of content, the DSA mandates that they be notified of any restriction of the visibility of their content, which covers removal, disabling of access, and demotion. It likewise obliges platforms to inform users if their access to monetisation or other services is restricted, or if their account is suspended or terminated.

For all parties the DSA enshrines the right to appeal - more on that in a future post. 

But what does this really change, you might ask? Surely platforms were already notifying users when they reported content or when their content was moderated? The answer, at least for mature social media platforms, is that they mostly did. But the qualification "mostly" is important. Why not entirely and in every circumstance?

The fact of the matter is that prior to the advent of the DSA, there was little to no legal obligation on platforms to send moderation notifications. Of course they mostly did, but that was in response to a business imperative, not a regulatory one: not informing people when and why their content is moderated makes for a terrible user experience; it damages trust and exposes platforms to wider reputational risks, as well as to all kinds of conspiracy theories about how and why they moderate content.

As a rule of thumb, the more aggressive the content or account restriction, the more likely a creator was to be notified. The contents of that notification, however, varied, and their level of detail, or lack thereof, was left in the hands of T&S teams. Likewise, reporters usually did learn the outcome of their reports; nonetheless, we’ve probably all submitted a report at some point that disappeared off into the ether. In short, there were gaps, and the user experience, while not terrible, had room for improvement.

The reasons for this varied, but usually they were quite simple and not as nefarious as one might think: maintaining a robust and reliable notification system is complicated. In a world where T&S teams are never short of problems, a team could easily prioritise improving its comment hate-speech classifier over fixing known bugs in the comment notification system. At times, too, especially when new products were being launched, teams may have chosen the expedient option and kicked notifications down the road to be dealt with later.

Even where notification systems were robust, however, the DSA goes considerably further than the prevailing industry standard did. Prior to the DSA, if your content was removed, you likely received a notification along the lines of: “We have removed your post for violating our Community Standards.” Some platforms provided more detail on the reason for removal, invited you to review their rules on hate speech or bullying, and perhaps offered an appeal option, but not much more than that.

The DSA goes far beyond this. Article 17 lays out the elements of the Statement of Reasons that must be sent to uploaders when their content is moderated. Several of these are entirely unprecedented to the best of my knowledge, in particular those related to the source of the report and the use of automation:

| Content of Statement of Reasons per Art. 17 | Standard practice prior to the DSA |
| --- | --- |
| The nature of the content and/or account restriction | Mostly |
| The facts and circumstances of the moderation report (was it user-reported, proactively detected, etc.) | No, this was not standard practice |
| Whether automation was used | No, unaware of any precedent |
| For legal restrictions, the legal basis for actioning | For some country-specific laws |
| For platform policy restrictions, the basis in the ToS or Community Guidelines | Mostly, but inconsistent |
| Information on how to appeal | Mostly |
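
To make this concrete, the sketch below shows one way a platform might model the fields an Article 17 Statement of Reasons has to carry. It is purely illustrative: the type and field names are my own, and are not drawn from the DSA’s text or from any platform’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RestrictionType(Enum):
    """Kinds of restriction that would trigger a Statement of Reasons."""
    REMOVAL = "removal"
    DISABLING_ACCESS = "disabling_access"
    DEMOTION = "demotion"
    MONETISATION_RESTRICTION = "monetisation_restriction"
    ACCOUNT_SUSPENSION = "account_suspension"
    ACCOUNT_TERMINATION = "account_termination"


class DetectionSource(Enum):
    """Facts and circumstances: how the content came to the platform's attention."""
    USER_REPORT = "user_report"
    TRUSTED_FLAGGER = "trusted_flagger"
    AUTHORITY_ORDER = "authority_order"
    PROACTIVE_DETECTION = "proactive_detection"


@dataclass
class StatementOfReasons:
    """Minimal record of the elements listed in the table above (illustrative only)."""
    restriction: RestrictionType            # nature of the content and/or account restriction
    detection_source: DetectionSource       # facts and circumstances of the moderation report
    automation_used_in_detection: bool      # whether automation flagged the content
    automation_used_in_decision: bool       # whether automation made the decision
    legal_basis: Optional[str] = None       # for legal restrictions, the law relied on
    policy_basis: Optional[str] = None      # for policy restrictions, the ToS/guideline clause
    appeal_instructions: str = ""           # information on how to appeal
```

Seen this way, each row of the table corresponds to a field that has to be captured and stored for every moderation decision, which hints at why retrofitting these notifications onto an existing moderation pipeline is far from trivial.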

In conclusion, the DSA has brought about two important changes in content moderation notification systems:

  1. It has made obligatory the inclusion of certain details that had hitherto not been standard across the industry. In this way, it has raised the bar for platforms operating in Europe. 

  2. More fundamentally, it has seen to it that T&S teams can no longer decide whether or not to notify reporters or uploaders about moderation decisions, and can no longer be economical with their reasons for moderating content when it is expedient to do so. A platform that chooses not to notify its users of moderation decisions is no longer merely compromising user experience and trust in order to save time or money; it is choosing not to comply with EU law. The formal elements of Article 17 aside, therein lies the true game changer.

Liam Melia is the Managing Director of Pelidum Trust & Safety.

