Social media and threats: the recent incident involving the Irish Prime Minister
The Irish media has reported this week that a threat was made against the Taoiseach (Irish Prime Minister) on Instagram over the weekend, and that the post remained live on the platform for two days after being reported by the Gardaí (Irish police).
Meta’s Community Guidelines have long prohibited threats of violence, and the company has been particularly vigilant where heads of state are concerned. There were special provisions for heads of state even when I worked at Facebook in 2016. I was therefore somewhat surprised to read that a threat against the life of the Taoiseach could remain live on the platform for up to two days after being reported.
Pelidum has not seen this content, nor have we spoken to anyone at Meta about this incident. However, as a team with years of experience in moderation and crisis management work, we thought it could be valuable to share some hypotheses about why the content could remain live for so long. While it is understandable that the public might conclude that platforms are simply indifferent to such matters, platforms deploy significant resources to mitigate these threats and have nothing to gain from conspicuous negligence.
Here are some possible explanations as to why this may have happened:
Wrong reporting channel
Weekend short-staffing
Human error
Ambiguous content
Lack of context of review team
Tooling failure
1. Wrong reporting channel
The news report states that the content was reported to Meta by An Garda Síochána (the Irish police). Law enforcement authorities typically have their own reporting channels for requesting data on users suspected of engaging in criminal activity. Sometimes law enforcement will submit a takedown request at the same time as a data request, and via the same channel. Depending on how the response teams are set up, it may take some time to get these kinds of dual-purpose reports in front of the right team.
Alternatively, they may have submitted a report via the DSA Article 9 takedown order channel. In that case, you would expect the content to be removed relatively quickly, certainly within 48 hours. It is unlikely that the Gardaí used the standard Community Guidelines reporting function, but even then one would still expect the content to be reviewed quite quickly.
Whichever channel was used, intake can always misfire: a review case may not be created at all, may be misrouted to the wrong team, or the review tool may fail to render the content correctly.
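To make the routing hypothesis concrete, here is a minimal sketch of how channel-based intake routing can delay a report. Everything here is hypothetical: the channel names, queue names, and routing rules are our own invention for illustration, not Meta’s actual systems.

```python
# Hypothetical sketch of report-intake routing -- not any platform's real system.
# Illustrates how a takedown request bundled into a law-enforcement data
# request can land in a slower queue than a dedicated takedown order.

from dataclasses import dataclass


@dataclass
class Report:
    channel: str            # e.g. "le_data_request", "dsa_article_9", "community_guidelines"
    mentions_takedown: bool


def route(report: Report) -> str:
    """Pick a review queue based on the intake channel (illustrative only)."""
    if report.channel == "dsa_article_9":
        return "takedown_orders"      # fast lane: legal takedown orders
    if report.channel == "le_data_request":
        # A takedown buried inside a data request may follow the slower
        # legal-process queue before anyone notices the takedown ask.
        return "legal_process"
    if report.channel == "community_guidelines":
        return "standard_review"
    return "manual_triage"            # unknown channel: human triage


dual_purpose = Report(channel="le_data_request", mentions_takedown=True)
print(route(dual_purpose))  # -> legal_process, not the fast takedown lane
```

The point of the sketch is simply that routing keys off the channel, not the urgency of the content: the same threat arrives faster or slower depending on which door it comes in through.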
2. Weekend short-staffing
Despite Meta’s significant resources, response teams are typically shorter-staffed on weekends than during the week, operating with a skeleton crew that handles only the highest-priority issues. A death threat against a head of state would almost certainly meet the criteria of a high-priority incident; however, with fewer hands on deck, if multiple crises happen at once, it is easy for a report to be overlooked or mis-prioritised.
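The staffing point can be sketched as a simple priority queue. This is a toy model under our own assumptions (the report names, priority scores, and the idea of a fixed per-shift capacity are all invented for illustration), but it shows how limited weekend capacity plus simultaneous crises leaves something waiting.

```python
# Toy triage model -- not how any platform actually schedules review work.
# Lower score = higher priority; capacity models a skeleton weekend crew.

import heapq


def process(reports, oncall_capacity):
    """Pop the highest-priority reports first; return what the shift
    clears and what is left waiting in the queue."""
    queue = [(priority, name) for name, priority in reports]
    heapq.heapify(queue)
    n = min(oncall_capacity, len(queue))
    handled = [heapq.heappop(queue)[1] for _ in range(n)]
    waiting = [name for _, name in sorted(queue)]
    return handled, waiting


weekend_reports = [
    ("live-streamed incident", 0),       # hypothetical simultaneous crisis
    ("threat against head of state", 0),
    ("coordinated spam wave", 1),
]
handled, waiting = process(weekend_reports, oncall_capacity=2)
# With capacity for two cases, the spam wave waits; had the threat been
# mis-scored as priority 1 instead, it would be the one left waiting.
```

The fragility is in the scoring step: a correct queue with a wrong priority score delays exactly the report that should have jumped the line.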
3. Human error
This ties in with the previous point: to err is human, and with a skeleton team jumping from one crisis to another, it is easy for the response oncall to review something too quickly and make the wrong decision.
4. Ambiguous content
This can be compounded if the content is ambiguous. Again, Pelidum has not seen the content, but some threats are deliberately expressed ambiguously. Sometimes referred to as ‘veiled threats’, these posts refer to the target only indirectly in order to evade enforcement or to achieve some form of plausible deniability.
However, an article published in the Irish Times today describes the threats as “extremely sinister”, saying they contained references to the Taoiseach, his family, a weapon, and extreme violence, which suggests an explicit threat. An article in the Irish Sun shares some of the purported content of the post, which also points to an explicit threat and weakens the ambiguity hypothesis.
5. Lack of context of review team
If the reference to the Taoiseach was oblique, the response team may have overlooked the threat. If the people on call were not familiar with Irish politics, it may not have been obvious to them who the target was.
In the case of a threat spanning multiple posts, it is also possible that the reviewer looked at only some of them and did not put all the pieces of the jigsaw together.
6. Tooling failure
Even assuming that the report was immediately routed to the right review team, who had the time to review the content and come to the correct decision, T&S tools can still misfire. It has certainly happened to me in the past that I pressed the delete button only to find later that the content was still on the platform. Just as a car can fail to start, or a phone might not turn on, the delete button can fail to delete.
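One common mitigation for this failure mode is verify-after-enforcement: after issuing the deletion, read the content back to confirm it is actually gone, and retry or escalate if not. The sketch below is a generic illustration of that pattern under our own assumptions; the store, method names, and failure behaviour are invented, not drawn from any real T&S tool.

```python
# Hypothetical verify-after-enforcement sketch -- not a real moderation tool.
# A deletion call can silently fail while reporting success; reading the
# content back and retrying catches that class of failure.


def delete_with_verification(store, content_id, retries=3):
    """Attempt deletion, then confirm the content is actually gone."""
    for _ in range(retries):
        store.delete(content_id)           # may silently fail
        if not store.exists(content_id):   # read back to verify
            return True
    return False  # enforcement never took effect: escalate to a human


class FlakyStore:
    """Toy content store whose first delete call silently does nothing."""

    def __init__(self):
        self.items = {"post-123"}
        self.calls = 0

    def delete(self, content_id):
        self.calls += 1
        if self.calls > 1:                 # first attempt is a no-op
            self.items.discard(content_id)

    def exists(self, content_id):
        return content_id in self.items


store = FlakyStore()
print(delete_with_verification(store, "post-123"))  # -> True, after one retry
```

Without the read-back step, the first call would have returned quietly and the content would have stayed live, which is exactly the “pressed delete, content still up” scenario described above.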
Conclusion
The facts available suggest that either a routing error prevented the report from reaching the right team, or a technical error prevented the enforcement action from taking effect, or a stretched team simply made an error under immense time constraints. This incident shows us that things can go wrong on even the most robust platforms with the most seasoned of T&S teams for a host of reasons.
If your company needs help in fine-tuning its crisis management and escalation processes, we’d be delighted to help.
Press sources:
https://www.rte.ie/news/2024/0806/1463665-taoiseach-threat/
https://www.rte.ie/news/2024/0807/1463785-mcentee-social-media/
https://www.thesun.ie/news/13563221/simon-harris-family-death-threats-details-instagram-gardai/
Disclaimer: the aim of this blog post is to help readers without first-hand experience of Trust & Safety to understand how things can go wrong. It is not an assertion of fact nor is it an apportioning of blame. It is a set of possible explanations as to why T&S teams do not always get to the right outcome despite immense effort. These are not specific to Meta and could apply to any platform.
Liam Melia is the Managing Director of Pelidum Trust & Safety
© Pelidum Trust & Safety 2024