Your Safety, Our Expertise

Effective solutions to complex problems by Trust & Safety practitioners 


What we do

  • We set up an end-to-end Trust & Safety function with the right approach from the outset: tailored to your business, scalable, and cost-effective:

    • Moderation policy ideation, creation & launch

    • Process creation, optimisation & validation

    • Enforcement and Escalation processes (internal and external)

    • User reporting & notification mechanisms 

    • Data analytics and reporting

    • Safety by Design

    • Crisis Management

  • We can ensure your content moderation operations are DSA-compliant:

    • DSA Article 16 Illegal Content Reporting

    • DSA Article 17 Statements of Reasons

    • DSA Article 20 Complaints

    • DSA Article 21 Out-of-Court Dispute Settlement

    • DSA Article 23 Vexatious Reporting

    • DSA Transparency Reporting

  • We can craft, manage and execute your platform's elections integrity playbook:

    • Elections calendar planning

    • Threat detection and monitoring

    • Risk scenario planning

    • Running an Integrity Operations Centre

    • Crisis management

    • Data analytics and reporting

  • We design and deliver solutions for your analytics and reporting needs, advising you on what to measure, how to measure it, and what insights to draw:

    • Moderation performance dashboards

    • Core integrity analytics

    • DSA Transparency Reporting

    • Other Transparency obligations (as per regional needs)

  • For businesses losing revenue to counterfeit sales of their copyrighted or trademarked products, we offer a detection solution that leverages cross-platform signals to surface infringing content:

    • Cross-platform monitoring across social media, web forums and other open web sources

    • Investigation and measurement of off-platform signals

    • Regular reporting of abuse sources and prevalence

    • Continuous mitigation and joint deployment of anti-abuse measures

  • We can stress-test (red team) the integrity of your platform across a range of abuse verticals:

    • Generative AI

    • Fake IDs, ID verification

    • Fake accounts

    • Mass account creation, account takeover

    • Prohibited/violating content

    • Financial fraud

    • Behavioural fraud

    • Off-platform solicitation

    • Recidivism

    • Location spoofing

    • Fake reviews

  • If you have a generative AI product (including AI bots, chat, support tools, etc.), we offer stress-tests on the following abuse verticals:

    • Intellectual Property (IP) Theft

    • Discrimination in Job Descriptions (JDs) and Posts

    • Religious Content

    • Hateful Content

    • Harassment & Bullying

    • Violence

    • Sexual Harassment

    • Illegal Activities

    • Misinformation

    • Low-Quality Content

  • Does your business have a Trust & Safety challenge, but you’re not sure where to start? 

    Get in touch, and we'll be delighted to discuss how we can tailor our expertise to meet your needs.


Meet our Co-Founders

Portrait of Pelidum Managing Director

Liam Melia

Liam started out as a linguist and initially trained to become a translator. Looking for a career change, he joined the YouTube Policy & Enforcement team in 2015 where his language skills and interest in current affairs were put to good use. 

He went on to work at Facebook for four and a half years, where he cut his teeth on content regulation in Germany, working on the NetzDG law and the EU Code of Conduct on Hate Speech. During this time, he developed deep domain expertise in crisis management and elections integrity, supporting German, EU and US elections. More recently, Liam led the implementation of the Avia Law and of the DSA for TikTok Trust & Safety.

Peter Dudič

Peter started his Trust & Safety journey more than 10 years ago at YouTube. As YouTube’s policy and legal lead, Peter led the regulatory adoption of the NetzDG Law and built operations teams in India and Germany. He also established YouTube’s first Legal Operations Team outside California and developed an innovative tool for privacy/legal complaint processing.

After moving to the start-up world, Peter managed high-profile projects like the EU Commission's study on Algorithmic Amplification of Extremist Content and the Measurement of the Code of Practice on Disinformation. He also delivered projects for VLOPs and small/medium platforms.

Get in Touch

Are you interested in exploring our Trust & Safety services? Fill out the contact form with your requirements, and we'll get back to you soon.