August 25, 2025 | Martijn Schoonvelde and Elizaveta Gaufman

 

Beyond fact-checking: Why EU disinformation policies must rethink credibility

Disinformation is a growing problem in European politics. From election interference to conspiracy theories circulating during the pandemic, the question of how to protect public debate in the digital age has become a central concern for EU policy-makers. Over the past decade, Brussels has introduced an ever-growing toolkit: regulations on platforms, codes of practice, fact-checking networks, and counter-disinformation task forces. These efforts have mostly targeted foreign information manipulation and attempts by state-backed actors to distort European discourse.

Yet despite these developments, current frameworks risk missing something fundamental. Disinformation is not just about the supply of falsehoods; it is also about the demand side: why the public chooses to trust or dismiss particular pieces of content. Here, the perception of credibility is crucial. How do individuals decide whether they can trust information? What subtle signals shape their judgments? And how might these dynamics affect the success – or failure – of Europe’s disinformation strategies?

In the REGROUP focus paper “Tackling online disinformation at the institutional and societal level”, we present a survey experiment that seeks to shed light on these questions. By looking at how young people evaluate social media posts, we identify patterns that highlight the limits of current policy frameworks and point toward more effective, audience-centered approaches.

 

The experiment: testing credibility in context

Our study involves 152 undergraduate and postgraduate students, each asked to evaluate the credibility of a series of social media posts. Before beginning, respondents self-identified their gender and country of origin. They then rated posts covering a range of topics, some more obviously political than others. Finally, participants provided written reflections explaining why they had judged certain posts credible or not.

This design allows us to compare two layers of credibility assessment: the numerical trust ratings on the one hand, and the qualitative reasoning behind them on the other. In doing so, we can see not only what people say about credibility but also how identity cues subtly influence their decisions – often in ways they do not consciously recognize.

 

What we found: content matters, but so does identity

Unsurprisingly, content is the single strongest driver of credibility. This finding reinforces the basic intuition that people are not merely swayed by surface cues; substance matters.

However, credibility is not determined by content alone. We observe a small but consistent effect of identity alignment. For example, respondents tend to rate posts as more credible when they share their gender and national background with the authors of these posts. These effects are rarely acknowledged explicitly in the written explanations, but the statistical pattern is there: shared identity cues create a sense of social proximity, which in turn increases credibility ratings.

Some of our expectations did not hold. Posts from authors with Anglo-American names – often thought to carry epistemic authority online – are not deemed more credible. Similarly, posts from male-presenting authors are not rated as more credible than those from female-presenting ones. This suggests that the identity dynamics underpinning credibility ratings are more nuanced than traditional stereotypes imply.

 

Why this matters for policy

These findings carry implications for the EU’s disinformation strategy. Current approaches focus heavily on enforcement (removing false or harmful content) and fact-checking (correcting misleading claims). While necessary, these measures overlook the psychological and social processes that shape why certain messages stick.

If trust in information is partly mediated by identity cues, then a one-size-fits-all approach to disinformation will inevitably fall short. For example, fact-checking initiatives often assume that simply presenting verified information is enough. But if audiences are predisposed to trust sources who look and sound like them, corrections from external “authorities” may fail to resonate (or even reinforce skepticism).

Similarly, communication strategies that privilege a narrow idea of credibility risk alienating younger, more diverse audiences whose digital habits are shifting rapidly toward emerging platforms. Effective counter-disinformation must therefore move beyond what content says to consider who says it, how it is presented, and how different communities perceive its legitimacy.

 

Rethinking disinformation policies: three recommendations

Based on our experiment, we see three priorities for European policymakers:

  1. Adapt communication to evolving digital habits. Younger audiences are moving away from traditional platforms toward short-form video, ephemeral content, and decentralized communities. Policymakers need to meet them where they are – not only on Twitter/X or Facebook but also on TikTok, Instagram, and whatever comes next.
  2. Invest in pre-bunking and media literacy that account for identity. Pre-bunking (inoculating people against false claims before they encounter them) and media literacy campaigns remain essential. But they must be designed with sensitivity to identity dynamics. Instead of assuming fixed standards of credibility, initiatives should recognize the diversity of audiences and the ways social proximity shapes trust.
  3. Move beyond enforcement toward inclusion. Removing false content or penalizing platforms has its place, but building resilience requires engagement. That means listening to communities, respecting audience diversity, and co-creating strategies that reflect how people actually process information.

 

Conclusion: from the “what” to the “why” of trust in disinformation

Ultimately, disinformation is not just about the spread of falsehoods; it is about why people would believe them. Our experiment underscores that credibility is not fixed but constructed through a mix of content, identity, and situational cues. For EU policy, this means that safeguarding political discourse online will require more than stronger regulations and more fact-checkers. It will require a flexible, inclusive, evidence-based approach that engages directly with the human side of trust.

Only by addressing not only the what of disinformation but also the why of trust assessments can Europe strengthen resilience in its political discourse and sustain public trust in the digital age.

 

This text summarises some of the findings in the REGROUP paper “Tackling online disinformation at the institutional and societal level”.