8  Conclusion

It has long been a puzzle for the deficit model–which holds that trust in science is primarily driven by science knowledge–that knowledge of science is at best weakly associated with science attitudes (Allum et al. 2008; National Academies of Sciences, Engineering, and Medicine 2016). The rational impression account can make sense of this: it lays out how trusting science without recalling specific knowledge can result from a sound inference process, rooted in basic cognitive mechanisms of information evaluation.

The account is compatible with the finding that education, and science education in particular, has been repeatedly identified as one of the strongest correlates of trust in science (Bak 2001; Noy and O’Brien 2019; Wellcome Global Monitor 2018, 2020; but see Cologna et al. 2025). In contrast with the deficit model, it suggests that the main causal role of education for public trust in science is not the transmission of knowledge and understanding, but the generation of impressions.

The rational impression account aligns with recent normative accounts that shift the focus from listing particular key institutional features that make science trustworthy–certain methods, norms, or processes–to the diversity of science: Cartwright et al. (2022) make the case that scientific knowledge emerges from a tangle of results, relying on diverse research methods. Oreskes (2019) makes a similar case: she argues that scientific practice takes place in different scientific communities that rely on a variety of research methods. Through some shared practices, in particular peer review, these communities engage in critical dialogue. What makes scientific knowledge trustworthy, according to Oreskes, is that a consensus emerges from this diversity of actors and methods. On this view, to infer trustworthiness, people should have a representation of the diversity of science. The rational impression account is, in a way, less strict: it does not require a representation of diversity. However, for inferences from convergence to be sound, people do need a representation of science as an institution of independent thinkers.
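Why independence matters here can be made explicit in a minimal Bayesian sketch–my own illustration, not a formalism proposed by any of these accounts. Suppose $n$ scientists each report evidence $E_i$ bearing on a hypothesis $H$. If the reports are conditionally independent given the truth or falsity of $H$, the posterior odds factorize:

\[
\frac{P(H \mid E_1, \dots, E_n)}{P(\neg H \mid E_1, \dots, E_n)}
= \frac{P(H)}{P(\neg H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)},
\]

so each concurring report multiplies the odds by its own likelihood ratio. If the reports instead merely echo a single shared source, the product collapses to one factor, and an apparent consensus carries little more weight than a single voice. This is why the soundness of inferences from convergence hinges on representing scientists as independent thinkers.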

The studies presented in this thesis face several limitations, which are detailed in the individual chapters. Here, I will point out some more general, theoretical limitations of the rational impression account.

A first limitation concerns scope: the rational impression account is a micro-level model. It fits with a sociological literature investigating how “individual cognition and practice establish and maintain institutional fields and status hierarchies, especially in the face of imperfect knowledge” (Gauchat and Andrews 2018, 569), and it should be seen as complementing, not competing with, accounts of the macro-level processes that shape public trust in science. Sociological macro-level accounts have made the case that trust in science is entangled with broader cultural and political dynamics. These accounts, like the individual-level accounts reviewed above, tend to focus on explaining distrust in science. For example, Gauchat (2011) describes the ‘alienation model’, according to which the “public disassociation with science is a symptom of a general disenchantment with late modernity, mainly, the limitations associated with codified expertise, rational bureaucracy, and institutional authority” (Gauchat 2011, 2). This explanation builds on the work of social theorists (Habermas 1989; Beck 1992; Giddens 1991; see Gauchat 2011 for an overview) who suggested that a modern, complex world increasingly requires expertise, and thus gives rise to institutions of knowledge elites. People who are not part of these institutions experience a lack of agency, resulting in a feeling of alienation. Similarly, Gauchat (2023) argues that the politicization of science in the US needs to be seen in its broader cultural context. Specifically, according to Gauchat, science has enabled the authority of the modern regulatory state. Consequently, conservative distrust of science reflects deeper structural tensions with the institutions and rational–legal authority of modern governance. At the micro-level, this is consistent with research showing that right-wing authoritarian ideology is associated with distrust of science and scientists (Kerr and Wilson 2021).

Another limitation of the rational impression account is that it assumes people have a representation of science as consensual. In practice, however–with perhaps some exceptions, such as during the Covid-19 pandemic–most people do not literally compare the opinions of different scientists for themselves and conclude that something is largely consensual. Where, then, could the representation of consensus come from? A plausible explanation, I believe, is that education fosters it: during education, in particular early education, knowledge is typically presented as simply the result of science, as if science were a unanimous enterprise that produces knowledge. School books rarely cover historical scientific controversies, convey uncertainty around scientific findings, or present cutting-edge research, where disagreement is the norm. This could induce a default consensus assumption in people’s perceptions of science. This argument is, of course, speculative.

The rational impression account is also limited in its implications. First, I do not believe that flooding people with impressive, consensual science knowledge is the key to overcoming all distrust in science. In the context of trust in political institutions, research has shown that trust and distrust are not necessarily symmetrical: what causes the former might not alleviate the latter (Bertsou 2019). I believe this holds, at least to some degree, for science too. Especially in the Global North, where essentially everyone has been exposed to science through basic science education, trust in science via the mechanisms of the rational impression account is likely the default state. This is in line with our findings on quasi-universal trust in basic science in the US (Chapter 6). Consensus messaging has been shown to help convince people to trust science on particular issues, such as climate change or vaccines, but it is less clear whether it could also enhance perceptions of trustworthiness more generally. People convinced by consensus messages may have already trusted science in general, without holding strong opinions on the specific matter, as has been argued to be the case for a large segment of the public on most matters (Bourdieu 1979; Zaller 1992). For people who do not merely lack trust but actively distrust science, motivated reasoning accounts are likely the better-suited theoretical framework. Addressing the relevant underlying motivations directly might do more to mitigate distrust in science than exposing people to consensual science in general.

Second, and relatedly, proposing an account by which trust in science can be rational does not mean that, conversely, all distrust in science is irrational. Some groups of people do in fact have good reasons not to trust science. For example, some science has historically contributed to fostering racism (see e.g. Fuentes 2023; Nobles et al. 2022), through instances such as the infamous Tuskegee syphilis study (Brandt 1978; Scharff et al. 2010).

Third, I do not think that science communication should stress consensus at all costs. In the rational impression account, consensus plays a central role in generating trust. However, this should not incentivize science communicators to neglect transparency about uncertainty. Acknowledging uncertainty in science communication has been argued to be crucial for fostering long-term trust in science (Druckman 2015). For example, in the context of Covid-19 vaccines, Petersen et al. (2021) have shown that communicating uncertainty is crucial for building long-term trust in health authorities.

Fourth, science communication should not aim for impressiveness at all costs either. Research has shown that intellectual humility can increase trust in scientists (Koetke et al. 2024); trying to oversell scientific results might therefore backfire. People appear to value transparency via open data practices in science (Song, Markowitz, and Taylor 2022), and to trust science more when it replicates (Hendriks, Kienhues, and Bromme 2020). I would therefore expect that simply doing better, more transparent science, and being humble about it, is likely the most effective strategy to impress the public and elicit perceptions of trustworthiness.

Fifth, educators should not stop aiming to foster a proper understanding of science. Most students might not understand all of the content, or recall much specific knowledge later on, but for some students at least, some of that knowledge will stick and prove important in their lives. Moreover, to be impressive, a piece of information does not need to be confusingly complex. In fact, a proper understanding of research findings and their methods might even help in appreciating their complexity–even if, once again, that understanding is later forgotten.

Despite these caveats, I believe that the rational impression account offers grounds for optimism for studies of science–society interfaces, and for the field of science communication in particular: exposure to science, especially exposure that leaves an impression, might be the foundation of public trust in science. Low levels of scientific literacy should not discourage education and communication efforts, as such levels are not necessarily a good indicator of the value these efforts add in fostering trust in science.

Taking a broader perspective, our account fits into a picture of humans as not gullible (Mercier 2017, 2020). The “failure” of the deficit model–that is, the fact that science knowledge appears not to be strongly associated with trust in science–might be taken to suggest that public trust in science is, to a large extent, irrational. The notion that trust in science is irrational or easily granted may amplify concerns about the impact of misinformation: if trust lacks a solid, rational foundation, we would expect misinformation to easily lead people astray. Much work remains to be done to understand how misinformation impacts people’s beliefs, in particular elite-driven misinformation and more subtle forms such as one-sided reporting. But in Chapter 7, we show that people are generally able to distinguish between true and false news and that, if anything, they tend to be skeptical of news in general. As a consequence, for a better-informed public, fighting for (true) information seems at least as relevant as fighting against misinformation. Misinformation researchers increasingly acknowledge this: a recent report on science misinformation (National Academies of Sciences, Engineering, and Medicine 2024) dedicates considerable space to strategies for producing better information, for example by promoting high-quality science, health, and medical journalism.

The rational impression account stresses the role of fighting for information when it comes to fostering trust in science. Well-placed trust in science does not require profound understanding or recall of specific knowledge, but it does require exposure to good science.