1  Summary

Recognizing and addressing some of the most pressing challenges we face as a society, including global health and climate change, requires trust in science. Philosophers of science have argued that people should trust science for its epistemic qualities, that is, its capacity to produce accurate knowledge. Under this premise, the literature on public understanding of science has long sought to explain people’s trust in science by their knowledge of it, with sobering results: while people do tend to trust science, they do not tend to know much about it. If it is not grounded in knowledge, is public trust in science mostly irrational? In this thesis, I argue that it need not be. Taking a cognitive perspective, this thesis aims to explain the foundations of trust in science at the micro-level. I develop a ‘rational impression’ account of trust in science, according to which people do not need to understand or remember much about science to trust it. The account builds on two basic cognitive mechanisms of information evaluation. First, when someone finds out something that is hard to know, and we believe it to be true, we tend to be impressed. This impression leads us to infer that the person is competent, a crucial component of trustworthiness. Second, when many people agree on something, we tend to infer that it is likely to be true, and that those who agree are competent. These inferences from consensus are particularly relevant in the context of science, where most people lack the background knowledge needed to evaluate claims for themselves. Scientists agree on hard-to-know findings such as the size of the Milky Way or the atomic structure of DNA. Although most people neither understand how scientists arrived at these findings nor remember their details, the consensus provides good reasons to trust the scientists. This account underlines the critical role of education and science communication in fostering trust in science.

This thesis is structured as follows: Chapter 2 lays out the motivation for this thesis, summarizes the rational impression account, and situates it in the literature on trust in science.

In Chapter 3 and Chapter 4, we lay out the foundations of the rational impression account of public trust in science. In Chapter 3, we show that exposure to impressive science increases people’s trust in science, but that people almost immediately forget much of the content that generated this impression. In Chapter 4, we show that in non-science contexts, where participants were deprived of relevant background knowledge, they inferred that informants (individuals providing answers to some question) who agreed more with each other had more accurate answers and were more competent. Using simulations and analytical arguments, we argue that these inferences from convergence (the extent to which informants agree on a piece of information, the most extreme form of which is consensus) are rational under a wide range of parameters, provided that the informants are independent and unbiased. Participants were sensitive to these conditions: when given cues that the informants might be biased, their inferences from convergence weakened.
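The rationale behind inferences from convergence can be illustrated with a minimal simulation. This is a hypothetical sketch, not the simulations reported in Chapter 4, and all parameter values are assumptions: independent, unbiased informants report the true value plus noise, and groups differ in competence (noise level). Because low observed disagreement is evidence of low noise, more convergent groups should, on average, land closer to the truth.

```python
import random
import statistics

# Hypothetical sketch (illustrative only; not the thesis's actual code).
# Each group of informants shares an unknown competence level (noise).
# An observer sees only the answers, and can use their convergence
# (low spread) as evidence that the group's average is accurate.

random.seed(1)

TRUE_VALUE = 100.0   # the quantity informants try to report
N_INFORMANTS = 5     # informants per group
N_GROUPS = 2000      # simulated groups

groups = []
for _ in range(N_GROUPS):
    noise = random.uniform(1.0, 30.0)  # group competence, unknown to the observer
    answers = [random.gauss(TRUE_VALUE, noise) for _ in range(N_INFORMANTS)]
    convergence = statistics.stdev(answers)             # low value = high agreement
    error = abs(statistics.mean(answers) - TRUE_VALUE)  # distance from the truth
    groups.append((convergence, error))

# Median split on convergence: compare the mean error of the most
# convergent half of the groups against the least convergent half.
groups.sort(key=lambda g: g[0])
half = N_GROUPS // 2
error_convergent = statistics.mean(e for _, e in groups[:half])
error_divergent = statistics.mean(e for _, e in groups[half:])

print(error_convergent < error_divergent)
```

Note that the inference only works because competence varies and informants are independent and unbiased; if all groups shared the same noise level, or informants copied one another, agreement would carry little or no information about accuracy, which is consistent with participants discounting convergence when given cues of bias.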

In Chapter 5 and Chapter 6, we test two predictions that follow from the rational impression account. In Chapter 5, we find that, in a representative sample of the French population, trust in science (both within and between disciplines) was associated with perceptions of consensus and precision: the more precise and consensual people perceived science to be, the more they tended to trust it. According to the rational impression account, people who have received a science education should have had the opportunity to form impressions of science’s trustworthiness, building a solid baseline of trust in science. In line with this prediction, in Chapter 6, we show that, in the US, almost everyone trusted most basic science knowledge (e.g. that electrons are smaller than atoms), including people who said they did not trust science in general or who held beliefs blatantly violating scientific knowledge (e.g. that the earth is flat). This finding also bears on distrust in science: since trust in basic science knowledge is nearly at ceiling, those who nevertheless report not trusting science are likely driven by specific, partial rejections of science (e.g. climate change denial). Given the overwhelming trust in basic science, these rejections likely stem from motivations exogenous to science, lending support to motivated reasoning accounts of science rejection.

The rational impression account of trust in science builds on the hypothesis that people tend to be good at evaluating information, relying on mechanisms of epistemic vigilance. In Chapter 7, I explore another consequence of this hypothesis: people should also be good at judging the veracity of news. In a meta-analysis, we found that this was largely the case: people around the world were generally able to distinguish true from false news. When they erred, they were slightly more skeptical of true news than they were gullible towards false news. We do not conclude from these results that all misinformation is harmless, but rather that people do not simply believe all the misinformation they encounter; if anything, they err in the opposite direction, disbelieving even accurate information. Based on this, we argue that if we are concerned about an informed public, we should focus not only on fighting against misinformation, but also on fighting for accurate information.

In Chapter 8, I discuss the limitations of this thesis. I argue that the broader picture emerging from the evidence presented here is that people come to trust science, and information more generally, through mechanisms of information evaluation that are, on average, sound, and that work in most cases.