Pondering the authority of science

This post was contributed by Piper Corp, ESA Science Policy Analyst

Who says we have to listen to scientists? When President Obama vowed in his inaugural address to “restore science to its rightful place,” what place, exactly, was he talking about? The thou-shalts and self-evident truths on which Americans base so many decisions have little to say about consulting sound science. Still, though science rarely plays a significant role in US policy, it garners a tremendous amount of respect.

John Marburger, who served as Science Advisor to the President during the George W. Bush Administration, devoted part of his keynote speech at a recent DC workshop on usable science to this question of scientific authority. Riffing on sociologist Max Weber’s three classes of authority (rational [legal], traditional [moral], and charismatic), Marburger suggested that scientific authority, having “no intrinsic authoritative value,” is fundamentally charismatic. According to Weber, charisma is:

a certain quality of an individual personality, by virtue of which he is set apart from ordinary men and treated as endowed with supernatural, superhuman, or at least specifically exceptional powers or qualities. These are such as are not accessible to the ordinary person, but are regarded as of divine origin or as exemplary, and on the basis of them the individual concerned is treated as a leader.

Indeed, science achieves a level of objectivity and reliability far beyond that of everyday reasoning. It carries with it the promise of a methodical and repeatable process and, as such, integrity. The result, though, is that in public culture, science is primarily a pathway to facts. Scientific expertise, in other words, has been reduced to the results section. But is the scientific process entirely devoid of values and subjectivity? Not at all. While we’ve come to define rigorous science by the mechanisms used to ensure impartiality (peer review, quantitative and statistical analyses), even the most punctilious researcher must make decisions based on values: what to study, how to study it, and how to talk about it.

Who has the authority to make these decisions? The intuitive answer is, of course, the scientist, and when the goal of research is to advance knowledge within a particular field, there is no one more apt for the task. But a great deal of research—including basic research—seeks to build knowledge that is useful to society. And this is where scientific expertise reaches its limits: usable science is as dependent on the user as it is on the scientist.

So what exactly is usable science? At first glance, the term suggests a shift in focus from questions to answers, leading many to dismiss it as applied research, simply rebranded. But as the workshop’s expert panel pointed out, it’s not a matter of presenting results in a useful manner or applying science to answer specific questions; it’s a matter of making choices about research, process, and project design in a way that is mindful of what decision makers need to know. Put another way:

Scientific research inevitably leads to more questions, expanding the possibilities for research. But the progress of knowledge within a particular scientific discipline (such as hydrology or ecology) is not necessarily linked to real-world problems (such as drought or species loss). For example, an incremental advance in the skill of a groundwater model may be of interest to hydrologists in the field; but that advance may not translate into any additional utility for water managers and others dealing with water scarcity issues. Producing science for decision making requires recognizing the differences between supporting research valued by the discipline itself, and supporting research for the purpose of solving a particular problem.

(from Usable Science: A Handbook for Science Policy Decision Makers, a new publication featured at the workshop)

Usable science is as much about science informed by policy needs as it is about policy informed by science. Stakeholders, then, become co-authors of a sort, providing their own kind of expertise on how science will be metabolized into public knowledge.

There is a danger here, of course. Blending these different forms of expertise can erode scientific integrity, resulting in findings or responses that are no longer empirically justified but nevertheless retain their authority. Marburger discussed several instances from his time in Washington, DC, when science was used to rationalize a course of action that decision makers intuitively believed to be right. The climate debate is an excellent current example, with both sides focusing on the scientific consensus or a perceived lack thereof. Another risk, losing necessary complexity when communicating science to a non-expert audience, is equally problematic. Ecosystem service markets, though critical to sustainability efforts, can impose a false appearance of linearity on ecosystems, a simplification that could present a significant obstacle to conveying ecological complexity to the public. Elsewhere in the sciences, findings from research into the workings of the mind are appearing in popular publications as disconcertingly conclusive.

Still, when efforts to address national and global challenges could so clearly benefit from scientific input, usable science demands the community’s attention and commitment. So as a growing number of scientists vie for a seat at the policy table, perhaps they should also pull up a chair for key stakeholders at theirs.