Arbiters or Agitators? Why Perception Matters For Science Advocates (Part Two)


In a previous post, I talked about the perpetual balancing act for scientists who navigate policy and politics. (For context, I recommend reading that post first.) Several news outlets have concluded, based on a recent study, that scientists have little to lose by becoming political activists. That conclusion, however, overstates the evidence. In this post I take a deep dive into the study, which examines how scientists’ political advocacy affects public perceptions of their credibility. This post is not nearly as circumspect as the prior. Be forewarned: it’s about to get wonky in here.

The study, in brief

Does Engagement in Advocacy Hurt the Credibility of Scientists? Results from a Randomized National Survey Experiment

The study was conducted by researchers at the Center for Climate Change Communication at George Mason University. A sample of randomly selected American adults was asked to read a fictitious Facebook post from a climate scientist (“Dr. Dave Wilson”), who highlights his recent ‘interview’ with the Associated Press. The participants saw only the Facebook post, which varied by one key factor: a sentence that elaborated on the main message of the interview.

This sentence differed in its advocacy, with six levels of intensity. At one end of the spectrum, participants saw a post that merely contained a summary of an empirical finding; namely, that CO2 emissions recently reached their highest levels in recorded history. At the other end of the spectrum, the scientist advocated for specific policy positions: either reducing emissions from coal-fired power plants (the prototypical liberal position) or building more nuclear power plants—a policy approach more amenable to conservatives.

The participants were then asked to quantify their perception of the scientist’s credibility, defined by a collection of attributes like competence, expertise, trustworthiness, and sincerity. They were also asked to rate their own political ideology—very liberal, somewhat liberal, moderate, somewhat conservative, or very conservative—along with their party affiliation.

The top-level findings highlighted by the researchers are two-fold. First, increasing the degree of advocacy had no effect on perceptions of the climate scientist: his credibility did not suffer when he advocated for non-specific political action, or even when he advocated cutting back on fossil fuel emissions. Advocating for more nuclear power did hurt his credibility, though—which the researchers didn’t expect. The second major finding was that self-described conservatives held a more negative view of the climate scientist’s credibility, irrespective of his level of advocacy or the nature of said advocacy.

However, there are a few issues with the design of the study that make it difficult to apply these conclusions to the current American political context.

First Issue: Small Effect Size

As described in my last post, a few major media outlets took notice of the research and ran with its top-line findings. Here’s how The Washington Post framed the conclusion of the study:

The results were pretty surprising: When respondents were asked about the researcher’s credibility after reading the Facebook posts, none of the stances seemed to produce a significantly lower credibility rating for the scientist, except for the stance advocating nuclear power.

That statement is based on this piece of data:

[Figure: mean perceived credibility across the six message conditions. Source: Kotcher et al., 2017 (Figure 1)]

The Post goes on:

[T]he study also found that political conservatives rated Wilson as having less credibility than liberals did. But this didn’t vary depending on the stance he was taking—conservatives were just more dismissive period.

The supporting evidence? Here it is:

[Figure: perceived credibility by respondents’ political ideology. Source: Kotcher et al., 2017 (Supp. Figure 1, modified)]

For both figures, the level of credibility is on a scale of 1 to 7. The rating shown in the figure is actually a mean composite of several factors such as trustworthiness, honesty, and expertise. For example, respondents were asked to rate the scientist on a continuum that ran from “not at all trustworthy” (1) to “extremely trustworthy” (7).

Let’s first consider what a gap on a numeric credibility scale would actually mean in practice. When it comes to convincing someone of the existence of climate change, a politically meaningful credibility gap looks something like this:

A climatologist makes a statement to a liberal and a conservative about the importance of taking action to stop climate change. The liberal nods his head in general agreement, thinking “this scientist is willing to use his expertise to take important and needed action.”  The conservative, meanwhile, thinks “color me skeptical. He’s just an ideologue trying to push his own political agenda.” The liberal accepts the validity of the science, which already accords with his worldview. The conservative doesn’t embrace the science, rejecting a messenger perceived to be tied to an agenda that conflicts with his worldview. 

While admittedly an entirely hypothetical scenario, this roughly approximates what most politically active scientists would consider a qualitatively meaningful problem of credibility, with respect to accepting a controversial scientific concept.

Within that context, let’s now parse the data. Overall, it’s true that credibility did decrease when nuclear power was advocated, and that self-described conservatives perceived the scientist as less credible overall. However, the effect size is quite small. The gap between making a ‘factual statement’ versus ‘nuclear advocacy’ was 0.47 points (5.18 versus 4.71), and conservatives perceived the scientist as less credible by a mere 0.38 points compared to liberals (5.25 versus 4.87).

How to interpret this? The changes, while statistically significant, seem far too minor to reflect a qualitatively substantive shift in perception. It’s a bit of a stretch to say that a 0.38-point drop in credibility on a 7-point scale means conservatives were “dismissive” of the climate activist, especially given that all three ideological groups rated him well above the midpoint of the scale, and thus all essentially trusted the scientist.
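To put those gaps in perspective, here’s a quick back-of-the-envelope calculation. This is only a sketch: the mean ratings are the values quoted above from the study’s Figure 1, and the point is simply to express each gap as a share of the full 1-to-7 scale.

```python
# Mean credibility ratings reported in Kotcher et al. (2017), Figure 1,
# on a 1-to-7 scale (a composite of trustworthiness, honesty, expertise, etc.).
factual_statement = 5.18    # mean rating after the purely factual post
nuclear_advocacy = 4.71     # mean rating after advocating nuclear power

liberal_rating = 5.25       # mean rating from self-described liberals
conservative_rating = 4.87  # mean rating from self-described conservatives

scale_min, scale_max = 1, 7
scale_range = scale_max - scale_min  # 6 points of usable range

def gap_as_fraction(a, b):
    """Absolute gap between two means, as a share of the full scale."""
    return abs(a - b) / scale_range

advocacy_gap = gap_as_fraction(factual_statement, nuclear_advocacy)
ideology_gap = gap_as_fraction(liberal_rating, conservative_rating)

print(f"Nuclear-advocacy gap: {advocacy_gap:.1%} of the scale")
print(f"Liberal-conservative gap: {ideology_gap:.1%} of the scale")
```

Both gaps work out to less than a tenth of the scale, which is the arithmetic behind calling the effect sizes small.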

(Note that the 95% confidence intervals around each mean are narrow, indicating that the mean estimates are precise and that individual ratings didn’t vary wildly. Wider CIs would have hinted that some subset of participants perceived much larger credibility gaps. Also note that the effect size is somewhat visually exaggerated, as the y-axis does not start at zero.)
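To see why narrow intervals matter here, consider how the width of a 95% confidence interval around a mean depends on the spread of individual ratings and on sample size. The standard deviations and sample size below are hypothetical, chosen purely to illustrate the relationship:

```python
import math

def ci_half_width(std_dev, n, z=1.96):
    """Half-width of a normal-approximation 95% confidence interval
    for a mean: z * (standard deviation) / sqrt(sample size)."""
    return z * std_dev / math.sqrt(n)

# Same (hypothetical) sample size, different spread in individual ratings:
tight = ci_half_width(std_dev=1.0, n=200)  # ratings cluster together
wide = ci_half_width(std_dev=2.5, n=200)   # ratings vary a lot

print(f"Low-spread CI half-width:  {tight:.2f} points")
print(f"High-spread CI half-width: {wide:.2f} points")
```

The narrow intervals in the study’s figures, then, are consistent with most respondents rating the scientist similarly, rather than an average masking wildly divergent reactions.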

Second Issue: Sample selection

Adults were randomly selected for this study. Adjustments were made for age, gender, race, region, income, and education, in order to survey a nationally representative sample of the American population.

But how representative was the sampling, in terms of political ideology? Here are comparable Gallup numbers, which were collected during the same period as the study (2014). I’ve put them side-by-side with those of the current study:

[Table: self-reported political ideology, Gallup (2014) vs. the Kotcher et al. sample]

There are some notable differences in the ideological makeup of the two samples. The Kotcher study included more liberals and moderates, and fewer conservatives (by over ten points). This doesn’t suggest, however, that the study’s sampling was inaccurate. Rather, it reflects the fact that the studies sampled two different populations: participants in the Kotcher study were recruited using an online survey, while Gallup recruits via landlines and cell phones.

This is an important caveat: while online news consumption is on the rise, Americans still predominantly get their news from traditional, “offline” media sources—the main exception being Millennials. The decision to recruit participants entirely online means that the study findings are more reflective of an up-and-coming demographic, not the entirety of the population as it stands today.

The other issue, with respect to the underlying sample, is the decision to focus on American adults. At first blush, it may seem entirely reasonable to look at a randomized group of adults, if the goal is to measure how Americans view scientists. But then, that isn’t exactly the goal. We’re talking about the intersection of science and politics, so it’s important to control for political engagement.

Yes, as a scientist you’d prefer to know how to navigate the perceptions of every possible American. Realistically, though, science advocates only need to persuade people who actually vote. This is why political polls usually survey registered voters, and try to identify likely voters during election cycles. In the realm of politics, it really doesn’t matter what someone thinks of you, if they aren’t going to get up and do something about it.

A recent Pew report shows just how essential a role ideology plays in predicting someone’s political engagement. Here are some snippets from that study (recreated from data here):

[Charts: share of respondents who always vote, have volunteered for a campaign, or have contacted an elected official, broken down by ideological consistency]

What’s clear from these data is that ideology is a pretty good predictor of political activism: liberals and conservatives are both considerably likelier than moderates to participate in the essential facets of the political process.

But notice a key difference: in contrast to the study we’re discussing here, Pew didn’t use self-reported ideology. Instead, they created an “ideological consistency” scale. What’s the difference? The ideological consistency metric is based on a survey of policy preferences: participants were asked to respond to a series of specific policy questions, and their ideological makeup was then calculated from those answers.
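As a rough sketch of how such a scale works: each policy answer is coded toward one pole or the other, and the sum places the respondent on a consistency spectrum. The items, codings, and cutoffs below are hypothetical stand-ins for illustration, not Pew’s actual questionnaire:

```python
# One respondent's coded answers to ten hypothetical policy questions.
# Convention: -1 = liberal response, 0 = mixed/neither, +1 = conservative.
coded_answers = [-1, -1, 0, -1, 1, -1, -1, 0, -1, -1]

score = sum(coded_answers)  # ranges from -10 (consistently liberal)
                            # to +10 (consistently conservative)

def classify(score, n_items=10):
    """Bucket a summed score into a consistency category.
    Cutoffs are illustrative, chosen for this sketch."""
    if score <= -0.7 * n_items:
        return "consistently liberal"
    if score < -0.3 * n_items:
        return "mostly liberal"
    if score <= 0.3 * n_items:
        return "mixed"
    if score < 0.7 * n_items:
        return "mostly conservative"
    return "consistently conservative"

print(classify(score))  # score = -6 -> "mostly liberal"
```

The payoff is that the label comes from what the person actually believes about policy, not from whatever the word “liberal” or “conservative” happens to mean to them.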

Why is this superior to self-reporting? Try a miniature experiment: ask several self-identified liberal and conservative friends/family members/co-workers to define their liberalism or conservatism. Most likely you’ll find a very diverse range of responses, reflective of their varying levels of knowledge, intellectual upbringing, personal values, attachment to particular issues, and any number of other factors.

Getting an accurate and meaningful picture of someone’s ideology is a complex task. Judging by the Pew data, though, it’s clearly a worthwhile one. Science communication studies that examine ideology ought to leverage these more precise assessment tools. The tool needn’t even be an intricate ideological-consistency survey; something as simple as identifying primary voters would help, since primary voters are highly influential on election outcomes and tend to hold strong, preconceived political narratives that may color their views on advocacy by scientists.

In the current study, it’s possible that if the sample had been adjusted for political engagement, or measured with a finer-grained assessment of ideological disposition, perceptions of credibility might have been more strongly affected by overt advocacy.

Even if there still wasn’t an impact, the results would be more informative, and more relevant to advocates who want to persuade politically active citizens.

Finally: The framing and tone of advocacy

To get a sense of the advocacy’s tone in this study, compare the two posts at either end of the spectrum:

Dr. Wilson, fact-bringer (Kotcher et al., 2017, Supplementary Figure 1)

Dr. Wilson, advocate (Kotcher et al., 2017, Supplementary Figure 3)

Dr. Dave Wilson seems like a pretty mild, reasonable guy. His advocacy is, shall we say, temperate. But, how well does this reflect the general landscape of real-world political advocacy?

PBS/NOVA recently ran a story on the March for Science that featured Dr. James Hansen, a climatologist, professor (adjunct) at Columbia University, and a vocal advocate for climate change mitigation:

Hansen used the worldwide media attention stirred by [a controversial 2016 climate study] to renew calls for a global tax on carbon. When journalists raised the issue of his mixing science and advocacy, Hansen retorted, “This isn’t advocacy, this is what’s needed. We’re allowing fossil fuel companies to use the atmosphere as a free waste dump. If scientists don’t say it then politicians will tell you what’s needed, and that will be based upon politics rather than science.”

Dr. Hansen and “Dr. Wilson” are both advocating for a similar policy outcome. However, note Dr. Hansen’s assumptions: 1. Fossil fuel companies are selfish polluters; 2. Policy decisions by the government stem from some Manichean battle between scientists and politicians.

Looking past the specific policy prescription, the advocacy is packaged in such a way that only the most ardent like-minded people will be convinced. In other words, it isn’t persuasion; it’s choir-preaching.

This speaks to framing, an issue science communicators need to pay very close attention to.  Appropriate framing is absolutely essential to getting your message across. It means understanding the values and views of your target audience—and actually talking to them, rather than at them.

(Framing is, fortunately, starting to receive mainstream attention among science communicators, and was identified as an important area of future research by the National Academies Press; also see some nice blogging on framing here).

This last point is not a deficiency of the current study per se. The authors clearly emphasize that the experiment is a tightly controlled one, tailored to a specific form of online advocacy. However, given that the study does not reflect this important reality of real-world advocacy, some caution should be taken before applying its findings to American politics.

So what does the study mean for science advocates?


Only time will tell how a march will impact America’s perception of scientists. Source: AP Photo / Keith Srakocic, 2013

As the March for Science approaches, the debate over the proper role of science advocacy will become increasingly heated. When it’s all said and done, we can anticipate a prolonged discussion on whether the march helped the mission of science in the U.S. Ultimately, then, the study described here is just one small piece in a vastly larger assemblage of considerations for scientists.

Certainly, the study’s meaning may have been overinterpreted in some quarters. Understandably so: it’s seductively easy to get carried away chasing headlines that conform to our own preconceptions. This is especially true in the age of social media, where memes can be all-too-readily propagated. When injected into the political bloodstream, science can just as easily fall victim to the same contortions of fact that often prevail in politics. That is why even topics discussed internally among scientists and science communicators need to be guided by evidence—and imbued with a natural proclivity toward skepticism.

Scientists share a herculean task: create new knowledge, serve as empirical truth-bringers, guide the political process with evidence—all while striving to avoid personal bias and maintain societal trust. While the findings in this study should be taken with appropriate caution, these research efforts are precisely the kind that need to be undertaken to make strides in such an immense aim. And like any good study, this one raises as many questions as it answers.

This should be of comfort to science advocates, because the best antidote to incomplete science is, you guessed it: more science.
