There was another one of those “we’re living in echo chambers” papers recently. It’s from the same data on science and conspiracy theory social media that triggered a wave of “Facebook is making us dumber” news stories early in 2016.
I think this genre is fueling nihilism. Our perspective is at risk if we put too much weight on one-dimensional fragments of a complex picture.
We’re not all part of just one single social constellation, for starters. Echo chambers aren’t necessarily silos containing only true believers. And those driving the extremes of polarization aren’t the only influence at work.
We’re being swamped with stories about widespread belief in conspiracy theories, too. Increased attention doesn’t always correspond to an increase in a phenomenon, though.
As far as we know, Uscinski and Parent concluded, conspiracy thinking hasn’t been growing in the United States – it might even have declined since the Cold War, despite a possible uptick in the Watergate and Iran-Contra era. It’s not worse on “the right” than on “the left”. Partisans and communities are vulnerable to different theories, though, partly depending on who they think of as “us” and “them”, and which way power tilts on an issue for them. What looks logical depends on where you’re standing.
You imagine more in the shadows when you’re fearful or at a disadvantage. Anxiety thrives in vulnerability. Marius Raab argues that conspiracy theories are narratives that attempt to make sense of the world:
There are parallels between conspiratorial thinking and scientific reasoning: In both cases, we try to look behind the things and into them.
Conspiracy theorizing involves investigating details and making arguments, using the jargon and other aspects of academic work. Theories are constructed out of many bits and pieces, with solid facts, unreliable information, and conjecture linked by motivated reasoning. (More about motivated reasoning here.)
It’s the ratio of reliable knowledge to conjecture and unreliable information that determines how well- or poorly-based a conspiracy theory is.
Poorly-based conspiracy theories have a lot in common with bad science. Bad science, too, is built on unreliable data and explanatory stories arrived at by motivated reasoning. And bad science simultaneously provides fuel for many conspiracy theories and erodes trust in science and scientists.
I recently wrote a short post about virtuous spirals of trust being needed to counteract this vicious cycle. In our age of mass personal communication in particular, I argued, we’re all raindrops that contribute to the flood, with the potential to help or make it worse.
A comment on the post challenged this position, asking me, where’s the proof this isn’t futile? Here’s an excerpt:
…I feel that we have heard it all before and this approach is not working…It seems that as long as individuals feel that their beliefs are being challenged by my own beliefs, I have come to realize that I cannot erase the misinformation that forms the foundation of that person’s beliefs. This is true regardless of the level of trust between me and another individual or the level of respect and empathy that I consciously make an effort to express…
Ultimately, I think I am personally tired of being blamed for not trying hard enough to be a positive ambassador for science…I think what we need is something bigger than individual scientists acting in a responsible and engaged manner… (Bamini, 10 December 2016)
I think Bamini’s frustration echoes a lot of people’s thoughts. There’s not much instant gratification in this arena. There’s no slam-dunk evidence for an argumentative technique anyone can use to trigger a Damascus-road experience, because no such thing exists: changing minds is not that simple. And while respect is essential to maintain relationships and be listened to, I don’t think pretending respect you don’t really feel fools people.
On top of that, we are all prone to what’s been called conformist bias [PDF]. We can’t verify everything on every issue ourselves: we rely on a division of cognitive labor, choosing who to trust on what issue. We know others are susceptible to conformist bias too. So if you’re from a group that people are suspicious of on an issue, you’re pushing uphill.
Then why do I believe individuals’ actions matter so much anyway? I’ll start by fleshing out the social capital model of influence I wrote about, with an example from my own experience.
I used this sketch of bridging and bonding social capital, where each dot represents a person, with connections within and between social groups. These two types of capital are like social glue. (There’s a glossary of social capital jargon here.)
The people who are red dots are an important social asset – they create bridging capital between groups. They are points at which culture and trust can be shared and exchanged socially. That makes them both critical for social cohesion and conduits for social change. They have connections with people from other groups, or they might be members of more than one group, too, moving between cultures with ease – a foot in both camps, if you like.
My example involves organized community groups, journalists, scientists, and divisive scientific controversies. If there is a major public science controversy, then community groups and social movements will be active – they may even have started it. Think of anti-vaccination movements since the 19th century, or Greenpeace sparking off the anti-GMO movement in 1996. In other examples, other kinds of social groups play a big role, like faith communities.
Once clashing forces have entrenched a scientific controversy out in the non-scientific community, studies alone don’t typically resolve it. It’s a social problem now. New studies and interpretations usually just keep fueling the controversy. A major scientific controversy in the community has become a power struggle. If it’s politicized, then it’s trench warfare.
If a controversy is going to subside before it’s obsolete, it needs a power shift: either sufficient weight drains out of one side, or enough builds up the other, moving the center of gravity of opinion. Everyone doesn’t have to change their mind for social change to occur. Take for example community opinion in the U.S. over the war in Vietnam in the 1970s: opposition grew, but it wasn’t society-wide [PDF].
Sociologist Brian Martin, in his 1991 analysis of the water fluoridation debate, looked at examples where scientific controversies have more or less resolved in the past. Resolution was likely to be because of:
- the intervention of a player powerful enough to forcefully tilt the scales;
- new groups or influential individuals entering the fray that have enough weight to shift the balance; and/or
- the conversion of influential individuals who help drag community weight with them when they change their position.
In my first example, I’m one of those people who shifted sides, and pulled others along.
I left school at 16 – never had any particular interest in science or mathematics. I settled into health and women’s activism alongside motherhood in my early 20s. Here’s me, arguing an anti-establishment position on evening primetime TV in Australia:
I was involved in several groups, some mainstream, but also more than toe-deep in some pretty fringe issues. And I was the elected head of the national home birth organization.
But a few years after that TV debate, I was arguing from a very different position, and a different interpretation of the same data. And I was sitting at the table in the Australian equivalent of the NIH pushing for guidelines to effectively rein in home birth practice. Enough of my community came with me to enable both those guidelines and other policy changes to occur, and then have an impact.
Here’s my transition from community activism then, to working at a science institution now, plotted on that social capital model.
I didn’t change deeply held beliefs because someone convinced me in one discussion, or even a few. It was a process over years. The scientists and others who influenced me weren’t cheerleaders for the establishment. They were critical of weak research and arguments, regardless of whose interests it served. And they didn’t just expect people like me to believe them because they were experts. They wanted to increase the expertise of others in scientific thinking, especially community leaders.
I had been defending my community’s side in a heated controversy for years. When I changed position, it was a major upheaval. But I was still a member of the community, with many long and very deep ties. It was a two-way street – a lot of reciprocity (another form of social capital).
Being an insider made it harder for people to dismiss me, although some saw me as a traitor and cast me as one of “them” now. Being well-known and still in their midst made it harder for that to gain traction.
Shifting the policy of an impassioned community on an existential issue was tough work. Fissures open up quickly once there’s factional in-fighting. It took a constant and intense stream of talking, strategizing, cajoling, arguing, writing, traveling, meetings, and media work. And it took many people throughout the network to be convinced, and then thrash through it with others in their spheres of influence.
That’s a hard process to engineer deliberately. Jeremy Grimshaw and colleagues argue, in the context of using opinion leaders to change healthcare practice, that there might be four prerequisites to success:
Firstly, there must be effective interpersonal communication networks. Secondly peer influence must work amongst professional groups. Thirdly, opinion leaders must be readily identifiable. And finally, the leaders must be inclined to adopt changes based on evidence, so that they can honestly influence others.
Outside of professional contexts, the situation is more fluid. Influencing influential advocates might not have as dramatic effects as in my example, but it’s still an important strategy. At the recent Academy Health/NIH conference on dissemination and implementation, there were two presentations in particular that pointed to the importance of advocates and to relationships of trust.
Alexandra Morshad discussed results of a survey of cancer advocates and legislators. Both advocates and legislators assigned more weight to research coming from “someone I know or respect” than having the results support the position they hold. Trust was given more weight by legislators under 50 (a mean of 4.4 out of a possible high of 5). And legislators placed high trust in the evidence within internal legislative bureaus.
Itzhak Yanovitsky and his colleagues analyzed the evidence claims in 1,360 U.S. Congressional Bills and 2,103 Hearings. He spoke about the importance of go-betweens:
Through their information behavior (e.g. sharing, withholding, filtering, and interpreting information), influence (reputation, authority), and carrying capacity, knowledge brokers dynamically regulate the scope, nature, and speed in which policy relevant information travels.
It’s hard to study how decisions really get made, Yanovitsky said, because it’s open and messy: “It’s an ecosystem… There is a social dimension of information use, of trust, and negotiation”. Information is used tactically, he said, and timing matters – so there need to be standing relationships.
Most people aren’t so heavily invested in the subject of every controversy, though, and social influence mobilizes and spreads in all sorts of ways. Adam Pearson and Jonathon Schuldt write that social influence can have an impact, even on a highly politicized issue [PDF]:
In a study of Midwestern U.S. voters, Romero-Canyas, Gaby, Silver, and Schneider (2015) found that exposure to a Republican elected official who endorsed acceptance of anthropogenic climate change increased acceptance among Republican voters. Moreover, this change in belief came at no reputational cost to the messenger: participants viewed the politician equally positively whether or not they received the climate change message. Thus, perceptions of others’ climate beliefs can sway personal beliefs even when they conflict with deeply held partisan views.
Meta-beliefs about the views of scientists can similarly override partisan views. In a national survey experiment, Van der Linden, Leiserowitz, Feinberg, and Maibach (2015) found that exposure to accurate information about the scientific consensus on human-caused climate change increased both Democrats’ and Republicans’ estimates of consensus, which predicted key beliefs about climate change and led to increased support for public action. Thus, far from being static, ideological beliefs – including those related to climate change – appear to be highly sensitive to our perceptions of others’ beliefs.
It’s one of the reasons I argued in my “Antidote” post that it’s important to let people know your position. The sheer doggedness of the most extreme and polarizing people in social networks leads to a spiral of silence of more moderate views. That gives people the wrong impression of what others’ beliefs are [PDF].
People can be strongly influenced by people they know, although not necessarily on every topic. The “61 million person experiment”, a randomized trial of using Facebook to get people to vote on election day, found people were more strongly influenced by people close to them. Another experiment found getting the same message from multiple people close to you in a network had an impact.
Of course, there’s no guarantee that for every person, being more careful to grow trust, and speaking up, will ever have a meaningful impact. Even if it does have ripple effects, you might not know it. To me, though, it’s a bit like that Joel Pett climate change cartoon where someone says: “What if it’s a big hoax and we create a better world for nothing?” Being more credible has its own rewards.
More Absolutely Maybe on these themes:
And I wrote about the history of consumer/community groups and social movements in health in the International Journal of Technology Assessment in Health Care (1998).
[Update on 6 January]: I had forgotten to include the 61-million person experiment, and the only other relevant experimental study I found: second-to-last paragraph added.
Photos of me on TV are of Terry Willesee Tonight (TWT) late in 1987, the major primetime current affairs show on a commercial network in Australia at the time. The name of the “Establishment” representative I was debating escapes me at the moment. (For Aussies, this was the show that had Mike Carlton’s news review puppets – Terry Willesee was replaced in that time slot the next year by Derryn Hinch.)
Logic Lane is in Oxford, England. I took this photo in 2016.
* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.