Boomerangs and politics
Posted: Sat Apr 27, 2013 1:05 pm
Warning: long and self-indulgent. Writing is my way of organizing and clarifying my thoughts on a subject, so it helps me even if no one else wants to take the time to read the entire thing. But I’ll go ahead and share it just in case someone is currently interested in the topic as well and has the time to read it. My feelings won’t be hurt if no one has that kind of time.
My latest interaction with Droopy and ldsfaq in the off-topic forum regarding the former conservative opposition to the landmark Civil Rights Act of 1964 triggered a desire to learn more about the psychology of ideology and motivated reasoning. Once again, the information I presented made absolutely no difference in their position – even when I went out of my way to find sources that should have been acceptable to them – ie, fellow conservatives. The most mind-boggling event in that interaction was when Droopy completely ignored a conservative source from Claremont that I heavily cited and praised, and then, a few days later on the exact same thread, turned around and cited and praised the same source, while declaring that a liberal like me would never think of using such a reasoned source. It was one of the most stunning and unsettling moments I’ve ever had in trying to dialogue with someone of opposing views. Even more stunning was Droopy’s subsequent inability to see that there was something wrong in what he had done, that it signaled a need to change his approach (ie, actually read his opponent’s comments).
This reminded me, of course, of the studies done on motivated reasoning, to which all human beings are susceptible. Gary Marcus, the author of a book I’d previously read and enjoyed on the subject (Kluge: The Haphazard Evolution of the Human Mind), wrote a concise summary of the topic for the New Yorker. Because it’s relatively short, I’m going to share the entire article.
http://www.newyorker.com/online/blogs/e ... ology.html
On a four-point scale, from one (strongly disagree) to four (strongly agree), please rate the following statements: “The Apollo moon landings never happened and were staged in a Hollywood film studio”; “Princess Diana’s death was not an accident but rather an organized assassination by members of the British Royal Family who disliked her”; “The Coca-Cola Company intentionally changed to an inferior formula with the intent of driving up demand for their classic product, later reintroducing it for their financial gain”; and “Carbon dioxide emissions resulting from human activities cause climate change.”
Questions like those formed the core of one of the most intriguing studies I have seen in a long time, a brand-new study, just published in Psychological Science, that investigated the dynamics of science doubters. The Australian psychologist Stephan Lewandowsky and two collaborators surveyed over a thousand visitors to online climate blogs (all relatively positive toward science), and asked them questions about free-market ideology and their views on climate science. The investigators also probed for their “conspiracist ideation” by asking questions like the ones above about faked Apollo moon landings and the assassination of Princess Diana. Some subjects were eliminated because they appear to have lied about their age (it is doubtful that anyone under five completed the survey, for instance), and as a precaution, to prevent ballot-box stuffing, the experimenters also eliminated answers where more than one response came from a single I.P. address.
In principle, you could imagine that people’s answers to these questions might be logically independent. One could be a conspiracy theorist about Coca-Cola without having any particular views about climate change, or vice versa. And indeed, some subjects really did believe in climate change even as they doubted the intentions of the sugar-water company from Atlanta, and vice versa.
But, over all, the trends were clear. The more people believed in free-market ideology, the less they believed in climate science; the more they accepted science in general, the more they accepted the conclusions of climate science; and the more likely they were to be conspiracy theorists, the less likely they were to believe in climate science.
These results fit in with a longer literature on what has come to be known as “motivated reasoning.” Other things being equal, people tend to believe what they want to believe, and to disbelieve new information that might challenge them. The classic study for this came in the nineteen-sixties, shortly after the first Surgeon General’s report on smoking and lung cancer, which suggested that smoking appeared to cause lung cancer. A careful survey revealed that (surprise!) smokers were less persuaded than nonsmokers were. Nonsmokers believed what the Surgeon General had to say. Smokers heaped on the counterarguments: “many smokers live a long time” (true, but ignores the statistical evidence), “lots of things are hazardous” (a red herring), “smoking is better than being a nervous wreck,” and so forth, piling red herrings on top of unsupported assumptions. Other research has shown a polarization effect: bring a bunch of climate change doubters into a room together, and they will leave the room even more skeptical than before, more confident and more extreme in their views.
There may be some evolutionary advantage to having minds that reason in this way, bobbing and weaving and often avoiding the truth, but elsewhere, in my book “Kluge: The Haphazard Evolution of the Human Mind,” I have speculated that it is more bug than feature—a neural glitch of how our memories are retrieved (mainly by finding matches to retrieval queries, which leads to confirmation bias, rather than through more systematic searches that might reveal disconfirming evidence that could potentially challenge one’s beliefs). A parallel phenomenon can contaminate our ability to listen to others; we tend to dismiss that which challenges our beliefs, while accepting confirming evidence. Cass Sunstein, of “Nudge” fame, has an interesting new technical paper on this.
Given that we live in a country in which the theory of evolution—one of the most powerful theories in all of science—is routinely dismissed, and one in which climate-change experts have struggled for years to persuade the public that there is a clear and present danger despite reams of data supporting them, serious investigations into the logic of crowds in real-world situations may represent an important step forward in understanding how to reason with less-than-reasonable masses.
I found another excellent article on the subject called “Boomerang Effect in Science Communication: How Motivated Reasoning and Identity Cues Amplify Opinion Polarization About Climate Mitigation Policies,” by P. Sol Hart and Erik C. Nisbet.
The authors begin by explaining that the traditional model for science communication is the deficit model: ie, people have a deficit of information leading them to adhere to erroneous ideas, and that deficit can be corrected by the presentation of additional information from reliable sources. But clearly this model hasn’t worked. Despite years of attempts at providing more information from reliable sources (scientists), the polarization over climate change has actually increased. The introspection on why this is so has resulted in some truly interesting ideas regarding human reasoning.
I admit that I also adhere to the deficit model when interacting with others about controversial topics. This is especially clear on the internet, where we are often interacting with other people for a specific reason: Mormonism, religious beliefs, or politics. I think that is why the internet has such a wild west flavor. Whereas in real life, we interact with people around us for a multitude of reasons (coworkers, family, friends, etc), on the internet it’s often with one specific purpose that involves targeted information or ideas. So there is no tempering effect on communications (and, of course, faceless anonymity allows people to be more blunt and sometimes rude).
But I know by now that the deficit model doesn’t work. I learned this long ago with true believing Mormons, and forums such as this one make it clear that it also doesn’t work with political or policy discussions.
So I know it doesn’t work, but why doesn’t it work? How do some people so effectively disregard factual information that, for all intents and purposes, they might as well not even read it? Here are some quotes from the Boomerang article that help clarify.
http://www.climateaccess.org/sites/defa ... cation.pdf
The proposition that partisan audiences are motivated to interpret and process information in a biased manner that reinforces their predispositions is termed motivated reasoning and has been found to operate across a wide range of contexts (Kunda, 1990; Taber & Lodge, 2006). Though scholars often point to self-selection into partisan information sources (Bennett & Iyengar, 2008) as a reason for issue polarization, Mutz (2008) asserts exposure to any information, regardless of the source, about contentious issues such as climate change is likely to activate political predispositions and increase issue polarization due to motivated reasoning among audiences. Thus, we expect that audiences with strong partisan beliefs may interpret the same message about a controversial scientific issue in very different manners, reinforcing their preexisting beliefs and increasing public polarization rather than promoting the consensus postulated by the deficit model.
One factor that may interact with motivated reasoning is the identity of who will be affected by a scientific issue or policy. For example, newspaper stories often focus on different population groups that will be affected by climate change, such as communities in the Mekong Delta (Mydans, 2009), the Himalayas (Chhibber & Schild, 2009), and the United States (Broder, 2009). These stories each focus on how climate change may affect different groups of potential victims with various types of social identity cues embedded in messages (i.e., location, pictures, names, headlines, etc.) possibly influencing the degree of social identification between the reader and victims/exemplars highlighted within the message (Kogut & Ritov, 2005a, 2005b; Slovic, 2007). As will be explained below, how individuals respond to these identity cues may depend, in part, on individual predispositions such as political partisanship. We argue that the degree of identification with groups featured in a message or story can play a role in whether individuals are willing to help those in need and what problem responses they may support (Cuddy, Rock, & Norton, 2007).
Scholarship in political psychology and public opinion have long recognized the important role that group identification and social identity cues may play in shaping public opinion and policy preferences, especially when made salient in messages (Converse, 1964; Nelson & Kinder, 1996; Sniderman, Brody, & Tetlock, 1991). Most citizens are cognitive misers with limited resources for decision making about complex policy issues and often rely on a range of heuristic devices or mental shortcuts to make decisions about policy preferences (Downs, 1957; Popkin, 1991; Sniderman et al., 1991). Group-centric decision making has been demonstrated across a range of policy contexts and groups (e.g., Brewer, 2003; Hutchings & Valentino, 2004; Kinder & Sanders, 1996; Kinder & Winter, 2001; Nelson & Kinder, 1996; Sullivan, Piereson, & Marcus, 1982). In some cases, the degree of identification with the group(s) that may be affected, either favorably or unfavorably, by government policy may be a heuristic device that citizens employ to form policy preferences and decisions (Nelson & Kinder, 1996; Sniderman et al., 1991).
I have long believed that human beings are particularly driven by tribal tendencies. I think this is indisputable. I also think it is evolutionarily hard-wired into our species due to the obvious fact that humans who lived, worked, and traveled together with a reliable group had higher chances of survival and reproduction, so whatever underlying genetic trait predisposed human beings to do so was evolutionarily favored. So it makes perfect sense that motivated reasoning would be highly impacted by identity cues. Are we talking about “my tribe” here? If so, concern is heightened. If not, concern is muted.
The cited article is referring to climate change, so when scientists talk to American audiences about the devastating impact of climate change on third-world, already impoverished nations, concern is muted. That’s why a recent trend has been to emphasize the effect of climate change on Americans themselves, such as the devastating effect of extreme weather. I predict that as weather becomes more extreme, and the worst hurricanes that used to be thought of as once-in-fifty-year events start occurring every other year, Americans will become more open to making changes.
Examining the interplay between these two mechanisms—motivated reasoning and social identification—highlights the potential for boomerang effects in science communication. A boomerang effect occurs when a message is strategically constructed with a specific intent but produces a result that is the opposite of that intent (for a review, see Byrne & Hart, 2009). For example, antismoking messages can increase predispositions to smoke (Wolburg, 2006), antilitter messages can increase predispositions to litter (Reich & Robertson, 1979), and appeals for donations to impoverished children can lower donation rates (Small, Loewenstein, & Slovic, 2007). Furthermore, boomerang effects may be specific to only certain segments of an audience based on individual predispositions or context. For example, Schultz, Nolan, Cialdini, Goldstein, and Griskevicius (2007) found that appeals to increase energy efficiency created a boomerang effect among households that were already very energy efficient and increased average energy use for this population segment.
The boomerang effect may occur because unintended constructs are activated in the receiver and drive the resulting attitude and behavioral change (Byrne & Hart, 2009). The integrated theoretical framework proposed by Byrne and Hart (2009) states that when an individual receives a message, he or she will engage in competitive processing of different components of the message with certain aspects of the message becoming more salient than others. Within this framework, motivated reasoning and identity cues may both play a role.
Motivated reasoning among strong partisans may lead to the activation of unintended constructs that reinforce partisan predispositions. For example, Nyhan and Reifler (2010) conducted a series of experiments examining whether exposure to factual information would correct participants’ misperceptions and factually incorrect beliefs about a range of issues, including stem cell research. Their results indicated that exposure to factual information failed to correct misperceptions among ideological partisans and in some cases resulted in boomerang effects on beliefs and attitudes and greater ideological polarization. Similarly, Gollust, Lantz, and Ubel (2009) found that exposure to health messages about the social determinants of diabetes aimed at influencing public support for diabetes-prevention programs led to a boomerang effect on attitudes about diabetes prevention and a greater ideological polarization between Republicans and Democrats.
Likewise, social identity cues may activate the unintended construct that an issue or problem is not applicable to the group to which a message receiver belongs, and thus the message may be ineffective or result in a negative impact (from the sender’s perspective) on audience attitudes. Furthermore, motivated reasoning and social identity cues may interact, with motivated reasoning generated by political partisanship amplifying or dampening the potential effect of social identity cues on audience attitudes. We assert that political partisanship will motivate individuals to process available identity cues embedded in a message in a biased manner that influences their degree of identification with potential victims featured in that message. In turn, their degree of social identification with the potential victims will influence their level of support for policies or behaviors that may aid the featured victims. For example, due to motivated reasoning, individuals exposed to counterattitudinal messages, such as Republicans receiving a message promoting climate change as a problem and calling for climate mitigation policies, may be motivated to interpret available social identity cues, such as geographic location or group membership, in ways that lower their social identification with potential victims of climate change featured in the message. In turn, low social identification with potential victims may decrease the effectiveness of the persuasive impact of the message, possibly resulting in a boomerang effect.
Thus, within the context of a controversial science issue, science communication has the potential to boomerang and (a) increase political polarization rather than create consensus and (b) dampen, rather than increase, support for policies addressing a science-based issue or problem among some segments of the public.
I think anyone who has interacted with believing Mormons on the internet can testify to this effect. When I first left the church, I naïvely believed that any person with access to the same historical information I had just accessed would also leave the church, or remain as a nonbeliever for social reasons alone. As others before me have accurately stated, not only is the church not true, it is obviously not true. If an LDS person were exposed to a fictitious religion with the same history as the LDS church, that person would most likely conclude that it was irrational for anyone to believe in that religion. If a person down the street engaged in the same sexual behavior as Joseph Smith, or engaged in the chicanery of Joseph Smith, they would be disregarded as an untrustworthy person of low moral character. But when the same history is connected to Mormonism, belief is sometimes heightened (see all the internet defenders of the faith who claim that this has only made their faith stronger).
For a long time, this was puzzling to me. Some of this can be explained by the investment paradigm. The more you have invested in an idea, lifestyle, or commitment, the less likely you are to discard it. We see that this is particularly true in child-rearing. In addition to the instinctual love and care we have for our offspring, we invest more in them than in any other person or thing. So it’s rare for a parent to completely give up on a child, no matter the situation. For a current example, see the Boston Bomber’s mother.
But I believe there is more to it than that, because nearly every person who has left the LDS church had a significant investment in it. I certainly did, despite the fact that I converted at 19. That was early enough in my life that I proceeded to make every major decision within the framework of the truth of the LDS gospel. When you are nineteen years old, you are about to make some mighty significant decisions. Who will you marry? Where will you live? How will you spend your money? How many children will you have? And so on. So by the time I left the church at the age of 36, my investment was extremely high. The same is true for most other exmormons.
My current working theory is that some people are able to process historical information about the LDS church and allow it to affect their beliefs because they feel less connected to the LDS tribe for some reason. That emotional disconnect interrupts the tribal phenomenon that would otherwise trigger more intense motivated reasoning. How much it interrupts the tribal phenomenon varies from person to person. I’ll use my own life as an example.
I live in an area of the US where there are very few LDS people. The church, of course, tries to create opportunities for young single adults to meet other faithful young single adults in the hopes of encouraging marriage. In my case, my stake president actually gave my name to a young man I had never met, because he lived in an entirely different city, though we were in the same stake. He called me, we went out, and we were engaged three dates later. By the time we were married in the temple, we had known each other for exactly three months. He was charming, committed to the church, and I had prayed about it, so why wait and expose ourselves to sexual temptations?
The marriage was a complete disaster. I now see all the alarming red flags, but since I had never been exposed to abuse before, I was ignorant of what they meant. My ex-husband was abusive. He was suffering from undiagnosed and untreated bipolar disorder (later diagnosed, when it was too late to save the marriage) and had been raised in an abusive environment. The emotional aspect of the abuse began on the honeymoon. Yet I stayed for fifteen years and had three children with him. Why? Because of the teachings of the LDS church.
The LDS church community failed me during that period. The reasons for that are complex. I am not saying that my LDS community did not care about me. They did. So did my family, who had all become LDS. But, as anyone with experience with abusive relationships knows, it is often difficult for outsiders to fully recognize and respond effectively to the situation. I reached out for help to every bishop I ever had, and I finally managed to convince my then-husband to see the LDS counselor. (side note: a huge mistake, an utter failure – the episode only strengthened my then-husband’s power and my helplessness)
During this entire period, I was faithful and observant. I prayed every day, pleading with God to help me and my children. I pleaded with God to protect my children from the effects of his abuse. (I thought those prayers worked to an extent because he hid his abuse of the children from me, a fact I only found out years after my divorce, from my kids. They were protecting me from the truth.) God failed me.
I am not sharing this to condemn the LDS community. Given the circumstances, I’m not sure what they could have done. (although calling him to the bishopric seemed weird to me, even as a believer) I am sharing this to demonstrate how I think my emotional connection to the LDS church was interrupted enough to allow me to truly register information that threatened my belief in the truthfulness of the LDS church. This interruption allowed me to escape the trap of motivated reasoning and the boomerang effect.
More from the boomerang article:
As scientists have become increasingly certain about the human causes of climate change and the urgent need to address it, one might expect that public opinion about climate change would follow a similar pattern in beliefs about human causation, perceptions of the threat of climate change, and support for government policies mitigating climate change. However, polling data show modest changes across these measures for the public as a whole and an increasing polarization between Democrats and Republicans (Dunlap & McCright, 2008). For example, the partisan gap in opinion between Democrats and Republicans on whether “temperature changes over the last century are due more to human activities than natural changes in the environment” has almost doubled from 16 percentage points in 2003 to 29 percentage points in 2008 (Dunlap & McCright, 2008). Similar examples of political polarization over the past 10 years have occurred for beliefs on whether the effects of global warming have already begun, the scientific consensus on global warming, the threat that global warming will pose in the respondent’s lifetimes, and the exaggeration of global warming in the news (Dunlap & McCright, 2008).
This broad polarization in opinion about climate change is not only due to increased policy polarization in general between political parties (Layman, Carsey, & Horowitz, 2006) but also due to a specific party divide on environmental issues that has been developing since the 1980s (Dunlap & McCright, 2008). Although policy positions for a political party arise through an interactive process between party leaders, political activists, and members of the general public who identify with political parties, scholars (Fiorina & Abrams, 2008; Layman et al., 2006) suggest that the adoption of policy positions is driven primarily through a top-down process with party elites providing cues, or identity markers on what it means to be associated with a political party such as the Republicans or Democrats.
Identity markers may be any “characteristics associated with an individual that they might choose to present to others” to support an identity claim, or alternatively they may also be the “characteristics that people look to in others when they seek to attribute” an identity to them (Kiely, Bechhofer, Stewart, & McCrone, 2001, p. 35). These identity markers are woven into identity schema and provide the interpretive cues that differentiate the “self” from the “other.” In the case of climate change, based on the political context and polarization that has emerged in the last decade, we argue that opinions about climate change and climate mitigation, much like the issue of abortion, has become a fundamental identity marker for how Republicans and Democrats politically define themselves and others (Dunlap & McCright, 2008; Nisbet, 2009a). Thus, strong political partisans are likely to employ motivated reasoning when exposed to messages about climate change with ideological predispositions moderating information effects on policy attitudes.
Hamilton and his colleagues (Hamilton, 2011; Hamilton, Colocousis, & Duncan, 2010; Hamilton & Keim, 2009) have demonstrated across a series of studies that political partisanship moderates the influence of education on beliefs and attitudes about climate change. For example, as educational attainment increased among Democrats, the perceived threat of climate change increased while the converse was true for Republicans (Hamilton, 2011).
Schuldt et al. (2011) found similar results—political partisanship moderated framing effects on belief that climate change was occurring. Framing manipulation had no impact on Democrats’ and Independents’ belief whether climate change was occurring or not, but Republicans’ belief in climate change varied significantly depending on the framing manipulation. In their framing experiment, the term global warming activated Republican skepticism about whether climate change was occurring more than the term climate change. They partially credit this difference in Republican response to the fact that the term global warming “entails a directional prediction of rising temperatures that is easily discredited by any cold spell, whereas ‘climate change’ lacks a directional commitment and easily accommodates unusual weather of any kind” (Schuldt et al., 2011, p. 122). In other words, Republican participants were motivated to employ available cues in the “global warming” condition to discount the counterattitudinal information, whereas the lack of available cues in the “climate change” condition mitigated motivated reasoning and discounting by Republicans.
Many exmormons have wondered how believers can continue to believe after exposure to problematic historical information. But it soon becomes clear that the difference isn’t education or intelligence. Well-educated and intelligent individuals clearly continue to believe in the church, and they use their intelligence to engage in motivated reasoning that persuades them even more. To a complete outsider, the reasoning will seem weak, but to another motivated believer, it’s convincing. Other studies have shown that the MORE educated and intelligent the individual, the HIGHER the likelihood that they will be able to engage in successful motivated reasoning that allows them to retain erroneous or highly questionable beliefs.
Returning to the boomerang paper, the authors’ hypotheses for their experiment were as follows:
Hypothesis 2 (H2): Political partisanship will moderate the influence of social distance cues on identification with the potential victims of climate change.
Hypothesis 3 (H3): Republican participants exposed to messages with embedded high social distance identity cues will be less supportive of climate change mitigation policies than Republican participants who are not exposed to any message (i.e., a boomerang effect will occur for Republicans from exposure to climate change messages with high social distance identity cues).
Here’s how they set up the experiment:
Our study to test the proposed hypotheses was conducted by randomly assigning participants to one of two stimulus conditions or to a control condition. Participants were nonstudent adults (N = 240; mean age = 38.42 years; age range = 18-80 years; 54% female) recruited via mall intercepts in an upstate, rural New York state community and provided a US$5 gift card incentive for completing the experiment.
In the two stimulus conditions, participants read a simulated news story about climate change; no story was read in the control condition. The simulated news story was designed to be “nonpolitical” as it did not contain any explicit political partisan cues and focused on the potential health impacts of climate change, an increasingly salient and important aspect of climate change (Frumkin, Hess, Luber, Malilay, & McGeehin, 2008; Maibach, Nisbet, Baldwin, Akerlof, & Diao, 2010). The story discussed the potential for climate change to increase the likelihood that diseases such as West Nile virus will infect individuals who spend a lot of time working outdoors, like farmers. The news story was generated explicitly for the experiment but was based on facts reported by the Associated Press. The story included pictures and names of eight farmers who were potentially at risk.
The two experimental conditions varied by manipulating the identity of the potential victims and story exemplars into conditions of relative low and high social distance by altering the story’s headline, body text, and exemplar names while keeping the exemplar photos in each story constant in order to guard against different facial expressions or other individual cues. In the low social distance condition, the potential victims of climate change were described as being located in the general locality of where the experimental participants resided (upstate New York). In the high social distance condition the potential victims were located either in the state of Georgia or the country France. Multiple high social distance stimuli were used to help ensure the manipulation was influencing social identification with the exemplars rather than unintended group characteristics. Examples of the stimulus used in both conditions are provided in the appendix.
The results were as follows:
The nature of this moderation was further probed by testing bootstrapped (bias corrected and accelerated) conditional indirect effects at different levels of party (see Table 3). The results show that individuals who expressed greater Democrat partisanship (party levels of 0, 1, or 2) expressed the same levels of identification of the victims regardless of the presence of high and low social distance cues in the messages. However, Independents (party level of 3) and Republicans (party levels of 4, 5, or 6) viewing messages with high social differentiation cues expressed lower levels of social identification with victims than Independents or Republicans who viewed messages with low social distance cues. The results demonstrate that compared with the low social distance message, the high social distance message increased political polarization on identification with potential victims of climate change, which in turn increased political polarization in policy support. These results provide the basis for understanding the mechanism of the potential boomerang effect of message exposure on Republican support for climate mitigation we test in Analysis 2.
In other words, if one already identified with the group that accepts climate change (Democrats), the identity cues did not have an impact. But identity cues did impact the groups that did not automatically accept the premise of climate change (Independents and, more strongly, Republicans). This doesn’t mean that Republicans are less empathetic than Democrats. It does mean that empathy can be neutered by ideology when the negatively impacted people are not within one’s perceived tribe.
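For anyone curious what “bootstrapped conditional indirect effects” actually involve, here is a rough, hypothetical sketch in Python of the general idea. To be clear, this is not the authors’ code: the data columns (condition, party, identification, support) are stand-ins I invented for illustration, and I use a simple percentile bootstrap interval rather than the bias-corrected intervals reported in the paper.

# Hypothetical sketch of a moderated-mediation ("conditional indirect effect")
# bootstrap, loosely following the analysis described in the quoted passage.
# Column names and model specifications are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect_effect(df, party_level):
    """Indirect effect of message condition on policy support, transmitted
    through identification with the victims, evaluated at one party level
    (0 = strong Democrat ... 6 = strong Republican).
    condition is coded 0 = low social distance, 1 = high social distance."""
    # a-path: condition -> identification, moderated by party identification
    med = smf.ols("identification ~ condition * party", data=df).fit()
    a = med.params["condition"] + med.params["condition:party"] * party_level
    # b-path: identification -> support, controlling for condition and party
    out = smf.ols("support ~ identification + condition * party", data=df).fit()
    b = out.params["identification"]
    return a * b

def bootstrap_ci(df, party_level, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the conditional
    indirect effect at a given party level."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        resampled = df.iloc[rng.integers(0, n, size=n)]  # resample rows with replacement
        estimates[i] = conditional_indirect_effect(resampled, party_level)
    lower, upper = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Usage: for each party level, the indirect effect is treated as present
# only if the bootstrap interval excludes zero.
# for level in range(7):
#     print(level, bootstrap_ci(data, level))

The point is simply that the message-to-identification-to-policy-support pathway is estimated separately at each level of party identification, and the boomerang would show up as that pathway turning negative at the more Republican levels.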
The lessons for climate change advocates are clear. Additional information alone does not increase support for climate change action and may even have a boomerang effect. Presenting the negative impact of climate change on people who are perceived to be members of one’s own group, however, may increase support for climate change action.
The take-away lesson for political discussion is also pretty clear. Simply providing more information is ineffective. What would work instead depends upon the specific topic under consideration. For some topics, it may be possible to reframe the conversation with examples that provoke group identification with the targeted individuals. I imagine that would be hard to put into practice, so we may be doomed to increasing polarization as the sheer volume of information grows thanks to the internet and cable TV.
What’s the take-away for interaction with LDS believers? I’m not sure, other than this: the fact that LDS people continue to believe despite being aware of the historical problems does not mean they are stupid or unintelligent, and that is a good thing to understand.