May 18, 2021
Elissa Kranzler, PhD, is a social scientist and former Postdoctoral Research Fellow at the Wharton Risk Center (2018-2020). She studies the relationship between media messaging and risk behaviors. We spoke with her this spring to hear about her current position and the connections to her past research. This interview has been edited and condensed.
What are you doing now? How does risk communication play an important role?
I am a Researcher at Fors Marsh Group working on a contract with the U.S. Department of Health and Human Services (HHS). The aim of the project is to develop, implement and evaluate a public education campaign to increase COVID-19 vaccine confidence while reinforcing basic prevention measures.
One of our targeted outcomes is to increase vaccine confidence – which in this case means that we want people who are eligible to be vaccinated to feel ready to get the vaccine, and to complete vaccination if multiple doses are required.
In my position, I lead the long-term evaluation of the HHS We Can Do This campaign. Other colleagues are responsible for developing and testing campaign materials, whereas my team evaluates the efficacy of campaign content through survey research and other data.
Where would you see this campaign? Is it being used by other entities like the city government?
It is a national campaign and is not administered by other entities. Campaign funding is allocated to run ads through several mass media channels, including social and digital media, radio, print, and TV, in English, Spanish, and other languages, and to reach different segments of the population with culturally appropriate, tailored messaging.
How are you evaluating the campaign?
We are administering a nationally representative, longitudinal survey. Through this survey we can assess people’s beliefs and behaviors related to vaccination generally, COVID-19 vaccination specifically, and other preventive measures (like mask wearing, social distancing, and avoiding crowds), as well as other outcomes of interest (such as knowledge about COVID-19). We also ask people about their exposure to the campaign broadly (not specific ads) and how frequently they may have seen it.
This survey contacts the same individuals every four months for a period of two years to see if their exposure changes over time, and if, as a result of their exposure, their beliefs and ultimately their behaviors related to COVID-19 prevention change.
Since we are also tracking where and how frequently the ads are being aired, another way we can account for exposure is to look at campaign dissemination across different media sources and by geography. This means we can estimate how much of the campaign a person could have seen based on the broader area of the country in which they live.
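For readers curious how that kind of geography-based exposure estimate can be assembled, here is a minimal sketch that links a hypothetical panel file to a hypothetical market-level dissemination file; none of the column names or numbers come from the actual evaluation.

```python
# Illustrative sketch only, not the campaign's actual pipeline: linking panel
# respondents to ad dissemination in their media market to estimate potential
# exposure. All column names and values here are hypothetical.
import pandas as pd

# Hypothetical panel respondents and the media market where each one lives
respondents = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "media_market": ["Market A", "Market B", "Market A"],
})

# Hypothetical dissemination data: gross rating points (GRPs) aired per market and wave
dissemination = pd.DataFrame({
    "media_market": ["Market A", "Market A", "Market B"],
    "wave": [1, 2, 1],
    "grps": [120.0, 95.0, 60.0],
})

# Cumulative potential exposure in each market up to and including each wave
dissemination["cumulative_grps"] = (
    dissemination.sort_values("wave")
    .groupby("media_market")["grps"]
    .cumsum()
)

# Attach market-level potential exposure to each respondent
exposure = respondents.merge(dissemination, on="media_market", how="left")
print(exposure)
```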
We expect, based on theories of behavior change and empirical evidence from other work, that the more we can influence people’s beliefs, awareness, and intentions/plans to change their behavior, the more likely they are to actually change their behavior. If they are not sure that the vaccines are safe, then we want to give them the information so they can make an informed decision. We anticipate that some of the behaviors of interest may not change for quite some time as a result of the campaign.
Personally, I’ve been thinking a lot over the past year about how perception of risk drives behavior, sometimes entirely independently of objective levels of risk. For example, I hate flying. I am anxious about flying. Every time I fly I think “this plane could crash.” The chance of dying while flying on an airplane is incredibly low — it is much safer than driving, and yet I have this irrational and emotional response every time I fly. I am a pretty rational person but I cannot rationalize that. I think that is really illustrative of how – as much as we might try to overcome our perceptions and our biases with rational information – it doesn’t always work.
How do evaluations of these kinds of campaigns work?
We use statistics to evaluate the efficacy of the campaign. We have only collected one wave of data so far, so we are focusing on understanding the study sample at baseline. For example, how many respondents are vaccinated against COVID-19, what do they think of vaccination, how many of them want to get vaccinated soon or would rather wait, and if they want to wait, why? We also want to know whether they have seen campaign content. Over time, we’ll be able to build on those questions and answers.
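As a rough illustration of what baseline descriptives can look like, the sketch below computes weighted shares from a made-up wave-1 file; the columns (weight, vaccinated, intends_soon, recalled_campaign) are assumptions for the example, not the survey’s actual variables.

```python
# Minimal sketch of baseline (wave-1) descriptives using hypothetical survey data.
import pandas as pd

wave1 = pd.DataFrame({
    "weight": [1.2, 0.8, 1.0, 1.5],        # survey weight
    "vaccinated": [1, 0, 0, 1],            # already vaccinated against COVID-19
    "intends_soon": [1, 1, 0, 1],          # wants to get vaccinated soon
    "recalled_campaign": [0, 1, 1, 1],     # reports seeing campaign content
})

def weighted_share(df, col, weight="weight"):
    """Weighted proportion of respondents with col == 1."""
    return (df[col] * df[weight]).sum() / df[weight].sum()

for outcome in ["vaccinated", "intends_soon", "recalled_campaign"]:
    print(f"{outcome}: {weighted_share(wave1, outcome):.2f}")
```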
Are there other campaigns going out in the same places that this is being administered?
Yes. That’s a huge obstacle, because we have to show that exposure to our ads and to the campaign as a whole is associated with the outcomes we’re interested in while controlling for all of these other things – like accounting for the fact that the share of the overall population that gets vaccinated is going to influence people’s beliefs (regardless of the campaign), as will seeing their friends and neighbors get vaccinated.
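One common way to “control for” those other influences is a regression model that adjusts for them alongside campaign exposure. The sketch below uses simulated data and hypothetical variable names; it is an illustration of the general approach, not the evaluation’s actual model.

```python
# Hedged illustration: relate self-reported campaign exposure to vaccination
# intention while adjusting for local vaccination rates and whether friends and
# family are vaccinated. Data and variable names are simulated for this example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "campaign_exposure": rng.integers(0, 5, n),    # 0-4: recalled frequency of exposure
    "county_vax_rate": rng.uniform(0.2, 0.8, n),   # local vaccination context
    "peers_vaccinated": rng.integers(0, 2, n),     # friends/family vaccinated (0/1)
})

# Simulated outcome for illustration only: intention to get vaccinated
logit_p = (-1
           + 0.3 * df["campaign_exposure"]
           + 2.0 * df["county_vax_rate"]
           + 0.8 * df["peers_vaccinated"])
df["intends_to_vaccinate"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of intention on exposure, controlling for the other factors
model = smf.logit(
    "intends_to_vaccinate ~ campaign_exposure + county_vax_rate + peers_vaccinated",
    data=df,
).fit(disp=0)
print(model.summary())
```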
So how do you know if your campaign is working?
We have to consider all the other things that could influence people’s beliefs and behaviors outside of our campaign. This campaign is about a very specific issue – there have been other efforts to address emerging infectious diseases – but this challenge is happening in real time. The message development, and therefore the evaluation, have had to adapt to a changing timeline.
How has your time at Penn during your doctoral studies and your postdoc fellowship at the Risk Center prepared you for your current role?
Both of those experiences were very influential. My doctoral work laid the foundation for the types of analyses planned in my current role, since I focused on evaluating another national campaign that aimed to prevent adolescents from initiating smoking. A different behavior, for sure, but there is a good deal of overlap in terms of the methods used to evaluate both campaigns.
My time at the Risk Center was instrumental in several ways. It forced me to think outside the box of ‘smoking behavior’ that I had been focusing on during my doctoral studies, and previously as a research coordinator in the Perelman School of Medicine. At the Risk Center, I worked on other types of risks with different costs. Retrofitting a house against flood damage, for example, has a much more immediate financial cost and different timeframe, which are variables that I hadn’t considered before. The formative research I did at the Risk Center was geared towards understanding what kind of messages would be effective at persuading people to retrofit their homes. What we found is that the messages that highlighted normative beliefs (beliefs about what your friends and family and community members would do) showed the most promise for influencing behavior. This made me more attuned to normative beliefs when working on the survey for my current job. Being able to make that connection is one of the most important things I learned from my work at the Risk Center.