Faculty Viewpoints

How do healthcare consumers make decisions?

Like consumers of other goods and services, healthcare consumers don’t always make decisions that are in their own best interests. Four experts — a psychologist, an organizational behaviorist, a behavioral economist, and a clinician — discuss the challenges of helping people make healthy choices.

Peter Salovey: My work is from the perspective of a social psychologist, and I'm particularly interested in how we can apply the theories that social psychology as a discipline provides to guide us in figuring out what predicts engagement in prevention activities, care-seeking activities, and early detection for a whole range of health issues and diseases. But we're also very interested in looking at how that applied question feeds back to the theory of social psychology and might help us modify it.

The two areas we've worked in most intensely have to do with cancer prevention and HIV/AIDS prevention and early detection. We look at the framing of messages designed to encourage some kind of relevant health behavior — framing in gain or loss terms — and then we use prospect theory to help predict under what conditions gain-framed versus loss-framed messages will be more or less effective in promoting those kinds of behaviors. The behaviors can be getting a mammogram, using sunscreen at the beach, getting a Pap test, or eating fruits and vegetables.

Erica Dawson: Can you give an example of gain and loss frames?

Salovey: Sure. Let's take mammography screening. Mammography is an interesting behavior. It involves psychological risk. It's uncertain. You don't know the outcome. Because it's an uncertain behavior, loss framing — making people think about the downside of not engaging in this behavior — may motivate them to take the risk, as compared to a gain-framing message — making them think about the benefit of engaging in this behavior. So a gain-frame message for mammography would be something like "If you get a mammogram, you can feel healthy." A loss-frame message might be "If you don't get a mammogram, you might have undetected breast cancer, and you'll leave your children orphaned and your husband widowed." Those aren't parallel and equivalent in their content otherwise, but they are examples of gain- and loss-frame messages.

We do some of this work in laboratory settings, but mostly in community settings. We work with public health clinics; we've worked in the housing developments in New Haven, randomly assigning housing developments, for example, to different campaigns built around gain- or loss-framed messages, and then tracking people for six months or a year to see if they engage in the desired behavior. We do the same thing at Hammonasset Beach with sunscreen.
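
The prospect-theory logic behind these framing predictions can be made concrete with the standard Kahneman–Tversky value function — a textbook illustration, not a model estimated in the studies described here: outcomes are valued relative to a reference point, losses loom larger than equivalent gains, and people tend to be risk-seeking over losses, which is why a loss frame can be more persuasive for an uncertain, "risky" behavior like screening. A minimal Python sketch:

```python
# Textbook Kahneman-Tversky value function, shown for illustration only;
# the parameters below are the commonly cited 1992 estimates, not values
# from the studies discussed in this conversation.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x relative to a reference point."""
    if x >= 0:
        return x ** alpha              # concave over gains: risk-averse
    return -lam * (-x) ** beta         # steeper and convex over losses: risk-seeking

# Losses loom larger than equivalent gains
print(value(100))    # ~57.5
print(value(-100))   # ~-129.4
```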

More recently, the second issue that we've looked at is what we might call psychologically tailoring messages. There's a lot of work on tailored communications. In marketing, you often see strategies where something about the recipient of the message is incorporated into the message. The old-fashioned one was where you'd get a note that says, "As an owner of a Volkswagen, maybe you've wondered whether upgrading to a Porsche would be a good thing." A cruder version is simply matching the ad to the audience — a Super Bowl ad versus an ad on daytime television. What we try to do is tailor health messages in the cancer and AIDS domains to psychological characteristics of the recipient. We've done this with many different kinds of characteristics, but one that we've had very good luck with is a construct developed by Tory Higgins at Columbia University called prevention versus promotion focus. Prevention focus has to do with a person's need for safety and security. Promotion focus is really a person's need or desire to accomplish and achieve things. And different people emphasize one of those needs over the other. Some people are very oriented toward being safe and secure, others toward achieving and accomplishing. And you could imagine that if we could identify that difference in people, we could construct a campaign around not smoking, or engaging in physical exercise, or eating fruits and vegetables — those are some of our recent ones — that emphasizes either not getting sick or feeling as healthy as you can.

When we've designed campaigns that are consistent with someone's emphasis, someone's personal desire to prevent problems versus promote achievement, we're more likely to get behavior change.

Lynn Sullivan: If you look at something like promoting HIV testing — which is an area that I've become very involved with — and you look at whether it's best to use a gain or a loss frame, do you focus on the detection part of it, or the prevention part of it? My understanding is that they call for different types of framing.

Salovey: We actually have done that study. We recruited about 500 women from a couple of the health clinics and housing developments in town, and exposed them to videotaped messages with print follow-up about the value of HIV testing. And we gain-framed them or loss-framed them; we did them in English and in Spanish. Now, I would have thought that HIV testing is best thought of as a detection behavior with some risk. You think you're healthy. You risk finding out that you're sick. And so it would be amenable to a loss-framed marketing strategy.

As it turned out, we got an interesting interaction. Some of the women felt that HIV testing was in fact a psychological risk for them: "I don't know my HIV status. I've been involved in behaviors that might have put me at risk. And if I take this test, I might find out something that will be unpleasant." For women who had those attitudes, the loss-frame videos and messages did work better. But for women who said, "You know, I don't think I've done anything to put me at risk. In my view, there isn't any possible way I could be HIV positive," for them, getting tested posed no risk. There's no uncertainty.

Sullivan: Are there any actual psychological gains to be had from testing for HIV?

Salovey: I would guess there are. Knowing that you're healthy, feeling even more relaxed — we always say in our messages, "You can have sex feeling more relaxed, knowing that you're not passing on HIV to a partner." Of course, the behaviors we want people to engage in are the same. We still want them to use condoms.

In any case, those people were more motivated to go get tested with gain-frame messages.

Sullivan: So does that mean that part of assessing that, off the bat, is to figure out whether they consider themselves at risk or not?

Salovey: For behaviors like that one, yes, absolutely. We think of behaviors as being risk-oriented, early-detection behaviors or risk-averse, prevention behaviors, but what you really have to look at is how individuals construe those behaviors.

We have studies where we have manipulated that construal. So, for example, we've described Pap testing either as a behavior that helps you detect early stages of cervical cancer or as a behavior that helps you prevent cancer — because it's detecting abnormalities before they become cancer, but we don't emphasize that part.

Sullivan: It's the level of risk.

Salovey: And you get the interaction that you would expect in both of those studies. Construe it as a prevention strategy and gain-framing works better. Construe it as a detection strategy and loss-framing works better.

Keith Chen: That interaction would be particularly interesting to economists who are interested in the normative issues with respect to testing, because it also says that the framing has an interaction with the selection effect of who gets tested, right?

Salovey: Absolutely right.

Chen: So if you especially want to target an increased take-up of testing to those who are most at risk, you can provide information as to what frames will most efficaciously achieve that.

Salovey: Exactly right. You have to know something about your population.

Chen: How they do a personal risk assessment.

Salovey: And how they think about this behavior.

Sullivan: But part of the current challenge with the new CDC recommendation about routine HIV testing — that everyone should be tested, no matter what their risk is — is how you engage somebody who is coming in for their routine care, and all of a sudden you're saying, "The CDC recommends we test you for HIV." How do you gauge where they are, because this may not even be on their radar? So you have to figure out how to frame it.

Salovey: That may be why gain-framing works in that situation. They don't think they're at risk, so telling them, "You might have this disease, and if you have this disease, you might pass it on to others" has no meaning.

Sullivan: Right. So, say, "to keep you healthy"—

Salovey: "You want to feel good, psychologically. You want to feel reassured. You don't want to feel anxious. You want to have better sex." That's what you say to them.

Chen: It's the socially responsible thing to do.

Dawson: I'm guessing that a big part of the equation on who gets tested or not is individual differences in healthcare access. We know that insurance often drives when you get tested for things and how often.

Salovey: As a social psychologist, I'm much likelier to emphasize the way an individual thinks about a problem, and then the immediate social circumstances in which they find themselves. But these kinds of structural variables make a huge difference. If somebody else is paying for it, you're more likely to do it. If your physician is recommending it, you're more likely to do it. If it's normative in the community in which you live — that is, everybody is going out and getting tested, so why aren't you? — you're more likely to do it. These are huge variables in predicting this kind of behavior.

Chen: I'm an economist first, and then a behavioral economist. That's brought me to thinking about health prevention behaviors and testing behaviors because my research as a behavioral economist focuses primarily on the primitive judgment and decision-making questions. I've done a lot of work on loss aversion, and a lot of new work on cognitive dissonance. Relatively recently, what I've been researching and thinking a lot about is integrating traditional economic cost-and-benefit analysis of gathering information with psychological theories of very intuitive effects: subjects being averse to self-threatening information; subjects being averse to uncertainty in some situations, being attracted to uncertainty in other situations.

My recent work on mammography behavior tries to integrate these two into almost a meta-rational model of economic decision-making. Contrary to what we might think of as the most naïve economically rational model of information acquisition, people don't like information that's potentially disturbing or aversive — that says they're at risk of negative health consequences. At the same time, on some kind of meta-cognitive level, they appear to make cost-and-benefit analyses about gathering information, weighing the rational benefits — the ability to seek effective treatment and plan effective medical behaviors — against these kinds of disquieting psychic costs. And they weigh those against each other in how they form beliefs.
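
A stylized way to write down the trade-off Chen is describing — purely an illustration under assumed, hypothetical parameters, not his actual model — is a decision rule in which a person seeks the test only when the expected clinical value of the information outweighs its expected psychic and monetary costs:

```python
# Stylized cost-benefit rule for acquiring health information.
# Parameter names and values are hypothetical illustrations, not estimates
# from the research discussed here.
def should_test(p_positive, treatment_benefit, psychic_cost_if_positive, test_cost=0.0):
    """Test only if the expected value of knowing exceeds its expected costs."""
    expected_benefit = p_positive * treatment_benefit               # value of acting early on a positive result
    expected_psychic_cost = p_positive * psychic_cost_if_positive   # anticipated distress of bad news
    return expected_benefit - expected_psychic_cost - test_cost > 0

# Identical medical facts, different anticipated distress, different decisions
print(should_test(p_positive=0.10, treatment_benefit=50, psychic_cost_if_positive=20))  # True
print(should_test(p_positive=0.10, treatment_benefit=50, psychic_cost_if_positive=80))  # False
```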

For example, in many medical studies there has often been an unfortunately low correlation between risk factors and the preventive behaviors that you'd recommend for people with those risk factors. Those most at risk for certain types of cancers are more likely to smoke. There's sometimes a very low correlation between breast cancer risk and mammography behavior.

What I find in some of my research is that, if you examine self-perceived risk of breast cancer in the broad cross-section, it is related to medically estimated risk in a complex way. There's a relatively underexplored but known fact that self-perceived risk of breast cancer follows an upside-down U-shape with respect to medically estimated risk of breast cancer. So, when their medically estimated risk of breast cancer is low, people seem relatively well calibrated — their self-perceived risk increases with their actual medical risk. However, at some point there's an inversion, and there's a large population of people who are at very high risk for breast cancer who report feeling that they are at a low level of risk.

Salovey: Who don't realize they're at risk. Or who aren't willing to report it.

Chen: That is one of the critical points, because it raises questions of causality. What's driving this? One natural psychological assumption would be, well, when the information is very, very bad, we're averse to it, and you could see this kind of inversion coming out of that. Another explanation could be differences in knowledge about what's risky. You could imagine a story, for example, where some people don't know that smoking has negative health consequences. That results in them both smoking more and reporting being at very low risk for, say, lung cancer. That kind of difference in health knowledge could drive this inverse relationship, as opposed to an underlying motivational social-psychological theory.

So what I do, in a lot of my work, is try to isolate those two effects by looking for exogenous factors — things you have very little control over that affect your breast-cancer risk — and asking whether self-perceived breast cancer risk responds to them. For example, I find that, controlling for all other factors, including education, income, race, religious beliefs, and prior testing, if you look at the number of female relatives the subject knows who have had breast cancer, self-perceived risk for breast cancer is still upside-down U-shaped. With the first, second, and third female relatives you know who have had breast cancer, your self-perceived risk of breast cancer is increasing, but with the fourth, fifth, and sixth, it actually declines.
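
To make the shape of that finding concrete, here is a minimal sketch — with simulated data and hypothetical variable names, not Chen's dataset or specification — of the kind of quadratic regression that would reveal an upside-down U: a positive coefficient on the number of affected relatives together with a negative coefficient on its square.

```python
# Illustrative sketch only: simulated data, not the actual survey data or model.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
relatives = rng.integers(0, 7, size=n)      # affected female relatives known (0-6)
# Simulate an inverted-U: perceived risk rises with the first few relatives, then falls
perceived_risk = 0.9 * relatives - 0.15 * relatives**2 + rng.normal(0, 0.5, n)

# Quadratic regression: perceived_risk ~ b0 + b1*relatives + b2*relatives^2
X = np.column_stack([np.ones(n), relatives, relatives**2])
b0, b1, b2 = np.linalg.lstsq(X, perceived_risk, rcond=None)[0]

print(f"linear term:    {b1:+.3f}")              # positive: perceived risk rises at first
print(f"quadratic term: {b2:+.3f}")              # negative: the inverted-U signature
print(f"turning point:  {-b1 / (2 * b2):.1f} relatives")
```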

And then once you investigate further, to try and tease out how people are making this belief-level cost-benefit analysis, you can see it has an impact on meta-cognitive beliefs as well. For example, in a recent survey in San Francisco, approximately 12% of women strongly agreed with the statement that people who pray to God are less likely to get breast cancer — in some sense, that God can protect one from breast cancer. Now, what's interesting is that willingness to say "yes" to that actually declines steeply with the number of female relatives you know who have had breast cancer. So there appears to be, in subjects, a complex interplay between higher-order beliefs about what drives medical risk, their self-perceived costs and benefits from getting tested, and their final probability assessments about whether they're likely to need such testing. What my research shows — although I don't want to push the conclusion too hard — is that there appears to be a meta-cognitive level on which people take into account the costs and benefits of holding beliefs.

Sullivan: So, in terms of that perception of risk — can you parse out what is modifiable risk and what is not modifiable? You can't do anything about a family history, but let's take...

Dawson: Smoking, eating, alcohol...

Sullivan: Yes, alcohol intake, or something else that has been related to a higher breast cancer risk. Is there a difference in terms of their perceived risk based on whether it's something they can do something about or not? Because, certainly it seems like psychologically, you can say, eight of my female relatives have had breast cancer. That should be pretty compelling. But there may also be a sense of, well, there's nothing I can do about it. Whatever is going to happen to me is going to happen to me, whether I get tested or not.

Chen: This is actually preliminary work, so the results are a little bit weaker. But there do seem to be ways in which these kinds of exogenous factors can shift your beliefs about the efficacy of things you actually have control over. So, for example, even though people understand that male relatives having liver cancer has no relationship to whether you're likely to have breast cancer, it does have an effect on things like your belief that prayer can protect you from cancer. If you had a grandfather who died of liver cancer, you're much less likely to report that prayer to God can protect one from breast cancer. And, in fact, what I find is that it also seems to drive your willingness to maintain accurate knowledge. People who have had, for example, a grandfather die of cancer are more motivated, and hence report more accurately whether hair dye contributes to breast cancer. They're more likely to be on top of the research.

Dawson: Keith, I think this is where some of our research might make contact. We're looking at not necessarily characteristics of individual people, but at the situation that they're in and the kind of disease that we're talking about. We've researched mainly in the context of different diseases that you can or cannot do something about, basically. We're trying in the lab to see how this influences which kind of information people will go after, which, in the long run, means they're going to end up with different knowledge packets.

People have known for a while that disease treatability is a factor in whether you decide to go get tested, which is pretty logical. A lot of different models include this factor, always in a linear sense — if there's nothing you can do about a disease, the case for testing just drops away. And we found, instead, that it does have an impact, but it's an interaction, essentially. It's multiplicative. If people believe that they're at risk for a disease that they have very little control over, not only do they not want diagnostic testing, they avoid any information. So we left them in a room with pamphlets about this disease that they thought they were at risk for, just to see if they would browse and learn a little bit more about this unusual thing. And they actually seemed less likely to gather any sort of information at all.

Chen: It ties really nicely into a cost-benefit analysis of information.

Dawson: Exactly. It suggests how that might look, in the long run.

Salovey: It also sounds a little like loss-aversion. You're avoiding information that at least would make you entertain a loss from some reference point.

Dawson: Yes. And it does seem to interact, as Keith was saying, with some sort of calculus about what's the purpose of doing this. So there is a sort of trade-off between severity of the disease, and treatability, and when people have a severe but treatable disease, that seems to trump. It may be more of a gain-frame, in that sense.

Salovey: Sure.

Sullivan: But I think with treatability, there are certain conditions that are very treatable, such as HIV, but, you know, populations in Africa don't have the access to treatment that we have here. The likelihood of them being tested...they just see the whole endeavor as being futile. Why find out if I have this, if I'm not going to have any access to treatment?

Salovey: Has it changed in the U.S.? Fifteen years ago, a positive HIV test felt like a death sentence to someone, and now it certainly isn't. Has that increased interest in testing?

Sullivan: It's interesting. I'm an HIV provider, but I'm also a general internist, so I see patients who are not known to be HIV infected, and then also care for a panel of patients who are infected. I started in this work in the mid-1990s, right as highly active anti-retroviral therapy was coming onto the scene. So the first part of my training was all watching patients die of this disease. My outpatient clinic was very small, because most patients were seen in the hospital and did not do well. And then, in 1996, everything shifted. All of a sudden our inpatient service shrank and we had all the patients coming into the clinic. And there are patients I still follow, 12 years later, for whom HIV is about eighth on their problem list. I take care of the hypertension and the diabetes and I get mammograms, and all the other stuff. So my perspective has been that this is a really treatable disease. And for folks who get treatment, and get it early enough, a good portion can do really well.

I've also had the experience, as a general internist, of diagnosing several patients recently with HIV, and it is absolutely viewed as a death sentence, no matter how I frame it. Three patients said this to me: "I know people who have had this. They had to take piles and piles of pills. Am I going to get really skinny? Am I going to be in the hospital all the time?" I'm not saying that's everybody's perception across the board, but I've been struck — especially since my view of this disease has changed so dramatically in the last 10 years — by how a lot of people still see it. And if they think that they're at risk for it, and have that perception of it being life-threatening... Certainly this is a life-altering diagnosis, but in many ways it's no longer a life-threatening diagnosis.

Dawson: The real opportunity costs are in this sort of information avoidance. With some diseases, like Huntington's, you can't do anything about it, so who can blame people for not wanting to know? But Lynn and I teamed up because we're both interested in the idea that there are many diseases where people underestimate the degree to which they can be controlled. They underestimate, in other words, the value of learning, of early detection and preventive care. And so people who see HIV as a death sentence are precisely the ones who may need early testing and detection, and precisely the ones who may be avoiding it.

Salovey: Exactly right. Keith used the example of breast cancer earlier — you have eight relatives who have all died of breast cancer, so you feel there's nothing you can do. But, in fact, early detection can lead to treatment options that you wouldn't have if the cancer is detected later, and that are less invasive.

Dawson: And I think that has a lot of very practical implications for how you frame persuasive messages. I think we tend to emphasize the consequences, one way or another. But you should also emphasize what can be done — the possibilities — in essence, giving somebody a reason to want to come in. So getting people correctly gauged on treatability might offer some additional persuasion for early detection behaviors.

Salovey: I'll ask a question of Keith — isn't it true that you're finding some of the roots of the kinds of behavioral decisions that you're describing in work with monkeys?

Chen: Well, take a lot of your work on how loss and gain frames affect how people process their decisions — very high-order, important decisions. I don't want to claim that these decisions aren't made on a very rational basis. But some of the effects that you identify appear, in my work with monkeys, to be evolutionarily very ancient. So, in other words, even though as human decision-makers we seem very sensitive to gains and treatability and the things the rational model tells us to weigh, many of the basic ways that we process that information appear to be very deeply ingrained and inherited.

Salovey: Does it trouble people, in some way, that you can look at monkey behavior and actually predict human behavior from it?

Chen: Actually, I've gotten largely positive responses. I haven't spoken with medical practitioners about the monkey work, which is funny, because they would probably be more receptive to it. But, for example, when I've spoken to quantitative hedge fund managers and people who are engaged in behavioral finance, when you tell them that many of the strategies that they're already trading on, the psychological biases that they look for in the average American stock investor —

Salovey: Can't sell a stock at a loss.

Chen: Yes, exactly. When you tell them that, in fact, much of the research suggests this is a very ancient thing, they're very receptive to it, and, if anything, they take the right message from it: these biases are going to be more robust in new settings, and they aren't going to be easy to arbitrage out of the market.

Dawson: JDM [judgment and decision-making] research has always taken the approach of recognizing these biases and then trying to minimize the damage. But the fact that they're not going away is, I think, pretty informative. Some of the most exciting work coming out of the field now is capitalizing on them. So instead of trying to suppress them or correct them, you use them. Can you frame people's choices to help them make good choices, based on what they would be doing anyway?

Salovey: It's certainly true, and it's a disappointment to educators in all fields when they learn this, that simply teaching people about biases, about what in economics might traditionally have been called irrational behavior, doesn't change it.

Dawson: They don't see themselves as biased to start with. Peter, I don't know if you've tracked what George Loewenstein is up to these days. He was just here giving a talk.

Salovey: George and I were graduate students together here at Yale. He was in the economics department and I was in the psychology department. And we ran studies in the same lab, literally. So we talked all the time about these kinds of things. This was 30 years ago.

Dawson: He's talking about this idea of asymmetric paternalism, where you structure people's choices to take advantage of what they're going to do anyway. And he looks, in particular, at the healthcare domain and preventive care — capitalizing on people's love of gambling, for example. We know they like frequent small rewards and the chance of a big pay-off. So he structured a reward system around that for people taking medicine. They're working with some pharmaceutical companies that have a financial interest in this working — it's in the interest of pharmaceutical companies to have patients take medicine that they need to be on consistently, and it's in the interest of the patient. And it seems to be really effective.

Sullivan: How do you build in the rewards for something as basic as medicine-taking?

Dawson: Well, right now it's money. It's a study, so they're actually paying them. What other rewards might be there, I don't know.

Salovey: Apparently, paying people for adhering to some medical recommendations may actually be a cost-effective strategy.

Sullivan: From a clinical standpoint, it's much easier to convince a patient to take a medication that makes them feel better. You know, one of the struggles with anti-retrovirals is that, in a lot of cases, it's not something that they can actually identify as making them feel better.

Chen: And that can help guide innovation policy. So, for example, from a health-benefit perspective, the marginal research dollar may be much better spent on, say, reducing the negative side effects of an effective drug than on trying to push that drug's marginal effectiveness even further.

Salovey: One of the constructs that I think is a little bit ignored in this area is the role of emotion and anticipated emotions. Sometimes I think about my own preventive behavior. When I bought the house that I live in now, we tested our basement for radon. It was five picocuries per liter, and four picocuries per liter is the "action" level, so we were over it. And the action you can take is to put in, essentially, an exhaust fan, so the radon coming from the rocks and soil under your basement doesn't accumulate. If you put the fan in, the level drops to about two picocuries per liter.

Now, most of the literature on the link between radon exposure and lung cancer was done on uranium miners exposed to huge amounts of radon — I mean, a hundred picocuries per liter. People were assuming a linear effect. And certainly if you've got substantial household radon, 20 or 30 —

Dawson: You're living in a mineshaft.

Salovey: — you should probably put the thing in. But why did I spend $1,000 putting this thing into my basement, when I was really just at the borderline, and, epidemiologically, the risk was still relatively low? The reason, I think, and this is introspection, this is not scientific, is that every time I open the door to my basement...

Chen: You're taking short breaths.

Salovey: Exactly. I find myself thinking about dying of lung cancer. It's like this automatic association.

Dawson: That could have been prevented.

Salovey: Right. And so that's where the emotions come in. I found myself getting anxious about the idea that I might be putting myself at risk. And then I started feeling guilty about putting my wife at risk. And at that time we had a cat, so when the cat would walk downstairs to the basement, I'm thinking I'm putting my cat at risk. For $1,000, I could buy an insurance policy against anticipated guilt and current anxiety having to do with dying of lung cancer from radon exposure in my basement. And is it worth $1,000 to me? And I'm not saying I actually had this train of thought, but is it worth $1,000 to me to not have to feel those emotions every time I open the door to my basement and walk down the stairs? And the answer is yes, that's a great investment.

To me, it's one of the relatively underexplored areas in health decision-making from the JDM perspective, but also from a delivery perspective and from a behavioral economics perspective: What is the role of emotion and anticipated emotion as a motivator of behavior? After all, we evolved a system of emotions because emotions energize behavior. They get us to do things. They help us survive. Run away when you're afraid. Fight when you're threatened.

Sullivan: I wonder what the spectrum is in terms of people's tolerance for anxiety about those things.

Salovey: There are certainly differences.

Sullivan: I'm on the same page as you. I'm a big believer that if I do a test, I'm going to actually use the data from the test. There's no point in doing a test if you're not going to do something with the data. But I also have a very low tolerance for anxiety. So I'd much rather put to rest whatever the issue is, even if some would say that the risk, statistically, is almost zip.

But I think other people, and certainly patients I see, have a much higher level of tolerance for anxiety. So I'm wondering about individual differences in what people can tolerate as far as anxiety about risk...

Dawson: This is another example of where you could use this natural tendency we have to do something and use it to your advantage. We know that people are pretty bad at forecasting how they're going to feel in response to certain events. They tend to think their reaction is going to be more dramatic, and that it's going to last longer, than it does. The classic example is, what is it going to be like if you don't get tenure? They think it's going to be horrible, for a really long time. But then if you ask people who were just denied, it's just not that bad.

So I'm wondering if you could capitalize on that forecasting bias. People may not feel anxious or upset or worried in the moment, but if they think they would down the line, you might be able to capitalize on their anticipated future states to convince them to take a test or stick to a treatment that could prevent that bad thing from happening.

