How can we prevent individuals from urinating in open areas?
In the Nudge TV show “The Power of Habit”, Sille Krukow, a behavioural expert based in Denmark, designed a nudge for Copenhagen Central Station. The problem was that many men would urinate in hidden corners outside the building, despite clean public toilets being located close by.
Staff had to clean the area several times a day
According to Ms. Krukow, this phenomenon can be explained by the broken-windows theory of policing (Wilson and Kelling, 1982): when individuals observe others misbehaving, they tend to act in the same way instead of doing the right thing.
The solution adopted in the show was to add a urinal to the area in question and to place stickers on the sidewalks signalling the direction and distance to the closest WC.
Implementation of the WC stickers
Urinal installed in the most critical zone
Around 500 people had been observed urinating in two corners of the station during the week before the experiment. After the intervention, half of the people did the right thing, i.e. they used the urinal straight away, while the other half started to urinate on the street but changed their behaviour once they saw the urinal. This means that the cleaning staff and station customers were spared some 5,000 litres of urine a week!
To learn more about this nudge (and other experiments), the episodes of the show can be found online.
When are nudges most effective? A study by Pelle Guldborg Hansen, founder of the Danish Nudging Network, a non-profit organisation in Copenhagen, suggests that nudges may work only if they are in line with social norms. They tested two potential “social nudges” in partnership with the local government, both using symbols to try to influence choices:
–In one trial, green arrows pointing to stairs were put next to railway-station escalators, in the hope of encouraging people to take the healthier option. This had almost no effect.
–The other experiment had a series of green footprints leading to rubbish bins. These signs reduced littering by 46% during a controlled experiment in which wrapped sweets were handed out.
“There are no social norms about taking the stairs but there are about littering,” said Mr Hansen. Hence, perhaps existing social norms must be studied before designing a nudge!
How can we nudge people to donate to charities? There are many ways to do so, but we would like to share one in particular which is very simple and surprisingly powerful.
It seems that peer effects are an effective tool for changing people’s behaviour. We want to do what people like us are doing. If teenagers have friends who smoke, they are very likely to start smoking themselves (far more likely than if their parents smoke). The same holds for donations – if our colleagues donate, we want to donate as well.
This is what was tested by the UK’s Behavioural Insights Team in cooperation with Her Majesty’s Revenue and Customs (HMRC). HMRC employees in Essex were sent postcards describing the donation efforts of their colleagues and encouraging them to do the same, to see if more people would start donating. However, the experiment went even further (and this is where it gets interesting). One group of employees received postcards featuring a picture of the donor in addition to the above information (see Picture 1). An insignificant change, you may think, but 6.4% of people signed up for the donation scheme in the picture condition, compared to 2.9% in the no-picture condition.
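To get a feel for whether a gap like 6.4% vs 2.9% could plausibly be noise, one can run a standard two-proportion z-test. The article does not report the group sizes, so the sketch below assumes hypothetical groups of 500 employees each (giving roughly 32 and 15 sign-ups); the function name is ours, not from the study.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # pooled proportion under the null hypothesis of equal rates
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical sizes: 32/500 = 6.4% with picture, 15/500 = 3.0% without
z, p = two_proportion_z(32, 500, 15, 500)
print(round(z, 2), round(p, 3))
```

Under these assumed sample sizes the difference comes out statistically significant (p well below 0.05); with much smaller groups, the same percentage gap could fail to reach significance, which is why the unreported group sizes matter.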
Picture 1: Postcard featuring a photo of the donor
Carrots or French fries? Fruit salad or a chocolate bar? These are the dilemmas that children face when choosing their meals in school lunchrooms. From convincing them that veggies will give them superpowers to ominous warnings about what will happen to their bodies if they don’t eat healthily, few options have been left unexplored in the quest to get kids to eat right.
Unsurprisingly, when all else fails, behavioural economics (BE) swoops in and saves the day. Discarding classical solutions such as information campaigns, it offers a much simpler alternative: make the healthy options more tempting.
How? By changing their names. Several research teams in the US have tried this strategy in various school canteens and they found that making the names “seductive”, catchy or funny can induce children to eat healthier.
Hence, instead of offering carrots as “vegetable of the day” or simply “carrots”, call them “X-ray vision carrots” or “twisted citrus-glazed carrots” and you will increase the probability that children will pick them!
Peter Ayton is a Professor of Psychology, Associate Dean of Research and Deputy Dean of Social Sciences at City University of London. His research interests cover behavioural decision theory, risk, uncertainty, affect and well-being.
In May, he visited Bocconi University as part of a seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences), and he was kind enough to give us an interview.
BB: A cliché but necessary question: what got you interested in BE? Peter Ayton: It was a bit of an accident. After graduating in Psychology (which itself was a lucky outcome, as I went to university from school without much idea of what Psychology was), I went on to do a PhD on the psychology of metaphorical language comprehension. At that time, there was almost no research that could explain how people understood metaphors, and I found myself completely intrigued by it. However, due to a lack of opportunities in this field, I applied for a job as a postdoctoral research assistant on a project investigating subjective confidence in forecasts, was introduced to the world of decision research, and have never looked back.
I became a Behavioural Economist the day that people decided that Psychologists who studied decision making could be called Behavioural Economists. In this way, I am a victim (or beneficiary) of a rebranding exercise. The term Behavioural Economics has been around for a long time but gained real momentum after Kahneman’s Nobel prize. I notice lots of my Psychologist colleagues describing themselves as Behavioural Economists and suspect that one reason they do this is because there is no Nobel prize in Psychology. Of course the use of this term also invites Economists to join in with the investigation of those behaviours that are not anticipated by classical Economics – and that is a tremendous benefit to the research. Before this time Economists and Psychologists viewed each other with suspicion. While governments around the world used to be advised by Economists – and no Psychologists at all – now we see both Economists and Behavioural Economists (aka Psychologists) in a position to influence policy.
BB: Could you tell us a little about your areas of research and the work you’ve done?
PA: After my PhD research on metaphors I did some work on memory retrieval, before working on judgment and decision making. I started out looking into subjective confidence in forecasts and then looked at probability judgment, the “calibration” of uncertainty judgments and decision making under uncertainty. I have also done work on risk perception and some cognitive illusions, e.g. the sunk cost effect and the hot hand fallacy.
More recently I have been studying human well-being, in particular people’s predictions of how happy they would be under certain circumstances, e.g. if they had a chronic illness, or suffered an amputation. These judgements can be compared with the experience of people under these circumstances. The comparison reveals that people appear to mis-predict the likely effects of these conditions on their own well-being. This has some implications for public policy – specifically how we determine how much money should be devoted to medical research or care for people suffering from particular health conditions. If the predictions of people without the conditions are used as a guide, the spending priorities will be different from the case where the evaluations of the people with the conditions are used.
I am also interested in the impact of computerised advice on decision making. Despite society’s increasing dependence on computerised tools which alert people to risks (e.g. cancers on X-ray images, weapons in air passenger luggage, spell checkers), the understanding of their potential harm is very limited. Sometimes decision aids cause decision errors: one example of this we have found is that when a computer alerting tool misses a “target” (e.g. cancer on X-ray, bomb in luggage, spelling error in your dissertation), then people can be less likely to spot the unprompted target than they would be if they weren’t using the decision support tool in the first place. A phenomenon called “automation bias” occurs whereby people become dependent on the computerised tool. That goes unnoticed because quite often it is easy to demonstrate that people detect more targets when they use the computer than when they don’t, and unfortunately the aggregate improvement conceals the particular errors. This kind of issue is at the junction between Computer Science and Cognitive Psychology and I have been collaborating with some Computer Scientists to try to understand how we can improve the influence of computers on people.
BB: Have you ever had a “professional failure” that was a turning point in your career?
PA: There are some who seriously propose publishing a CV of failures as an endeavour, and mine would be much more extensive than my CV of successes. It’s unfortunate that failures are buried, because when you are starting out as a student, you tend to look at successful role models and think “How could I ever be as good as one of these guys?” – but actually they were pretty bad as well; they just don’t tell you.
Most of the things that I started doing, I didn’t finish. We just stopped because we realized we weren’t going anywhere, or it wasn’t interesting anymore. But sometimes those decisions can be rather questionable. I will give you one good example.
I did some research with a student a few years ago on how one can use the compromise effect and the attraction effect in moral reasoning. The attraction effect occurs when you change the relative attractiveness of one option by introducing a new one that is clearly inferior to it. For example, more people prefer a nice pen to $6 if you add the option of a bad pen. The compromise effect is similar – when making a choice between, say, two cameras, a basic cheap one and a more elaborate expensive one, you may favour the cheaper one. But upon the introduction of a third, highly advanced but extremely expensive camera, you are likely to shift your preference to the middle option as a compromise. As for moral choices, take the trolley problem, where a runaway train is coming down a track on which five people are working. You could press a button to divert the train to another track and save the five people, but that would kill one person working on the other track. We tried to see whether the answers people give to these sorts of problems would be as malleable as preferences are – maybe the attractiveness of a moral option would vary if you placed something really bad close to it. But it didn’t “work”: it didn’t change people’s decisions. I remember being disappointed, because I wanted to write a paper saying that people’s moral decisions are really manipulable – that is, people like to think they have moral sense but can actually be manipulated. I realised only much later that I should have kept on with this, because if I had clearly established that there was no effect of context on moral choices, I could have written a more interesting paper about how context affects consumer preferences but not moral choices.
BB: What would you say is your favourite nudge?
PA: I’m not sure I have a favourite nudge, I’m a bit suspicious of the idea of identifying behaviours as “nudges”. Many “nudges” referred to even in the Nudge book are actually behavioural phenomena discovered by social psychologists many years ago, long before anyone referred to them as nudges! But one that makes me smile is the one with stairs and escalator, and then there is a thin matchstick man pointing to the stairs and a fat matchstick man pointing up to the escalator. You need a bit of nerve to get on the escalator after seeing that.
BB: Is there any finding from behavioural research that surprised you? As in, where you found results contrary to what you expected or to what is accepted as intuitive?
PA: When I read Joshua Miller’s paper on the hot hand, I was so excited that I couldn’t sleep for about three days.
(Note by BB: “Hot hand” is the belief that a person who has just experienced success in a task, such as making shots in basketball, has a greater probability of success in the upcoming attempts. The hot hand fallacy refers to the finding that such a belief is wrong – for basketball at any rate – and it has been cited as a prominent example of a cognitive illusion by many researchers. However, the paper by Joshua Miller and Adam Sanjurjo shows that the original statistical analyses were flawed and that the hot hand may indeed exist, and so there is no fallacy.)
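The statistical flaw Miller and Sanjurjo identified is easy to demonstrate by simulation: in short sequences of fair coin flips, the proportion of heads among flips that immediately follow a heads, averaged across sequences, is below 50% – so the original “no hot hand” benchmark was miscalibrated. A minimal sketch (our own illustration, not the authors’ code; the function name is ours):

```python
import random

def streak_bias(n_flips=4, n_trials=200_000, seed=0):
    """Average, across simulated fair-coin sequences, of the share of
    heads among flips that immediately follow a heads. Sequences with
    no flip following a heads are skipped, as in the original design."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_trials):
        seq = [rng.random() < 0.5 for _ in range(n_flips)]
        follows = [seq[i + 1] for i in range(n_flips - 1) if seq[i]]
        if follows:
            props.append(sum(follows) / len(follows))
    return sum(props) / len(props)

print(round(streak_bias(), 3))  # close to 0.40, not the naive 0.50
```

For sequences of length 4 the theoretical value is about 0.405, so a shooter whose post-streak hit rate merely matches this biased benchmark is actually shooting better after a hit – which is the sense in which the hot hand survives.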
This development is quite fantastic because the hot hand fallacy has been around since 1985, when it was originally documented by a group including Tom Gilovich and Amos Tversky (and, in decision research, you don’t get any higher than that – they are royalty). Famously, basketball coaches reacted by saying: “It’s all rubbish, I know that there is a hot-hand effect”. Some academics too have crashed and burned while trying to contest this phenomenon. Until I understood the Miller and Sanjurjo paper, I was quite certain that the case was rock solid. People have found sequential dependencies in other areas, even in other sports. However, the case for a hot hand fallacy in basketball has been scrutinised so much that it’s truly astonishing that somebody has come up with such a game-changing analysis of the statistics. I got into trouble a few years ago, when I gave a talk called “The cognitive illusion illusion”, which somewhat audaciously argued that while there are cognitive illusions, they are mainly suffered by Cognitive Psychologists who think that their subjects suffer from cognitive illusions when they don’t. Feeling rather pleased with myself, I had the nerve to give this talk at Princeton University with Daniel Kahneman in the room. He made it very clear he wasn’t very impressed with my argument, which admittedly was a little overstated. If only Josh and Adam had got their paper out earlier, I might have been spared the admonishment from Kahneman!
The discovery of cognitive illusions is of particular interest to the agenda of business schools. The idea that there is a problem with the way people think is popular for two reasons. Firstly, people need to learn how to run businesses rationally – you don’t want business personnel making mistakes. But also, and more disturbingly, you could perhaps exploit the irrationalities of your competitors or consumers and take advantage of their vulnerability.
Judd B. Kessler is an Assistant Professor of Business Economics and Public Policy at The Wharton School, University of Pennsylvania. His research interests cover Experimental Economics, Public Policy and Market Design.
In March, he visited Bocconi University as part of a seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences), and we had the honour of interviewing him about his career.
B.BIAS: What would you say first ignited your interest in BE?
Judd Kessler: I first got interested in Economics in high school, where we had a semester of Economics and our teacher made us keep an Economics journal. We were supposed to write about things we saw in the world through the lens of how an economist would think about them. I remember vividly the first time I understood why popcorn is so expensive in a movie theatre. That kind of thinking made me excited about Economics. When I got to undergrad and then graduate school, the thing that drew me to BE was that, in standard Economic Theory, humans are very simple: you can describe how they behave just with mathematical equations. That did not seem realistic to me, particularly in domains that interested me, such as charitable giving, organ donation, and volunteering. That made me wonder what really drives this behaviour and set me on the path of doing BE.
BB: Could you tell us a little more about your own research interests and the work you’ve done?
JK: I’m interested in what people call pro-social behaviours, basically a personal sacrifice that has a benefit to other people. In particular, I’m interested in understanding how social forces influence pro-social behaviours. For example, when I learn that other people are behaving generously — say I learn that others are donating to charity or taking up jobs that pay less but are good for society — then I’m more likely to do the same. This kind of response really fascinates me.
BB: Which of your research did you enjoy the most and why?
JK: It is a tough question, because I do three kinds of research, three methods really. The first is analysis of pre-existing data. The second is laboratory experiments, which are controlled experiments where you recruit people who know they are in a study. The third is field experiments, which are experiments where you do interventions in the “field” with people who do not know that they are a part of an experiment. They are all fun for different reasons.
One of the projects I’ve done recently is with “Teach for America”, an organisation in the US that takes recent college graduates and people who are switching careers and helps them get into jobs as teachers. We did a study with them, where we randomly added a line to the acceptance letter of people who had been admitted into the two-year program, saying: “Last year, more than 84% of admitted applicants made the decision to join the corps, and I sincerely hope you join them”. We followed them for two years to see whether they stuck with the program. We were worried that we might get people who didn’t really want to be in the program to say yes and then drop out immediately. But that didn’t happen, and it was really cool — we did the experiment, added one small line, and we got to see in the data that the effect persisted.
BB: Do you perceive any difference in the importance that BE has gained in the US versus other countries or regions (e.g. Europe)?
JK: I don’t think so, although I’m judging this based mostly on the extent to which academics are publishing BE work and the extent to which governments are using BE insights in their operations and practices. Both Europe and America have seen an increase over time. There are nudge units here in Europe, and also in the US, and there is lots of academic work done in both places. My hope is that it will continue to increase in both places.
BB: Maybe we perceive differences because of the heterogeneity of countries in Europe. For instance, here in Italy we see a few researchers working on BE, but it hasn’t picked up as much speed as in the UK.
JK: There is a lot of heterogeneity in the US as well. There are some universities in the US whose Econ departments don’t do much behavioural work, so I think that’s probably not unlike Europe, in the sense that there are some places where lots of great behavioural people are and some places where it hasn’t come in yet. It could be that, in equilibrium, some universities don’t do behavioural work.
BB: What do you think is the future of BE?
JK: While lots of the early work focused on questions such as “Do people have this bias?” or “Is it possible for this behavioural phenomenon to arise in practice?”, I think the next set of work that will come out of BE will be more focused on identifying where behavioural biases are particularly relevant in affecting behaviour. Regarding nudges, I think we will start to see models designed to understand why nudges influence behaviour. This should help us understand when nudges will be effective and also when they will increase welfare.
BB: Apart from academic research, what are the career options available in the field of BE?
JK: There are academic-style jobs doing research for think tanks and government organisations. I also think BE is quite useful in consulting jobs. There is a lot BE can say about how consumers are thinking or how firms should operate. Understanding BE can help consultants make better recommendations. Within firms, I think of departments focused on pricing, advertising, or marketing as places where behavioural knowledge could be quite valuable.
BB: Do you think there is a threat of companies abusing this?
JK: Like any tool, BE can be used for good or bad. Think of a nudge. When deciding to implement a nudge, you should worry about its welfare effects — you should only like it if it makes people better off. Once you’re asking those questions, you’re on the right track.
BB: What advice would you give to young students interested in BE? What courses should they take and what experiences should they try to gain?
JK: I would advise them to take both Econ classes — to understand the traditional Econ way of thinking — and Psychology classes, so that they can see both sides. If you only do behavioural work and you don’t know the way psychologists and economists think about it, there’s a gap in your understanding. What I ask my graduate students to do, when they are developing new behavioural ideas, is to think first about what would happen in a non-behavioural world. How would their intervention affect behaviour in the traditional, rational-agent model? Only then do we move on to how behavioural agents would respond.
BB: How did you feel after being included in Forbes’ 30 Under 30 list?
JK: It was quite nice, actually. No one in my family had done a PhD before, and it wasn’t that common a thing among my friends, either. So there was this sense that I was still a student, still in school, and my friends made fun of me for that, even after I got my first job as a professor. So it was nice to have some validation that research work could influence policy — and, as a bonus, my friends stopped making fun of me.