Let's start with a few questions for coffee addicts: how many coffees do you drink per day, and how much sugar do you usually add to each one?
We all know that the world is divided into two types of people: those who like their coffee plain, and those who like it sweet (sometimes very sweet).
If you happen to be in the second group, we have bad news for you. According to the WHO, sugar is one of the major causes of several diseases, especially obesity. Given that the recommended limit for an adult male is about 50g of sugar per day (7-10 teaspoons), loading your coffee with extra sugar may not be the best choice for your health, especially if you drink more than one or two coffees a day.
Changing habits is easier said than done, but there is also good news for all sweet-coffee lovers: several kinds of nudges have been developed to address the issue, and they have the potential to find further applications in the food sector, for our health's sake. Here are two simple but effective examples.
Do you like to take your coffee at the cafeteria?
Have you ever noticed that every time you order a coffee, tea or cappuccino at the cafeteria, you automatically grab one or two sugar packets to go with it?
A study conducted in Italy investigated whether coffee consumers are aware of how much sugar they consume each time they take a break.
The behaviour of two groups of people was monitored for six days each, before and after the weight of sugar in each packet was reduced from 7g to 4g. The study found a statistically significant reduction in the amount of sugar consumed per client.
The reason is that reducing the weight of each packet does not affect the number of packets clients take: people tend to pick up a fixed number of packets out of habit, so lighter packets translate directly into less sugar consumed.
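The mechanism is simple arithmetic: if the number of packets per drink is fixed by habit, total sugar intake scales linearly with packet weight. A minimal back-of-the-envelope sketch (the packet and coffee counts here are illustrative assumptions, not figures from the study):

```python
def daily_sugar_grams(packets_per_drink: int, drinks_per_day: int, grams_per_packet: float) -> float:
    """Daily sugar intake when the number of packets taken is a fixed habit."""
    return packets_per_drink * drinks_per_day * grams_per_packet

# Hypothetical client: 2 packets per coffee, 3 coffees a day.
before = daily_sugar_grams(2, 3, 7)  # 7g packets -> 42g of sugar per day
after = daily_sugar_grams(2, 3, 4)   # 4g packets -> 24g of sugar per day

print(f"before: {before}g, after: {after}g")
print(f"reduction: {1 - after / before:.0%}")  # about a 43% reduction
```

Because the packet count never changes, the 3g-per-packet cut passes straight through to total intake, which is why the intervention works without requiring any active choice from the client.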
Or are you just looking for a quick break?
Most of the time, busy students and workers prefer to just grab a coffee at the nearest vending machine. In this case, how often do they actually pay attention to choosing the amount of sugar?
Assuming that students do not change the default amount of sugar set by the machine (e.g. leaving the selection at 3 out of 5 units of sugar), a first simple way to reduce sugar consumption may be lowering the default level from 3 to 2. In this way, adding more sugar would require an active choice, which is unlikely to happen: behavioural scientists have repeatedly shown that people generally have a bias towards maintaining the status quo.
One such nudge was proposed by Albert Gascon, a participant in a Nudge Competition on edx.org. It combines two clever strategies in a single intervention.
Firstly, the choice of sugar can be placed before the choice of the drink; secondly, instead of reading "more sugar" and "less sugar", the extremes of the scale can be labelled "healthier" and "less healthy". In this way, consumers are induced to pay more attention to their choice – instead of relying on the default option – and are nudged towards the better one by highlighting consequences that too often go unconsidered.
Guglielmo Briscese is a Senior Advisor at the Behavioural Insights Team (BIT) in Sydney, Australia. He earned a Bachelor's in Economics at Università Politecnica delle Marche in Italy, an MSc in International Development from the University of Glasgow, and a PhD in Economics from the University of Sydney. The main focus of his research and work is pro-social behaviour (e.g. charitable giving) and employment.
B.BIAS had the honour of interviewing him about his career and research!
B.BIAS: How did you get into Behavioural Economics and how did the work you did for international organisations lead you to it?
Guglielmo Briscese: When I was studying Economics, I thought Microeconomics was quite boring and didn't see how it could have any practical implications, since people are just not what these economic models say. That was when one of my professors at university recommended Freakonomics to me. It was around the same time that Nudge came out as well. I kept Behavioural Economics (BE) as a side interest, because there was no Master's degree in BE anywhere in Europe, and after a Bachelor's degree in Economics I wasn't ready to do a Master's in Psychology or the like. One of my other interests was Development Economics, especially the work of Esther Duflo and others on Randomised Controlled Trials (RCTs). So I decided to enrol in a Master's degree in International Development in the UK. After that I landed a job at the UN in Italy, at the office of evaluation of IFAD (the International Fund for Agricultural Development). I thought I'd try as much as I could to promote RCTs internally at IFAD, but there was still the BE element missing. Around 2010-2011, the UK government announced that it would launch a unit called the BIT, but it was still at a very early stage. I started looking up PhD programs in this area and decided to do a PhD at the University of Sydney, in Australia. That was more of a personal choice, because I really liked Sydney.
Then I also realised that one of the best-known Behavioural Economists, Robert Slonim, was based at the University of Sydney. He has done a lot of research on blood donations and charitable giving. By pure coincidence, a member of the BIT moved to Australia at the same time and thought of opening the Sydney office. I applied as soon as it opened, and got in with the first wave of people. That was almost 3 years ago. So that's my story!
BB: We know that for a couple of years, you were working for the BIT while pursuing your PhD in Economics at the University of Sydney. How did you manage to do both?
GB: It was pretty horrible to be honest, not fun at all. I barely slept. It's not a sustainable lifestyle; you can do it for a few years at most. BIT is an amazing place to work, and I can't think of another place I'd rather be right now, but it's also obviously very demanding. You work long hours, as in consulting, but you also have to apply academic rigour and come up with good trials. Doing a PhD at the same time, with someone who is considered the top professor for BE in Australia, wasn't exactly the easiest thing. But the good part is that the skills I was developing were the same. Being able to bring the skills I learnt at the BIT into the PhD turned out to be very valuable: I was able to run field experiments that ended up being a chapter of my thesis. And it worked the other way around as well: the expertise and skills I developed during my PhD helped me do my job here faster.
BB: What was the topic of your final thesis?
GB: My PhD was about pro-social behaviour. One chapter was about microcredit. I was working with an NGO that encouraged people to make micro-loans. What they found was that a lot of lenders would get their micro-loans paid back, but then wouldn't do anything with the money: they wouldn't re-lend it or cash it out, maybe because the micro-loans felt like a donation, or due to the hassle of having to choose a borrower again. So we ran an experiment where we sent an email to people saying: "Hey, you have some money left in your account that you're not using. You should do something with it". We tested 3 different variations:
(1) To the first group, we just provided information: “You have some money available, it’s yours. You can lend it again or cash it out.”
(2) To another group, we said the same, except we added that if they did nothing, we'd lend it again on their behalf.
(3) To the third group, we told them that we’d consider their money to be a donation to the NGO if they did nothing with it.
What we found is that in the donation-default group, more people would opt out and re-lend the money, whereas people in the loan-default group were more likely to go with the default. What we realised with this experiment is that people perhaps chose to join the micro-lending platform because they really like giving loans. If you all of a sudden tell them that you're going to treat their loan money as a donation, that conflicts with the very reason they joined the platform in the first place. So when you design defaults, you need to take into account people's past preferences and choices. That was one chapter.
The other two chapters were lab experiments on Corporate Social Responsibility (CSR). There are studies saying that companies that invest in CSR are better at attracting millennials, but I argued that even here there is a selection process. We conducted experiments and found that people always choose financial incentives over social incentives. But when companies provide the same level of financial incentives, those that provide the extra bit of CSR are more likely to be chosen. We did not find that social incentives per se get people to work harder; they cannot be a substitute for financial incentives.
BB: Which of your projects with the BIT did you like the most?
GB: At BIT I have been working on a large number of trials aimed at decreasing unemployment and improving job opportunities in Australia.
One of these trials aimed at increasing the uptake of government incentives for businesses to hire long-term unemployed job seekers. Essentially, the government says: "If you hire this person who has been unemployed for some time, I'll give you a bit of money". Surprisingly, the uptake was really low. What we realised is that these sorts of incentives were sending the wrong signal about the qualities of the job seeker. Employers would think: "What is wrong with this job seeker, that they have to pay me to hire him?". So we changed some aspects of how these incentives were promoted and administered, and framed them as a bonus to the businesses, more along the lines of: "You now have an opportunity to hire this person, and you will also be rewarded with a bonus if you do". We increased the uptake of these incentives, which in turn will lead to more people finding a job. And it's quite an interesting case, because it's a typical scenario where the government has a program that works on paper and makes sense, but if you don't take into account people's reactions and behaviour, then it's probably not going to work.
BB: What do you do in your free time and how do you cope with stress?
GB: When I was doing the PhD, there was no such thing as hobbies, but I've been playing the drums since I was very little. When I finished high school and started university, I initially enrolled in a course to study Biotechnology. I did it for about a year and then dropped out, because at that time I was playing with a band; we signed a contract with a label and went on tour in Central America, Italy, Spain, Germany… I thought I was going to be a musician for the rest of my life. But then I decided to enrol in Economics and get back into research. I promised myself that at some point I'd start again, and now that I've finished my PhD, I have a band here in Sydney!
When we think about improving student performance, we usually think of major changes: reforming the education system, improving infrastructure, hiring more qualified teachers, etc. All of these are important, of course, but could we be missing something? Perhaps something less costly and easier to implement?
Having someone that supports you in your activities is important. There may be times when you feel demotivated, and you just need someone to be there and check on you.
This is exactly the idea used by the Behavioural Insights Team (BIT) in the UK in their new trial in the context of education. They looked at students who, at the age of 16, had failed their Maths and English exams.
These students were asked to choose their own "study supporter" (e.g. a friend or a relative) – someone who would send them text messages encouraging them to study and revise for the upcoming exams.
Could this really make such a difference, you may ask?
The answer seems to be an unambiguous YES: students with a study supporter were 27% more likely to pass the exams. It seems that knowing someone cared about their results helped them find the motivation to work harder!
How can we prevent individuals from urinating in open areas?
In the Nudge TV show "The Power of Habit", Sille Krukow, a behavioural expert based in Denmark, designed a nudge to help Copenhagen Central Station. The problem was that many men would urinate in hidden corners outside the building, despite clean public toilets being located nearby.
Staff had to clean the area several times a day
According to Ms. Krukow, this phenomenon can be explained by the broken-windows theory of policing (Wilson and Kelling, 1982): when individuals observe others misbehaving, they tend to act in the same way, instead of doing the right thing.
The solution adopted in the show was to add a urinal to the area in question and stickers on the sidewalks signaling the direction and distance to the closest WC.
Implementation of the WC stickers
Urinal installed in the most critical zone
Around 500 people had been observed urinating in two corners of the station during the week before the experiment. After the intervention, half of the people did the right thing, i.e. they went straight to the urinal, while the other half started to urinate on the street but changed their behaviour once they saw the urinal. This means that the cleaning staff and station customers were spared 5,000 liters of urine a week!
To learn more about this nudge (and other experiments), you can find the episodes online here.
When are nudges most effective? A study by Pelle Guldborg Hansen, founder of the Danish Nudging Network, a non-profit organisation in Copenhagen, suggests that nudges may work only if they are in line with social norms. They tested two potential “social nudges” in partnership with the local government, both using symbols to try to influence choices:
– In one trial, green arrows pointing to stairs were put next to railway-station escalators, in the hope of encouraging people to take the healthier option. This had almost no effect.
– The other experiment used a series of green footprints leading to rubbish bins. These signs reduced littering by 46% during a controlled experiment in which wrapped sweets were handed out.
“There are no social norms about taking the stairs but there are about littering,” said Mr Hansen. Hence, perhaps existing social norms must be studied before designing a nudge!
How can we nudge people to donate to charities? There are many ways to do so, but we would like to share one in particular which is very simple and surprisingly powerful.
It seems that peer effects are an effective tool to change people’s behaviour. We want to do what people like us are doing. If teenagers have friends that smoke, they are very likely to start smoking themselves (and much more likely than if their parents smoke). The same holds for donations – if our colleagues donate, we would like to donate as well.
This is what was tested by the UK's Behavioural Insights Team in cooperation with Her Majesty's Revenue and Customs (HMRC). HMRC employees in Essex were sent postcards describing the donation efforts of their colleagues and encouraging them to do the same, to see if more people would start donating. However, the experiment went even further (and this is where it gets interesting). One group of employees received postcards featuring a picture of the donor in addition to the above information (see Picture 1). An insignificant change, you may think, but 6.4% of people signed up for the donation scheme in the picture condition, compared to 2.9% in the no-picture condition.
Picture 1: Postcard featuring a photo of the donor
Carrots or French fries? Fruit salad or a chocolate bar? These are the dilemmas that children face when choosing their meals in school lunchrooms. From convincing them that veggies will give them superpowers to ominous threats of what will happen to their bodies if they don’t eat healthy, there are few options left unexplored on how to get kids to eat right.
Unsurprisingly, when all else fails, BE swoops in and saves the day. Discarding classical solutions such as information campaigns, it offers a much simpler alternative: make the healthy options more tempting.
How? By changing their names. Several research teams in the US have tried this strategy in various school canteens and they found that making the names “seductive”, catchy or funny can induce children to eat healthier.
Hence, instead of offering carrots as “vegetable of the day” or simply “carrots”, call them “X-ray vision carrots” or “twisted citrus-glazed carrots” and you will increase the probability that children will pick them!
Peter Ayton is a Professor of Psychology, Associate Dean of Research and Deputy Dean of Social Sciences at City University of London. His research interests cover behavioural decision theory, risk, uncertainty, affect and well-being.
In May, he visited Bocconi University as a part of seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences) and he was kind enough to give us an interview.
BB: A cliché but necessary question: what got you interested in BE?
Peter Ayton: It was a bit of an accident. After graduating in Psychology (which itself was a lucky outcome, as I went to university from school without much idea of what Psychology was), I went on to do a PhD on the psychology of metaphorical language comprehension. At that time, there was almost no research that could explain how people understood metaphors, and I found myself completely intrigued by it. However, due to a lack of opportunities in this field, I applied for a job as a postdoctoral research assistant on a project investigating subjective confidence in forecasts. I was introduced to the world of decision research and have never looked back.
I became a Behavioural Economist the day that people decided that Psychologists who studied decision making could be called Behavioural Economists. In this way, I am a victim (or beneficiary) of a rebranding exercise. The term Behavioural Economics has been around for a long time but gained real momentum after Kahneman’s Nobel prize. I notice lots of my Psychologist colleagues describing themselves as Behavioural Economists and suspect that one reason they do this is because there is no Nobel prize in Psychology. Of course the use of this term also invites Economists to join in with the investigation of those behaviours that are not anticipated by classical Economics – and that is a tremendous benefit to the research. Before this time Economists and Psychologists viewed each other with suspicion. While governments around the world used to be advised by Economists – and no Psychologists at all – now we see both Economists and Behavioural Economists (aka Psychologists) in a position to influence policy.
BB: Could you tell us a little about your areas of research and the work you’ve done?
PA: After my PhD research on metaphors I did some work on memory retrieval, before working on judgment and decision making. I started out looking into subjective confidence in forecasts and then looked at probability judgment, the “calibration” of uncertainty judgments and decision making under uncertainty. I have also done work on risk perception and some cognitive illusions, e.g. the sunk cost effect and the hot hand fallacy.
More recently I have been studying human well-being, in particular people’s predictions of how happy they would be under certain circumstances, e.g. if they had a chronic illness, or suffered an amputation. These judgements can be compared with the experience of people under these circumstances. The comparison reveals that people appear to mis-predict the likely effects of these conditions on their own well-being. This has some implications for public policy – specifically how we determine how much money should be devoted to medical research or care for people suffering from particular health conditions. If the predictions of people without the conditions are used as a guide, the spending priorities will be different from the case where the evaluations of the people with the conditions are used.
I am also interested in the impact of computerised advice on decision making. Despite society’s increasing dependence on computerised tools which alert people to risks (e.g. cancers on X-ray images, weapons in air passenger luggage, spell checkers), the understanding of their potential harm is very limited. Sometimes decision aids cause decision errors: one example of this we have found is that when a computer alerting tool misses a “target” (e.g. cancer on X-ray, bomb in luggage, spelling error in your dissertation), then people can be less likely to spot the unprompted target than they would be if they weren’t using the decision support tool in the first place. A phenomenon called “automation bias” occurs whereby people become dependent on the computerised tool. That goes unnoticed because quite often it is easy to demonstrate that people detect more targets when they use the computer than when they don’t, and unfortunately the aggregate improvement conceals the particular errors. This kind of issue is at the junction between Computer Science and Cognitive Psychology and I have been collaborating with some Computer Scientists to try to understand how we can improve the influence of computers on people.
BB: Have you ever had a “professional failure” that was a turning point in your career?
PA: There are some who seriously propose a CV of failures as an endeavour (see this article), and mine would be much more extensive than my CV of successes. It’s unfortunate that failures are buried, because when you are starting out as a student, you tend to look at successful role models and think “How could I be as good as one of these guys?”, but actually they were pretty bad as well, they just don’t tell you.
Most of the things that I started doing, I didn’t finish. We just stopped because we realized we weren’t going anywhere, or it wasn’t interesting anymore. But sometimes those decisions can be rather questionable. I will give you one good example.
I did some research with a student a few years ago about how one can use the compromise effect and the attraction effect in moral reasoning. The attraction effect occurs when you increase the relative attractiveness of one option by introducing a new option that is clearly inferior to it. For example, more people prefer a nice pen to $6 if you add the option of a bad pen. The compromise effect is similar: when making a choice between, say, two cameras, a basic cheap one and a more elaborate expensive one, you may favour the cheaper one; but upon the introduction of a third, highly advanced but extremely expensive camera, you are likely to switch your preference to the one in the middle as a compromise. As for moral choices, take the trolley problem, where you have a runaway train coming down a track where five people are working. You could press a button to divert the train to another track, saving the five people but killing one person working on the other track. We tried to see whether the answers people give to these sorts of problems would be as malleable as preferences are: maybe the attractiveness of a moral option would vary if you put something really bad next to it. But it didn't "work"; it didn't change people's decisions. I remember being disappointed because I wanted to write a paper saying that people's moral decisions are really manipulable, that is, people like to think they've got moral sense but actually they can be manipulated. I realized only much later that I should have kept on with this, because if I had clearly established that there was no effect of context on moral choices, I could have written a more interesting paper about how context affects consumer preferences, but not moral choices.
BB: What would you say is your favourite nudge?
PA: I’m not sure I have a favourite nudge, I’m a bit suspicious of the idea of identifying behaviours as “nudges”. Many “nudges” referred to even in the Nudge book are actually behavioural phenomena discovered by social psychologists many years ago, long before anyone referred to them as nudges! But one that makes me smile is the one with stairs and escalator, and then there is a thin matchstick man pointing to the stairs and a fat matchstick man pointing up to the escalator. You need a bit of nerve to get on the escalator after seeing that.
BB: Is there any finding from behavioural research that surprised you? As in, where you found results contrary to what you expected or to what is accepted as intuitive?
PA: When I read Joshua Miller’s paper on hot hand, I was so excited that I couldn’t sleep for about 3 days.
(Note by BB: The "hot hand" is the belief that a person who has just experienced success in a task, such as shots in basketball, has a greater probability of success in the upcoming rounds of the task. The hot hand fallacy refers to the finding that such a belief is wrong – for basketball at any rate – and it has been cited as a prominent example of a cognitive illusion by many researchers. However, the paper by Joshua Miller and Adam Sanjurjo shows that there were flaws in the original statistical analyses and that the hot hand may indeed exist, and so there is no fallacy.)
This development is quite fantastic because the hot hand fallacy has been around since 1985 when it was originally discovered by a group including Tom Gilovich and Amos Tversky and (and, in decision research, you don’t get any higher than that – they are royalty). Famously, basketball coaches reacted by saying: “It’s all rubbish, I know that there is a hot-hand effect”. Some academics too have crashed and burned while trying to contest this phenomenon. Until I understood the Miller and Sanjurjo paper, I was quite certain that the case was rock solid. People have found that there are sequential dependencies in other areas, for other sports even. However, the case for a hot hand fallacy in Basketball has been scrutinised so much that it’s truly astonishing that somebody’s come up with such a game-changing analysis of the statistics. I got into trouble a few years ago, when I gave a talk called “The cognitive illusion illusion” which somewhat audaciously argued that while there are cognitive illusions, they are mainly suffered by Cognitive Psychologists who think that their subjects suffer from cognitive illusions, when they don’t. Feeling rather pleased with myself I had the nerve to give this talk at Princeton University with Daniel Kahneman in the room. He made it very clear he wasn’t very impressed with my argument which admittedly was a little overstated. If only Josh and Adam had got their paper out before, I might have been spared admonishment from Kahneman!
The discovery of cognitive illusions is of particular interest for the agenda of business schools. The idea that there is a problem with the way people think is popular for two reasons. Firstly, people need to learn how to run businesses rationally: you don't want business personnel making mistakes. But also, more disturbingly, maybe you could exploit the irrationalities of your competitors or consumers and take advantage of their vulnerability.
Judd B. Kessler is an Assistant Professor of Business Economics and Public Policy at The Wharton School, University of Pennsylvania. His research interests cover Experimental Economics, Public Policy and Market Design.
In March, he visited Bocconi University as a part of seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences) and we had the honour to interview him about his career.
B.BIAS: What would you say first ignited your interest in BE?
Judd Kessler: I first got interested in Economics in high school, where we had a semester of Economics and our teacher made us keep an Economics journal. We were supposed to write about things we saw in the world through the lens of how an economist would think about it. I remember vividly the first time I understood why in a movie theatre popcorn is so expensive. That kind of thinking made me excited about Economics. When I got to undergrad and then graduate school, the thing that drew me to BE was that, in standard Economic Theory, humans are very simple. You can organize how they behave just with mathematical equations. That did not seem realistic to me, particularly in domains that interested me such as charitable giving, organ donation, and volunteering. That made me wonder what drives this behaviour and set me on the path of doing BE.
BB: Could you tell us a little more about your own research interests and the work you’ve done?
JK: I’m interested in what people call pro-social behaviours, basically a personal sacrifice that has a benefit to other people. In particular, I’m interested in understanding how social forces influence pro-social behaviours. For example, when I learn that other people are behaving generously — say I learn that others are donating to charity or taking up jobs that pay less but are good for society — then I’m more likely to do the same. This kind of response really fascinates me.
BB: Which of your research did you enjoy the most and why?
JK: It is a tough question, because I do three kinds of research, three methods really. The first is analysis of pre-existing data. The second is laboratory experiments, which are controlled experiments where you recruit people who know they are in a study. The third is field experiments, which are experiments where you do interventions in the “field” with people who do not know that they are a part of an experiment. They are all fun for different reasons.
One of the projects I've done recently is with "Teach for America", an organization in the US that takes recent college graduates and people who are switching careers and helps them get into jobs as teachers. We did a study with them where we randomly added a line to the acceptance letter of people who had been admitted into the two-year program, saying: "Last year, more than 84% of admitted applicants made the decision to join the corps, and I sincerely hope you join them". We followed them for two years to see whether they stuck with the program. We were worried that we might get people who didn't really want to be in the program to say yes and then drop out immediately. But that didn't happen, and it was really cool: we did the experiment, added one small line, and got to see in the data that the effect persisted.
BB: Do you perceive any difference in the importance that BE has gained in the US versus other countries or regions (e.g. Europe)?
JK: I don’t think so, although I’m judging this based mostly on the extent to which academics are publishing BE work and the extent to which governments are using BE insights in their operations and practices. Both Europe and America have seen an increase over time. There are nudge units here in Europe, and also in the US, and there is lots of academic work done in both places. My hope is that it will continue to increase in both places.
BB: Maybe we perceive differences because of the heterogeneity of countries in Europe. For instance, here in Italy we see a few researchers working on BE, but it hasn’t picked up as much speed as in the UK.
JK: There is a lot of heterogeneity in the US as well. There are some universities in the US whose Econ departments don't do much behavioural work, so it's probably not unlike Europe, in the sense that there are some places with lots of great behavioural people and some places where it hasn't arrived yet. It could be that in equilibrium some universities don't do behavioural work.
BB: What do you think is the future of BE?
JK: While lots of the early work focused on questions such as “Do people have this bias?” or “Is it possible for this behavioural phenomenon to arise in practice?”, I think the next set of work that will come out of BE will be more focused on identifying where behavioural biases are particularly relevant in affecting behaviour. Regarding nudges, I think we will start to see models designed to understand why nudges influence behaviour. This should help us understand when nudges will be effective and also when they will increase welfare.
BB: Apart from academic research, what are the career options available in the field of BE?
JK: There are academic-style jobs doing research for think tanks and government organisations. I also think BE is quite useful in consulting jobs. There is a lot BE can say about how consumers are thinking or how firms should operate. Understanding BE can help consultants make better recommendations. Within firms, I think of departments focused on pricing, advertising, or marketing as places where behavioural knowledge could be quite valuable.
BB: Do you think there is a threat of companies abusing this?
JK: Like any tool, BE can be used for good or bad. Think of a nudge. When deciding to implement a nudge, you should worry about its welfare effects — you should only like it if it makes people better off. Once you’re asking those questions, you’re on the right track.
BB: What advice would you give to young students interested in BE? What courses should they take and what experiences should they try to gain?
JK: I would advise them to take both Econ classes – to understand the traditional Econ way of thinking – and Psychology classes, so that they can see both sides. If you just do behavioural work without knowing how psychologists and economists think about it, there's a gap in your understanding. What I ask my graduate students to do, when they are developing new behavioural ideas, is to think first about what would happen in a non-behavioural world: how would their intervention affect behaviour in the traditional, rational-agent model? Only then do we move on to how behavioural agents would respond.
BB: How did you feel after being included in Forbes' 30 Under 30 list?
JK: It was quite nice actually. No one in my family had done a PhD before, and it wasn't that common a thing among my friends either. So there was this sense that I was still a student, and my friends made fun of me for that, even after I got my first job as a professor. So it was nice to have some validation that research work could influence policy – and, as a bonus, my friends stopped making fun of me.