Interviews

Interview with Guglielmo Briscese

Guglielmo Briscese is a Senior Advisor at the Behavioural Insights Team (BIT) in Sydney, Australia. He did his Bachelor’s in Economics at Università Politecnica delle Marche in Italy, an MSc in International Development at the University of Glasgow, and a PhD in Economics at the University of Sydney. His research and work focus mainly on pro-social behaviours (e.g. charitable giving) and employment.

B.BIAS had the honour of interviewing him about his career and research!

 

B.BIAS: How did you get into Behavioural Economics and how did the work you did for international organisations lead you to it?

Guglielmo Briscese: When I was studying Economics I thought that Microeconomics was quite boring, and I didn’t see how it could have any practical implications, since people are just not what these economic models say. That was when one of my professors at university recommended Freakonomics to me. It was around the same time that Nudge came out as well. I kept Behavioural Economics (BE) as a side interest, because there was no Master’s degree in BE anywhere in Europe, and after a Bachelor’s degree in Economics I wasn’t ready to do a Master’s in Psychology or anything like that. One of my other interests was Development Economics, especially the work of Esther Duflo and others on Randomised Controlled Trials (RCTs). So I decided to enrol in a Master’s degree in International Development in the UK. After that I landed a job at the UN in Italy, in the evaluation office of IFAD (the International Fund for Agricultural Development). I thought that I’d try as much as I could to promote RCTs internally at IFAD, but there was still the BE element missing. Around 2010-2011, the UK government announced that they would launch a unit called the BIT, but it was still at a very early stage. I started looking up PhD programs in this area and decided to do a PhD at Sydney University, in Australia. That was more of a personal choice, because I really liked Sydney.

Then I also realised that one of the best-known Behavioural Economists, Robert Slonim, was based at Sydney University. He’s done a lot of research on blood donations and charitable giving. By pure coincidence, a member of the BIT also moved to Australia at the same time and had the idea of opening a Sydney office. I applied as soon as they opened it, and got in with the first wave of people. That was almost 3 years ago. So that’s my story!

BB: We know that for a couple of years, you were working for the BIT while pursuing your PhD in Economics at the University of Sydney. How did you manage to do both?

GB: It was pretty horrible to be honest, not fun at all. I barely slept. You don’t have a lifestyle that’s very sustainable; you can do it only for a few years at most. BIT is an amazing place to work, I can’t think of another place I’d rather work at right now. But it’s also obviously very demanding. You work long hours like in consulting, but you also have to apply academic rigour and come up with good trials. Doing a PhD at the same time with someone who is considered to be the top professor for BE in Australia wasn’t exactly the easiest thing. But the good part is that I was doing the same thing, as in the skills I was developing were the same. The fact that I could combine the skills that I learnt at the BIT and bring them into the PhD turned out to be very valuable. I was able to run field experiments that ended up being a chapter of my thesis. And obviously the other way around as well: I brought some expertise and skills that I developed during my PhD that helped me do my job faster here.

BB: What was the topic of your final thesis?

GB: My PhD was about pro-social behaviour. One chapter was about microcredit. I was working with an NGO that encouraged people to make micro-loans. What they found was that a lot of lenders would get their micro-loans paid back, but wouldn’t do anything with that money afterwards. They wouldn’t re-lend it or cash it out, maybe because the micro-loans felt like a donation, or due to the hassle of having to choose a borrower again. So we did an experiment where we sent an email to people saying “Hey, you have some money left in your account that you’re not using. You should do something with it”. We tested three different variations:

(1) To the first group, we just provided information: “You have some money available, it’s yours. You can lend it again or cash it out.”

(2) To another group, we said the same, except we added that if they did nothing, we’d lend it again on their behalf.

(3) To the third group, we told them that we’d consider their money to be a donation to the NGO if they did nothing with it.

What we found is that in the donation-default group, more people would opt out and re-lend the money, whereas people in the loan-default group were more likely to go with the default. What we realised with this experiment is that people perhaps chose to join the micro-lending platform because they really like to give loans. If you all of a sudden tell them that you’re going to treat their loan money as a donation, that conflicts with the very first reason why they joined the platform. So when you design defaults, you need to take into account people’s past preferences and choices. That was one chapter.

The other two chapters were lab experiments on Corporate Social Responsibility (CSR). There are studies saying that companies that invest in CSR are better at attracting millennials, but what I argued is that even here there is a selection process. We conducted experiments and found that people always choose financial incentives over social incentives. But when companies provide the same level of financial incentives, those that provide the extra bit of CSR are more likely to be chosen. However, we didn’t find that social incentives per se get people to work harder, so they can’t be a substitute for financial incentives.

BB: Which of your projects with the BIT did you like the most?

GB: At BIT I have been working on a large number of trials aimed at decreasing unemployment and improving job opportunities in Australia.

One of these trials aimed at increasing the uptake of government incentives for businesses to hire long-term unemployed job seekers. Essentially, the government says: “If you hire this person who has been unemployed for some time, I’ll give you a bit of money”. Surprisingly, the uptake was really low. What we realised is that these sorts of incentives were sending the wrong signal about the qualities of the job seeker. Employers would think: “What is wrong with this job seeker that they have to pay me to hire him?”. So we changed some aspects of how these incentives were promoted and administered, and we framed them as a bonus to the business, more along the lines of “You now have an opportunity to hire this person, and you will also be rewarded with a bonus if you hire this job seeker”. We increased the uptake of these incentives, which in turn will lead to more people finding a job. And it’s quite an interesting case, because it’s a typical scenario where the government has a program that makes sense on paper, but if you don’t take into account people’s reactions and behaviour, then it’s probably not going to work.

BB: What do you do in your free time and how do you cope with stress?

GB: When I was doing the PhD, there was no such thing as hobbies, but I’ve been playing the drums since I was very little. When I finished high school and started university, I initially enrolled in a course to study Biotechnology. I did it for about a year and then dropped out, because at that time I was playing with a band, we signed a contract with a label, and we went on tour in Central America, Italy, Spain, Germany… I thought I was going to be a musician for the rest of my life. But then I decided to enrol in Economics and get back into research. I promised myself that at some point I’d start playing again, and now that I’ve finished my PhD, I have a band here in Sydney!

Interviews

Interview with Prof. Peter Ayton

Peter Ayton is a Professor of Psychology, Associate Dean of Research and Deputy Dean of Social Sciences at City University of London. His research interests cover behavioural decision theory, risk, uncertainty, affect and well-being.

In May, he visited Bocconi University as part of a seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences), and he was kind enough to give us an interview.

BB: A cliché but necessary question: what got you interested in BE?
Peter Ayton: It was a bit of an accident. After graduating in Psychology (which itself was a lucky outcome, as I went to university from school not having much idea what Psychology was), I went on to do a PhD on the psychology of metaphorical language comprehension. At that time, there was almost no research that could explain how people understood metaphors and I found myself completely intrigued by it. However, due to a lack of opportunities in this field, I applied for a job as a postdoctoral research assistant on a project investigating subjective confidence in forecasts. That introduced me to the world of decision research, and I have never looked back.

I became a Behavioural Economist the day that people decided that Psychologists who studied decision making could be called Behavioural Economists. In this way, I am a victim (or beneficiary) of a rebranding exercise. The term Behavioural Economics has been around for a long time but gained real momentum after Kahneman’s Nobel prize. I notice lots of my Psychologist colleagues describing themselves as Behavioural Economists and suspect that one reason they do this is because there is no Nobel prize in Psychology. Of course the use of this term also invites Economists to join in with the investigation of those behaviours that are not anticipated by classical Economics – and that is a tremendous benefit to the research. Before this time Economists and Psychologists viewed each other with suspicion. While governments around the world used to be advised by Economists – and no Psychologists at all – now we see both Economists and Behavioural Economists (aka Psychologists) in a position to influence policy.

BB: Could you tell us a little about your areas of research and the work you’ve done?
PA: After my PhD research on metaphors I did some work on memory retrieval, before working on judgment and decision making. I started out looking into subjective confidence in forecasts and then looked at probability judgment, the “calibration” of uncertainty judgments and decision making under uncertainty. I have also done work on risk perception and some cognitive illusions, e.g. the sunk cost effect and the hot hand fallacy.

More recently I have been studying human well-being, in particular people’s predictions of how happy they would be under certain circumstances, e.g. if they had a chronic illness, or suffered an amputation. These judgements can be compared with the experience of people under these circumstances. The comparison reveals that people appear to mis-predict the likely effects of these conditions on their own well-being. This has some implications for public policy – specifically how we determine how much money should be devoted to medical research or care for people suffering from particular health conditions. If the predictions of people without the conditions are used as a guide, the spending priorities will be different from the case where the evaluations of the people with the conditions are used.

I am also interested in the impact of computerised advice on decision making. Despite society’s increasing dependence on computerised tools which alert people to risks (e.g. cancers on X-ray images, weapons in air passenger luggage, spell checkers), the understanding of their potential harm is very limited. Sometimes decision aids cause decision errors: one example of this we have found is that when a computer alerting tool misses a “target” (e.g. cancer on X-ray, bomb in luggage, spelling error in your dissertation), then people can be less likely to spot the unprompted target than they would be if they weren’t using the decision support tool in the first place. A phenomenon called “automation bias” occurs whereby people become dependent on the computerised tool. That goes unnoticed because quite often it is easy to demonstrate that people detect more targets when they use the computer than when they don’t, and unfortunately the aggregate improvement conceals the particular errors. This kind of issue is at the junction between Computer Science and Cognitive Psychology and I have been collaborating with some Computer Scientists to try to understand how we can improve the influence of computers on people.

BB: Have you ever had a “professional failure” that was a turning point in your career?
PA: There are some who seriously propose a CV of failures as an endeavour (see this article), and mine would be much more extensive than my CV of successes. It’s unfortunate that failures are buried, because when you are starting out as a student, you tend to look at successful role models and think “How could I be as good as one of these guys?”, but actually they were pretty bad as well; they just don’t tell you.

Most of the things that I started doing, I didn’t finish. We just stopped because we realised we weren’t going anywhere, or it wasn’t interesting anymore. But sometimes those decisions can be rather questionable. I will give you one good example.

I did some research with a student a few years ago about how one can use the compromise effect and the attraction effect in moral reasoning. The attraction effect occurs when you change the relative attractiveness of one option by introducing a new one that it is clearly superior to. For example, more people prefer a nice pen to $6 if you add the option of a bad pen. The compromise effect is similar – when making a choice between, say, two cameras – a basic cheap one and a more elaborate expensive one, you may favour the cheaper one. But upon the introduction of a third highly advanced but extremely expensive camera, you are likely to change your preference to the one in the middle as a compromise. As for the moral choices, take the trolley problem, where you have a runaway train coming down a track where five people are working. You could press a button to divert the train to another track and save the five people, but that would kill one person working on the other track. We tried to see if the answers that people give to these sorts of problems would be similarly malleable, as preferences are – maybe the attractiveness of a moral option would vary if you make something really bad close to it. But it didn’t “work”; it didn’t change people’s decisions. I remember being disappointed because I wanted to write a paper saying people’s moral decisions are really manipulable, that is, people like to think they’ve got moral sense but actually they can be manipulated. I realised only much later that I should have kept on with this, because if I had clearly established that there was no effect of context on moral choices, I could have written a more interesting paper about how context does affect consumer preferences, but not moral choices.

BB: What would you say is your favourite nudge?
PA: I’m not sure I have a favourite nudge; I’m a bit suspicious of the idea of identifying behaviours as “nudges”. Many “nudges” referred to even in the Nudge book are actually behavioural phenomena discovered by social psychologists many years ago, long before anyone referred to them as nudges! But one that makes me smile is the one with the stairs and the escalator, where there is a thin matchstick man pointing to the stairs and a fat matchstick man pointing up to the escalator. You need a bit of nerve to get on the escalator after seeing that.

(Note by BB: Matchstick man nudge)

BB: Is there any finding from behavioural research that surprised you? As in, where you found results contrary to what you expected or to what is accepted as intuitive?
PA: When I read Joshua Miller’s paper on the hot hand, I was so excited that I couldn’t sleep for about 3 days.

(Note by BB: “Hot hand” is the belief that a person who has just experienced success in a task, such as shots in basketball, has a greater probability of success in the upcoming rounds of the task. The hot hand fallacy refers to the finding that such a belief is wrong – for basketball at any rate – and it has been cited as a prominent example of a cognitive illusion by many researchers. However, the paper by Joshua Miller and Adam Sanjurjo shows that there was a flaw in the original statistical analyses and that the hot hand may indeed exist, so there may be no fallacy after all.)
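
(Note by BB: to make the statistical point concrete, here is a minimal simulation sketch of our own; it is not taken from the interview or from the Miller and Sanjurjo paper, and the parameter values, as well as the function names prop_after_streak and simulate, are illustrative assumptions. It simulates a “coin-flip shooter” with no hot hand at all and computes, within each game, the proportion of hits immediately following three consecutive hits. Averaged across games, that proportion comes out noticeably below the true 50% hit rate. This is the selection bias Miller and Sanjurjo identified: a real player whose post-streak hit rate merely equals his overall rate is in fact beating this biased benchmark.)

```python
import random

def prop_after_streak(shots, streak_len=3):
    """Within one game: proportion of hits among shots that
    immediately follow `streak_len` consecutive hits."""
    follows = [shots[i] for i in range(streak_len, len(shots))
               if all(shots[i - j] for j in range(1, streak_len + 1))]
    return sum(follows) / len(follows) if follows else None

def simulate(n_games=50_000, n_shots=100, p_hit=0.5, streak_len=3, seed=42):
    """Average the within-game proportion over games that contain
    at least one qualifying streak (illustrative parameters)."""
    rng = random.Random(seed)
    props = []
    for _ in range(n_games):
        shots = [rng.random() < p_hit for _ in range(n_shots)]
        p = prop_after_streak(shots, streak_len)
        if p is not None:  # skip games with no streak to condition on
            props.append(p)
    return sum(props) / len(props)

if __name__ == "__main__":
    # Even with no hot hand, this prints a value noticeably below 0.5.
    print(f"Average P(hit | previous 3 shots were hits): {simulate():.3f}")
```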

This development is quite fantastic because the hot hand fallacy has been around since 1985, when it was originally discovered by a group including Tom Gilovich and Amos Tversky (and, in decision research, you don’t get any higher than that – they are royalty). Famously, basketball coaches reacted by saying: “It’s all rubbish, I know that there is a hot-hand effect”. Some academics too have crashed and burned while trying to contest this phenomenon. Until I understood the Miller and Sanjurjo paper, I was quite certain that the case was rock solid. People have found that there are sequential dependencies in other areas, for other sports even. However, the case for a hot hand fallacy in basketball has been scrutinised so much that it’s truly astonishing that somebody has come up with such a game-changing analysis of the statistics. I got into trouble a few years ago, when I gave a talk called “The cognitive illusion illusion”, which somewhat audaciously argued that while there are cognitive illusions, they are mainly suffered by Cognitive Psychologists who think that their subjects suffer from cognitive illusions when they don’t. Feeling rather pleased with myself, I had the nerve to give this talk at Princeton University with Daniel Kahneman in the room. He made it very clear he wasn’t very impressed with my argument, which admittedly was a little overstated. If only Josh and Adam had got their paper out earlier, I might have been spared the admonishment from Kahneman!

The discovery of cognitive illusions is of particular interest for the agenda of business schools. The idea that there is a problem with the way people think is popular for two reasons. Firstly, people need to learn how to run businesses rationally – you don’t want business personnel making mistakes. But also, and more disturbingly, maybe you could exploit the irrationalities of your competitors or of consumers and take advantage of their vulnerability.

I find it very exciting that maybe people have more competence than has been assumed, that they do know what they’re doing after all, and that perhaps some cognitive illusions have been slightly overplayed or misinterpreted. Take, for instance, the well-known sunk cost fallacy: while there is an enormous amount of evidence that humans commit the fallacy, it has been demonstrated in several studies that animals appear not to be susceptible to it. There is evidence that animals do violate some rational principles – for example, bees’ preferences for flowers violate transitivity – but animals live in a tough world, and if they behave in a markedly irrational way, evolutionary pressures will probably pick them off. So why, especially if animals don’t, do humans commit the sunk cost fallacy? Aristotle is remembered for claiming that what distinguishes humans from other animals is rationality. That may be true, but perhaps he got it the wrong way around!

Interviews

Interview with Prof. Judd Kessler

Judd B. Kessler is an Assistant Professor of Business Economics and Public Policy at The Wharton School, University of Pennsylvania. His research interests cover Experimental Economics, Public Policy and Market Design.

In March, he visited Bocconi University as part of a seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences), and we had the honour of interviewing him about his career.

B.BIAS: What would you say first ignited your interest in BE?

Judd Kessler: I first got interested in Economics in high school, where we had a semester of Economics and our teacher made us keep an Economics journal. We were supposed to write about things we saw in the world through the lens of how an economist would think about them. I remember vividly the first time I understood why popcorn is so expensive in a movie theatre. That kind of thinking made me excited about Economics. When I got to undergrad and then graduate school, the thing that drew me to BE was that, in standard Economic Theory, humans are very simple: you can capture how they behave just with mathematical equations. That did not seem realistic to me, particularly in domains that interested me such as charitable giving, organ donation, and volunteering. That made me wonder what drives this behaviour and set me on the path of doing BE.

BB: Could you tell us a little more about your own research interests and the work you’ve done?

JK: I’m interested in what people call pro-social behaviours, basically a personal sacrifice that has a benefit to other people. In particular, I’m interested in understanding how social forces influence pro-social behaviours. For example, when I learn that other people are behaving generously — say I learn that others are donating to charity or taking up jobs that pay less but are good for society — then I’m more likely to do the same. This kind of response really fascinates me.

BB: Which of your research did you enjoy the most and why?

JK: It is a tough question, because I do three kinds of research, three methods really. The first is analysis of pre-existing data. The second is laboratory experiments, which are controlled experiments where you recruit people who know they are in a study. The third is field experiments, which are experiments where you do interventions in the “field” with people who do not know that they are a part of an experiment. They are all fun for different reasons.

One of the projects I’ve done recently is with “Teach for America”, an organization in the US that takes recent college graduates and people who are switching careers and helps them get into jobs as teachers. We did a study with them where we randomly added a line to the acceptance letter of people who had been admitted into the two-year program, saying: “Last year, more than 84% of admitted applicants made the decision to join the corps, and I sincerely hope you join them”. We followed them for two years to see whether they stuck with the program. We were worried that we might get people who didn’t really want to be in the program to say yes and then drop out immediately. But that didn’t happen, and it was really cool — we did the experiment, added one small line, and we got to see in the data that the effect persists.

BB: Do you perceive any difference in the importance that BE has gained in the US versus other countries or regions (e.g. Europe)?

JK: I don’t think so, although I’m judging this based mostly on the extent to which academics are publishing BE work and the extent to which governments are using BE insights in their operations and practices. Both Europe and America have seen an increase over time. There are nudge units here in Europe, and also in the US, and there is lots of academic work done in both places. My hope is that it will continue to increase in both places.

BB: Maybe we perceive differences because of the heterogeneity of countries in Europe. For instance, here in Italy we see a few researchers working on BE, but it hasn’t picked up as much speed as in the UK.

JK: There is a lot of heterogeneity in the US as well. There are some universities in the US that have Econ departments that don’t do much behavioural work, so I think that’s probably not unlike Europe, in the sense that there are some places where lots of great behavioural people are and some places where it hasn’t come in yet. It could be that in equilibrium some universities don’t do behavioural work.

BB: What do you think is the future of BE?

JK: While lots of the early work focused on questions such as “Do people have this bias?” or “Is it possible for this behavioural phenomenon to arise in practice?”, I think the next set of work that will come out of BE will be more focused on identifying where behavioural biases are particularly relevant in affecting behaviour. Regarding nudges, I think we will start to see models designed to understand why nudges influence behaviour. This should help us understand when nudges will be effective and also when they will increase welfare.

BB: Apart from academic research, what are the career options available in the field of BE?

JK: There are academic-style jobs doing research for think tanks and government organisations. I also think BE is quite useful in consulting jobs. There is a lot BE can say about how consumers are thinking or how firms should operate. Understanding BE can help consultants make better recommendations. Within firms, I think of departments focused on pricing, advertising, or marketing as places where behavioural knowledge could be quite valuable.

BB: Do you think there is a threat of companies abusing this?

JK: Like any tool, BE can be used for good or bad. Think of a nudge. When deciding to implement a nudge, you should worry about its welfare effects — you should only like it if it makes people better off. Once you’re asking those questions, you’re on the right track.

BB: What advice would you give to young students interested in BE? What courses should they take and what experiences should they try to gain?

JK: I would advise them to take both Econ classes — to understand the traditional Econ way of thinking — and Psychology classes, so that they can see both sides. If you just do behavioural and you don’t know the way psychologists and economists think about it, there’s a gap in understanding. What I ask my graduate students to do when they are developing new behavioural ideas is to think first about what would happen in a non-behavioural world. How would their intervention affect behaviour in the traditional, rational-agent model? Only then do we move on to how behavioural agents would respond.

BB: How did you feel after being included in the Forbes 30 Under 30 list?

JK: It was quite nice actually. No one in my family had done a PhD before and it wasn’t that common a thing among my friends, either. So there was this sense that I was still a student and in school and my friends made fun of me for that, even after I got my first job as a professor. So, it was nice to have some validation that research work could influence policy — and, as a bonus, my friends stopped making fun of me.