Intro: You are listening to Foreseeable, a production of Global-is-Asian, the flagship thought leadership digital platform of Singapore's Lee Kuan Yew School of Public Policy. Each episode, we invite an expert for a conversation about their field of study or experience, to find out what they foresee happening in the future.
I’m David Austin, the host for this episode.
We sit down with Associate Professor Leong Ching, whose work delves into the complexities of apparently irrational environmental behaviour, exploring public attitudes and decisions around topics like recycled water, water utilities, and river management. She uses approaches that examine narratives, perceptions, and stories to understand collective public behaviour and environmental identities.
Today, however, we're focusing on a fascinating recent study she led that investigated a core aspect of public behaviour with significant policy implications: honesty.
Her study offers promising insights into designing policies that can encourage pro-social behaviour without coercion.
David Austin: Thank you, Professor Leong, for joining us and sharing your time with us. Your recent study investigated methods to induce honesty in public behaviour. Could you elaborate on why understanding the psychological drivers of honesty is crucial for effective policymaking, and what were you hoping to find with the study?
Professor Leong Ching: So, part of the motivation was this intrigue about whether we really know people are honest or not. And as an experimenter, you can't really see, right? I mean, once you see, you sort of spoil the experiment.
So we thought of a really elegant way to do it, in that we don't know whether any specific person is honest or not. But if the sample is large enough, you kind of know whether they're reporting accurately or they are gaming it.
I think if I asked you whether you believe that most people behave virtuously, you would say no, and your suspicion would be correct. But this experiment shows that even though most people don't behave virtuously, you can help or induce quite a lot of them to do so, and it's cheap and easy.
And it's their choice; you don't force them. You don't blackmail them or threaten them or anything. It's a choice they themselves can make to behave more virtuously. How do you do that?
So this is a dice roll experiment, and we did it over a large sample size. So, one person gets a dice, and they can use their own dice or they can roll it on a computer.
They can do anything they like, and we really have no idea what they roll. Whatever they roll, they just report it. And we'll tell them: if you roll one, you don't get any money, but if you roll two, you get a dollar. If you roll six, you get $5. So, the incentive is for them to say they rolled six.
Then you get the maximum amount of money, and nobody knows what you rolled. So that's the experiment. And it's a great way to see whether… Well, you can't tell whether any specific person is honest, but you can tell overall with 35,000 people, whether they were honest or not.
In mathematical terms, if everyone were perfectly honest, you would get an average report of 3.5. That's the reports averaged out. And if everybody were rational and self-interested, everyone would claim the maximum $5; that's when they lie to the maximum. So, it was really surprising to us first that, well, people were not perfectly honest, but equally surprising that people were not maximally dishonest.
And you thought, well, in for a penny, in for a pound, right? If you were dishonest, you might as well go the whole hog and report that you rolled six. But people don't; they report slightly more than what they rolled. So the average is not 3.5, but it's not the maximum either.
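The aggregate logic she describes can be sketched in a short simulation. This is a minimal illustration, not the study's code: it assumes a fair six-sided die, the payoffs described above ($0 for a one, up to $5 for a six), and a stylised "liar" who always reports a six. With enough participants, the average report alone reveals how much lying is going on, even though no individual can ever be caught.

```python
import random

def run_session(n_participants, p_liar):
    """Simulate one session of the die-roll task.

    Each participant privately rolls a fair die. A 'liar' always
    reports a six (the maximum payoff); everyone else reports
    truthfully. Returns the average reported roll.
    """
    reports = []
    for _ in range(n_participants):
        roll = random.randint(1, 6)  # the true, unobserved roll
        reports.append(6 if random.random() < p_liar else roll)
    return sum(reports) / len(reports)

random.seed(0)
honest_avg = run_session(35_000, p_liar=0.0)  # close to 3.5
cheat_avg = run_session(35_000, p_liar=1.0)   # exactly 6.0
mixed_avg = run_session(35_000, p_liar=0.3)   # somewhere in between
```

Perfect honesty pins the average near 3.5 and maximal lying pins it at 6, so an observed average between the two estimates the share of liars in the sample without identifying any one of them.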
David Austin: And tell me about the opt-in monitoring portion of the study and, you know, how did you come up with that, what were you trying to find, and what did it show?
Professor Leong Ching: So, our first curiosity was whether people were honest, and it turned out they were not. So as a policy school, we kind of are always looking for: 'So what?' What does it mean for the government? So what if people were not honest? So, the obvious motivation for public value is to help people to become honest.
So I was thinking what would be a simple way, and it can't be that expensive. And so I was looking for a simple, cheap way to help people become more honest. I thought, why not offer people the opportunity to roll the dice on the NUS server? Remember the alternative was for them to use their own server, to use real dice, and nobody knew that you didn’t actually have to use the dice, right? You could just roll it in your head and I wouldn't know.
But the thing is, I offered this choice, and the instructions were, you can roll it on the NUS server where your dice roll 'may be' observed by the researchers, and we specified that ‘may be.’
Because I wasn't going to stand around and look at 35,000 people rolling dice. Even though we specified 'may be,' in the end we didn't observe them; we just raised the possibility of it. And we were surprised that so many people actually opted to roll on the NUS server.
So more than 80 per cent of people, about 85 per cent, said, "Okay, we'll do it on the NUS server." And as a result, they were incredibly honest. I mean, not perfectly so, but still significantly more honest. So yeah, it was a revelation to me.
David Austin: Yeah. That's really fascinating. Thank you for explaining that. Now we can delve a little bit deeper; first, could you just elaborate on why understanding the psychological drivers of honesty is crucial for effective policymaking?
Professor Leong Ching: This experiment shows that there is a possibility of an intervention helping people to become more honest. But you're right. Understanding the mechanism is so important. But I did want to say, as a policy scholar, I was interested in the theory of policy behaviours.
In policy schools, we often teach about public goods and how governments design certain things and members of the public will comply, or not comply, depending on the legitimacy they perceive. There is a recognition now that there's a large behavioural element in our relationship with government policies, meaning things like biases, unconscious behaviours, and so forth.
That's such an interesting part of policymaking, and I thought this was a great way to demonstrate that.
David Austin: Now, in addition to being able to opt in for monitoring, you tested whether luck would affect the outcome as well. What did you find from that, and how did luck affect the study?
Professor Leong Ching: So, luck to me is always the neglected variable. Nobody ever thinks about it in policymaking; it’s usually framed as credit claiming or blame distancing. Luck is such an incredibly large part of our human experience, and we don't give it enough attention.
So I thought that luck was a way to achieve moral distance, like saying, "Well, you know, I'm not really that dishonest. There's some element of randomness in the universe that allows me to do this." And I thought if we introduced an element of luck, we would see more dishonesty. So, the lottery is what you're talking about?
Yeah. So actually I was surprised that luck didn't correlate with increased dishonesty. That means if you say, “Well, it's not my fault, it's just luck that allowed me to have this windfall,” that's a way of giving permission to behave badly. But it turns out luck didn't make a difference.
David Austin: Very interesting. So, with that in mind, you know, the study suggests that people have an inherent desire to be seen as honest. Could you discuss the potential motivations behind this desire? And is it driven by a need for social recognition or self-recognition, or perhaps a combination of factors?
Professor Leong Ching: Yeah, so this is really interesting about social recognition, because in fact, there is no social recognition, like nobody knows. It's really between the person and God. And they don't know me from Adam. They don't know who I am. There's no reason for them to want to appear good in front of some faceless researcher.
So it could be that they know that we're conducting this experiment in their country, and they know it's the National University of Singapore, because it's in the consent form. And it could be that they wanted to appear [honest], if they were not. I mean, this study wasn't done in Singapore at all.
It's like nine different countries. Or it could be identity theory: "I want to be a good person, and this is a pre-commitment device. It helps me to become the person that I really want to be."
So I have a follow-up study called "Our Better Angels: Nudging The Better Angels Inside Ourselves", and that's on civil servants. It's a suspicion, really, that maybe quite a number of us want to behave pro-socially. In this instance, I couldn't tell whether it's an innate motivation or a wish to appear good or whatever.
And in a very pragmatic way, as a public policy scholar, maybe it doesn't matter; maybe it's both, maybe it's neither. But as far as the government is concerned, we look at the intervention, look at whether it works, and see the cost-benefit analysis. One of the biggest gains to me is this: there's a lot of discussion now about nudging and whether it preserves people's autonomy, which is really important from a political economy point of view, because you want to respect the person's choice.
So, this specific thing is opt-in. It isn't someone hovering over you; it's you. You decide whether you want to.
David Austin: So, there's a level of autonomy for the participants. It was not mandatory; it was opt-in.
Professor Leong Ching: Very much so, yeah.
David Austin: Do you think there's any element of honest people choosing to opt in and dishonest people selecting to not opt in?
Professor Leong Ching: It could be, but it's 85 per cent [of participants who opted in for monitoring]. It's pretty good news either way. I get your question about whether it's a biased sample, whether those who were going to cheat opted out. But it's 85 per cent.
David Austin: So that again, informs policymakers. Either way, it's something that they can use as they're going forward.
Speaking of nudges, I think in the study you also mentioned how previous studies found that a pledge, like an honesty pledge, didn't seem to nudge people in that direction. Can you comment on that just a little bit?
Professor Leong Ching: Yeah. This is the excitement and the curse of being a behavioural economist. A lot of times, experiments work, or they don't work, and you really have to do the experiment; it’s an empirical question rather than an 'a priori' question. So, I'll give you an example: recently I gave a class to a room full of cybersecurity experts, and there are many organisations where the cybersecurity experts send out phishing emails, like fake phishing, to train the population, and NUS does it too. Like, we’d say, “Your colleague David sent you this email,” with a document for you to click on, and you're not meant to click it. Because otherwise, you know, you fall prey.
And then I asked the cybersecurity experts, do you know what the evidence is? Does it in fact work? And it was like, no, but it must do. It must, right? I mean, the idea is that I've inoculated you. But the evidence actually shows it's unclear that it works.
And of course the work is ongoing on whether it's unclear because the training is too good or not good enough, or because people didn't have enough information, things like that. But it's an empirical question, and you really have to test it before you run away with your intuition about how people ought to behave.
David Austin: Kind of along those same lines, this current study was conducted in an abstract setting using a die roll. So what are the challenges and considerations when trying to translate that abstract finding to a more realistic policy setting, such as tax reporting or health declarations? How do you make that connection?
Professor Leong Ching: There were two things. One is it gives us hope that an intervention that simple can increase honesty to that level. And remember what the trade-off is. The trade-off is self-interest. So, what I'm asking people to do is to trade off specific gains to themselves against some abstract notion of being honest.
And you're right in that it's a clean, abstract case, but the fact is that it's real money. It's not fake, and it's real self-interest. So, I think that gives me hope that there is a way to tune up pro-social behaviours.
David Austin: Is that what the study refers to as the endogenous driver for transparency beyond preventing malpractice? And could you explain that further and maybe discuss how policymakers can leverage this motivation for transparency in their policy designs?
Professor Leong Ching: Yeah. With hindsight, right, I used the word transparency kind of loosely, okay? Often when we say transparency, we mean it as a demand on governments to tell us everything.
We think there's something to hide. But in this case, transparency just means the ability to be seen a certain way — and you'll see in the paper we had written it as a specific utility function — that we as human beings who live in a group derive a certain utility from the very fact that we are seen to be honest. And I mean, it's a bit George Berkeley, right?
If nobody sees you, are you really seen? But I think it's a divide. I mean, that specific sentence I told you about, that you may be seen by the researcher, already works.
David Austin: Very fascinating. The research opens up several avenues for future exploration, including the relationship between external rewards and the desire for transparency, even if that's loosely stated, and the applicability of opt-in monitoring. You already talked about another study you're in the middle of, but based on these findings, what are some of the most pressing questions that future research could address to expand on this and learn more?
Professor Leong Ching: The two follow-up studies I'm doing are on trade-offs, right? The first is on civil servants. So, there's lots of public service motivation literature, like what kinds of people become civil servants? Are civil servants inherently different, or are they driven by a certain ethos?
And a lot of times the assumption is yes. And lots of experiments have tried to see: are you a different sort of chap because you've picked public over private service, right? Many of these experiments are surveys on prospective students, and the results are mixed. Some go one way and some go the other. And I thought, because I have a baseline general population study, it would be super to repeat this on a specific sample, just on civil servants.
David Austin: Okay.
Professor Leong Ching: Yeah. So, this is not published yet. It'll be interesting for me to hear your guess. So, I have maybe 3,000 civil servants in a specific country, which at the moment shall remain unnamed.
David Austin: Okay.
Professor Leong Ching: And I have the general population figures, which you've seen. Do you think that the civil servants are more or less dishonest than the general population?
David Austin: I would think they're less dishonest.
Professor Leong Ching: Less dishonest?
David Austin: Yes. More honest. Yeah.
Professor Leong Ching: Yeah. You're right.
David Austin: Okay. Yeah.
Professor Leong Ching: So, when nobody's watching, which means in the unmonitored dice roll, just between us and God, our conscience, civil servants were indeed significantly more honest, which sort of answers the question. I mean, it doesn't get away from the biased sample: are you a civil servant because you are more honest, or do you go into service and then become more honest because of the institutional pressures, conformity, etc.?
Again, this doesn't answer the question, but the fact remains, if you're a civil servant — but nobody's watching — that means in an unmonitored dice roll condition, you're significantly more honest.
So, the second question is, what about the intervention? The intervention of opt-in monitoring: will it work? And the answer is strange: it does work.
David Austin: Okay.
Professor Leong Ching: When we offered them this option, again, the vast majority, more than 80 per cent, chose to opt in. And as a result, they're more honest, but not perfectly honest. And not more honest than the general population.
David Austin: Oh, okay.
Professor Leong Ching: Yeah.
David Austin: Another interesting finding.
Professor Leong Ching: It's so fun, right? But basically, what it shows us is that honesty levels are higher within the civil service, but this intervention really is the maximum tune-up you're going to get. That's as good as we're going to get. You're not going to get angels.
But it was fascinating to me to find out that really policymakers have such power when we design policies because of these rules that regulate behaviour. They really do make a difference in the outcome. And we can use them to achieve the highest levels of public good.
David Austin: Very good. Are there any other future questions that you think would be good for follow-up research that you would like to discuss?
Professor Leong Ching: Yeah. So, I'm in the middle of a messy one right now. But one of my big dreams is to make a big difference in environmental behaviour.
David Austin: Okay.
Professor Leong Ching: And you know, this honesty experiment is a general level of honesty because who are they cheating? If they were dishonest, whoever they're cheating is basically NUS.
Like, if you report five when you rolled a one, you get more money. And you shouldn't, because you kind of lied for it.
But my motivation for this second study is, if the person who loses is not a nameless NUS researcher, but let's say the environment, or a stranger, would they be more or less dishonest?
David Austin: Okay.
Professor Leong Ching: So, I really wanted to find that out. I haven't found out yet. I've done a few small experiments, but I couldn't get a clean design that I was happy with. But wouldn't it be fun?
David Austin: Yeah. That's a very interesting question. Yeah.
Professor Leong Ching: If the other party, the other side-
David Austin: -was a real person or a real entity that they can visualise or relate to, it might really change their behaviour.
Professor Leong Ching: Right. So, I mean, I did the usual suspects: charity, stranger, and the environment.
David Austin: Okay.
Professor Leong Ching: I mean, I'm not sure the environment is a usual suspect, but in my books it ought to be. Because we often don't cost it in, and the environment really, really ought to be a player.
David Austin: Absolutely, yes. I firmly believe that as well. And now, in our earlier conversation, you've said a few times to the effect that whether or not we know the psychological drivers, the effect is the same.
You still see that 85 per cent of people opted in and that it does affect honesty. But that said, what are the key takeaways that you think someone should take from this study?
Professor Leong Ching: So, the first is that this specific intervention is a useful nudge towards pro-social behaviour. That's the first thing I wanted to say. And the second is that, given the rise of what I call maliciousness and inherent vice in the behavioural state, which is something we're seeing all across the world now, and the limits of rational technocratic calculations in influencing public policy, any intervention that encourages pro-social behaviours, I think, ought to be taken pretty seriously.
David Austin: Very good.
What challenges do policymakers need to overcome to design policies that have a good chance of being successful while inducing honesty in the policy taker?
Professor Leong Ching: I think I can answer that in a general way. Like, some behavioural studies are more directly relevant than others. So, I'll give you an example.
So, when COVID first struck — and that was before mRNA vaccines became so widespread and available — that was new technology. So, from that research, it shows that social conformity is an important driver of acceptance of new science. And I thought, okay, from there, how could I help policymakers in influencing public acceptance of new science like mRNA vaccines?
Again, I did quite a large sample size study on whether information would help or, you know, what is it that would help?
As it turned out, a big variable that reduced vaccine hesitancy was social conformity. And all you need is 20 per cent; at 20 per cent we saw a big jump. Which means if at least 20 per cent of people accepted the specific technology, in this case mRNA technology, then huge numbers would come. So, social conformity helps; if you were a health minister, you would use that. You would say how many people have already done it, rather than hammering home 'the science behind it is safe' or harping on the economic incentives. Social conformity per se is so powerful.
So, that's the general point: behavioural experiments are useful in these applications, and the specific insight on social conformity is not just for recycled drinking water. It applies to new science in general, like mRNA vaccines.
So, specific to your questions on honesty: in what circumstance do I think this experiment will be useful? In any circumstance where you need people to tell you the truth.
David Austin: Okay.
Professor Leong Ching: Yeah. Whether it's asking if they have been travelling to other countries, things like that.
David Austin: Okay. So, in that health context or, you know, or tax filing, things like that.
Professor Leong Ching: You give them the option to appear to be honest. Right. And then, yeah. I mean, I'm not sure it works in all circumstances. But, I mean, it's worth a try.
David Austin: And you also mentioned that it's very low cost to offer this, especially if really you only have to say you may be monitored. That's like zero cost almost, you know?
Professor Leong Ching: It's true. And even if you were more diligent than I was in this experiment, monitoring is so cheap, right? You just basically do an audit of different things. You don't have to stay at your desk for 24 hours. You just pick a sample.
David Austin: So how does it work though, as far as when a study such as this is published, how does it filter through the policy making world? Does, first, some institution somewhere have to institute this programme, and then they publish a case study, and then other institutions follow suit? Is that how it works? Do you have any insight into that?
Professor Leong Ching: Yeah, all sorts. There is nothing that… you sort of try everything. So, I guess your question is something that the university's thinking about for all this research: what is its impact in the real world?
And your question is how do we impact the real world? And my answer is in every possible way we can. I present this at conferences, so I'm gonna present this over the summer in the UK at a behavioural public policy conference. We publish it, and we make it open access to any policy maker.
I haven't yet, but I will try and write an op-ed about it so it becomes general, plain language. Which is our duty, as well as really good fun. I think if a government agency wants to implement it, it really has to do a small pilot, like 60 people or so, and see how that lands. Because we are a public policy school, any policymaker that comes to us with a specific problem, we really roll up our sleeves.
David Austin: And offer all the solutions that you have.
Professor Leong Ching: But that's the point, right?
David Austin: Yeah, exactly.
Professor Leong Ching: That's the point of the work.
David Austin: And I think there are a lot of use cases in the private sector as well that could adopt this. In fact, I believe I saw a website devoted to HR — human resources — that had cited your paper and was talking about it as something they can use in the private sector.
Do you see that happening as well, with studies that are born in the public sector spilling over into the private sector?
Professor Leong Ching: Very much so. In fact, the private sector is really at the forefront of behavioural science. You only have to walk into a supermarket to see how sophisticated their thinking is, right? Yeah. So, in fact, I think the private sector, especially in environmental behaviour, they really are very invested in how to make people consume less energy, for example. And, also, you’re right, to behave honestly in a large organisation is so important. It's a function of norms as well as intrinsic motivation. But if you have a simple institution like opt-in monitoring, it would be so useful to do it.
David Austin: But are there any potential pitfalls that policymakers need to keep in mind if they're going to design a new policy and perhaps include opt-in monitoring? How do you predict, or try to avoid, any negative spillovers?
Professor Leong Ching: I think the instinct will automatically kick in. I wasn't sure about using the word induce.
David Austin: Okay.
Professor Leong Ching: I think when you say the word monitoring or surveillance, they are too close.
David Austin: Okay. Yeah.
Professor Leong Ching: So, I think I would be very careful, and maybe in the end I used the word induce just because it's an accurate and precise term. It just describes it exactly. But if you are a public official, I think you'll be careful about that, and you would really stress the voluntary nature of it. If I were a policymaker, a useful frame for it would be a pre-commitment device, though I didn't test for it as a pre-commitment device.
David Austin: Okay.
Professor Leong Ching: But you could. You could prime the test towards honesty. But primarily, I'm not a psychologist, I'm an economist. I'm really studying microeconomic behaviour and incentives. My questions have already been answered, so that road has ended for me, but it leads on. As you rightly asked, "Why and how do you behave? What is the specific switch that you're turning on in the psyche of the person?"
David Austin: Yeah. Very good. Well, thank you so much. This has been a fascinating discussion. I really enjoyed it.
Professor Leong Ching: Thank you very much, David.
David Austin: And thank you all for listening. If you enjoyed this episode, please search for the Foreseeable podcast series on the YouTube channel of Lee Kuan Yew School of Public Policy, for this and more. You can also listen to us on Spotify or wherever you enjoy your podcasts. Thank you.