Why we should be more ignorant

Contrary to the popular belief that education is the eradication of ignorance, learning is driven by ignorance.

While life-long learning is probably one of the top items on any CEO’s agenda, few really know how learning takes place. This leads to a continuation of flawed models and a replication of school/university systems.

What we really need is a fresh look at learning within organizations, based on our understanding of how learning works. I write about this in the latest Mint article in the “Behavior By Brain” series.

When governments lead us astray

Lotteries are also known as a “stupidity tax”, a nod to their improbable odds. In India, lotteries are often run by state governments – it’s an easy way to cover their budget deficits. What these governments don’t realize is that they are fueling an addiction.

But what are the reasons behind this addiction? In the article, I talk about the behavioral science of lotteries.

Lotteries generate many ‘near misses’, making players believe they have almost won even when they have lost, inducing a dopamine-fueled craving. I also talk about the incorrect application of the ‘regression to the mean’ mental model, and how governments make it easy for someone to rationalize their lottery addiction.
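
To put a number on how common near misses are, here is a back-of-the-envelope calculation for a hypothetical 6-of-49 draw – the format is an assumption chosen purely for illustration, not the design of any particular state lottery.

```python
from math import comb

# Hypothetical 6-of-49 lottery: pick 6 numbers out of 49, order irrelevant.
total = comb(49, 6)                               # all possible draws

p_jackpot = 1 / total                             # match all 6 numbers
p_near_miss = comb(6, 5) * comb(43, 1) / total    # match exactly 5 of 6

print(f"Odds of a jackpot:   1 in {total:,}")
print(f"Odds of a near miss: 1 in {round(1 / p_near_miss):,}")
print(f"Near misses per jackpot: {p_near_miss / p_jackpot:.0f}")
# A player experiences roughly 258 'so close' outcomes for every actual win,
# which is what keeps the craving loop running.
```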

Read more on the Livemint site here.

Rethinking Behavioural Science Research

The past couple of years have been painful for the social sciences, with the replicability crisis putting a dent in the credibility of multiple studies in the field – from social priming effects to power poses and willpower. The Reproducibility Project, an effort to reproduce effects reported in more than 100 cognitive and social psychology studies in three journals, found that findings from around 60 of those studies do not hold up when retested. Even when effects were replicated, they were weaker than reported in the original studies.

The replicability debate has been focussed, to a large extent, on experimental design and effect sizes. Studies with low-power designs (smaller sample sizes) and weaker effect sizes are suggested to be less likely to replicate. Additionally, an inherent publication bias favouring positive results is argued to contribute to the replication crisis.
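
To make the power argument concrete, here is a minimal simulation – assuming, purely for illustration, a small true effect (Cohen’s d = 0.2) and 20 participants per group. When underpowered studies pass through a ‘significant results only’ publication filter, the published effects come out inflated, and honest replications inevitably look weaker.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

TRUE_D = 0.2       # assumed small true effect (Cohen's d) -- illustrative only
N = 20             # assumed small per-group sample size -- illustrative only
STUDIES = 10_000   # number of simulated 'original' studies

significant, observed_d = [], []
for _ in range(STUDIES):
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(TRUE_D, 1.0, N)
    _, p = ttest_ind(treated, control)
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    observed_d.append((treated.mean() - control.mean()) / pooled_sd)
    significant.append(p < 0.05)

significant = np.array(significant)
observed_d = np.array(observed_d)

print(f"Power (share of studies reaching p < .05): {significant.mean():.0%}")
print(f"Mean effect among the 'publishable' significant studies: "
      f"d = {observed_d[significant].mean():.2f} (true d = {TRUE_D})")
# With 20 participants per group, only about one study in ten detects the effect,
# and those that do report an effect several times its true size -- which is why
# faithful replications come out looking 'weaker'.
```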

An often overlooked part of the discussion is the social context of the experiment and its effect on the participants themselves. Academic researchers are sticklers for controlled design, so that the effects of multiple factors on behaviour can be reduced to just one. In most universities, therefore, the research lab – usually a set of cubicles or a computer laboratory – is a heavily controlled, isolated environment. Having a controlled physical environment, however, does not stop participants from coming into the research with their own motivations, dispositions, expectations and emotions. These cannot be dismissed as irrelevant to the study at hand just because the study has been stripped of any context. On the contrary, they exert a large influence on the outcomes of the study.

For example, aspects of the experimental setting can influence participants’ reactions to the stimuli presented by the experimenter. Participants in psychology studies get paid, and are motivated to play the role of ‘good’ subjects – to go along with what they think the experimenter wants – these are termed ‘Demand Effects’. Participants consciously try to reconstruct the experimenter’s hypotheses using available cues. Any psychology experimenter will attest to this. As a student, when I conducted my research on automatic priming, I used the same testing protocol – solitary computer terminals, and a confederate to trick participants into believing they were engaged in two separate studies: one to deploy the priming intervention (‘professor’ versus ‘hooligan’) and another to study the effect it had on knowledge (an IQ test). We did probe participants on what they thought the experiment was about, but at the end of the day, the truth is that most participants had their own hypotheses about what we were trying to prove and played up to them. Experimenters themselves also unwittingly influence participants with their expectations – which participants want to play up to – dubbed ‘Experimenter Effects’.

Psychology is the study of human behaviour – yet in our anxiety to ensure that it is a strict science, we use the same experimental models to study human behaviour that we use to study physics. It is time psychology experiments stopped treating participants as passive receptors of stimuli. What we want to study are the motivations, emotions, beliefs and dispositions people bring to different contexts – so why try to make participants leave those at home (which they won’t anyway)? Our research will be richer if we simulate the real-life context that we are trying to study, rather than control for it, so that the decisions and outcomes of research are closer to home.

Research at FinalMile attempts to do just this. With our EthnoLab, we simulate real-life contexts as far as possible – we want the decisions in the EthnoLab to reflect decisions taken in real life, not create an alien context that leads to perceived ‘correct answers’. This might mean recreating the real-life environment, either physically or virtually. The EthnoLab marries the practicality of a controlled laboratory with the ‘real-life’-ness of ethnography. As Smith and Semin (2004) put it: “The true strength of the laboratory is not its supposed insulation of behavior from context effects, but its flexibility in allowing experimenters to construct very different types of contexts, suited to test different types of hypotheses.” Welcome to Behavioural research v2.0!


How context cues behavior

We tend to believe that Indians behave ‘properly’ only in foreign countries – Singapore or the USA – and that Indians in India are boorish and have no civic sense.

Is that really so? Don’t we behave better in gleaming malls? Don’t we speak softly in libraries? Don’t Malayalees queue up in front of liquor stores?

I go into the behavioral science of civic sense in this article in Mint.

Cheating ourselves to Death?

India is often referred to as the diabetes capital of the world, with around 41 million people living with diabetes in 2007, a number projected to reach 68 million by 2025. In one of our engagements, we were trying to understand how people living with diabetes manage the disease. One of the perplexing observations was that many people believed their diabetes was under control. This conflicts with most data and expert opinion, which suggest that the majority of diabetes cases are uncontrolled.

To understand the source of this belief, we started interviewing close family members of patients. One of the most interesting things we heard was that these patients “prepared” themselves before going for a blood glucose test. A week before getting their blood sugar tested, they would change their lifestyle – exercise, go for walks and control their diet. So when the test happens, they get a result more favorable than their actual condition. It seems so irrational that people would cheat themselves into believing that their condition is better than it actually is, thereby putting themselves at risk of not getting the right treatment.

What explains this seemingly irrational behavior? Why would intelligent people, well aware of the dangers of their disease, not want to know the truth and give their physician more accurate data for better decision making?

One of the moderators of decision making is the set of mental models that people create to simplify the world. While these models often improve the efficiency of decision making, they can be deadly when used in the wrong context. A popular example of a mental model being used in the wrong context is diarrhea. As Sendhil Mullainathan explains in this video, 35-50% of mothers in rural India think that they should reduce fluids if their child has diarrhea. They use the intuitive mental model of a leaky bucket – you should not pour water into a leaky bucket if you want it to stop leaking. This elevates diarrhea, something that can be easily managed, to the status of a deadly condition.

In the case of diabetes, the mental model patients have is that one should not fail a test. People look at each blood test as a test of how well they are managing their condition, framing the issue as a judgment of their own capabilities – rather than as an objective measure of their condition and an input into a treatment regimen that would help their doctor take better decisions.

How do we address such a condition? Breaking mental models is often a high-investment, long-term game. One of the approaches we take at Final Mile is to work with existing mental models rather than fight them. In this case, simply encouraging people to adopt an HbA1c test instead of a spot test can address this behavioral issue and give a more accurate measure of their condition. It is a simple intervention, but one that addresses the inherent risk of misdiagnosis. The other intervention is to change how doctors and counselors frame the test – it is important that patients do not see it as a test they pass or fail, but as one that helps calibrate the medication for a chronic condition.
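
For the curious, here is a rough sketch of why HbA1c resists this kind of ‘preparation’: the test reflects average blood glucose over roughly the preceding three months, so one good week barely moves it, while a spot reading reflects that week directly. The conversion below uses the published ADAG-study relationship (eAG in mg/dL ≈ 28.7 × HbA1c − 46.7); the glucose values themselves are illustrative assumptions, not patient data.

```python
# Sketch: why one 'good' week barely moves HbA1c.
# Uses the ADAG-study conversion between HbA1c and estimated average glucose:
#   eAG (mg/dL) = 28.7 * HbA1c - 46.7  =>  HbA1c = (eAG + 46.7) / 28.7

def hba1c_from_avg_glucose(avg_mgdl: float) -> float:
    return (avg_mgdl + 46.7) / 28.7

USUAL_GLUCOSE = 180.0     # assumed typical daily average (poorly controlled)
PREPARED_GLUCOSE = 120.0  # assumed average during the one 'prepared' week
WINDOW_DAYS = 90          # HbA1c roughly reflects the past ~3 months

# A spot test taken during the prepared week simply reports that week's level.
spot_reading = PREPARED_GLUCOSE

# HbA1c averages over the whole window, so 7 good days are diluted by 83 usual ones.
avg_glucose = (83 * USUAL_GLUCOSE + 7 * PREPARED_GLUCOSE) / WINDOW_DAYS

print(f"Spot glucose during the test week: {spot_reading:.0f} mg/dL (looks fine)")
print(f"HbA1c with one week of preparation: {hba1c_from_avg_glucose(avg_glucose):.1f}%")
print(f"HbA1c without preparation:          {hba1c_from_avg_glucose(USUAL_GLUCOSE):.1f}%")
# The spot reading looks reassuring, but HbA1c shifts by only about 0.2
# percentage points -- the underlying condition is still visible to the doctor.
```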