Notes from Daniel Kahneman's 'Thinking, Fast and Slow' - Part 1

Here are some notes from Daniel Kahneman's book 'Thinking, Fast and Slow':

1. Contrary to the assumptions of economic models, human beings are not rational. We often let our heuristics guide our conclusions rather than statistical facts. 

2. You find a person whose qualities are more representative of a librarian than of a farmer, and you conclude that he must be a librarian. Perhaps you are blind to the statistical fact that there are far more farmers than librarians in the world. Our perceptions can make us blind to realities.

3. Many of our water cooler conversations at the office are based on our own perceptions rather than realities.

4. Halo effect - you are charmed by a beautiful woman and assume everything she says is correct. You associate one good quality with many other qualities, which may not be valid.

5. Expert intuition works like a sixth sense. It is your brain responding, perhaps subconsciously. It's like a child looking at a dog and recognizing it. Expert intuition comes with experience and it is valid.

6. When faced with a tough question, the brain may choose the easy answer by substitution - like the investment executive picking a stock on gut feeling rather than reason.

7. The author describes two functions of our brain as System 1 and System 2. System 1 is for intuitive tasks whereas System 2 is for more focused work.
People who are cognitively busy make selfish choices because System 1 takes over. If you are mentally fully occupied, you may choose a 'sinful' chocolate cake over a healthy fruit. Think about people who fall prey to vice when they are stressed out.

8. If you are in a state of flow, that is, your mental faculties are at their peak, you do not have to strain or struggle; your brain is already in 'flow'.

9. Ego depletion occurs due to the exertion of self-control, concentration or System 2. Your brain consumes glucose when you are doing something focused or restraining yourself from something 'tempting'. Once your energies are spent on one mental exercise, you may not be able to perform another absorbing task. Once your ego is depleted, you may easily fall for your temptations.

10. Pupil dilation occurs when you are focused.

11. Your brain has an associative memory which is involuntary in its functioning. For example, when you think of a lion, an image of a lion appears in your mind, and that is not in your control.

12. Reciprocal links. In an experiment, some youngsters were asked to make sentences with words related to old age. Immediately after the task they were asked to move to another hall. Although they were young, they took longer to walk to the hall. Things that you say and hear have an impact on you subconsciously, without your knowledge. So surround yourself with positive thoughts and people.

13. Reciprocal priming effects. Our physical expressions are habitually connected to our brain. If you are nodding your head while listening to a message, you are more likely to accept it; if you are shaking your head, your mind has probably rejected it. Similarly, smiling and frowning while working on something have effects on your brain.

14. People who are primed by money were found to be more individualistic, self-reliant, selfish and lonely.

15. The Lady Macbeth effect: if you imagine doing something wrong, or if you have done something wrong, you feel the urge to clean your hands or body.

16. Authoritarian leaders tend to use 'priming' psychology to exact obedience - think of the large hoardings of national leaders.

17. Words become thoughts. Thoughts become actions. Actions become habits. Habits can lead to vice.

18. There is a stranger inside you - System 1 - which constantly guides your actions and behaviors unconsciously and involuntarily.

19. Illusions of mind. Words that you have seen earlier seem familiar. Names that you have seen earlier sound familiar. Familiarity leads to acceptance. Marketers are aware of these techniques, therefore advertisements, signature tunes, jingles are often repeated to create a subconscious impact. Familiarity is often construed as truth by our brain. Even a part of a sentence that sounds familiar deceives our brain into thinking that it is true.

20. Müller-Lyer illusion: what we see is not always real, but our brain assumes that what we see is all there is.

21. When system 1 receives an input and system 2 approves it you are led into thinking that it is true.

22. Cognitive ease is when something that is easy on the brain is readily accepted as fact. So when you want to convince someone, you can play on their cognitive ease: use legible print in bright colors, simple language and rhyming words. Convincing-sounding source names are readily accepted as true.
 
23. People also choose stocks based on their names. Names that are easy to pronounce lead to cognitive ease, and investors tend to believe that these stocks perform better. For example, the name Apple sounds good.

24. In a test conducted with different fonts, people who faced the exam in a hard-to-read font performed better: the bad font induced cognitive strain and engaged System 2. The cleaner font led to cognitive ease, and those students performed worse.

25. Mere exposure effect or familiarity effect: familiarity leads to a favorable view. Our mind is at ease with familiar stimuli, whereas it is wary of novel stimuli. This is probably true for every creature.

26. Creativity, intuition and gullibility are at their peak when System 1 is in a good mood; when you are in a good mood you let your guard down. Vigilance and analytical thinking are associated with System 2, and they kick in when you are not in a good mood or are sensing danger.

27. Coherence leads to a good feeling. This can be judged by the involuntary responses in a person's face when presented with a triad of words.

28. When you listen to good music you are in a good mood. Sometimes good feelings can be the cause of coherence. Coherence can be both the cause and consequence of your good mood.

29. Our brain tries to look for reasons. System 1 explains events with the limited knowledge we have. Bond prices rose on the day of Saddam's capture and the headlines credited the capture; when prices fell later, the same event was blamed. Instead of looking for the real reason, our brain looks for coherence and tries to connect the two big events of the day.

30. The association of causality and the attribution of emotions are involuntary features of your brain. You attribute emotions to inanimate objects and even animals. These are all activities of System 1.

31. In fact this kind of causal thinking is the basis for all religious beliefs.

32. Based on your previous experiences your brain forms associations and intuitions. That's mental economy. Your system 1 jumps to conclusions.

System 2 comes into play when there's ambiguity. System 1 doesn't keep track of the multiple options left out before jumping to a conclusion.

33. System 1 is biased and trusting. System 2 questions abnormalities. It is responsible for logical or statistical thinking. But system 2 is lazy or occupied. It needs to be trained.
-----------------------------------------------------------------------------------------------------------------------------

LSE discussion with Daniel Kahneman:

1. In a world with more complexity it becomes difficult to gain expertise even through rigorous practice. For example, it is probably easier to become an expert chess player than an expert stock picker, because chess has set rules while stock picking has none.

2. Many people genuinely think they are experts in certain fields but it is their system 1 that is governing their false confidence.

3. From a policy-making perspective, why not appeal to System 1 rather than System 2 for better outcomes?
System 1 likes to conform to the norm while System 2 wants to be unique.
For example, instead of penalizing people for non-payment of taxes, you can advertise that 95% of people have paid their taxes. You are appealing to System 1 rather than System 2.

4. Your System 1 cannot be changed in the way it functions. You can only train your System 2 to be more cautious, particularly in career- or life-related decision making.
Perhaps you may slow down a little or ask for other people's opinions. But you cannot fundamentally change the way System 1 works.

5. The experiencing self is different from the remembering self. Often it is your remembering self that determines your choices, not your experiencing self.

6. As people's income grows beyond a certain point, the happiness derived begins to stagnate while life satisfaction may still increase. The additional happiness derived from income above roughly $75,000 per annum is almost negligible in the United States. This is based on real data (as of around 2012).

7. For the experiencing self time spent with your loved ones matters as it leads to happiness.

8. So from an economics perspective, why not increase the total number of people surpassing a threshold income rather than focusing on increasing aggregate income? Or are the two correlated, as the trickle-down argument says? Again, behavioral economics leads us towards the argument for a universal basic income?

9. The quality of decision makers can be improved by making them aware of how their System 1 works.

10. 'Paralysis by analysis' is a phrase used to describe a condition where too much analysis leads us nowhere.

11. In medicine we come across situations where the medical expert feels that she knows more about us than we do ourselves, which may not always be true.

12. What you think you know could be a product of your intuition and not evidence. It's important to be self aware.

13. In public policy making, again, it becomes important to be aware of how System 1 works. Our System 1 reacts well to immediate dangers but not to distant ones. For example, climate change is a distant threat whereas an election win or loss is an immediate issue.

-----------------------------------------------------------------------------------------------------------------------------

34. When System 2 is busy, System 1 is gullible and biased.
When you are taking a turn while driving, you won't be able to multiply 456 × 179. Your System 2 is engaged in driving and needs full attention when you are taking a turn.

35. Contrary to hypothesis-testing methods, where the objective is to reject the null hypothesis, System 1 suffers from confirmation bias.
Even scientists and economists look for data that suits their own biased perceptions.
The job of System 2 is precisely to question, doubt and create uncertainty.

36. Halo effect: suppose you meet a beautiful woman. You may presume that she is also generous. You are presuming and associating one good quality with another, which may not be true.

37. First impression is the best impression. A person may have many good and bad qualities, but if you happen to be positively impressed by a person, which is a matter of chance, then you assume all good qualities. This is called the halo effect.

38. To overcome the halo effect you need to 'de-correlate' errors: each observation should be independent of the others. If the observations are biased, the errors won't cancel each other out; if the observations are independently generated, the errors do cancel out. This is the basis of the wisdom of crowds in stock investing. If the crowds are biased or can influence each other, the wisdom of crowds doesn't hold.
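A quick way to see why independence matters - a minimal simulation, with invented numbers, of a crowd guessing some true quantity:

    import random
    random.seed(0)

    truth = 100.0
    independent = [truth + random.gauss(0, 10) for _ in range(1000)]       # unbiased, independent errors
    biased      = [truth + 15 + random.gauss(0, 10) for _ in range(1000)]  # everyone anchored 15 too high

    avg = lambda xs: sum(xs) / len(xs)
    print(round(avg(independent), 1))  # close to 100: independent errors cancel out
    print(round(avg(biased), 1))       # close to 115: a shared bias survives the averaging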

39. In police procedures, witnesses are not allowed to discuss their testimonies. If you allow them to discuss, they all tend towards a similar testimony, which compromises their independence. Some hostile witnesses may even collude.

40. In corporate meetings, those who speak first tend to have a halo effect on the others. So the next time you are in a meeting or a committee, ask all the participants to write a brief summary of their position before the discussion begins. This helps in addressing the halo effect.

41. Independence of sources is important in collection of data or samples.

42. What You See Is All There Is (WYSIATI)
System 1 works on the above principle. System 1 builds a story from whatever limited information is available. For example, if someone tells you 'Anna is strong and intelligent' and asks whether she fits a leadership role, System 1 quickly jumps to the conclusion that she will be good in a leadership role, even without complete information.

43. Coherence and confidence are the two factors driving System 1; completeness is not a prerequisite. In a Stanford experiment by Amos Tversky, students were given one-sided information about a legal case and were told that the information was incomplete. Even though crucial information about the case was missing, they argued their cases very confidently - more confidently than those who had the complete details of the case.

44. Clearly, the confidence of System 1 stems from the convincing story it builds, not from the completeness of the information.

45. Framing effects: the way sentences are framed also makes an impact on our interpretation. For example, 'this atta is 90% maida-free' makes a better impression than 'this atta contains 10% maida'.

46. System 1 keeps constantly assessing the situation within and outside our brain. System 2 searches for answers to specific questions, such as what you think of the incumbent government's performance, or what the height of the windows is.

47. It needs to be noted that System 1 requires no effort, whereas System 2 requires focus and strains your brain.

48. From an evolutionary perspective, System 1 is designed in a way that quickly assesses the situation around you whereas System 2 is for more logical and structured thinking.

49. Framing effects: when you order words, recipients are more likely to be affected by the first few words in the order. Refer to the Solomon Asch experiment.

50. Mental shotgun: 'mental shotgun' is Daniel Kahneman's phrase for System 1 computing more than is intended. Combined with substitution, it means that when faced with a difficult question, the brain answers a simpler one instead.
System 1 is good at jumping to conclusions. When you have limited time and information, System 1 is your best bet.

51. The quality of a researcher's conclusions is severely affected by the sample size.
The example discusses the incidence of kidney cancer in different counties of the United States. It points out that the counties with the lowest incidence of kidney cancer are predominantly rural, sparsely populated, and located in the Midwest. Interestingly, the counties with the highest incidence of kidney cancer are also rural, sparsely populated, and in the Midwest.
This seems counterintuitive at first, but the key factor is the small population size of these counties. Small populations are more likely to show extreme outcomes due to statistical variability. In other words, when dealing with small samples, you’re more likely to observe unusual results—either very high or very low incidences of cancer—not because of any causal environmental factors, but simply due to randomness.
Kahneman uses this example to demonstrate how our System 1, which is fast, intuitive, and prone to jumping to conclusions, struggles with purely statistical facts that don't have a causal story. Our minds are wired to make sense of the world by creating narratives, so when faced with statistical facts like the incidence of kidney cancer, we automatically search for a causal explanation, even when none exists. This cognitive bias can lead to incorrect conclusions and highlights the importance of considering sample sizes and statistical principles when interpreting data.
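A minimal simulation (the rates and county sizes are invented) showing how the same true incidence produces extreme observed rates only in small counties:

    import random
    random.seed(1)

    true_rate = 0.001  # the same true incidence everywhere

    def observed_rate(population):
        cases = sum(random.random() < true_rate for _ in range(population))
        return cases / population

    small = [observed_rate(1_000) for _ in range(200)]    # sparsely populated counties
    large = [observed_rate(100_000) for _ in range(50)]   # heavily populated counties

    print(min(small), max(small))  # extremes: typically 0.0 and several times the true rate
    print(min(large), max(large))  # both ends hug the true rate of 0.001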

52. The problem of small samples turns out to be an Achilles' heel even for the most experienced and intelligent researchers.

53. The laziness of System 2 manifests itself in the acceptance of small sample sizes. Ideally a researcher needs to question extreme variations, and therefore small sample sizes.

54. The human mind tries to make sense of random events. This is part of our evolutionary psychology.

55. The mind tries to build causality around chance events which is absurd.

56. When sample sizes are small, there are greater chances of extreme observations occurring, as in the example of kidney cancer patients in the US.

If you fall for this fallacy, you will try to make sense of a random event, which is unscientific.

57. Anchoring bias is when you are misled by an initial, possibly arbitrary estimate. For example: was Gandhi older or younger than 114 when he died? Your answer will land somewhere near the anchor, because you are misled by it.

58. Stock prices give you an illusion of control. You are almost always trying to make sense of random price movements.

59. Are small schools likely to perform better than large schools? For many years people thought so, but it was later found to be untrue: many of the worst-performing schools were also small.

60. Small samples cannot give you an accurate estimate of large populations.

61. Anchoring works partly through the selective activation of compatible memories.

62. Real estate agents were found to be influenced by anchor prices, although they were not ready to acknowledge it.

63. The anchoring index is the ratio: (difference between the average high and low estimates) / (difference between the two anchors). An index of 0% indicates no anchoring effect, while 100% means the estimates move one-for-one with the anchor - a completely lazy mind.
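As a sketch in code (the redwood-tree numbers are as I recall them from the book; treat them as illustrative):

    def anchoring_index(mean_high, mean_low, high_anchor, low_anchor):
        # Fraction of the anchor gap that shows up in people's average estimates.
        return (mean_high - mean_low) / (high_anchor - low_anchor)

    # The book's redwood-tree question: anchors of 1,200 ft and 180 ft
    # produced mean estimates of about 844 ft and 282 ft respectively.
    print(anchoring_index(844, 282, 1200, 180))  # ~0.55, i.e. a 55% anchoring index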

64. Anchoring to random numbers or estimates makes you gullible. To prevent the anchoring effect, you need to activate System 2: deliberately think of arguments that run opposite to the anchor.

65. We are constantly affected by the stimuli around us and we need to be aware of it.

66. If you are purchasing a house, beware of the high-price anchoring effect. Suppose the maximum compensation payable by corporations as consumer damages is capped at $1,000,000: the cap benefits large firms at the expense of smaller ones, because it limits what large offenders pay while anchoring awards against smaller ones upwards.
Marketers use anchoring tricks all the time. 'Maximum 12 packs of soup per customer': as soon as people see such a signboard, they start buying more, the number 12 acting as an anchor. Auctions, charities, revenue statements and many other business transactions are affected by anchoring. The first estimate of cash flow or revenue submitted for credit is an example of an anchor.

67. Availability heuristics are mental shortcuts used by System 1. For example, if you read about a recent Boeing plane crash, your expectation of air-travel accidents increases. Similarly, if it has rained in the last few days, you carry an umbrella while going out. Your brain remembers the most recent occurrences of such events, and if they are fresh in your memory, your judgements will be biased.

68. The ease with which you remember these events matters more than the actual frequency of such events. If you witness a road accident, your expectation of road accidents suddenly increases.

69. To counter such biases you need to put System 2 into action.

70. In the context of a team, each member may feel that she has contributed more to the task, because your own contribution is more readily available in your mind than the contributions of others. In the context of a marriage, each partner may overrate their contribution to the cleanliness of their home.
In stock markets, people may buy shares based on a company's most recent good performance rather than its long-term performance expectations.

71. The paradox of the availability heuristic is that sometimes it doesn't work. Sometimes people are more influenced by the content retrieved than by the ease of retrieval.

For example, when you are asked to list all the pros of a car, you may end up liking it less by the end of the list.
Other times, the ease of retrieval dominates your opinion. When ease of retrieval and the amount of content retrieved are pitted against each other, ease often trumps: list six recent situations in which you were assertive and you will feel assertive; raise the number to twelve, and the difficulty of recalling the last few instances leaves you feeling less assertive.
Here it is the ease of retrieval, not the amount retrieved, that drives the judgment.

72. When the ease of retrieval is explained away by some spurious cause, researchers found that the impact of the availability heuristic could be reversed.

73. The conclusion of these experiments is that ease of retrieval is a System 1 heuristic, while a focus on content is a System 2 function. Whenever System 2 is activated, people are less biased by the availability heuristic.

74. The emotional tail wags the rational dog. In a survey on different technologies, people rated the technologies they liked as beneficial and low-risk, while those who disliked a technology rated it as less beneficial and high-risk.

Clearly, their judgments were swayed by emotion rather than rationality.

75. When participants in the survey were asked to read an article explaining the benefits of a technology, without any evidence on its risks, their perception of the risk quickly changed in the technology's favor. Quite clearly, there is an element of gullibility.

76. Perceptions of risk are different for experts and laymen. Experts are less affected by biases, whereas laymen are more affected. Experts often measure risks in units such as the number of lives saved or life-years saved, or deaths per million people due to the use of a technology, or deaths per million units of product produced. These risk definitions are not confined to some remote hilltop laboratory: every policy maker has to make value judgements on behalf of the people.

77. Do you listen to experts or to common people? One school of thought is that you need to weigh in people's opinions as well. Statisticians and experts think in terms of numbers, whereas common people are subject to biases. For example, a statistician may count deaths per million or life-years saved, whereas common people may distinguish between good deaths and bad deaths - say, accidental deaths versus deaths from old age, or deaths while skiing or pursuing a hobby.
This school of thought strongly argues that common people have insights, and that the wisdom of the crowd has to be taken into account.

78. There is another school of thought which says that risk and policy decisions should not be guided by popular opinions, which can be swayed by emotions rather than realities.

79. A policy maker makes value judgements on behalf of the people, so it is important for her to avoid personal biases. At the end of the day, risk is a concept invented by humans - is there anything called 'objective' risk? Every policy maker needs to make decisions which will most likely be affected by personal biases. Therefore, why not respect and consider the wisdom of crowds?

80. An availability cascade is a situation where people's opinions and fears, which may stem from a faulty media report or popular perception, get overblown. Probability neglect is when you grossly overestimate the likelihood of an event. When an availability cascade (based on the availability heuristics of a few people) is combined with probability neglect, irrational exuberance takes center stage. Bird flu was an event blown out of proportion; sometimes environmental concerns get exaggerated too.

81. For a policy maker, the first important step is to assuage people's fears. Policy making cannot be restricted to unelected officials who are least interested in people's opinions or fears; proper communication becomes critical.

82. Basing your estimates or guesses on representativeness rather than statistical facts can be misleading. When asked about the probability of an event, our mind may substitute an easier question and answer it by stereotyping. For example, a person who is very silent and reads the New York Times is assumed to be intelligent, even if she is not. A person who is shy and reclusive is assumed to be a librarian rather than a salesman, even though there are far more salespeople than librarians in the world.

83. On many occasions representativeness or stereotyping helps you make the right call; for example, 'men drive faster than women' is true in many cases. But ignoring the statistics can land you in trouble: representativeness and probability are not the same thing. Think of the Moneyball Billy Beane example.

84. Base-rate neglect can be reduced by activating System 2 - by avoiding laziness and acquiring better knowledge. Undergrads who were asked to frown performed better than those asked to puff their cheeks: to some extent, frowning activated System 2, while a neutral expression led to reliance on System 1.

85. In the case of Tom W, the right answer would have been to stick to the base rates, adjusting them only slightly for the unreliable assessment provided by the psychologist. You should have adjusted the probabilities slightly downward for the highly populated branches of study and slightly upward for the others.

86. Two rules for probability estimates:
a) Always anchor your assessments on the base rates. Don't commit the base-rate fallacy by ignoring them.
b) Critically evaluate the specific case data in light of the base-rate data. Question the diagnosticity of your evidence.
If you are misled by specific case data and ignore the base rate, you may be grossly mistaken.

87. Conjunction fallacy: when the conjunction of two events is evaluated, the brain may fool us by choosing the more plausible scenario rather than the more probable one.
In the Linda example, a vast majority of participants rated 'Linda is a bank teller and is active in the feminist movement' as more probable than 'Linda is a bank teller'. Statistically speaking (think of Venn diagrams), the set of feminist bank tellers is a subset of the set of bank tellers, so the probability of Linda being a bank teller must be at least as high as the probability of her being a bank teller and a feminist.
This is again a case of System 2 being lazy and System 1's fallacy.
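The arithmetic behind the fallacy is one line. For any two events A and B:

    P(A and B) = P(A) × P(B given A) ≤ P(A), because P(B given A) can never exceed 1.

So 'Linda is a bank teller and a feminist' can never be more probable than 'Linda is a bank teller' alone.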

88. System 1 is constantly looking for, and is deceived by, associative coherence.
Linda became very popular in academia and in newspaper articles, but no one was able to explain away the conjunction fallacy convincingly.

89. Christopher Hsee of the University of Chicago conducted the dinnerware (crockery) experiment.
There were two facets to the experiment: a joint evaluation and two separate evaluations.
In the joint evaluation, people valued Set 2 higher than Set 1, since Set 2 had a higher number of items in working condition. (Set 1 and Set 2 contained similar crockery items, except for a few additional items in Set 2, some broken and some in working condition.)

In the separate evaluations, surprisingly, customers assigned more value to Set 1 than to Set 2. The first result makes economic sense; the second one doesn't.

This is like the conjunction fallacy: for items like crockery, norms and averages, not totals, set the price.
Similar observations were made in the baseball-cards experiments. The results challenge the assumptions of economic rationality: Set 1 should be worth less than Set 2, but the wisdom of crowds doesn't concur with rationality in this case.

90. When presented with two options, one more probable but less coherent, the other less probable but more coherent, people choose the second, precisely because it is more coherent. Our mind is constantly looking for associative coherence.

91. In the Björn Borg example, more participants chose Option C over Option B because it was more coherent in appeal, though not logical: 'Borg will lose the first set but win the match' fits the image of a champion better than 'Borg will lose the first set', even though the conjunction is necessarily less probable.

92. The mind likes causal stories but tends to ignore statistical facts. In the Green cab / Blue cab example, most people ignore the base rates; Bayes' formula for conditional probability should be used. By restating the statistical base rate as a causal base rate, you can induce the mind to take the base rates into account: if the example is restated as 'Green cabs cause 85% of the accidents in the city', the statement is causal in nature and therefore induces System 1 to use the base rate. True Bayesian thinkers would not differentiate between the two: both statistical and causal base rates should induce the use of base rates.
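A worked version of the cab problem with Bayes' rule (the 85/15 split and 80% witness reliability are the figures used in the book), as a small Python sketch:

    # 85% of cabs are Green, 15% are Blue; a witness says the cab in the
    # accident was Blue, and witnesses are right 80% of the time.
    p_blue = 0.15                 # base rate of Blue cabs
    p_say_blue_if_blue = 0.80     # witness correctly says Blue
    p_say_blue_if_green = 0.20    # witness wrongly says Blue

    p_say_blue = p_say_blue_if_blue * p_blue + p_say_blue_if_green * (1 - p_blue)
    p_blue_given_witness = p_say_blue_if_blue * p_blue / p_say_blue
    print(round(p_blue_given_witness, 2))  # 0.41 -- far below the 0.80 most people answer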

93. Daniel Kahneman rues the fact that psychology students don't put their learning into action. Although they may clear the psychology examination, they are not ready to change their behavior for the greater good. Example: the helping experiment. When a student appeared to be having a seizure and dying, only 4 out of 15 people responded. When responsibility is shared among multiple people it gets diffused; in the end, nobody may take it up, assuming that others will.

94. People don't change by learning new statistical facts; it has been observed that people's thinking is better affected by individual examples. Our minds are quick to generalize from the particular, but not to deduce the particular from the general. We learn more from examples that relate to our own psychology than from general statistics about the population.

95. Regression to the mean: the Israeli flight cadets example. An instructor told the author about the effects of rewards and reproaches in his training: cadets who performed very well were praised or rewarded, and cadets who performed very poorly were rebuked. The performance of those who were praised fell immediately after the praise, while the performance of those who were reprimanded immediately improved. The instructor attributed these changes to his praise and rebukes, but the author recognized that it was statistical regression at play more than anything else. A sportsman who performs exceptionally on the first attempt may not perform at the same level on the second attempt, because an extreme deviation from the mean is likely to be followed by a move back towards the mean.
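A minimal simulation of the instructor's puzzle, under the simple assumption that performance = skill + luck (all numbers invented):

    import random
    random.seed(42)

    skill = [random.gauss(0, 1) for _ in range(10_000)]
    day1 = [s + random.gauss(0, 1) for s in skill]  # observed score = skill + luck
    day2 = [s + random.gauss(0, 1) for s in skill]  # same skill, fresh luck

    # The 5% of cadets who flew best on day 1...
    best = sorted(range(len(day1)), key=day1.__getitem__)[-500:]
    avg = lambda xs: sum(xs) / len(xs)
    print(round(avg([day1[i] for i in best]), 2))  # extreme, around +2.9
    print(round(avg([day2[i] for i in best]), 2))  # around +1.4: halfway back to the mean,
                                                   # with no praise or rebuke involved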

96. This concept of regression to the mean is ubiquitous around us. It is tough to explain to people, because the general tendency of the mind is to build a story or a causal explanation.
Our mind looks for causation even when observing random events, but random events are random in nature; there is no underlying explanation. A golfer who underperforms on the first day performs better on the second day: maybe he was unlucky on the first day and luckier on the second. Luck is random and erratic; there is no causal explanation for luck.

97. Correlation doesn't necessarily imply causality. The correlation between two variables may range from -1 to 1; it tells you that some common factors affect both variables, not that one causes the other. For example, 'intelligent women marry less intelligent men' - such a statement may evoke different causal explanations from different people.
But the correlation between the intelligence of spouses is less than perfect, and men and women are, on average, equally intelligent. So it is more a statistical phenomenon of regression than of causation: statistically, a highly intelligent man or woman will, more likely than not, end up with a less intelligent spouse.

98. You try to be good to everyone, assuming that they will return the favor. But that need not follow: how people behave towards you is partly a matter of statistical chance. People regress to the mean - to their true nature - which is not related to your behavior.

99. A stock or a mutual fund may perform exceedingly well in a particular year, but remember that due to mean reversion its performance may drop in the following year. Similarly, an economy may grow at an extraordinary rate in a particular period, but it cannot sustain that momentum forever. This concept of regression to the mean is ubiquitous; even in revenue forecasting you need to take it into account.

100. When the correlation between two variables is less than perfect, regression to the mean occurs inevitably. If the correlation between scores in Math and English is less than perfect, a student who performs very well in Math may not perform as well in English, and vice versa. This is regression to the mean.
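Standard statistics makes the same point compactly (this notation is textbook statistics, not the book's own): expressed in standard (z) scores, the best linear prediction of the second variable is predicted z(English) = r × z(Math). With r = 0.4, a student two standard deviations above the mean in Math is predicted to be only 0.8 standard deviations above the mean in English. Whenever r < 1, the prediction is less extreme than the evidence - regression to the mean is built into the formula.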

101. When the correlation between two variables is close to perfect, regression to the mean is less pronounced.

102. The placebo effect is a fascinating phenomenon where people experience real improvement after receiving a fake or nonexistent treatment, known as a placebo. Although the placebo itself cannot cure any condition, the beneficial effects reported are due to a person’s belief or expectation that their condition is being treated. Essentially, the mind convinces the body that healing is underway, leading to genuine physical and psychological changes. While placebos won’t lower cholesterol or shrink tumors, they can be effective for managing conditions like pain, stress-related insomnia, and cancer treatment side effects like fatigue and nausea. Remember, it’s not just positive thinking; the ritual of treatment and the brain-body connection play crucial roles in the placebo effect. (Microsoft Co-pilot)

103. A control group and a treatment group are the two types of groups used in research. For example: do energy drinks reduce depression in depressed children? To assess the impact, the treatment group is given energy drinks, while the control group, very similar in nature and characteristics to the treatment group, is not. Unless you have a control group, you cannot assess the impact of a drug or any other intervention.

104. In the above example, the group of depressed children is an extreme group, and extreme observations regress to the mean irrespective of energy-drink consumption. Instead of energy drinks, they might hug cats or do something equally unrelated and still improve. The point is that extreme observations tend towards the mean.

105. A candidate who does extremely well in a first-round interview may not replicate it in the second round. The interviewer may conclude that the interviewee was nervous or perhaps trying too hard; in reality it is regression to the mean. The first round was above par, and the second round is the usual level of performance.

106. Expert intuition comes only with experience: experts have been in these situations before, and based on a diagnosis of the situation they make decisions. For example, chess players, firemen and doctors can diagnose a situation and quickly come to a conclusion, unlike the rest of us. Good decision making is a blend of expert intuition and data-driven analysis; non-experts' decisions may be misguided by System 1.
If decisions are made without taking mean regression into account, they can be mistaken.

107. The counsellor-freshman example: people substitute prediction with an evaluation of the current evidence, which may not always turn out to be true. People don't take regression to the mean into consideration, and this leads to systematic bias in prediction.

108. Kahneman suggests a four-step process for making educated predictions that accounts for regression to the mean. The book's example predicts Julie's college GPA from the fact that she read fluently at age four:

1. Start with an estimate of the average GPA: begin with the base rate, e.g. the average GPA of seniors at a state university.

2. Determine the GPA that matches your impression of the evidence: based on the specific information (Julie's advanced reading ability at age four), form an intuitive estimate of what her GPA might be.

3. Estimate the correlation between reading precocity and GPA: assess how closely reading ability at age four correlates with GPA. In Kahneman's example, he assumes a correlation of 0.3.

4. Adjust the prediction based on the correlation: move from the average GPA towards the matching GPA by a fraction equal to the correlation. If the correlation is 0.3, move 30% of the distance from the average GPA to the impression-based GPA. (Microsoft Co-pilot)
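A sketch of the four steps in code. The GPA numbers are assumptions for illustration; only the 0.3 correlation comes from the book:

    def regressive_prediction(base_rate, intuitive_estimate, correlation):
        # Step 4: move from the base rate towards the intuitive estimate,
        # by a fraction equal to the estimated correlation.
        return base_rate + correlation * (intuitive_estimate - base_rate)

    avg_gpa = 3.0       # step 1: average GPA (an assumed, illustrative base rate)
    matching_gpa = 3.8  # step 2: GPA matching the impressive evidence (assumed)
    r = 0.3             # step 3: the correlation Kahneman assumes in the book

    print(regressive_prediction(avg_gpa, matching_gpa, r))  # 3.24, a moderated prediction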


109. When you use this four-step method, you are regressing towards the mean, and it will improve the accuracy of your predictions. Even when you are wrong, you will be closer to the correct answer.

110. But this will not help you make extreme predictions. While regressing to the mean you are making more moderate predictions; you will not be able to identify the next Amazon from a large number of startups, most of which actually fail. When the quality of your evidence is poor or incomplete, or when a large element of chance is involved, this method will push you towards the mean.

111. By removing bias from your estimates, you will miss out on making extreme predictions; you will never have the satisfaction of correctly calling an extreme outcome.

112. Note that being unbiased is not always the objective. When some errors are more costly than others, it is not necessary to be unbiased. For a venture capitalist, the risk of missing the next Google or Amazon is much bigger than making a few bets that turn into big losses, so it is alright if she is over-optimistic about some startups. For a conservative banker, the risk of one big borrower defaulting is much higher than that of refusing a number of potentially good clients. A rational person faces no issue here, because she maximizes her outcomes given the options: a rational venture capitalist maximizes returns by choosing the best projects among those available.

113. While making investment decisions, or any other important decisions in life, it is important not to delude yourself with wrong expectations or information. The point of the four-step exercise is to make you think about how much you really know about the prediction you are making.

114. The Kim vs Jane example: you have an open position for a college lecturer. Kim is very attractive and brilliant in her presentation; she made a great impression in the short time she was interviewed. Jane has three years of experience in a postdoctoral position and is more sober than Kim. Who would you hire? You should go with Jane, although your intuition suggests Kim: you do not have complete information about Kim, and this is the law of small numbers at work. With a small sample you are bound to find extreme observations.

115. You may come across similar situations in real life. If you are a venture capitalist, you may have to choose between a startup that doesn't have a proven market but is exciting, and another that has a proven market and a better chance of succeeding. Be careful about making this choice; double-check your evidence, because your intuition may fool you.

116. Human beings are supposed to be rational, but they are not in all situations. System 1 deludes us with a false sense of confidence. A rational venture capitalist knows that the chances of success of even a promising startup are only moderate.

117. Narrative fallacy: our mind likes to explain complex and random events in the form of simple stories. System 1 loves stories; it tries to find patterns and attribute effects to causes.
What our brain doesn't recognize is the role that luck played. After reading the Google story you may think you have figured out how to build a successful business, but the story is so well narrated that we underestimate the role played by luck. Bad things happen to good people working with good intentions. The point is that we attribute success or failure to people and their actions, while underestimating the role that chance plays in our lives. The idea is not to discredit the people involved, but to understand the part played by luck. And the halo effect worsens the false sense of comfort that our mind gives us.

118. Many people said the 2008 crisis was bound to happen: 'I knew this was going to happen.' But most of them could not show it before the crisis occurred. Our mind tries to build a good story, but the reality is that the world is less knowable than you think it is. There is an element of chance or luck in almost everything we do, and wherever the element of chance is higher, the opportunity to learn is lower.

119. Our knowledge of the past always remains incomplete, and therefore we cannot predict the future. By building coherent stories of the past we mislead ourselves into a false sense of confidence, and the role played by chance is underestimated. Inconsistencies in our stories create a sense of discomfort, so we construct stories in which events seem inevitable.

120. The CIA had information beforehand that Al-Qaeda was planning to attack the US, but at that point it didn't seem plausible. With the benefit of hindsight, people criticize the authorities. This is known as hindsight bias: events that seemed impossible before they occurred look "very obvious" afterwards. The worse the outcome, the greater the hindsight bias.

121. When a doctor performs an operation with a negligible chance of death and the patient dies in that rare case, it is easy to remark, with the benefit of hindsight, that the failure was obvious. This is another example of hindsight bias.

122. After an event occurs, our minds are updated with the new information, and once updated, all our previous versions of the world are forgotten.

123. A group of experts was surveyed about the outcomes of Nixon's visit to China in 1972, covering about 15 possible outcomes of the meetings. The probabilities they recalled having assigned before the event differed greatly from what they had actually assigned, thanks to hindsight. With the benefit of hindsight, people tend to overestimate the probabilities they had originally attached to outcomes that occurred - in their own estimates as well as in others'.

124. Policy makers, politicians, financial consultants, physicians and many other professionals are subject to the outcome bias of people at large. After a catastrophe, people can easily remark, with the benefit of hindsight, that the authorities were negligent.

125. Policy makers and politicians are blamed for unfortunate outcomes while not being given enough credit for good outcomes. The net result is that policymakers and bureaucrats stick to standard operating procedures and avoid any undue risk, in order to escape litigation or public wrath. Increased accountability is therefore a double-edged sword that can stifle innovation and risk taking and produce policy paralysis. So we should be careful to differentiate between cases of gross negligence and black-swan events.

126. Similarly, CEOs are given too much credit for the success or failure of their companies. The role played by business leaders is not insubstantial, but business books and authors tend to overstate their successes and failures; the role played by luck is grossly underestimated.

127. Business books are written in a way that appeals to our intuition - too tidy to be true. In many cases, reckless risk taking happens to be rewarded with success and the business leaders are revered for it, while those who question them are seen as timid and stupid.

128. Which factors are within a CEO's control, and which are not? A strong CEO may improve a company's chances only marginally, yet we attribute the company's success or failure to the CEO: 'she is a weak CEO, therefore the company is not doing well'; 'he is a strong CEO, therefore the company is doing well'. There are elements of the halo effect, outcome bias and hindsight bias in both statements. In reality it is the other way around: we assume she is a good business leader because the company is performing well, and vice versa.

129. In the book Built to Last, the role played by good business leaders and good practices is overstated. In the period immediately following the comparison period, the so-called model companies underperformed, regressing to the mean profitability of their sector. Here we are comparing companies with varying luck; again, the role played by chance should not be understated.

130. It is easier for System 1 to construct a story when the evidence is poor or the information is limited - the smaller the evidence, the greater the confidence in our story or intuition. Many beliefs that we hold are not based on reason.

131. The illusion of validity: a situation where a person places too much confidence in the observations she makes. The author worked in the psychology department of the Israeli army, where his job was to evaluate cadets based on a task given to them. In the 'wall and log test', a group of eight soldiers was given a log and asked to use it to cross a wall. Based on the cadets' performance, the author and his colleague had to evaluate them psychologically. All sorts of behaviors were exhibited by the cadets: leadership, laggardness, ego, teamwork and its absence, aggression, weak links and so on.
Later the author found that his assessments did not correlate in any way with the cadets' later performance as officers; his numbers were only slightly better than random guesses. In spite of the lack of correlation, the author continued to conduct the psychological examination, with undiminished confidence, as it was part of the military routine. This is a case of the representativeness heuristic, or substitution bias: we place too much confidence in the observations we make with limited knowledge. What you see is all there is.

132. The illusion of stock picking: the skill of picking stocks is an illusion, and an entire industry is built on it. Trading is a zero-sum game: there are always winners and losers, and it is largely a matter of luck. A study of 163,000 trades over a period of seven years found that the stocks that were sold went on to give a higher return (by about 3.2 percentage points a year) than the stocks that were bought. The more you trade, the greater your chances of losing your wealth. Stock prices already reflect all available information and future expectations, so why do some people sell while others buy? What is the rationale? It is intuition and subjective judgement at play; there is no scientific or logical explanation for these actions. Professional money managers extract money from untrained individuals acting on their whims.

133. Even for professional asset managers it is difficult to consistently beat the market. In a study the author examined, there was no year-to-year correlation in the performance of fund managers, which implies that they are effectively picking stocks at random; had skill been involved, the correlation would not be zero. If you confront the industry with these facts, they brush them under the carpet. To this day very few asset managers can actually beat the market, and mutual fund performance swings erratically year after year. In an efficient market, a well-educated guess is as good or bad as a wild guess; a monkey throwing darts would probably do just as good a job. Another point to note: just because you can analyze a security very well doesn't mean the stock will perform well. The information is already available and reflected in the stock price.

134. The illusion of pundits: many pundits in the media make expert predictions about economic, political and financial issues. In one study, expert predictions were compared with those of ordinary people: both were given three possible outcomes for major political and economic events. The results were devastating; again, dart-throwing monkeys would have done about as well. These experts, and the media organizations backing them, genuinely believe in their ability to predict; the greater the belief, the greater the flamboyance. The problem is that they are constructing coherent stories of the future based on knowledge of the past. The halo effect, outcome bias and hindsight bias all play their part, and the role played by chance is underestimated. In this age of ultra-specialization, where experts and economists make predictions with false confidence, the predictions of regular, unsophisticated newspaper readers are almost as good.

135. One must remember that, at the end of the day, the world is a complex place and it is difficult to make predictions; flamboyance in making predictions cannot be equated with accuracy. The author differentiates between hedgehogs and foxes. Hedgehogs are people who know one big thing and build their discourse around it; media producers love hedgehogs, who are opinionated and make for strong opinions on TV (it sells their shows). Foxes recognize that the world is a complex place and that luck plays a substantial role. Foxes are better than hedgehogs at making predictions, but even foxes are not great. So don't expect too much from fund managers, economists, experts and pundits: they may on average be correct only 20 to 30% of the time, and that is the best case.

136. Statistical models are much more likely to make better predictions than experts in most fields, including economics, finance, medicine, alternative assets like wine, engineering, sports, pathology and law (Paul Meehl's research). Humans are subject to all kinds of vagaries: a cool breeze on a hot summer day can produce a more optimistic prediction, and a parole judge may be more favorable to a prisoner after a sumptuous lunch than before it.
In the wine industry, Orley Ashenfelter developed a formula to predict the price of wine using three variables: the temperature over the summer growing season, the amount of rainfall at harvest time, and the rainfall during the previous winter. Based on this formula, the statistician was able to predict wine prices more accurately than most wine experts.
Simple statistical formulas beat experts again and again, in almost all fields. This could be because experts try to be clever and think out of the box, are overconfident, or make things too complex; in many cases experts override the statistical outcome out of overconfidence. In Meehl's famous 'broken leg' example, a formula predicts that a person will go to the movies tonight, but if you learn that the person broke a leg this morning, you should override the formula. This is one of the few cases where specifics trump statistics, and such cases are rare.
Another issue with human predictions and decisions is inconsistency: people give different answers when asked the same question within a gap of ten minutes.

Due to human inconsistency and unreliability, many people suggest the use of algorithms for decision making. Statistical models are much more robust than clinical intuition; human beings are affected by things around them that they do not explicitly notice. A well-programmed machine can give you better decisions and predictions than humans.

137. Research has shown that instead of using multiple-regression models with different weights for different causal factors, one can use equal weights. Models with equal weights have been found to perform as well as or better than models with fitted weights. Virginia Apgar's equal-weighted score for assessing newborn health has helped prevent infant deaths. Similarly, while choosing a stock, identify five important features that you are looking for and score them equally; this simple equal-weight model can perform better than most other models, and no statistical research is required.
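A minimal sketch of such an equal-weight model; the features and ratings below are invented for illustration:

    def equal_weight_score(ratings):
        # Rate each feature on the same 1-5 scale, then simply average them:
        # no fitted weights, no statistical research required.
        return sum(ratings) / len(ratings)

    # Hypothetical ratings of two stocks on five chosen features
    # (earnings growth, debt level, management, valuation, moat):
    stock_a = [4, 3, 5, 2, 4]
    stock_b = [3, 3, 3, 4, 3]
    print(equal_weight_score(stock_a), equal_weight_score(stock_b))  # 3.6 3.2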

138. Experts scoff at the use of algorithms for decision making, calling them mechanical, but algorithms avoid overconfidence and inconsistency and can therefore make better predictions. Although clinicians are good at making short-term predictions, formulas are almost always better at long-term ones. If a self-driving car meets with an accident in trials, we blow the problem out of proportion, forgetting that more people die in road accidents every year than on the battlefield. We overreact far more when a machine error is involved than when a human error is.

139. The bias against machines is probably due to the human inclination towards the natural rather than the artificial. When buying staples or vegetables, many people prefer organic or chemical-free produce; even beer sales have increased after adding the phrase 'no preservatives' to the label. However, people's attitudes towards machines are steadily changing: your credit score is determined by machines, most major streaming platforms use machines to suggest content to subscribers, and programmed advertisements on search engines are also placed by software. Machines are gaining acceptance steadily.

140. To avoid the halo effect and other subjective biases while conducting interviews, it is better to use a more structured exercise in which individuals are rated on five or six different but critical parameters. According to Paul Meehl's 'little book' (Clinical vs. Statistical Prediction), this exercise is much more accurate and useful in identifying suitable candidates.

141. The author doesn't discredit expert intuition. In fact, he collaborated with Gary Klein to understand how expert intuition works: when does it work and when does it not? When can you rely on NDM (naturalistic decision making) and when can you not?

142. This is how expert intuition works: over the years, experts such as firefighters or chess masters store all their experiences in memory. In a pressure situation, the expert quickly recalls a similar incident from memory and recognizes the pattern; he weighs the options and chooses the best way out, with or without modifications depending on the situation. Expert intuition is nothing but associative memory put to work, deliberately applied to the current situation by the brain. We marvel at the intuition of a firefighter who escapes a burning house just before it collapses, but this is similar to recognizing a friend in a crowd: you know it, yet you cannot explain how.

143. The author's understanding of intuition was based on observations of clinicians, stock pickers and political pundits, whereas Gary Klein's was based on observations of firefighters, physicians, nurses and chess masters. The author's main objective is to reject subjective false confidence; Klein's is to recognize and appreciate naturalistic decision making. When can you rely on expert judgment (including your own)?
1. Does the environment in which the expert operates exhibit observable regularities?
2. Can these regularities be learned over a period of time through experience?

In other words, expert intuition is a skill and it can be developed over time. The validity of a prediction depends on whether the complex situation has an underlying order. Physicians, athletes, firefighters and chess masters operate in complex situations that are fundamentally ordered, so their predictive power is higher; stock pickers, political pundits and clinicians work in complex situations that do not follow any particular order, so their predictive power is lower.

144. When the quality of feedback is good and immediate, expertise can be developed over time. An anesthetist's predictions are more valid than a radiologist's or a psychotherapist's: if an anesthetist says something is wrong, the patient has to be attended to immediately, so the feedback is instant.

145. The idea is not to discredit the human brain, but humans are overconfident, irrational and inconsistent from time to time, and the situations around us are much more complex than we think they are.

....to be continued....part 2
