Itzhak GILBOA

Nationality: Israeli
Year of selection: 2009
Institution: HEC
Country: France
Risk: Socio-economic risks
Type of support: Chairs
Granted amount: 1 500 000 €

Decision Science: Reconciling classical rationality and gut feelings

Life is replete with decisions, most of which have unknown consequences. As individuals and as societies, when we decide whether to undergo medical treatment, or act against global warming, we make decisions without knowing what will actually happen. Among these decisions, we distinguish between two main types: those in which probabilities are known, such as in the case of insuring one’s car, and those in which probabilities are not known and where expert advice may diverge, such as in the case of global warming or financial crises.
To better grasp this uncertainty, probability theory was developed between the 17th and 20th centuries, and, as often happens in science, two opposing camps arose. One camp, typically associated with economics, formal models, and high-powered mathematics, argues that classical theory is the only way to make decisions rationally; that economic agents are probably smart enough to know it; and that, in case they are not, they should be taught the theory. The other camp, associated with psychology, makes a strong appeal to intuition and argues, based on experimental work, that rational decision theory is close to useless, even as a lofty goal of how decisions ought to be made, let alone as a description of how people actually behave.
Professor Gilboa’s goal is to take the best insights from each school of thought. He takes a much more modest view of theory than do some theorists: theory provides tools to hone and sharpen intuition, but it is not designed to replace it. At the same time, he takes a more critical view of experiments than do some experimentalists: experiments are necessary to delineate the scope of theories, but they rarely constitute clear-cut refutations, as they do in the natural sciences. In Gilboa’s view, it would be a grave mistake to ignore psychological evidence and pretend that people do, or even that they can, make decisions as the classical models suggest. But it would be an equally serious mistake to rely on intuition alone and to assume that the formal, mathematically based theory of decision making has nothing to teach us.
In the context of probability, there are situations, such as gambling in casinos, where there are known and well-defined probabilities that can be used for standard calculations. Such calculations can help us avoid mistakes such as the “gambler’s fallacy”, whereby people believe that successive occurrences of the same outcome (e.g., a sequence of coin tosses all coming up heads) make it more likely that the next occurrence will be different (i.e., tails), as though to “compensate” for the past. This is a plain mistake, originating from a misunderstanding of the “law of large numbers”. Once explained (each coin toss is an independent event not caused by prior history), people see the logic and understand why they were wrong.
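The independence point can be checked directly by simulation: the chance of heads immediately after a long run of heads stays at one half. A minimal sketch (function name and parameters are illustrative, not from the original):

```python
import random

def next_after_streak(streak_len=5, n_trials=200_000, seed=0):
    """Estimate P(heads) on the toss immediately following a run of
    `streak_len` consecutive heads. Independence implies roughly 0.5."""
    rng = random.Random(seed)
    run = 0            # length of the current run of heads
    after_streak = 0   # tosses observed right after such a run
    heads_after = 0    # how many of those tosses came up heads
    for _ in range(n_trials):
        toss = rng.random() < 0.5  # True = heads
        if run >= streak_len:
            after_streak += 1
            heads_after += toss
        run = run + 1 if toss else 0
    return heads_after / after_streak

print(next_after_streak())  # close to 0.5, not below it
```

The estimate stays near 0.5 regardless of the streak length, which is exactly the point the “law of large numbers” does not contradict: the law concerns long-run frequencies, not a compensating force acting on the next toss.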
In contrast, there are situations in which people do not obey classical theory because they do not have access to well-defined probabilities, as is the case with stock market crashes. In these cases, people often insist that classical theory does not capture their intuition. As a result, they may not find classical theory very useful. This means that decision theory needs to develop a better understanding of how people behave in such situations. However, it does not mean that all of decision theory is useless.
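When no single probability distribution is agreed on, one approach studied in the decision-theory literature is to evaluate each option by its worst expected value over a set of candidate distributions (a “maxmin” rule). A minimal sketch of that idea, in which all acts, states, payoffs, and probabilities are invented for illustration:

```python
# Two acts and two states ("crash" / "no crash"); experts disagree on
# P(crash), so we carry a set of candidate distributions instead of one.
# Every number below is a made-up illustration.
payoffs = {
    "stocks": {"crash": -50.0, "no crash": 20.0},
    "bonds":  {"crash":   2.0, "no crash":  3.0},
}
candidate_probs = [
    {"crash": 0.01, "no crash": 0.99},  # optimistic expert
    {"crash": 0.50, "no crash": 0.50},  # pessimistic expert
]

def expected(act, p):
    """Expected payoff of an act under one candidate distribution."""
    return sum(p[s] * payoffs[act][s] for s in p)

# Maxmin rule: score each act by its worst expected payoff over the
# candidate distributions, then pick the act with the best worst case.
worst = {a: min(expected(a, p) for p in candidate_probs) for a in payoffs}
best_act = max(worst, key=worst.get)
print(best_act)  # the safe act wins once the pessimistic expert is taken seriously
```

With these numbers, stocks look best under the optimistic expert but disastrous under the pessimistic one, so the cautious rule selects bonds; a single agreed-upon probability would have made the standard expected-value calculation applicable instead.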
In the final analysis, a good decision is one that is considered as such by the decision maker. Rationality is not a medal of honour bestowed on selected decision makers by decision theorists. Rationality is a matter of subjective coherence, of a person feeling that they have made the best decision they could have. In most situations, pure theory cannot dictate what the “correct” decision is; but it can, and should, help the decision maker in finding out what the best decision is for her in the situation at hand. That is why Professor Gilboa tries to combine, with a sufficient degree of openness and critical thinking, mathematical theory, intuition, and psychological evidence in order to help people understand the decisions of others, and make better decisions.
With an academic background in both mathematics and economics from Tel-Aviv University, followed by positions at Northwestern and then Yale University, Professor Gilboa works at the crossroads of various fields. This AXA Chair position at HEC helps him to advance decision science: not only does the HEC department of Economics & Decision Science encompass research activities in a large and fertile range of topics from robustness theory to financial economics, but HEC is also one of the best business schools in the world, where tomorrow’s decision makers are trained.

What is a good decision?

Itzhak Gilboa, HEC Professor, Department of Economics and Decision Science. He is the holder of the AXA-HEC Chair for Decision Science.

Life is replete with decisions, most of which have unknown consequences. As individuals and as societies, when we decide whether to buy insurance, undergo a medical treatment, or cope with global warming, we make decisions without knowing what will actually happen. Among these decisions, we distinguish between two main types: those in which probabilities are known, such as in the case of insuring one’s car, and those in which probabilities are not known and where expert advice might diverge, as in the case of global warming, financial crises, and so forth.

Starting with the pioneering works of Pascal, the Bernoullis, and others, dating back to the 17th and 18th centuries, probability theory developed as a branch of mathematics that deals with the representation of uncertainty. However, the second half of the 20th century witnessed a growing accumulation of psychological evidence showing that many “mistakes” or “irrationalities” are not so easily dismissed, and that some people reject the classical notions of rationality even when these are explained to them.
This state of affairs gave rise to two opposing camps, as often happens in science. One, typically associated with economics, formal models, and high-powered mathematics, argues that the classical theory is the only way to make decisions rationally; that economic agents are probably smart enough to know it; and that, in case they are not, they should be taught the theory. The other camp, associated with psychology, makes a strong appeal to intuition and, supported to some extent by experimental work, holds that rational decision theory is close to useless, even as a lofty goal of how decisions ought to be made, let alone as a description of how people actually behave.
Of course, the sensible thing to do is to take the best insights from each. Fortunately, the field of decision science is now at a point that makes this possible. In my view, it would be a grave mistake to ignore psychological evidence and pretend that people do, or even that they can, make decisions as the classical models suggest. But it would be an equally serious mistake to rely on intuition alone and to assume that the formal, mathematically based theory of decision-making has nothing to teach us.
There are situations, such as gambling in casinos, where there are known and well-defined probabilities that can be used for standard calculations. Such calculations can help us avoid mistakes such as the “gambler’s fallacy”, whereby people believe that successive occurrences of the same outcome (e.g., a sequence of coin tosses all coming up heads) make it more likely that the next occurrence will be different (i.e., tails), as though to “compensate” for the past. This is a plain mistake, originating from a misunderstanding of the “law of large numbers.”

One can explain it (each coin toss is an independent event, not caused by prior history), and people see the logic and understand why they were wrong. There are other situations that do not involve logical mistakes, but where people still feel that the theory can help them make better decisions. For example, due to “loss aversion”, people are often willing to pay a sure, moderate cost in order to avoid the possibility of a larger felt loss (for example, when buying car insurance, they prefer a small or no deductible). This type of behavior does not result from a mathematical mistake, but it is a mode of behavior that many people find irrational once they analyze it.
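The deductible example can be made concrete with a small expected-cost calculation; the premiums, deductible amount, and accident probability below are invented purely for illustration:

```python
# Two hypothetical car-insurance policies (all numbers are made up):
# the low-deductible policy charges a higher premium, paid for certain.
accident_prob = 0.10

premium_low_ded, low_ded = 700.0, 0.0      # no deductible, pricey premium
premium_high_ded, high_ded = 500.0, 500.0  # 500 deductible, cheaper premium

def expected_cost(premium, deductible, p_accident):
    """The premium is paid for sure; the deductible only if an accident occurs."""
    return premium + p_accident * deductible

low = expected_cost(premium_low_ded, low_ded, accident_prob)     # 700.0
high = expected_cost(premium_high_ded, high_ded, accident_prob)  # 550.0
# The high-deductible policy is cheaper in expectation, yet a loss-averse
# buyer may still prefer the low-deductible one: paying a deductible after
# an accident is experienced as a "loss", while the premium is not.
```

Under these made-up numbers the low-deductible choice costs 150 more in expectation, which is the sense in which analysis, rather than arithmetic error, is what reveals the behavior as questionable.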
In contrast, there are situations in which people do not obey the classical theory because they do not have access to well-defined probabilities, as is the case with stock market crashes, election results, and so forth. In these cases, people often insist that the classical theory, elegant though it is, does not capture their intuition. As a result, they may not find the classical theory very useful. This means that decision theory needs to develop a better understanding of how people behave in such situations. But it does not mean that all of decision theory is useless.
In the final analysis, a good decision is one that is considered as such by the decision maker. Rationality is not a medal of honor bestowed on selected decision makers by decision theorists. Rationality is a matter of subjective coherence, of a person feeling that they have made the best decision they could have.

In most situations, pure theory cannot dictate what the “correct” decision is; but it can, and should, help the decision maker in finding out what the best decision is for her in the situation at hand. With a sufficient degree of openness and critical thinking, mathematical theory, intuition, and psychological evidence can be combined to help people understand the decisions of others, and to make decisions that they themselves will like better.
