
September 13, 2004

Looking for a good introduction to decision science?

Filed in Books
Subscribe to Decision Science News by Email (one email per week, easy unsubscribe)

RATIONAL CHOICE IN AN UNCERTAIN WORLD BY REID HASTIE AND ROBYN M. DAWES


The Psychology of Judgment and Decision Making.

Intended as an introductory textbook for both undergraduate and graduate students, Rational Choice in an Uncertain World lays out the foundations of decision science. In a non-technical style, Hastie and Dawes compare basic principles of rationality with actual decision-making behavior. The book is not about what to choose, but about how we choose.

Excerpt:

“Humans evolved from ancestors hundreds of thousands of years ago who lived in small groups and spent most of their waking hours foraging for sustenance. When we weren’t searching for something to eat or drink, we were looking for safe places to live, selecting mates, and protecting the offspring from those unions. Our success in accomplishing these “survival tasks” did not arise because of acute senses or especially powerful physical capacities. We dominate this planet today because of our distinctive capacity for good decision making.”

“The new knowledge that underlies the field of decision making is simple principles that define rationality in decision making and empirical facts about the cognitive limits that lead us not to decide rationally.”

“One fundamental point of this book is that we often think in automatic ways when making judgments and choices, that these automatic thinking processes can be described by certain psychological rules (e.g., heuristics), and that they can systematically lead us to make poorer judgments and choices than we would by thinking in a more controlled manner about our decisions. This is not to say that controlled thought is always better than intuitive thought. In fact, we hope the reader who finishes the book will have a heightened appreciation of the relative advantages of the two modes of thinking.”

Review by Practical Philosophy: The Journal of the Society for Philosophy in Practice

About the Authors:

PROFESSOR REID HASTIE


Reid Hastie is a Professor of Behavioral Science on the faculty of the Graduate School of Business in the Center for Decision Research at the University of Chicago. His primary research interests are in the areas of judgment and decision making (legal, managerial, medical, engineering, and personal), memory and cognition, and social psychology. He is best known for his research on legal decision making (Social Psychology in Court [with Michael Saks]; Inside the Jury [with Steven Penrod and Nancy Pennington]; and Inside the Juror [edited]) and on social memory and judgment processes (Person Memory: The Cognitive Basis of Social Perception [several co-authors]). Currently he is studying: the role of explanations in category concept representations (including the effects on category classification, deductive, and inductive inferences); civil jury decision making; the role of frequency information in probability judgments; and the psychology of reading statistical graphs and maps.

Reid Hastie vita

ROBYN M. DAWES, Ph.D.


Robyn Dawes is the Charles J. Queenan, Jr. University Professor in the Department of Social & Decision Sciences at Carnegie Mellon University (Ph.D., University of Michigan; department member since 1985). His research interests span five areas: intuitive expertise, human cooperation, retrospective memory, methodology and United States AIDS policy. He states, “I write journal articles and books because I believe the information they contain could be valuable — at least on a “perhaps, maybe” basis. I have never written anything with the expectation that it will sell, or become a “citation classic” (although one of my articles has). I believe that in American culture we are obsessed with outcomes rather than with behaving in ways that tend to bring about the best expected outcomes, while “time and chance” play a very important role. […] Some of my clinical colleagues claim that feelings are not understood until they can be put into words. My own view is that every translation of a feeling, thought, idea or mathematical form into words involves at least a small element of automatic distortion, often a much larger element.”

Robyn Dawes Home Page at CMU

September 12, 2004

Do crowds make better decisions than individuals? Yes, says author James Surowiecki in The Wisdom of Crowds

Filed in Books

THE WISDOM OF CROWDS BY JAMES SUROWIECKI


Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations

Traditionally, the social sciences have viewed the crowd as an unpredictable, dumb, and panicky monster. Now there is another point of view. New Yorker columnist James Surowiecki, who writes a popular business column, often about how markets work, has noticed a few things about crowd behavior that contradict this view. Crowds of all kinds, he says, can in fact be remarkably wise. In The Wisdom of Crowds, Surowiecki explores the notion that large groups of people are smarter than an elite few, no matter how brilliant: crowds can be better at solving problems, fostering innovation, coming to wise decisions, and even predicting the future.

Excerpt:

“If, years hence, people remember anything about the TV game show “Who Wants to Be a Millionaire?,” they will probably remember the contestants’ panicked phone calls to friends and relatives. Or they may have a faint memory of that short-lived moment when Regis Philbin became a fashion icon for his willingness to wear a dark blue tie with a dark blue shirt. What people probably won’t remember is that every week “Who Wants to Be a Millionaire?” pitted group intelligence against individual intelligence, and that every week, group intelligence won.”

“There are four key qualities that make a crowd smart. It needs to be diverse, so that people are bringing different pieces of information to the table. It needs to be decentralized, so that no one at the top is dictating the crowd’s answer. It needs a way of summarizing people’s opinions into one collective verdict. And the people in the crowd need to be independent, so that they pay attention mostly to their own information and don’t worry about what everyone around them thinks.”
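Surowiecki’s core claim — that a diverse, independent crowd, summarized into one collective verdict, beats the typical individual — can be illustrated with a toy simulation. This is our sketch, not the book’s; the numbers and the jelly-bean setup are invented for illustration.

```python
import random

random.seed(7)

TRUTH = 850  # e.g., the true number of jelly beans in a jar

# A diverse, independent crowd: each guess is the truth plus
# idiosyncratic error (mean zero, large spread)
guesses = [TRUTH + random.gauss(0, 200) for _ in range(1000)]

# One collective verdict: the simple average of all guesses
crowd_verdict = sum(guesses) / len(guesses)
crowd_error = abs(crowd_verdict - TRUTH)

# How far off is a typical individual?
typical_individual_error = sum(abs(g - TRUTH) for g in guesses) / len(guesses)

print(f"crowd error:              {crowd_error:.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```

With independent, mean-zero errors, the error of the average shrinks roughly with the square root of the crowd size — which is exactly why the diversity and independence conditions in the quote matter: correlated errors do not cancel.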

Review from the Wisdom of Crowds Home Page:

“The Wisdom of Crowds is a brilliant but accessible biography of an idea, one with important lessons for how we live our lives, select our leaders, conduct our business, and think about our world.”

About the Author:


James Surowiecki is a staff writer at The New Yorker, where he writes the popular business column, “The Financial Page.” His work has appeared in a wide range of publications, including the New York Times, the Wall Street Journal, Artforum, Wired, and Slate. He lives in Brooklyn, New York.

Review of The Wisdom of Crowds by The New York Times

September 7, 2004

Why do people choose to work alone or on teams? Which has better outcomes?

Filed in Research News

INDIVIDUAL OR TEAM DECISION-MAKING – CAUSES AND CONSEQUENCES OF SELF-SELECTION


People are social animals, and decisions can be made in at least two ways: individually or as a team. A recent study by Martin Kocher, Sabine Straub and Matthias Sutter, in the Discussion Papers on Strategic Interaction series of the Max Planck Institute for Research into Economic Systems (Strategic Interaction Group), investigates the causes and consequences of individual versus team decision-making, and what effect the liberty to opt for either mode has on the quality of one’s decisions. Kocher, Straub and Sutter found that endogenously formed teams performed significantly better, earning nearly twice as much money in an experiment as individuals did.

Average Profits:

In the experiment, teams earned significantly more than individuals in rounds 1, 3, and 4, as well as over all rounds together: team members won on average 13.2 euros, individuals only 7.4 euros.

Reasons for deciding alone or in groups:

Participants were encouraged to write down any additional reasons that were important for their decisions. These additional arguments fall into nine categories.

The most important reason for participants who opted for team decision-making was the expectation of better decisions and, thus, higher profits. Most of the subjects choosing individual decision-making stressed the importance of being able to decide alone, without any need for discussion or compromise. No participant who preferred to act alone referred to profits or the quality of decision-making; instead, all arguments were related to the process of decision-making. Both types of decision makers, interestingly, were highly satisfied with their chosen role.

Abstract:

Even though decision-making in small teams is pervasive in business and in private life, little is known about subjects’ preferences with respect to individual and team decision-making and about the consequences of respecting these preferences. We report the results from an experimental beauty-contest game, where subjects could endogenously choose their preferred way of decision-making. About 60% of subjects prefer to act in a team, and teams win the game significantly more often than individuals. Nevertheless, both individuals and team members are highly satisfied with their chosen role, but for different reasons.
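To make the setting concrete: in a (p-)beauty-contest game, each player names a number and the winner is whoever comes closest to p times the average of all numbers named. The sketch below is illustrative only; it assumes the conventional p = 2/3 and a 0-100 range, neither of which is specified in the abstract, and `beauty_contest_winner` is our hypothetical helper.

```python
def beauty_contest_winner(choices, p=2/3, lower=0, upper=100):
    """Return the index of the choice closest to p * average of all choices."""
    if not choices:
        raise ValueError("need at least one choice")
    if any(not lower <= c <= upper for c in choices):
        raise ValueError(f"choices must lie in [{lower}, {upper}]")
    target = p * (sum(choices) / len(choices))
    return min(range(len(choices)), key=lambda i: abs(choices[i] - target))

# Five players; the average is 42.6, so the target is 2/3 * 42.6 = 28.4,
# and the guess of 25 comes closest
choices = [50, 45, 33, 60, 25]
print(beauty_contest_winner(choices))  # → 4
```

The game rewards reasoning about what others will do — a natural test bed for comparing teams, which can pool such reasoning, against individuals.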

Excerpt:

“Teams are generally expected to make better decisions than individuals, and decisions made by teams are often accepted to a larger extent by those affected by these decisions. Furthermore, decision-making teams allow for the possibility to decentralize authority, encourage gains from knowledge transfers and may help to introduce optimal incentive schemes that can be supervised by peer pressure rather than by more costly shirking detection technologies. Thus, teams have become important vehicles for identifying high-quality solutions to emerging organizational problems (Jehn et al., 1999).”

Full text of article.

August 24, 2004

Fame at last

Filed in Gossip

FAST AND FRUGAL IN THE NEW YORKER


The catchiest of the concepts I’ve coined, “fast and frugal reasoning”, has made the New Yorker!

The Unpolitical Animal by Louis Menand

The author doesn’t use it quite correctly, but one can’t ask for everything I suppose. The fast and frugal heuristics I came up with do reason well in real-world environments.

I came up with the term in 1994 or so; it first appeared in print here: Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650-669.

August 23, 2004

A Model of Reference-Dependent Preferences

Filed in Research News

MODELING PREFERENCES THAT RESPECT REFERENCE POINTS


Reference-dependent preferences are everywhere. A working paper by Berkeley’s Botond Koszegi and Matt Rabin provides a welcome model of them. Read on.

Quote:
“Our goal in this paper was to put forward a fully specified model of reference dependent preferences that can accommodate existing evidence and, most importantly, be applied to a wide range of economic situations. The centerpiece of our model is the proposal that a person’s reference point is her recent probabilistic beliefs about the outcomes she is going to get. Thus, for example, if she expects improvements in her circumstances, and these changes fail to occur, she experiences a painful sensation of loss, even if she has retained or improved on her status quo. Indeed, our model provides an avenue to study an intuition about the strong role that expectations play in employee satisfaction with wages; it predicts both a status quo bias in stagnant environments, and a taste for improvement in environments where workers have become accustomed to improvement.”

Abstract:
“We develop a model that fleshes out, extends, and modifies existing models of reference dependent preferences and loss aversion while accommodating most of the evidence motivating these models. Our approach makes reference-dependent theory more broadly applicable by avoiding some of the ways that prevailing models—if applied literally and without ancillary assumptions—make variously weak and incorrect predictions. Our model combines the reference-dependent gain-loss utility with standard economic consumption utility and clarifies the relationship between the two. Most importantly, we posit that a person’s reference point is her recent expectations about outcomes (rather than the status quo), and assume that behavior accords to a personal equilibrium: The person maximizes utility given her rational expectations about outcomes, where these expectations depend on her own anticipated behavior. We apply our theory to consumer behavior, and emphasize that a consumer’s willingness to pay for a good is endogenously determined by the market distribution of prices and how she expects to respond to these prices. Because a buyer’s willingness to buy depends on whether she anticipates buying the good, for a range of market prices there are multiple personal equilibria. This multiplicity disappears when the consumer is sufficiently uncertain about the price she will face. Because paying more than she anticipated induces a sense of loss in the buyer, the lower the prices at which she expects to buy the lower will be her willingness to pay. In some situations, a known stochastic decrease in prices can even lower the quantity demanded.”
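The “gain-loss utility plus consumption utility” structure described in the abstract can be sketched as follows. The notation here is ours, not necessarily the paper’s, and the piecewise-linear gain-loss function is one conventional choice in the spirit of prospect theory:

```latex
% Total utility from outcome c given reference point r:
% consumption utility m(c) plus gain-loss utility \mu evaluated
% at the gain or loss relative to the reference point
u(c \mid r) = m(c) + \mu\bigl(m(c) - m(r)\bigr),
\qquad
\mu(x) =
\begin{cases}
  \eta x          & x \ge 0 \quad \text{(gains)} \\
  \eta \lambda x  & x < 0   \quad \text{(losses, with loss aversion } \lambda > 1\text{)}
\end{cases}
```

Since the paper takes the reference point to be recent probabilistic beliefs rather than a fixed status quo, r is itself a distribution, and the gain-loss term is evaluated in expectation over it.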

Full article available for download

Related Books:
*Colin F. Camerer, George Loewenstein & Matthew Rabin (Eds.) (2004). Advances in Behavioral Economics.
*Daniel Kahneman & Amos Tversky (Eds.) (2000). Choices, Values, and Frames.

August 19, 2004

Richard H. Thaler

Filed in Profiles

DECISION SCIENCE RESEARCHER PROFILE: RICHARD H. THALER


Richard Thaler is the Robert P. Gwinn Professor of Behavioral Science and Economics at the University of Chicago. He currently serves as the director of the Center for Decision Research, and is a research associate at the National Bureau of Economic Research and co-director of its project on behavioral economics. His research focuses on behavioral economics and finance, as well as the psychology of decision making. He has served as a visiting professor at the Sloan School of Management, Massachusetts Institute of Technology, and as the H. J. Louis Professor of Economics at the Johnson Graduate School of Management, Cornell University. Dr. Thaler earned his B.A. in economics from Case Western Reserve University and his Ph.D. in economics from the University of Rochester.

Current Positions:
*Robert P. Gwinn Professor of Behavioral Science and Economics, and Director of the Center for Decision Research, Graduate School of Business, University of Chicago.
*Research Associate, National Bureau of Economic Research (co-director -with Robert Shiller- of the Behavioral Economics Project, funded by the Russell Sage Foundation)

Recent Academic History:
*January 1988-June 1995: Henrietta Johnson Louis Professor of Economics, Johnson Graduate School of Management, Cornell University and Director, Center for Behavioral Economics and Decision Research
*September 1994-June 1995: Visiting Professor, Sloan School of Management, MIT
*January 1993-July 1993: Visiting Scholar, Sloan School of Management, MIT
*September 1991-July 1992: Visiting Scholar, Russell Sage Foundation, New York, NY

Quotes:
“I am an economist by training, but for the last 25 years I have been exploring ways to incorporate the findings of modern psychology into economic analysis.”

“As firms switch from defined-benefit plans to defined-contribution plans, employees bear more responsibility for making decisions about how much to save. The employees who fail to join the plan or who participate at a very low level appear to be saving at less than the predicted life cycle savings rates. Behavioral explanations for this behavior stress bounded rationality and self-control and suggest that at least some of the low-saving households are making a mistake and would welcome aid in making decisions about their saving”.

“We tend to think others are just like us. My colleague, George Wu, asked his students two questions: Do you have a cell phone? What percentage of the class has a cell phone? Cell phone owners thought 65 percent of the class had mobile phones, while the immobile phoners thought only 40 percent did. The right answer was about halfway in between. The false consensus effect will trap me into thinking that other economists will agree with me, 20 years of contrary evidence notwithstanding.”

Selected Books:
* Richard Thaler (1991). Quasi-Rational Economics. Russell Sage Foundation.
* Richard Thaler (1991). The Winner’s Curse: Paradoxes and Anomalies of Economic Life. Free Press.
* Richard Thaler (ed.) (1993). Advances in Behavioral Finance. Russell Sage Foundation.

Selected Articles:
* Barberis, Nicholas, and Richard H. Thaler (2003). A Survey of Behavioral Finance. In Handbook of the Economics of Finance, George M. Constantinides, Milton Harris, and René Stulz (Eds.). Elsevier Science, North-Holland, Amsterdam.
* Lamont, Owen, and Richard Thaler (2003). Can the Market Add and Subtract? Mispricing in Tech Stock Carve-Outs. Journal of Political Economy, 111(2), 227-268.
* Benartzi, Shlomo, and Richard Thaler (2002). How Much Is Investor Autonomy Worth? Journal of Finance, 57(4), 1593-1616.
* Benartzi, Shlomo, and Richard Thaler (2004). Save More Tomorrow: Using Behavioral Economics to Increase Employee Savings. Journal of Political Economy, 112(1), 164-187.

Other Interests:
Tennis, skiing, wine.

Read more:
Richard H. Thaler’s Home Page at the University of Chicago

Richard H. Thaler’s Activity page at the University of Chicago

Richard H. Thaler Bio

August 17, 2004

Gerd Gigerenzer

Filed in Profiles

DECISION SCIENCE RESEARCHER PROFILE: GERD GIGERENZER


Gerd Gigerenzer is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences.

Recent Career:

1997-Present: Director (Managing Director, 2000-2001), Max Planck Institute for Human Development, Berlin
1995-1997: Director, Max Planck Institute for Psychological Research, Munich
1992-1995: Professor, Department of Psychology, and Committee for the Conceptual Foundations of Science, University of Chicago, USA
1990-1992: Professor of Psychology, University of Salzburg, Austria
1984-1990: Professor of Psychology, University of Konstanz (Chairman, 1988-1989)
1982-1984: Privat-Dozent, Department of Psychology, University of Munich
1977-1982: Assistant Professor, Department of Psychology, University of Munich

Selected Books Published:

*Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002.
Gerd Gigerenzer has also published two books on simple heuristics:
* Simple Heuristics That Make Us Smart (with Peter Todd & The ABC Research Group) and
* Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics).
* Adaptive Thinking: Rationality in the Real World
* The Empire of Chance : How Probability Changed Science and Everyday Life

Selected Honors and Awards:

*Batten Fellow, Darden Business School, University of Virginia, Charlottesville, 2004.
*Visiting professor, University of Munich, 2004.
*2003 Reckoning with Risk shortlisted for the Aventis Prize for Science Books
*2002 Science Book of the Year Prize for Einmaleins der Skepsis (German translation of Calculated Risks), awarded by bild der wissenschaft

Quotes:

“What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people that propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty to produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?”

“Isn’t more information always better?” asks Gerd Gigerenzer. “Why else would bestsellers on how to make good decisions tell us to consider all pieces of information, weigh them carefully, and compute the optimal choice, preferably with the aid of a fancy statistical software package? In economics, Nobel prizes are regularly awarded for work that assumes that people make decisions as if they had perfect information and could compute the optimal solution for the problem at hand. But how do real people make good decisions under the usual conditions of little time and scarce information?”

Gerd Gigerenzer’s Home Page at the Max Planck Institute

Dissertations:

*Gigerenzer, G. (1982). Messung und axiomatische Modellbildung: Theoretische Grundlagen und experimentelle Untersuchungen zur sensorischen und sozialen Wahrnehmung. Habilitationsschrift, Munich.
*Gigerenzer, G. (1977). Nonmetrische multidimensionale Skalierung als Modell des Urteilsverhaltens. Zur Integration von dimensionsanalytischer Methodik und psychologischer Theorienbildung. Dissertation, Munich.

Gerd Gigerenzer Profile continued…

August 16, 2004

The Social Dilemma of Exaggerating the Positive

Filed in Research News

THE SOCIAL DILEMMA OF EXAGGERATING THE POSITIVE: OF PEOPLE AND OF POLICIES


Imagine you are invited to a party with ten strangers and asked to play an unusual game. Upon arrival you are presented with a choice. You can either 1) choose to receive $10 now, which you may keep, or 2) choose to receive $20, which you must give away entirely by distributing $2 to each of the ten strangers.

If everyone votes for (2), everyone leaves with $20. However, if you alone vote for (2) and everybody else votes for (1), they each leave with $12, and you leave with nothing. Many other combinations of votes are possible.
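The payoff arithmetic generalizes to any pattern of votes: each player keeps $10 unless she gives, and receives $2 from every other player who gives. A minimal sketch (the `payoffs` function is our hypothetical helper, not from Dawes’s article):

```python
def payoffs(gives):
    """Payoffs for the give-away game.

    gives: one boolean per player; True = take $20 and hand $2 to
    each of the other players, False = keep $10.
    """
    n = len(gives)
    result = []
    for i in range(n):
        kept = 0 if gives[i] else 10
        # $2 from every OTHER player who chose to give
        received = 2 * sum(1 for j in range(n) if j != i and gives[j])
        result.append(kept + received)
    return result

print(payoffs([True] * 11))            # everyone gives: all earn $20
print(payoffs([True] + [False] * 10))  # only you give: you $0, others $12 each
```

This makes the dilemma visible: universal giving beats universal keeping ($20 versus $10 each), yet a lone giver among keepers does worst of all.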

Carnegie Mellon professor Robyn Dawes speculates on this problem …

Now suppose that this game is repeated over a number of trials. What happens? Many colleagues around the world and I have conducted similar games, and the results are quite predictable. Especially if people are able to communicate with each other and make commitments prior to the first trial, almost all give the money away. But what happens on later trials where people know what they did on the previous trial and can infer from their payoff what other people did collectively? The rate of cooperation (giving) over trials is highly predictable. If this rate does not reach 100% on the first trial, some people note that they are making less money than those who keep their $10 stake and they then switch from being cooperators to being defectors. Subsequently, more people do so, often at an accelerating rate. By the end of ten trials, virtually no one is giving away the money. What has happened is that behavior has stabilized on the sink of universal defection. The one exception is that if people do not find out what happened after the first trial, where generally a majority honor the commitments to cooperate, subjects avoid the sink. When a few people know, however, that they are being “suckered” by others when they give away their $10 they stop doing it. (It is even possible to “reset” the situation by having an additional discussion prior to some trial, but even then over trials without discussion, the group ends up in the sink.)

An observer looking at behavior at the beginning of this degenerating process and ascribing the results to personality characteristics would conclude that people are altruistic and cooperative, while someone observing the end of the process and making similar personality attributions would conclude that people are generally selfish and uncooperative. But it is the situation itself that yields the behavior, at least on the part of anyone attempting to behave “rationally”.

Full Article

August 11, 2004

IQ or Financial Incentives?

Filed in Research News

HOW FINANCIAL INCENTIVES AND COGNITIVE ABILITIES AFFECT TASK PERFORMANCE IN LABORATORY SETTINGS

Ondrej Rydval and Andreas Ortmann have a paper forthcoming in Economics Letters in which they re-analyze data from Gneezy and Rustichini (2000), examining the relative importance of financial incentives and cognitive ability for performance on reasoning tasks.


The various NIS levels reflect the level of payment incentive (New Israeli Shekels, get it?) used by Gneezy and Rustichini (2000).

Rydval and Ortmann give the following interpretation:

“First of all, notice that the performance curves for the high-incentive treatments (NIS1 and NIS3) are virtually identical and slope considerably upwards, implying that there is a high within-treatment variation in performance but hardly any across-treatment one. Arguably, this is most likely due to a significant within-treatment variation in cognitive abilities. One could conceive that the large within-treatment performance variation is partly also effort-driven, but the variation in cognitive effort required to generate this result is unlikely; plus one would need to explain why the two performance curves seem almost identical despite the across-treatment incentive (and thus presumably effort) differential. Therefore, consistent with the interpretation of the IQ score as ability rank, it seems quite plausible that ability rather than incentive differentials determine individual performance differentials when incentives are high enough.

Next inspect the performance curves for the low-incentive treatments (no-pay and NIS0.1). Clearly, Gneezy and Rustichini (2000) were right in asserting that the NIS0.1 subjects overall were less motivated than the ones in the no-pay treatment. This is particularly apparent at the low-performance end where the gap between the performance curves for the low-incentive treatments widens (and, in addition, so does the gap between the performance curves of the two low-incentive treatments and the two high-incentive treatments). That the NIS0.1 subjects were less motivated than the ones in the no-pay treatment also seems confirmed by the performance curve for the NIS0.1 treatment lying below that for the no-pay treatment across the whole performance range. It is highly unlikely that this would be caused by across-treatment ability differentials, and thus across-treatment differences in motivation must have played the main role.

Finally and most importantly, focus on the slope of all four performance curves and the distance between them. An eyeball test reveals that, leaving aside the motivational problems at the low-performance end, the within-treatment variation in performance is generally much greater than the variation across treatments. To give a meaningful comparison, consider the largest across-treatment performance differential at the median rank. This turns out to be 13 (i.e., 24 correct answers in the NIS0.1 treatment vs. 37 in the NIS1 treatment), which is equivalent to the performance differential associated with moving up from the first to the third quartile within the NIS1 treatment (28 vs. 41). Note, however, that within-treatment performance differentials can be much larger. For instance, in both of the high-incentive treatments (NIS1 & NIS3), the difference in performance for individuals ranked 1 and 40 is as large as 34.”

Reference
Gneezy, U. & Rustichini, A. (2000). Pay enough or don’t pay at all, Quarterly Journal of Economics 115, 791-811.

August 6, 2004

The Orbitofrontal Cortex, Regret and Decision Formation

Filed in Research News

THE INVOLVEMENT OF THE ORBITOFRONTAL CORTEX IN THE EXPERIENCE OF REGRET

The study of decision making would be incomplete without consideration of the role of regret. A recent article seeks its place in the brain.


The orbitofrontal cortex is a small area of the brain that is located just behind the eyes. It is involved in cognitive and affective functions such as assessing emotional significance of events, anticipating rewards and punishments, adjusting behaviors to adapt to changes in rule contingencies, and inhibiting inappropriate behaviors.

A recent article in Science discusses neural responses associated with regret in gambling tasks.

Abstract

Facing the consequence of a decision we made can trigger emotions like satisfaction, relief, or regret, which reflect our assessment of what was gained as compared to what would have been gained by making a different decision. These emotions are mediated by a cognitive process known as counterfactual thinking. By manipulating a simple gambling task, we characterized a subject’s choices in terms of their anticipated and actual emotional impact. Normal subjects reported emotional responses consistent with counterfactual thinking; they chose to minimize future regret and learned from their emotional experience. Patients with orbitofrontal cortical lesions, however, did not report regret or anticipate negative consequences of their choices. The orbitofrontal cortex has a fundamental role in mediating the experience of regret.

Excerpt

“Previous work implicating the orbitofrontal cortex in emotion-based decision making principally emphasized bottom-up influences of emotions on cortical decision processes. We propose a different role whereby the orbitofrontal cortex exerts a top-down modulation of emotions as a result of counterfactual thinking, after a decision has been made and its consequences can be evaluated. As shown by the model of choice, the feeling of responsibility for the negative result, i.e., regret, reinforces the decisional learning process. The orbitofrontal cortex integrates cognitive and emotional components of the entire process of decision making; its incorrect functioning determines the inability to generate specific emotions such as regret, which has a fundamental role in regulating individual and social behavior.”