Category:Decision theory
Decision theory is the study of optimal
actions, as determined by considering the probability
and utility
of different outcomes.
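As a worked illustration of that definition: under expected-utility
theory, the optimal action maximizes the sum of probability times
utility over its outcomes. The following minimal Python sketch uses
hypothetical actions, probabilities, and utilities; none of the numbers
come from the article.

```python
# Minimal expected-utility sketch; the actions and numbers are
# hypothetical illustrations only.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "carry umbrella": [(0.3, 8), (0.7, 6)],    # (P(rain), u), (P(dry), u)
    "leave umbrella": [(0.3, 0), (0.7, 10)],
}

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):.2f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("optimal action:", best)   # here: leave umbrella (EU 7.0 vs 6.6)
```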
List of cognitive biases
A cognitive bias is a pattern of deviation in
judgment that occurs in particular situations (see also cognitive distortion and the lists of thinking-related
topics).
Implicit in the concept of a "pattern of deviation" is a standard of
comparison; this may be the judgment of people outside those particular
situations, or may be a set of independently verifiable facts. The
existence of some of these cognitive biases has been verified empirically
in the field of psychology.
Cognitive biases are instances of evolved
mental behavior. Some are
presumably adaptive, for example, because they lead to more effective
actions or enable faster decisions. Others presumably result from a
lack of appropriate mental mechanisms, or from the misapplication of a
mechanism that is adaptive under different circumstances.
Decision-making and behavioral biases
Many of these biases are studied for how
they affect belief formation, business decisions, and scientific
research.
- Bandwagon effect — the tendency to do (or
believe) things because many other people do (or believe) the same.
Related to groupthink and herd behaviour.
- Base rate fallacy — ignoring available
statistical data in favor of particulars.
- Bias blind spot — the tendency not to
compensate for one's own cognitive biases.[1]
- Choice-supportive bias — the
tendency to remember one's choices as better than they actually were.
- Confirmation bias — the tendency to
search for or interpret information in a way that confirms one's
preconceptions.
- Congruence bias — the tendency to test
hypotheses exclusively through direct testing, in contrast to tests of
possible alternative hypotheses.
- Contrast effect — the enhancement or
diminishing of a weight or other measurement when compared with a
recently observed contrasting object.
- Déformation professionnelle
— the tendency to look at things according to the conventions of one's
own profession, forgetting any broader point of view.
- Denomination effect — the tendency to
spend more money when it is denominated in small amounts (e.g. coins)
than large amounts (e.g. bills).[2]
- Distinction bias — the tendency to view
two options as more dissimilar when evaluating them simultaneously than
when evaluating them separately.[3]
- Endowment effect — "the fact that people
often demand much more to give up an object than they would be willing
to pay to acquire it".[4]
- Experimenter's or Expectation bias
— the tendency for experimenters to believe, certify, and publish data
that agree with their expectations for the outcome of an experiment,
and to disbelieve, discard, or downgrade the corresponding weightings
for data that appear to conflict with those expectations.[5]
- Extraordinarity bias
— the tendency to value an object more than others in the same category
as a result of an extraordinarity of that object that does not, in
itself, change the value.[citation needed]
- Focusing effect
— prediction bias occurring when people place too much importance on
one aspect of an event; causes error in accurately predicting the
utility of a future outcome.
- Framing
— Using an approach or description of the situation or issue that is
too narrow. Also framing effect — drawing different conclusions based
on how data is presented.
- Hyperbolic discounting
— the tendency for people to have a stronger preference for more
immediate payoffs relative to later payoffs, where the tendency
increases the closer to the present both payoffs are (a numeric sketch
follows this list).
- Illusion of control — the tendency for
human beings to believe they can control or at least influence outcomes
that they clearly cannot.
- Impact bias — the tendency for people to
overestimate the length or the intensity of the impact of future
feeling states.
- Information bias — the
tendency to seek information even when it cannot affect action.
- Interloper
effect
— the tendency to value third party consultation as objective,
confirming, and without motive. Also consultation paradox, the
conclusion that solutions proposed by existing personnel within an
organization are less likely to receive support than from those
recruited for that purpose.
- Irrational escalation
— the tendency to make irrational decisions based upon rational
decisions in the past or to justify actions already taken.
- Just-world phenomenon
- witnesses of an "inexplicable injustice . . . will rationalize it by
searching for things that the victim might have done to deserve it"
- Loss aversion — "the disutility of giving up
an object is greater than the utility associated with acquiring it".[6]
(see also sunk cost effects and Endowment effect).
- Mere exposure effect
— the tendency for people to express undue liking for things merely
because they are familiar with them.
- Money illusion
— the tendency of people to concentrate on the nominal (face value) of
money rather than its value in terms of purchasing power.
- Moral credential effect — the tendency of
a track record of non-prejudice to increase subsequent prejudice.
- Need for Closure
— the need to reach a verdict in important matters; to have an answer
and to escape the feeling of doubt and uncertainty. The personal
context (time or social pressure) might increase this bias.[7]
- Negativity bias
— phenomenon by which humans pay more attention to and give more weight
to negative than positive experiences or other kinds of information.
- Neglect of probability — the
tendency to completely disregard probability when making a decision
under uncertainty.
- Normalcy bias — the refusal to plan for, or
react to, a disaster which has never happened before.
- Not Invented Here — the tendency to
ignore that a product or solution already exists, because its source is
seen as an "enemy" or as "inferior".
- Omission bias — the tendency to judge
harmful actions as worse, or less moral, than equally harmful omissions
(inactions).
- Outcome bias
— the tendency to judge a decision by its eventual outcome instead of
based on the quality of the decision at the time it was made.
- Planning fallacy — the tendency to
underestimate task-completion times.
- Post-purchase rationalization
— the tendency to persuade oneself through rational argument that a
purchase was a good value.
- Pseudocertainty effect
— the tendency to make risk-averse choices if the expected outcome is
positive, but make risk-seeking choices to avoid negative outcomes.
- Reactance
— the urge to do the opposite of what someone wants you to do out of a
need to resist a perceived attempt to constrain your freedom of choice.
- Restraint bias
- the tendency to overestimate one's ability to show restraint in the
face of temptation.
- Selective perception — the tendency
for expectations to affect perception.
- Semmelweis reflex — the tendency to
reject new evidence that contradicts an established paradigm.[8]
- Status quo bias — the tendency for people
to like things to stay relatively the same (see also loss
aversion, endowment effect, and system justification).[9]
- Von Restorff effect — the tendency for
an item that "stands out like a sore thumb" to be more likely to be
remembered than other items.
- Wishful thinking
— the formation of beliefs and the making of decisions according to
what is pleasing to imagine instead of by appeal to evidence or
rationality.
- Zero-risk bias — preference for reducing a
small risk to zero over a greater reduction in a larger risk.
Biases in probability and belief
Many of these biases are often studied
for how they affect business
and economic decisions and how they affect experimental research.
- Ambiguity effect — the avoidance of
options for which missing information makes the probability seem
"unknown".
- Anchoring
effect
— the tendency to rely too heavily, or "anchor," on a past reference or
on one trait or piece of information when making decisions (also called
"insufficient adjustment").
- Attentional bias — neglect of relevant
data when making judgments of a correlation or association.
- Authority bias
— the tendency to value an ambiguous stimulus (e.g., an art
performance) according to the opinion of someone who is seen as an
authority on the topic.
- Availability heuristic
— estimating what is more likely by what is more available in memory,
which is biased toward vivid, unusual, or emotionally charged examples.
- Availability cascade
— a self-reinforcing process in which a collective belief gains more
and more plausibility through its increasing repetition in public
discourse (or "repeat something long enough and it will become true").
- Belief bias — an effect where someone's
evaluation of the logical strength of an argument is biased by the
believability of the conclusion.
- Clustering illusion — the tendency to
see patterns where actually none exist.
- Capability
bias — The tendency to believe that the closer average performance
is to a target, the tighter the distribution of the data set.
- Conjunction fallacy — the tendency to
assume that specific conditions are more probable than general ones.
- Disposition effect — the tendency to
sell assets that have increased in value but hold assets that have
decreased in value.
- Gambler's fallacy
— the tendency to think that future probabilities are altered by past
events, when in reality they are unchanged. Results from an erroneous
conceptualization of the Law of large numbers.
For example, "I've flipped heads with this coin five times
consecutively, so the chance of tails coming out on the sixth flip is
much greater than heads."
- Hawthorne effect — the tendency of people
to perform or perceive differently when they know that they are being
observed.
- Hindsight bias — sometimes called the
"I-knew-it-all-along" effect, the inclination to see past events as
being predictable.
- Illusory correlation — beliefs that
inaccurately suppose a relationship between a certain type of action
and an effect.[10]
- Ludic fallacy
— the analysis of chance-related problems according to the belief that
the unstructured randomness found in life resembles the structured
randomness found in games, ignoring the non-Gaussian distribution of
many real-world results.
- Neglect
of prior base rates effect — the tendency to neglect known odds
when reevaluating odds in light of weak evidence (see the Bayes'-rule
sketch after this list).
- Observer-expectancy effect
— when a researcher expects a given result and therefore unconsciously
manipulates an experiment or misinterprets data in order to find it
(see also subject-expectancy effect).
- Optimism bias — the systematic tendency to
be over-optimistic about the outcome of planned actions.
- Ostrich effect — ignoring an obvious
(negative) situation.
- Overconfidence effect
— excessive confidence in one's own answers to questions. For example,
for certain types of question, answers that people rate as "99%
certain" turn out to be wrong 40% of the time.
- Positive outcome bias — a tendency in
prediction to overestimate the probability of good things happening to
oneself (see also wishful thinking, optimism
bias, and valence effect).
- Pareidolia
— a vague and random stimulus (often an image or sound) is perceived as
significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and
hearing hidden messages on records played in
reverse.
- Primacy effect — the
tendency to weigh initial events more than subsequent events.
- Recency effect — the
tendency to weigh recent events more than earlier events (see also peak-end
rule).
- Disregard of regression toward the mean —
the tendency to expect extreme performance to continue.
- Selection bias — a distortion of evidence
or data that arises from the way that the data are collected.
- Stereotyping — expecting
a member of a group to have certain characteristics without having
actual information about that individual.
- Subadditivity effect — the tendency
to judge probability of the whole to be less than the probabilities of
the parts.
- Subjective validation
— perception that something is true if a subject's belief demands it to
be true. Also assigns perceived connections between coincidences.
- Telescoping effect — the effect that
recent events appear to have occurred more remotely and remote events
appear to have occurred more recently.
- Texas sharpshooter fallacy
— the fallacy of selecting or adjusting a hypothesis after the data is
collected, making it impossible to test the hypothesis fairly. Refers
to the concept of firing shots at a barn door, drawing a circle around
the best group, and declaring that to be the target.
- Well travelled road effect
- the underestimation of the time taken to traverse oft-travelled
routes and the overestimation of the time taken to traverse less
familiar routes.
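The base-rate entries above reduce to Bayes' rule. A minimal sketch
with hypothetical numbers (a 99%-sensitive, 95%-specific test for a
condition with a 1% base rate) shows how far intuition can miss when
the prior is neglected.

```python
# Bayes'-rule sketch of base rate neglect; all numbers are hypothetical.

base_rate = 0.01       # P(condition)
sensitivity = 0.99     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"P(condition | positive) = {posterior:.3f}")   # ~0.167
# Despite the accurate-sounding test, five out of six positives are
# false alarms, because the 1% base rate dominates the calculation.
```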
Social biases
Most of these biases are labeled as attributional biases.
- Actor-observer bias
— the tendency for explanations of other individuals' behaviors to
overemphasize the influence of their personality and underemphasize the
influence of their situation (see also fundamental attribution error).
However,
this is coupled with the opposite tendency for the self in
that explanations for our own behaviors overemphasize the influence of
our situation and underemphasize the influence of our own personality.
- Egocentric bias — occurs when people claim
more responsibility for themselves for the results of a joint action
than an outside observer would.
- Forer effect
(aka Barnum Effect) — the tendency to give high accuracy ratings to
descriptions of one's personality that supposedly are tailored
specifically to oneself, but are in fact vague and general enough to
apply to a wide range of people. For example, horoscopes.
- False consensus effect — the
tendency for people to overestimate the degree to which others agree
with them.
- Fundamental attribution error
— the tendency for people to over-emphasize personality-based
explanations for behaviors observed in others while under-emphasizing
the role and power of situational influences on the same behavior (see
also actor-observer bias, group attribution error, positivity effect, and negativity effect).
- Halo effect
— the tendency for a person's positive or negative traits to "spill
over" from one area of their personality to another in others'
perceptions of them (see also physical attractiveness
stereotype).
- Herd instinct — Common
tendency to adopt the opinions and follow the behaviors of the majority
to feel safer and to avoid conflict.
- Illusion of asymmetric insight
— people perceive their knowledge of their peers to surpass their
peers' knowledge of them.
- Illusion of transparency — people
overestimate others' ability to know them, and they also overestimate
their ability to know others.
- Illusory superiority
— overestimating one's desirable qualities, and underestimating
undesirable qualities, relative to other people. Also known as
superiority bias, the "Lake Wobegon effect", the
"better-than-average effect", or the Dunning-Kruger effect.
- Ingroup bias — the tendency for people to
give preferential treatment to others they perceive to be members of
their own groups.
- Just-world phenomenon — the tendency
for people to believe that the world is just and therefore people "get
what they deserve."
- Notational bias — a form of cultural bias in which the notational
conventions of recording data bias the appearance of that data toward
(or away from) the system upon which the notational schema is based.
- Outgroup
homogeneity bias — individuals see members of their own group as
being relatively more varied than members of other groups.
- Projection bias — the
tendency to unconsciously assume that others share the same or similar
thoughts, beliefs, values, or positions.
- Self-serving bias
(also called "behavioral confirmation effect") — the tendency to claim
more responsibility for successes than failures. It may also manifest
itself as a tendency for people to evaluate ambiguous information in a
way beneficial to their interests (see also group-serving bias).
- Self-fulfilling prophecy — the
tendency to engage in behaviors that elicit results which will
(consciously or not) confirm existing attitudes.[11]
- System justification
— the tendency to defend and bolster the status quo. Existing social,
economic, and political arrangements tend to be preferred, and
alternatives disparaged sometimes even at the expense of individual and
collective self-interest. (See also status quo bias.)
- Trait ascription bias
— the tendency for people to view themselves as relatively variable in
terms of personality, behavior and mood while viewing others as much
more predictable.
- Ultimate attribution error
— Similar to the fundamental attribution error, in this error a person
is likely to make an internal attribution to an entire group instead of
the individuals within the group.
Memory errors
- Consistency bias — incorrectly remembering one's past attitudes
and behaviour as resembling present attitudes and behaviour.
- Cryptomnesia — a form of misattribution
where a memory is mistaken for imagination.
- Egocentric bias
— recalling the past in a self-serving manner, e.g. remembering one's
exam grades as being better than they were, or remembering a caught
fish as being bigger than it was
- False memory — confusion of imagination with
memory, or the confusion of true memories with false memories.
- Hindsight bias
— filtering memory of past events through present knowledge, so that
those events look more predictable than they actually were; also known
as the 'I-knew-it-all-along effect'.
- Reminiscence bump
— the effect that people tend to recall more personal events from
adolescence and early adulthood than from other lifetime periods.
- Rosy retrospection — the tendency to
rate past events more positively than one actually rated them when
the events occurred.
- Self-serving bias — perceiving oneself
responsible for desirable outcomes but not responsible for undesirable
ones.
- Suggestibility — a form of misattribution
where ideas suggested by a questioner are mistaken for memory.
Notes
- ^
Pronin,
Emily; Matthew B. Kugler (July 2007). "Valuing thoughts, ignoring
behavior: The introspection illusion as a source of the bias blind
spot". Journal of Experimental Social Psychology (Elsevier) 43
(4): 565–578. doi:10.1016/j.jesp.2006.05.011.
ISSN 0022-1031.
- ^
Why We Spend Coins Faster Than
Bills by Chana Joffe-Walt. All Things Considered, 12 May 2009.
- ^
(Hsee & Zhang, 2004)
- ^
(Kahneman, Knetsch, and Thaler 1991: 193) Richard Thaler coined the
term "endowment effect."
- ^
M. Jeng, "A selected history of expectation bias in physics", American
Journal of Physics 74 578-583 (2006)
- ^
(Kahneman, Knetsch, and Thaler 1991: 193) Daniel Kahneman, together
with Amos Tversky, coined the term "loss aversion."
- ^
Kruglanski, 1989; Kruglanski & Webster, 1996
- ^
Edwards, W. (1968). Conservatism in human information processing. In:
B. Kleinmutz (Ed.), Formal Representation of Human Judgment. (pp.
17-52). New York: John Wiley and Sons.
- ^
(Kahneman, Knetsch, and Thaler 1991: 193)
- ^
Tversky, Amos; Daniel Kahneman
(September 27, 1974). "Judgment under Uncertainty: Heuristics and
Biases". Science (American Association for the Advancement of
Science) 185 (4157): 1124–1131.
- ^
Darley, John M.; Paget H. Gross (2000).
"A Hypothesis-Confirming Bias in Labelling Effects". in Charles
Stangor. Stereotypes and prejudice: essential readings.
Psychology Press. p. 212. ISBN 9780863775895.
- ^
Kahneman,
Daniel; Shane Frederick (2002). "Representativeness Revisited:
Attribute Substitution in Intuitive Judgment". in Thomas Gilovich, Dale
Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of
Intuitive Judgment. Cambridge: Cambridge University Press.
pp. 49–81. ISBN 9780521796798. OCLC 47364085.
- ^
Slovic,
Paul; Melissa Finucane, Ellen Peters, Donald G. MacGregor (2002). "The
Affect Heuristic". in Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics
and Biases: The Psychology of Intuitive Judgment. Cambridge
University Press. pp. 397–420. ISBN 9780521796798.
References
- Baron, Jonathan
(2000), Thinking and deciding (3rd ed.), New York: Cambridge
University Press, ISBN 0-521-65030-5
- Bishop,
Michael A.; J. D. Trout (2004), Epistemology and the Psychology of
Human Judgment, New York: Oxford University Press, ISBN 0-19-516229-3
- Gilovich, Thomas
(1993), How We Know What Isn't So: The Fallibility of Human Reason
in Everyday Life, New York: The Free Press, ISBN 0-02-911706-2
- Gilovich,
Thomas; Dale Griffin, Daniel Kahneman (2002), Heuristics and
biases: The psychology of intuitive judgment, Cambridge, UK:
Cambridge University Press, ISBN 0-521-79679-2
- Greenwald, A.
(1980), "The Totalitarian Ego: Fabrication and Revision of Personal
History", American Psychologist (American Psychological
Association) 35 (7), ISSN 0003-066X
- Kahneman, Daniel;
Paul Slovic, Amos Tversky (1982), Judgment under Uncertainty:
Heuristics and Biases, Cambridge, UK: Cambridge University Press, ISBN 0-521-28414-7
- Kahneman,
Daniel; Jack L. Knetsch, Richard H. Thaler (1991), "Anomalies: The
Endowment Effect, Loss Aversion, and Status Quo Bias", The Journal
of Economic Perspectives (American Economic Association) 5
(1): 193–206
- Plous, Scott (1993),
The Psychology of Judgment and Decision Making, New York:
McGraw-Hill, ISBN 0-07-050477-6
- Schacter, Daniel
L. (1999), "The Seven Sins of Memory: Insights From Psychology and
Cognitive Neuroscience", American Psychologist (American
Psychological Association) 54 (3): 182–203, ISSN 0003-066X
- Tetlock, Philip E.
(2005), Expert Political Judgment: how good is it? how can we know?,
Princeton: Princeton University Press, ISBN 978-0-691-12302-8
- Virine,
L.; M. Trumper (2007), Project Decisions: The Art and
Science, Vienna, VA: Management Concepts, ISBN 978-1567262179
Taleb distribution
In economics
and finance,
a Taleb distribution is a probability distribution in which
there is a high probability of a small gain, and a small
probability of a very large loss, which more than outweighs the gains.
In these situations the expected value is (very much) less than
zero, but this fact is camouflaged by the appearance of low risk and steady returns. It is a
combination of kurtosis risk and skewness
risk:
overall returns are dominated by extreme events (kurtosis), which are
to the downside (skew). The corresponding situation is also known as
the peso problem.
The term is therefore increasingly used
in the financial markets to describe dangerous
or flawed trading strategies. The Taleb distribution is named for Nassim Taleb, based on ideas outlined in his Fooled by Randomness.[1]
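A minimal Monte Carlo sketch (with hypothetical payoffs, not drawn
from Taleb) shows how such a distribution camouflages a negative
expectation: most short sample paths look steadily profitable even
though every trade loses money on average.

```python
import random

# Taleb-distribution sketch with hypothetical payoffs: +$1 with
# probability 0.997, -$500 with probability 0.003, so the expected
# value per trade is 0.997*1 - 0.003*500 = -0.503.

random.seed(0)

def path_pnl(n_trades=100):
    return sum(1.0 if random.random() < 0.997 else -500.0
               for _ in range(n_trades))

paths = [path_pnl() for _ in range(20)]
profitable = sum(1 for p in paths if p > 0)
print(f"profitable paths: {profitable}/20")
print(f"mean P&L per path: {sum(paths)/len(paths):+.1f}")
# Typically roughly three-quarters of these 100-trade paths show a
# steady +$100, while the mean across paths is about -$50: the rare
# -$500 events dominate the average but rarely appear in any one sample.
```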
Criticism of trading strategies
Pursuing a trading strategy with a Taleb
distribution yields a high
probability of steady returns for a time, but with a near certainty of
eventual ruin. Some traders pursue this consciously as a risky
strategy, while critics argue that others do so either unconsciously,
unaware of the hazards ("innocent fraud"), or consciously,
particularly in hedge funds.
Risky strategy
If done consciously, with one's own
capital or openly disclosed to
investors, this is a risky strategy, but appeals to some: one will want
to exit the trade before the rare event happens. This occurs for
instance in a speculative bubble,
where one purchases an asset in the expectation that it will likely go
up, but may plummet, and hopes to sell the asset before the bubble
bursts.
This has also been referred to as
"picking up pennies in front of a steamroller".[2]
"Innocent fraud"
John Kay has likened securities trading
to bad driving, as both are characterized by Taleb distributions.[3]
Drivers can make many small gains in time by taking risks such as
overtaking on the inside and tailgating; however, they are then at
risk of experiencing a very large loss in the form of a serious
traffic accident. Kay has described Taleb distributions as the basis
of the carry trade and has claimed that these, along with
mark-to-market accounting and other practices, constitute part of what
JK Galbraith has called "innocent fraud".[4]
Moral hazard
Some critics of the hedge
fund industry claim that the compensation structure generates high
fees for investment strategies that follow a
Taleb distribution, creating moral
hazard.[5]
In such a scenario, the fund can claim high asset management and
performance fees until it suddenly 'blows up', losing the investor
significant sums of money and wiping out all the gains the investor
made in previous periods; the fund manager, however, keeps all fees
earned prior to the losses being incurred, and ends up enriching
himself in the long run because he does not pay for his losses.
Risks
Taleb distributions pose several
fundamental problems, all possibly leading to risk being overlooked:
- presence of extreme adverse events
- The very presence or possibility of adverse events may pose a
problem per se, which is ignored by only looking at the average case –
a decision may be good in expectation (in the aggregate, in the long
term), but a single rare event may ruin the investor: one is courting
disaster.
- unobserved events
- This is Taleb's central contention, which he calls black swans – because extreme events are
rare, they have often not been observed yet, and thus are not included
in scenario analysis or stress testing.
- hard-to-compute expectation
- A subtler issue is that expectation is very sensitive
to assumptions about probability: a trade with a $1 gain 99.9% of the
time and a $500 loss 0.1% of the time has positive expected value;
while if the $500 loss occurs 0.2% of the time it has approximately 0
expected value; and if the $500 loss occurs 0.3% of the time it has
negative expected value. This is exacerbated by the difficulty of
estimating the probability of rare events (in this example one would
need to observe thousands of trials to estimate the probability with
confidence), and by the use of financial leverage:
mistaking a small loss for a small gain and magnifying by leverage
yields a hidden large loss.
More formally, while the risks for a known
distribution can be calculated, in practice one does not know the
distribution: one is operating under uncertainty,
in economics called Knightian uncertainty.
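The expectation-sensitivity point can be checked directly with the
numbers given above; this short sketch simply recomputes the expected
value under the three loss-probability estimates.

```python
# Expected value per trade for the example above: a $1 gain, against a
# $500 loss whose probability is estimated at 0.1%, 0.2%, or 0.3%.

GAIN, LOSS = 1.0, -500.0

for p_loss in (0.001, 0.002, 0.003):
    ev = (1 - p_loss) * GAIN + p_loss * LOSS
    print(f"P(loss) = {p_loss:.1%}: EV = {ev:+.3f}")
# 0.1% -> +0.499, 0.2% -> -0.002, 0.3% -> -0.503: a 0.2-point error in
# a rare-event probability flips the sign of the whole strategy.
```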
Mitigants
A number of mitigants have been proposed,
by Taleb and others. These include:
- not exposing oneself to large losses
- For instance, only buying options (so one can at most lose the
premium), not selling them.
- performing sensitivity analysis on assumptions
- This does not eliminate the risk, but identifies which
assumptions are key to conclusions, and thus meriting close scrutiny.
- scenario analysis and stress testing
- Widely used in industry, they do not include unforeseen events
but
emphasize various possibilities and what one stands to lose, so one is
not blinded by absence of losses thus far.
- using non-probabilistic decision techniques
- While most classical decision theory is based on probabilistic
techniques of expected value or expected utility,
alternatives exist which do not require assumptions about the
probabilities of various outcomes, and are thus robust. These include minimax,
minimax regret, and info-gap decision theory.
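As an illustration of the last mitigant, here is a minimal
minimax-regret sketch. The payoff table is hypothetical, but note that
it selects the option-buying posture recommended above without
assigning any probabilities to the states of the world.

```python
# Minimax-regret sketch with a hypothetical payoff table: rows are
# actions, columns are possible (unweighted) states of the world.

payoffs = {
    "sell options": [10, 10, -500],   # steady premium, ruinous tail state
    "buy options":  [-5, -5,  200],   # small bleed, large tail gain
    "stay in cash": [ 0,  0,    0],
}

n_states = 3
best_in_state = [max(row[s] for row in payoffs.values())
                 for s in range(n_states)]

def worst_regret(action):
    # Regret: how far the action falls short of the best action per state.
    return max(best_in_state[s] - payoffs[action][s] for s in range(n_states))

for action in payoffs:
    print(f"{action}: worst-case regret = {worst_regret(action)}")
print("minimax-regret choice:", min(payoffs, key=worst_regret))
# -> buy options (regret 15, versus 700 for selling and 200 for cash)
```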
References
- ^
Martin Wolf, Why today’s hedge fund industry
may not survive, Financial Times, 18 March 2008
- ^
Taleb, p. 19
- ^
John Kay "A strategy for hedge funds and
dangerous drivers", Financial Times, 16 January 2003.
- ^
John Kay "Banks got burned by their own
‘innocent fraud’", Financial Times, 15 October 2008.
- ^
Are hedge funds a scam? Naked
Capitalism/Financial Times, March 2008.
Privatizing profits and socializing losses
In political discourse, the term privatizing
profits and socializing losses
refers to the alleged tendency of some firms to benefit (privately)
from profits, but not suffer from losses, instead pushing the losses
onto society at large, particularly via the government.
History
The notion that banks privatize profits
and socialize losses dates at least to the 19th century, as in this
1834 quote of Andrew Jackson:
I have had men watching you for a long time and I am convinced
that you have used the funds of the bank to speculate in the
breadstuffs of the country. When you won, you divided the profits
amongst you, and when you lost, you charged it to the Bank. ... You are
a den of vipers and thieves.
Examples
Large firms and banks have been accused
of this, as a form of crony capitalism and corporate welfare, and some bailouts
are cited as examples of this: a bailout socializes a company's losses.
It has been argued that in the current
economic system, especially
in the U.S., policies allow large corporations and the wealthy to
commonize costs and privatize profits, with the effect of a further
concentration of wealth. In
particular, government sponsoring and bailouts
such as the federal
takeover of Fannie Mae and Freddie Mac and the proposed bailout of the U.S. financial system
in the economic crisis of
2008 have frequently been referred to in the U.S. as “private
gains and public losses”[1]
or “privatization of profits and socialization of losses”.[2][3][4]
Economic policies which favor such concentration of capital have
frequently been criticized as socialism
for the rich and capitalism for the poor.[5]
Interpretations
In game
theory, this is formalized as the CC–PP game.
In the financial language of options, "socializing losses" corresponds
to private firms having a put
option from the government: if they lose, the government will cover
their losses. The most famous example of this is the Greenspan
put.
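A minimal sketch of that put-option reading, with hypothetical
profit-and-loss figures: socialized losses floor the firm's payoff at
the strike (here zero), so the distribution the firm faces is
asymmetric even when the underlying business is not.

```python
# "Government put" sketch; the P&L figures are hypothetical.

def firm_payoff(pnl, socialized, strike=0.0):
    """With a government put, the firm's downside is floored at the strike."""
    return max(pnl, strike) if socialized else pnl

for pnl in (120.0, 15.0, -80.0, -400.0):
    alone = firm_payoff(pnl, socialized=False)
    backed = firm_payoff(pnl, socialized=True)
    print(f"P&L {pnl:+7.1f}: {alone:+7.1f} alone, {backed:+7.1f} with the put")
# Gains pass through untouched; losses beyond the strike fall on the
# government, which is exactly what writing the put amounts to.
```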
In his black swan theory, Nassim Nicholas Taleb criticizes
this in his Ten Principles for a Black Swan Robust World,[6]
writing as his second principle:
- 2. No socialisation of losses and privatisation of gains.
Support
While the term is generally used as a
criticism, some have argued that
socializing losses, while politically unpopular, is in fact
economically desirable in the case of a financial crisis:
What is ineluctably needed involves socializing the losses of a
banking system – both conventional banking and shadow banking – after
the spectacular winnings of the Forward Minsky Journey were privatized.[7]
References
- ^
Robert Reich: A Modest Proposal for Ending
Socialized Capitalism, July 15, 2008
- ^
Bloomberg Addresses Pending
Financial Job Losses, www.observer.com, September 15, 2008
- ^
What Should Uncle Sam Do?, www.newsweek.com, July
28, 2008
- ^
Prudent reform needed for Fannie,
Freddie, July 16, 2006
- ^
Interview with Jon Stewart, The Daily Show, Oct 16, 2008: Available at The Daily Show
Site
- ^
Ten Principles for a Black Swan
Robust World
- ^
Comments Before the Money
Marketeers Club: Playing Solitaire with a Deck of 51, with Number 52 on
Offer, by Paul McCulley
Too Big to Fail
"Too Big to Fail" is a phrase
referring to the idea that in economic regulation,
the largest and most interconnected businesses are so large that a
government cannot allow them to declare bankruptcy because said failure
would have a disastrous effect on the overall economy.
This encourages reckless behavior by such
institutions, since the government will intervene (e.g. by bailing out
the company) in the event that the institution is about to fail.[1]
The phrase has also been more broadly applied to refer to a
government's policy to bail out any corporation. It raises the issue of
moral
hazard in business operations.
The term has returned to centre stage since
the start of the financial
meltdown. One of the biggest US companies referred to as too big to
fail is American International Group
(AIG).
Some critics see the policy as wrong and
counterproductive. They
think big banks should be left to fail if their risk management was not
effective.[2][3]
For example, in the international context, the "too big to fail" policy
has been explicitly rejected in the People's Republic of China.[4]
Regulatory basis
Before 1950,
U.S. federal bank regulators had essentially two options for resolving
an insolvent institution: closure, with liquidation of assets and
payouts for insured depositors, or purchase and assumption, encouraging
the acquisition of assets and assumption of liabilities by another
firm. A third option was made available by the Federal
Deposit Insurance Act of 1950:
providing assistance, the power to support an institution through loans
or direct federal acquisition of assets, until it could recover from
its distress. The statute limited the "assistance" option to cases
where "continued operation of the bank is essential to provide adequate
banking service." Regulators shunned this third option for many years,
fearing that if regionally- or nationally-important banks were thought
to be generally immune to liquidation, markets in their shares would be
distorted. Thus the assistance option was never employed during the
period 1950-1969, and very seldom thereafter.[5]
Continental Illinois case
Distress
The Continental Illinois National Bank and Trust
Company experienced a fall in its overall asset quality during the
early 1980s. Tight money, Mexico's default and plunging oil prices followed
a period when the bank had aggressively pursued commercial lending
business, Latin American syndicated loan
business, and loan participations in the energy sector. Complicating
matters further, the bank's funding mix was heavily dependent on large CDs and foreign money
markets, which meant its depositors were more risk-averse than
average retail depositors in the US.
Payments crisis
The bank held significant participation
in highly-speculative oil and gas loans of Oklahoma's Penn Square Bank.
When Penn Square failed in July 1982, the Continental's distress became
acute, culminating with press rumors of failure and an
investor-and-depositor run in early May 1984. In the first week of the
run, the Fed permitted the Continental
Illinois discount window
credits on the order of $3.6 billion. Still in significant distress,
the management obtained a further $4.5 billion in credits from a
syndicate of money center banks the following week. These measures
failed to stop the run, and regulators were confronted with a crisis.
Regulatory crisis
The seventh-largest bank in the nation by
deposits would very
shortly be unable to meet its obligations. Regulators faced a tough
decision about how to resolve the matter. Of the three options
available, only two were seriously considered. Even banks much smaller
than the Continental were deemed unsuitable for resolution by
liquidation, owing to the disruptions this would have inevitably
caused. The normal course would be to seek a purchaser (and indeed
press accounts that such a search was underway contributed to
Continental depositors' fears in 1984). However, in the tight-money financial climate of the early
1980s, no purchaser was forthcoming.
Besides generic concerns of size,
contagion of depositor panic and
bank distress, regulators feared the significant disruption of national
payment and settlement systems. Of special concern was the wide network
of correspondent banks with high percentages of their capital invested
in the Continental Illinois. Essentially, the bank was deemed "too big
to fail," and the "provide assistance" option was reluctantly taken.
The dilemma now became, how to provide assistance without significantly
unbalancing the nation's banking system?
Stopping the run
To prevent immediate failure,
the Federal Reserve announced categorically that it would meet any liquidity needs the Continental might have,
while the FDIC
gave depositors and general creditors a full guarantee (not subject to
the $100,000 FDIC deposit-insurance limit) and provided direct
assistance of $2 billion (including participations). Money center banks
assembled an additional $5.3 billion unsecured facility pending a
resolution and resumption of more-normal business. These measures
slowed, but did not stop, the outflow of deposits.
Controversy
In a United States Senate hearing
afterwards, the then Comptroller of
the Currency C. T. Conover defended his position by
admitting the regulators will not let the largest 11 banks fail[6].
Regulatory agencies (FDIC, Office of the
Comptroller of the Currency, the Fed, etc.) feared that letting such
banks fail might cause widespread financial complications and a major
bank run that could easily spread by financial contagion. The implicit
guarantee of too-big-to-fail has been criticized by many since then
for its preferential treatment of large banks[citation needed].
Simultaneously, the perception of too-big-to-fail may diminish healthy
market discipline, and may have influenced the decisions behind the
insolvency of Washington Mutual in 2008. For example, large depositors
in banks not covered by the policy tend to have a strong incentive to
monitor the bank's financial condition, and/or to withdraw in case the
bank's policies expose them to high risks, since FDIC guarantees have
an upper limit. However, large depositors in a "too big to fail" bank
would have less incentive, since they'd expect to be bailed out in the
event of failure.
The Federal Deposit Insurance Corporation Improvement
Act
was passed in 1991, giving the FDIC the responsibility to resolve an
insolvent bank by the least costly method. The Act had the implicit
goal of eliminating the widespread belief among depositors that losses
to depositors and bondholders of large banks would be prevented.
However, the Act included an exception in cases of systemic risk,
subject to the approval of two-thirds of the FDIC Board of Directors,
the Federal Reserve Board of Governors, and the Treasury Secretary.[7]
Effect on banks' cost of capital
Since the full amount of the deposits and
debts of "too big to fail"
banks is effectively guaranteed by the government, large depositors
view deposits with these banks as a safer investment than deposits with
smaller banks. Therefore, large banks are able to pay lower interest
rates to depositors than small banks are obliged to pay. In October
2009, Sheila Bair, the current
Chairperson of the FDIC,
commented that "'Too big to fail' has become worse. It's become
explicit when it was implicit before. It creates competitive
disparities between large and small institutions, because everybody
knows small institutions can fail. So it's more expensive for them to
raise capital and secure funding."[8]
A study conducted by the Center for Economic
and Policy Research found that the difference between the cost of funds
for banks with more than $100 billion in assets and the cost of funds
for smaller banks widened dramatically after the formalization of the
"too big to fail" policy in the U.S. in the fourth quarter of 2008.[9]
This shift in the large banks' cost of funds gave an indirect "too big
to fail" subsidy of $34.1 billion per year to the 18 U.S. banks with
more than $100 billion in assets.
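As a rough sanity check on the order of magnitude, a funding-cost
advantage converts into a subsidy as spread times funding base. The
spread and balance figures below are hypothetical stand-ins, not the
study's actual inputs.

```python
# Back-of-the-envelope subsidy calculation; both inputs are hypothetical.

funding_base = 5_000e9   # combined large-bank funding, in dollars
spread_bp = 50           # assumed funding advantage, in basis points

subsidy = funding_base * spread_bp / 10_000
print(f"implied subsidy: ${subsidy/1e9:.1f} billion per year")
# 50 bp on $5 trillion is $25 billion a year, the same order of
# magnitude as the study's $34.1 billion estimate.
```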
"Too big to
fail is too big"
Mervyn King, the governor of the Bank of England,
called for banks that are "too big to fail" to be cut down to size, as
a solution to the problem of banks having taxpayer-funded guarantees
for their speculative investment banking activities. "If some banks are
thought to be too big to fail, then, in the words of a distinguished
American economist, they are too big. It is not sensible to allow large
banks to combine high street retail banking with risky investment
banking or funding strategies, and then provide an implicit state
guarantee against failure."[10]
However, Alistair Darling
disagreed; "Many people talk about how to deal with the big banks –
banks so important to the financial system that they cannot be allowed
to fail. But the solution is not as simple, as some have suggested, as
restricting the size of the banks".[10]
Too big to fail tax
Willem
Buiter
proposes a tax to internalize the massive external costs inflicted by
"too big to fail" institutions. "When size creates externalities, do
what you would do with any negative externality: tax it. The other way
to limit size is to tax size. This can be done through capital
requirements that are progressive in the size of the business (as
measured by value added, the size of the balance sheet or some other
metric). Such measures for preventing the New Darwinism of the survival
of the fattest and the politically best connected should be
distinguished from regulatory interventions based on the narrow
leverage ratio aimed at regulating risk (regardless of size, except for
a de minimis lower limit)."[11]
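A minimal sketch of what such a size-progressive capital requirement
might look like; the brackets and ratios here are invented for
illustration and are not Buiter's proposal.

```python
# Size-progressive capital requirement sketch; thresholds and ratios
# are hypothetical.

BRACKETS = [            # (assets in $bn at which the rate starts, required ratio)
    (0,    0.08),
    (100,  0.10),
    (500,  0.13),
    (1000, 0.16),
]

def required_ratio(assets_bn):
    ratio = BRACKETS[0][1]
    for threshold, rate in BRACKETS:
        if assets_bn >= threshold:
            ratio = rate
    return ratio

for size in (50, 250, 800, 2000):
    print(f"${size}bn bank: required capital ratio {required_ratio(size):.0%}")
# Larger balance sheets face steeper requirements, taxing size itself
# rather than measured risk.
```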
Notes
- ^
Federal Reserve Bank of Richmond
Economic Quarterly Volume 91/2 Spring 2005 by Ennis, Huberto M.; Malek,
H.S
- ^
Alton E. Drew, The Business Week, http://www.businessweek.com/bwdaily/dnflash/content/feb2009/db20090218_166676.htm
retrieved on March 20, 2009
- ^
Benton E. Gup, ed (2003-12-30). Too Big to Fail: Policies and
Practices in Government Bailouts. Westport, Connecticut:
Praeger Publishers. pp. 368. doi:10.1336/1567206212. ISBN 1-567-20621-2. OCLC 52288783. http://www.greenwood.com/books/bookdetail.asp?sku=Q621. Retrieved 2008-02-20.
"The doctrine of laissez-faire seemingly has been revitalized as
Republican and Democratic administrations alike now profess their firm
commitment to policies of deregulation and freemarkets in the
new global economy. -- Usually associated with large bank failures, the
phrase too big to fail,
which is a particular form of government bailout, actually applies to a
wide range of industries, as this volume makes clear. Examples range
from Chrysler to Lockheed Aircraft and from New York City to Penn
Central Railroad. Generally speaking, when a corporation, an
organization, or an industry sector is considered by the government to
be too important to the overall health of the economy, it will not be
allowed to fail. Government bailouts are not new, nor are they limited
to the United States. This book presents the views of academics,
practitioners, and regulators from around the world (e.g., Australia,
Hungary, Japan, Europe, and Latin America) on the implications and
consequences of government bailouts."
- ^
Chang, T.K. (2001-01-12). "Ten Lessons of the GITIC
Bankruptcy". Asian Wall
Street Journal. http://www.angelfire.com/stars/tkchang/Bankruptcy_in_China.htm. Retrieved 2009-08-01.
- ^
Heaton, Hal B.; Riegger, Christopher. "Commercial Banking Regulation",
class discussion notes. http://marriottschool.byu.edu/emp/HBH/mba624/Commercial%20Banking%20Regulation.pdf
- ^
Conover, Charles
(1984), "Testimony", Inquiry
Into the Continental Illinois Corp. and Continental National Bank:
Hearing Before the Subcommittee on Financial Institutions Supervision,
Regulation, and Insurance of the Committee on Banking, Finance, and
Urban Affairs, U.S. House of Representatives, 98th Congress, 2nd
Session, 18-19 September and 4 October, pp. 98–111
- ^
Bradley,
Christine; Craig, Valentine V. (2007), "Privatizing Deposit Insurance:
Results of the 2006 FDIC Study", FDIC Quarterly 1
(2): 23–32, http://www.fdic.gov/bank/analytical/quarterly/2007_vol1_2/privatizing_deposit_insurance.pdf
- ^
Wiseman, Paul; Gogoi, Pallavi
(2009-10-19). "FDIC chief: Small banks can't
compete with bailed-out giants". USA Today. http://www.usatoday.com/money/industries/banking/2009-10-19-FDIC-chief-sheila-bair-banking_N.htm. Retrieved 2009-10-22.
- ^
Baker,
Dean; Travis McArthur (September 2009). "The Value of the 'Too Big to
Fail' Big Bank Subsidy". Center for Economic
and Policy Research Issue Brief. http://www.cepr.net/index.php/publications/reports/too-big-to-fail-subsidy/. Retrieved 2009-10-22.
- ^
Treanor, Jill (2009-06-17). "King calls for banks to be 'cut
down to size'". The Guardian. http://www.guardian.co.uk/business/2009/jun/17/king-in-bank-reform-call. Retrieved 2009-06-18.
- ^
Buiter, Willem (June 24, 2009). "Too big to fail is too big".
The Financial Times. http://blogs.ft.com/maverecon/2009/06/too-big-to-fail-is-too-big/. Retrieved 2009-11-22.