The Political Implications of Ignoring Our Own Ignorance
Do individuals’ cognitive biases provide a justification for government intervention? No.
Two human flaws that affect public policy are:
1. Cognitive hubris: each of us believes that his map of the world is more accurate than it really is.
2. Radical ignorance: when it comes to complex social phenomena, our maps are highly inaccurate.
This essay explores the causes and consequences of cognitive hubris, which can explain large, persistent disagreements over such issues as financial regulation and Keynesian fiscal stimulus.
Cognitive hubris is particularly troublesome when combined with radical ignorance. Indeed, this configuration justifies limiting government intervention in order to avoid setting up systems that are excessively fragile.
Kahneman on Hubris
“Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” — Daniel Kahneman1
Cognitive psychologist Daniel Kahneman's new book, Thinking, Fast and Slow, is a capstone to a distinguished career spent documenting the systematic flaws in human reasoning. He finds it useful to describe us as having two systems for thinking.
System One, as he calls it, is quick, intuitive, and decisive. It may be described as often wrong but never in doubt. System One is always active and plays a role in every decision that we make because it operates rapidly and unconsciously.
System Two is deliberative and logical. In principle, System Two can detect and correct the errors of System One. However, System Two has limited capacity, and often we do not invoke it before arriving at a conclusion. Even worse, we may deploy System Two to rationalize the conclusions of System One, rather than to question those conclusions and suggest appropriate changes.
Suppose you were to ask yourself how well you understand the world around you. How accurate is your map of reality?
If you interrogate System Two, it might reply, “There are many phenomena about which I know little. In the grand scheme of things, I am just blindly groping through a world that is far too complex for me to possibly understand.”
However, if you were to interrogate System One, it might reply, “My map is terrific. Why, I am very nearly omniscient!”
Evidently, in order to perform its function, System One has to have confidence in its map. Indeed, elsewhere Kahneman has told a story of a group of Swiss soldiers who were lost in the Alps because of bad weather. One of them realized he had a map. Only after they had successfully climbed down to safety did anyone discover that it was a map of the Pyrenees. Kahneman tells that story in the context of discussing economic and financial models. Even if those maps are wrong, we still feel better when using them.2
In fact, a number of the cognitive biases that Kahneman and other psychologists have documented would appear to serve as defense mechanisms, enabling the individual to hold onto the view that his map is the correct one. For example, there is confirmation bias, which is the tendency to be less skeptical toward evidence in support of one's views than toward contrary evidence.
System Two is evidently not able to overcome cognitive hubris, even in situations where one would expect System Two to be invoked, such as forecasting the difficulty of a major undertaking. Organizations are much more likely to be overly optimistic than overly pessimistic about the time and cost of completing projects. Kahneman calls this the “planning fallacy.” One plausible explanation is that planners overestimate the quality of the maps that they are using to make their forecasts.
Radical Ignorance
Political scientist Jeffrey Friedman uses the term “radical ignorance” to describe what he sees as the low quality of the maps that all of us have of our complex social environment. He contrasts this radical ignorance with the assumptions that economists make, in which market participants and policymakers possess nearly perfect information.3 Indeed, this year's Nobel Prize in economics once again reinforced the popularity in mainstream economics of “rational expectations,” a particularly stringent assumption that economic actors possess uniformly high-quality information.
Largely unwilling to consider ignorance, economists usually fall back on incentives as explanations for phenomena. For example, economists explain the buildup of risk in banks' portfolios in the years leading up to the crisis of 2008 as resulting from moral hazard, in which bankers knew that they were going to be bailed out if things went poorly. However, Friedman points out that if they had truly been seeking out high returns with high risk, they would not have been obsessed with obtaining the securities with the most pristine risk rating: AAA. Low-rated securities would have been used to exploit moral hazard even more effectively, since they paid much greater yields than higher-rated securities.
Rather than focus on incentives, Friedman's narrative would emphasize what I have been calling cognitive hubris. Mortgage lenders believed that new underwriting tools, especially credit scoring, allowed them to assess borrower risk with greater accuracy than ever before. Such knowledge was thought to enable lenders to discriminate carefully enough to price for risk in subprime markets, rather than avoid lending altogether. On top of this, financial engineers claimed to be able to build security structures that could produce predictable, low levels of default even when the underlying loans were riskier than the traditional prime mortgage.
Regulators, too, fell victim to the combination of cognitive hubris and radical ignorance. They believed in the quality of bank risk management using the new tools.4 They also believed in the effectiveness of their own rules and practices.
A common post-crisis narrative is that banking was deregulated in the Reagan-Greenspan era. Some pundits make it sound as if regulators behaved like parents who hand their teenagers the keys to the liquor cabinet, leave for the weekend, and say, “Have a good time.” In fact, regulators believed that they had stronger regulations in place in 2005 than they did in the pre-Reagan era.
—Before 1980, mortgage loans held by banks were illiquid assets subject to considerable interest-rate risk. These problems were alleviated by the shift toward securitization.
—Before 1980, insolvent institutions were opaque because of book-value accounting. This problem was addressed with market-value accounting, enabling regulators to take more timely corrective action to address troubled institutions.
—Before 1980, banks had no formal capital requirements and there were no mechanisms in place to steer banks away from risky assets. This problem was addressed with the Basel capital accords (formally adopted in 1988), which incorporated a risk-weighted measure of assets to determine required minimum capital. In the 2000s, these risk weightings were altered to penalize banks that did not invest in highly rated asset-backed securities (a simplified sketch of the calculation follows this list).
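To make the mechanics concrete, the following is a minimal illustrative sketch, in Python, of how a risk-weighted capital requirement works. The 8 percent minimum and the broad pattern of the weights follow the Basel-style approach described above, but the specific weights and portfolio amounts are assumptions invented for this example, not figures taken from the rules themselves.

```python
# Illustrative, simplified sketch of a risk-weighted capital requirement.
# The weights and portfolio amounts below are assumptions for this example.

RISK_WEIGHTS = {
    "government_bonds": 0.00,       # assumed weight for sovereign debt
    "aaa_rated_abs": 0.20,          # assumed weight for highly rated asset-backed securities
    "residential_mortgages": 0.50,  # assumed weight for whole mortgage loans held on the books
    "corporate_loans": 1.00,        # assumed weight for ordinary commercial lending
}

MINIMUM_CAPITAL_RATIO = 0.08        # capital must be at least 8% of risk-weighted assets


def required_capital(portfolio):
    """Minimum capital for a portfolio given as {asset class: dollar amount}."""
    risk_weighted_assets = sum(
        amount * RISK_WEIGHTS[asset_class]
        for asset_class, amount in portfolio.items()
    )
    return risk_weighted_assets * MINIMUM_CAPITAL_RATIO


# A bank holding $100 million of whole mortgages must hold more capital than
# one holding $100 million of highly rated securities backed by similar loans.
print(required_capital({"residential_mortgages": 100e6}))  # 4000000.0
print(required_capital({"aaa_rated_abs": 100e6}))          # 1600000.0
```

On these assumed weights, the securitized version of similar credit risk ties up less than half as much capital, which is the incentive toward highly rated securities that the list item above describes.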
Thus, it was not the intent of regulators to loosen the reins on banks. On the contrary, from the regulators' point of view, it was the environment prior to 1980 that amounted to leaving the teenagers with the keys to the liquor cabinet. The post-1980 regulatory changes were believed to be in the direction of tighter supervision and more rational controls.
It turned out that the regulators were radically ignorant of the consequences of their decisions. Securitization introduces principal-agent problems into mortgage lending, as the loan originator's interest in obtaining a fee for underwriting a closed loan conflicts with the interest of investors in ensuring that borrowers are properly screened. These conflicts proved to be more powerful than imagined. Market-value accounting makes financial markets steeply procyclical, because in a crisis a drop in market values forces beleaguered banks to sell assets, creating a vicious downward spiral. Finally, the risk-based capital rules helped drive the craze for financial engineering and misleading AAA ratings.
Political Disagreement
Political disagreement can be explained using the theories of cognitive hubris and radical ignorance. The basic idea is that nobody has a grasp on capital-T truth, but each of us believes that our own map of the world is highly accurate. When we encounter someone who holds a similar map, we think, “That guy knows what he is talking about.” When we encounter someone who holds a different map, we think, “That guy is an idiot.” When you overestimate the accuracy of your own map, it is very difficult to explain the existence of people with different maps, other than to impugn their intelligence or their integrity.
A metaphor for this might be a topographically complex terrain, which none of us can see in its entirety. Each of us is trying to find the highest mountain peak in the terrain, representing the capital-T truth.
Unable to look down at the entire terrain, each of us follows what mathematicians call a “hill-climbing algorithm.” We make small probes in the area right around us, and when the terrain slopes upward, we climb in that direction. We repeat this process until the probes in every direction slope down. Then we conclude that we are at the top.
The weakness of hill-climbing algorithms is that they can get stuck at a local maximum. Instead of finding the highest peak, you stop when you reach the top of one particular hill. From that vantage point, every direction slopes down, so you do not move.
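For readers who like to see the idea spelled out, here is a short, purely illustrative Python sketch of a hill-climbing search on an invented two-peaked terrain; the terrain function, starting points, and step size are all made up for the example.

```python
# Purely illustrative hill-climbing sketch. The "terrain" below has two hills:
# a lower peak near x = -2 (height 4) and a higher peak near x = 3 (height 9).

def terrain(x):
    """Height of the landscape at position x."""
    return max(4 - (x + 2) ** 2, 9 - (x - 3) ** 2, 0)


def hill_climb(x, step=0.1):
    """Probe just to the left and right; move uphill until no probe improves."""
    while True:
        best = max((x, x - step, x + step), key=terrain)
        if best == x:  # every nearby probe slopes down: a (possibly local) maximum
            return x, terrain(x)
        x = best


print(hill_climb(-4.0))  # settles near the lower peak: a local maximum, height about 4
print(hill_climb(1.0))   # starts on the other slope and finds the higher peak, height about 9
```

The climber that starts on the slope of the lower hill settles there, just as the text describes: from where it stands, every probe slopes down, so it has no reason to move, even though a higher peak exists elsewhere.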
When two ideological opponents wind up on different hilltops, neither can believe that the other has sincerely arrived at a different conclusion based on the evidence. As Friedman puts it,
Consider the most reviled pundit on the other side of the political spectrum from yourself. To liberal ears, a Rush Limbaugh or a Sean Hannity, while well informed about which policies are advocated by conservatives and liberals, will seem appallingly ignorant of the arguments and evidence for liberal positions. The same goes in reverse for a Frank Rich or a Paul Krugman, whose knowledge of the “basics” of liberalism and conservatism will seem, in the eyes of a conservative, to be matched by grave misunderstandings of the rationales for conservative policies.5
Indeed, our cognitive hubris is so strong that, according to David McRaney, people believe they understand other people better than others understand themselves. He calls this phenomenon “asymmetric insight.”6
The illusion of asymmetric insight makes it seem as though you know everyone else far better than they know you, and not only that, but you know them better than they know themselves. You believe the same thing about groups of which you are a member. As a whole, your group understands outsiders better than outsiders understand your group, and you understand the group better than its members know the group to which they belong.
In our context, this would mean that liberals believe that they understand better than conservatives how conservatives think, and conservatives believe that they understand better than liberals how liberals think. According to McRaney, such beliefs have indeed been found in studies by psychologists Emily Pronin and Lee Ross at Stanford along with Justin Kruger at the University of Illinois and Kenneth Savitsky at Williams College.
Implications
The cognitive biases documented by Kahneman have been interpreted by a number of thinkers, including Kahneman himself, as providing a justification for government intervention. After all, if people are far from the well-informed, rational calculators assumed in economic models, then presumably the classical economic analysis underlying laissez-faire economic policy is wrong. Instead, it must be better to “nudge” people for their own good.7
However, I draw different implications from the hypothesis of cognitive hubris combined with radical ignorance. If social phenomena are too complex for any of us to understand, and if individuals consistently overestimate their knowledge of these phenomena, then prudence would dictate trying to find institutional arrangements that minimize the potential risks and costs that any individual can impose on society through his own ignorance. To me, this is an argument for limited government.
Instead of using government to consciously impose an institutional structure based on the maps of cognitively impaired individuals, I would prefer to see institutions evolve through a trial-and-error process. People can be “nudged” by all manner of social and religious customs. I would hope that the better norms and customs would tend to survive in a competitive environment. This was Hayek's view of the evolution of language, morals, common law, and other forms of what he called spontaneous order. In contrast, counting on government officials to provide the right nudges strikes me as a recipe for institutional fragility.
If Kahneman is correct that we have an “almost unlimited ability to ignore our ignorance,” then all of us are prone to mistakes. We need institutions that attempt to protect us from ourselves, but we also need institutions that protect us from one another. Limited government is one such institution.
Arnold Kling is a member of the Financial Markets Working Group at the Mercatus Center of George Mason University. He writes for EconLog at http://econlog.econlib.org.