Consolidative and Exploratory Models

November 5th, 2008

Posted by: Roger Pielke, Jr.

Today’s NYT has an interesting article on the role of risk models in the financial crisis. It follows an article last week in the WSJ on the role of risk models in AIG’s downfall, which included Warren Buffett’s pithy advice to “beware of geeks bearing formulas.”

The NYT article argues that the models failed to account for human behavior, and this led to misjudgments of risk. Well, yes and no. Yes, the models had blind spots in what was and was not included in their calculations of risk. The finance community often uses the words “volatility” and “risk” interchangeably, but measures of volatility are not a reflection of the true uncertainties because they neglect the things that we are ignorant about. Consequently, any model of an open system (like the economy) that accurately simulates the past is all but certain to underestimate future uncertainties, because the future has more possible outcomes than have been observed to occur in the past.
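To make the volatility-versus-risk point concrete, here is a minimal sketch in Python. Every number is invented for illustration, and the normal-returns assumption is itself an instance of the closure being criticized: an estimate fitted to a calm historical window looks precise, but a regime the sample never contained turns the “rare” event into a routine one.

```python
import math

def normal_cdf(z):
    """P(Z < z) for a standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical in-sample estimate: a calm historical window with ~0.4%
# daily volatility and zero mean (all numbers invented for illustration).
daily_vol = 0.004
annual_vol = daily_vol * math.sqrt(252)        # ~6.3% annualized

# "Risk" read straight off the in-sample volatility: the probability of
# losing more than 20% in a year, assuming normally distributed returns.
p_in_sample = normal_cdf(-0.20 / annual_vol)
print(f"in-sample estimate: P(20% loss) = {p_in_sample:.1e}")   # ~8e-04

# The same event under a regime the sample never contained: the mean
# shifts to -10% and volatility doubles. Nothing in the historical data
# could have produced this number; it lies outside the closed-system view.
p_new_regime = normal_cdf((-0.20 - (-0.10)) / (2 * annual_vol))
print(f"shifted regime:     P(20% loss) = {p_new_regime:.1e}")  # ~2e-01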

This is what happened in the financial crisis:

A recent paper by four Federal Reserve economists, “Making Sense of the Subprime Crisis,” found another cause. They surveyed the published research reports by Wall Street analysts and economists, and asked why the Wall Street experts failed to foresee the surge in subprime foreclosures in 2007 and 2008. The Fed economists concluded that the risk models used by Wall Street analysts correctly predicted that a drop in real estate prices of 10 or 20 percent would imperil the market for subprime mortgage-backed securities. But the analysts themselves assigned a very low probability to that happening.

The miss by Wall Street analysts shows how models can be precise out to several decimal places, and yet be totally off base.
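The arithmetic behind that miss is simple enough to spell out. In the sketch below (Python; all figures are hypothetical), the conditional loss model is held exactly right, as the Fed economists found it was; only the probability assigned to the price-drop scenario differs, and that single input drives the whole answer:

```python
# Expected loss decomposed over the price-drop scenario:
#   E[loss] = P(drop) * E[loss | drop] + (1 - P(drop)) * E[loss | no drop]
# The conditional losses (the "model") are held fixed and correct;
# only the scenario probability changes. All figures are hypothetical.

loss_if_drop = 0.40      # the model correctly says a 10-20% drop is ruinous
loss_if_no_drop = 0.01   # routine credit losses otherwise

for label, p_drop in [("analysts' weight", 0.02), ("plausible weight", 0.30)]:
    expected = p_drop * loss_if_drop + (1 - p_drop) * loss_if_no_drop
    print(f"{label}: P(drop) = {p_drop:.2f} -> expected loss = {expected:.1%}")

# analysts' weight: P(drop) = 0.02 -> expected loss = 1.8%
# plausible weight: P(drop) = 0.30 -> expected loss = 12.7%
```

Same conditional model, a sevenfold difference in assessed risk: decimal-place precision buys nothing once the scenario weight is wrong.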

The NYT locates the failure of the risk models in the area of human behavior:

Indeed, the behavioral uncertainty added to the escalating complexity of financial markets help explain the failure in risk management. The quantitative models typically have their origins in academia and often the physical sciences. In academia, the focus is on problems that can be solved, proved and published — not messy, intractable challenges. In science, the models derive from particle flows in a liquid or a gas, which conform to the neat, crisp laws of physics.

Not so in financial modeling. Emanuel Derman is a physicist who became a managing director at Goldman Sachs, a quant whose name is on a few financial models and author of “My Life as a Quant — Reflections on Physics and Finance” (Wiley, 2004). In a paper that will be published next year in a professional journal, Mr. Derman writes, “To confuse the model with the world is to embrace a future disaster driven by the belief that humans obey mathematical rules.”

The key point is not simply that human behavior was neglected, but that an important variable with significance for modeled outcomes was not included. The important distinction is thus not between human and natural systems, with the naive notion that the former are unpredictable and the latter predictable. Rather, the important distinction is between open and closed systems. In a 2003 paper on the role of models in decision making, I discussed this distinction and what it means for the use of models of those systems (references available in the text here in PDF):

Bankes (1993) defines two types of quantitative models, consolidative and exploratory, that are differentiated by their uses (cf. Morrison and Morgan 1999). A consolidative model seeks to include all relevant facts into a single package and use the resulting system as a surrogate for the actual system. The canonical example is that of the controlled laboratory experiment. Other examples include weather forecasting and engineering design models. Such models are particularly relevant to decision making because the system being modeled can be treated as being closed. Oreskes et al. (1994) define a closed system as one “in which all the components of the system are established independently and are known to be correct.” The creation of such a model generally follows two phases: first, model construction and evaluation; and second, operational usage of a final product. Such models can be used to investigate diagnostics (i.e., What happened?), process (Why did it happen?), or prediction (What will happen?).

An exploratory model—or what Bankes (1993) calls a “prosthesis for the intellect”—is one in which all components of the system being modeled are not established independently or are not known to be correct. In such a case, the model allows for experiments to investigate the consequences for modeled outcomes of various assumptions, hypotheses, and uncertainties associated with the creation of and inputs to the model. These experiments can contribute to at least three important functions (Bankes 1993). First, they can shed light on the existence of unexpected properties associated with the interaction of basic assumptions and processes (e.g., complexity or surprises). Second, in cases where explanatory knowledge is lacking, exploratory models can facilitate hypothesis generation to stimulate further investigation. Third, the model can be used to identify limiting, worst-case, or special scenarios under various assumptions and uncertainty associated with the model experiment. Such experiments can be motivated by observational data (e.g., econometric and hydrologic models), by scientific hypotheses (e.g., general circulation models of climate), or by a desire to understand the properties of the model or class of models independent of real-world data or hypotheses (e.g., Lovelock’s Daisyworld).

Both consolidative and exploratory models have important roles to play in science and decision settings (Bankes 1993). However, the distinction between consolidative and exploratory modeling is fundamental but rarely made in practice or in interpretation of research results.
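The distinction has a direct computational expression. Used consolidatively, a model is run once with “best estimate” inputs and the output is read as a prediction; used exploratively, the same model is run across the space of contestable assumptions to map what outcomes are possible. A minimal sketch (Python; the toy model and parameter ranges are invented for illustration):

```python
import itertools

def portfolio_loss(price_drop, default_rate, leverage):
    """Toy stand-in for a risk model: loss as a function of three
    contestable assumptions. A real model would go here."""
    return leverage * (0.5 * price_drop + 2.0 * default_rate)

# Consolidative use: one run with "best estimate" inputs, read as truth.
print("point prediction:", portfolio_loss(0.02, 0.01, 10))

# Exploratory use: sweep each uncertain input over a plausible range and
# report the envelope of outcomes rather than a single number.
drops     = [0.00, 0.05, 0.10, 0.20]   # housing price declines
defaults  = [0.01, 0.03, 0.08]         # mortgage default rates
leverages = [10, 20, 30]               # balance-sheet leverage

outcomes = [portfolio_loss(d, q, l)
            for d, q, l in itertools.product(drops, defaults, leverages)]
print(f"outcome envelope: {min(outcomes):.2f} to {max(outcomes):.2f}")
```

Here the worst case in the exploratory sweep is more than an order of magnitude beyond the point prediction, which is precisely the qualitative information that reading the model as a truth machine throws away.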

Clearly, the risk models in the world of finance were exploratory models that should have been used qualitatively, rather than consolidative models that could be used as truth machines. The lesson lies in better use of models, not in better models. Again from the NYT:

In boom times, new markets tend to outpace the human and technical systems to support them, said Richard R. Lindsey, president of the Callcott Group, a quantitative consulting group. Those support systems, he said, include pricing and risk models, back-office clearing and management’s understanding of the financial instruments. That is what happened in the mortgage-backed securities and credit derivatives markets.

Better modeling, more wisely applied, would have helped, Mr. Lindsey said, but so would have common sense in senior management.

For more see my recent article in Bridges and this paper on the role of models in prediction for decision (here in PDF).

3 Responses to “Consolidative and Exploratory Models”

  1. Mark Bahner Says:

    “The Fed economists concluded that the risk models used by Wall Street analysts correctly predicted that a drop in real estate prices of 10 or 20 percent would imperil the market for subprime mortgage-backed securities. But the analysts themselves assigned a very low probability to that happening.”

    That’s very remarkable (that the analysts assigned a very low probability to a 10-20 percent drop in real estate prices).

    According to this website (go to time = 5:50 for discussion of the U.S. housing bubble), the housing bubble should have been very visible as early as 2000…and certainly by 2005.

    http://www.chrismartenson.com/crash-course/chapter-15-bubbles

    So any analyst familiar with the hundred-plus year history of housing prices should have thought that a 10-20 percent drop was virtually certain after ~2005 (and maybe as early as 2000).

    My guess is there weren’t enough analysts who had knowledge of long-term historical U.S. housing prices.

  2. Martin Ringo Says:

    Roger,

    “The models’ failure” should probably read “the hip models’ failure.” There is an old — over 50 years — and distinguished financial model: the Discounted Cash Flow (DCF) model. It is so old and elementary that it is scarcely ever used by financial analysts, at least those working in the “finance” sector that is the concern of the financial crisis. (It is still used within companies and for almost all project-financed projects.)

    Had someone done DCF modeling of the bundled-mortgage industry, they would have predicted the crisis. Indeed, several of the Federal Reserve Banks’ research economists wrote of the worsening quality of mortgage loans, concluding that default rates would increase, which in turn implied housing price decreases, which, given the leverage of the industry, implied the crisis (at least once the default rate hit some minimum level).
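    For concreteness, here is what a minimal version of that DCF exercise looks like (a Python sketch; the pool size, default rates, recovery fraction, and discount rate are all invented for illustration):

    ```python
    def pool_value(annual_payment, years, default_rate, discount_rate,
                   recovery=0.5):
        """Present value of a mortgage pool: each year a fraction of the
        surviving loans defaults (recovering only part of the payment),
        and the survivors keep paying."""
        value, surviving = 0.0, 1.0
        for t in range(1, years + 1):
            cash = surviving * annual_payment * (
                (1 - default_rate) + default_rate * recovery)
            value += cash / (1 + discount_rate) ** t
            surviving *= 1 - default_rate
        return value

    # Hypothetical pool paying 100 per year for 10 years, discounted at 6%.
    for d in (0.01, 0.05, 0.10):
        print(f"default rate {d:.0%}: value = {pool_value(100, 10, d, 0.06):.0f}")
    ```

    Nothing here is sophisticated; plugging the worsening default rates the Fed researchers were documenting into even this elementary model would have flagged the trouble without any exotic mathematics.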

    Without looking at the actual cash flows (the incomes of the borrowers, the interest and principal payments of the mortgage borrowers, the same for the mortgage holders, i.e. the bundlers, and the non-mortgage cash flows of the mortgage holders), it is hard to estimate the size of the quasi-systemic risk (systemic to the mortgage-backed asset markets but not necessarily to the larger financial market). Just looking at housing prices, the rise after WW II probably looked worse on the surface. For instance, if the US had a 25% increase in population (say, a large relaxation of immigration) and a doubling of total factor productivity, my guess would be that the housing price increase would make the 2000-2006 increase look like a little blip. The growth of GDP and its implied increase in expected net worth would justify a substantial growth in the prices of a good with a large fixed component — the classic Ricardian dismal prediction of increased land rents.

    A large part of financial model failure is simply the failure of statistical trends or tendencies to persist — the parameters or equilibrium conditions change (see Clements and Hendry, “Economic Forecasting in a Changing World,” The Berkeley Electronic Press, for an example regarding macroeconomic forecasting). Our proclivity for financial crises stems in part from the fact that the models we rely upon the most tend, by the nature of their current popularity, to use the same statistical evidence: here, the estimated distribution of future housing prices.

    The subprime crisis was a price bubble with increasing leverage and moderately but increasingly risky fundamental cash flow (the homeowners’ net available cash flow), hardly something we haven’t seen before.

  3. SeanWise Says:

    I’m not a statistician, so I cannot argue the points at as high a level as others here. However, there is an obvious, common-sense risk indicator that I wrote to the NY Times author about. Here is my note to Mr. Lohr.

    I saw this quote in your article: ‘“If you are making a high return, I guarantee you there is a high risk there, even if you can’t see it,” said Mr. Lindsey, a former chief economist of the Securities and Exchange Commission.’ I agree with the implication that high returns in the markets are derived from high risk, but “even if you can’t see it” makes no sense. Just follow the trajectory of Wall Street bonuses over the last 6 years. The tripling of compensation to the “masters of the universe” bonus pool should have been a warning, and it was published in your paper several times a year throughout the bubble. When bankers look to distribute bonuses this year, after the collapse of several institutions, the loss of value in the survivors, and a bail-out by taxpayers, can someone please ask: where is the outsized risk or leverage that justifies this payday? These bonuses of the investment banking bubble were the canary in the coal mine. If the bankers insist it’s necessary to retain talent, have them explain the risk that’s implicit in the compensation.

    Is this simple implicit risk in high compensation a potential warning sign, particularly at a time when the rest of the economy was going sideways?