The Uncertainty of Credit Safety



Kent Osband

It is inherently difficult to distinguish safe debts that won't default from risky debts that haven't defaulted yet. The extra uncertainty calls for additional capital buffers on highly rated debt.

As we have seen in previous articles, rational debt markets will strike naïve observers as both overly optimistic and skittish. Overly optimistic because average default losses tend to significantly exceed long-term credit spreads. Skittish because a few unexpected defaults can make short-term spreads rocket. Simulations with plausible parameter values show that expected rational deviations are high enough to account for most of the macro evidence on lender foolishness.

This article will show that uncertainty is particularly potent at very low credit spreads. In a sense, a "highly safe" debt is a highly asymmetric gamble. Most likely, the asset in question is indistinguishable from risk-free. But the few bad spells tend to cluster, making the tails of a "highly safe" portfolio much fatter than binomial or Poisson modeling suggests.

This has major repercussions for debt portfolio management, and in turn for the regulation of banks holding debt portfolios. However, this article will focus on the underlying math: theory, Monte Carlo results, and intuition for the math. Hopefully, this will foster more shared understanding and make policy discussions more constructive.

Perceived default risks

The easiest way to think about credit markets is that each credit possesses an instantaneous default rate θ, which changes over time, and that market participants adjust their subjective probability beliefs p about θ based on servicing history. Let us denote the ith cumulant of p by κi.

Rational learning requires Bayesian updating. When default occurs, the new beliefs should be proportional to θp(θ). When servicing occurs, beliefs shift in proportion to 1 − θ dt. Let dJ denote the element of surprise relative to the expected number of defaults κ1 dt: when servicing occurs, dJ = −κ1 dt; when instantaneous default occurs, dJ = 1. The general updating rule is

$$dp(\theta) = p(\theta)\,\frac{\theta - \kappa_1}{\kappa_1}\,dJ, \qquad (1)$$

which turns out to be equivalent (Pandora's Risk, p. 209) to the cumulant ladder

$$d\kappa_i = \frac{\kappa_{i+1}}{\kappa_1}\,dJ =
\begin{cases}
-\kappa_{i+1}\,dt & \text{if payment} \\
\kappa_{i+1}/\kappa_1 & \text{if default,}
\end{cases} \qquad (2)$$

where the two branches follow from substituting the two values of dJ.

To this, we add adjustments for the anticipated regime switching.

Here is how a more intuitive agent might arrive at similar results. Observing history, he identifies D defaults in a relevant time span T and estimates the mean default rate κ1 as their ratio D/T. If a relevant default occurs immediately after, he updates the mean to (D + 1)/T. If debt is serviced over the next instant dt, he updates to

$$\frac{D}{T + dt} \approx \frac{D}{T}\Bigl(1 - \frac{dt}{T}\Bigr).$$


This will match the update dκ1 in (2), provided the variance κ2 = D/T². While that sounds crude, a gamma prior with shape D and inverse scale T will generate exactly the same results. In a risk-neutral market, the overnight spread should equal D/T, while the surge in mean on default should equal 1/T. Hence, unlike the unknown default rate, the first two cumulants of beliefs can, in principle, be deduced from short-term observation.
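To make the correspondence concrete, here is a minimal numerical sketch (not the article's code) showing that Bayesian updating of a gamma prior reproduces the intuitive D/T bookkeeping; the grid and parameter values are illustrative assumptions:

```python
import numpy as np

# Gamma(shape D, inverse scale T) prior over the default rate theta,
# updated per the Bayes rules in the text.
D, T, dt = 3.0, 10.0, 1e-4
theta = np.linspace(1e-8, 5.0, 400_000)    # fine grid over default rates
w = theta**(D - 1) * np.exp(-T * theta)    # unnormalized Gamma(D, T) density
w /= w.sum()                               # normalize on the uniform grid

mean = lambda p: (theta * p).sum()
print(mean(w), D / T)                      # prior mean ~ D/T = 0.30

post_default = theta * w / mean(w)         # default: p(theta) -> theta * p(theta)
print(mean(post_default), (D + 1) / T)     # mean jumps to (D+1)/T = 0.40

post_service = (1 - theta * dt) * w        # servicing: p -> (1 - theta*dt) * p
post_service /= post_service.sum()
print(mean(post_service), D / (T + dt))    # mean drifts down to ~ D/(T+dt)
```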

To generalize, let us define κ1²/κ2 as the "relevant defaults" or "effective shape" of beliefs, and κ1/κ2 as the "relevant time" or "effective inverse scale." These values will suffice to compute (2), the impact of observation on mean beliefs, whether or not beliefs are truly gamma-distributed. Non-gamma shapes will, however, affect the update of κ2 and higher cumulants.
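The bookkeeping is trivial to sketch; here illustrative gamma draws stand in for an actual belief distribution:

```python
import numpy as np

# Effective shape and effective time recovered from the first two cumulants.
draws = np.random.default_rng(0).gamma(shape=0.8, scale=1 / 50.0, size=1_000_000)
k1, k2 = draws.mean(), draws.var()
print(k1**2 / k2)   # effective shape ("relevant defaults"), ~0.8 here
print(k1 / k2)      # effective inverse scale ("relevant time"), ~50 here
```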

As credit ratings are spaced roughly geometrically, the ratio √κ2/κ1 of standard deviation to mean indicates the perceived wobbliness of a given credit rating. That ratio is just the inverse square root of effective shape. Hence, effective shape is an excellent proxy for relative certainty. Similarly, effective time summarizes the weight of past evidence.

To see the impact of uncertainty, we can perform Monte Carlo simulations. My last article generalized the mean-reverting CIR diffusion to

$$dx = \tfrac{1}{2}c^2x^{2\gamma-1}(2\gamma - 1 + k - kx)\,dt + cx^{\gamma}\,dz, \qquad (3)$$

where x ≡ θ/μ denotes the ratio of θ to the long-term mean μ, k denotes the shape of the stationary gamma distribution, c denotes the log volatility at μ, and γ (which under CIR equals ½) controls the volatility of strong credits relative to weak ones. I then folded in Bayesian updating, based on a continuous stream of m concurrent i.i.d. observations, assuming that observers know the parameters of (3).
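A bare-bones Euler–Maruyama discretization of (3) might look as follows; the step size, horizon, and numerical floor on x are illustrative choices, not the article's production settings:

```python
import numpy as np

# Simulate diffusion (3) for x = theta/mu, with the low-m parameter values.
rng = np.random.default_rng(1)
k, c, gamma = 0.8, 0.2, 1.1
dt = 1 / 26                  # biweekly steps, as in the article's tests
n_steps = 26 * 10_000        # 10,000 years (a small run for illustration)
x, path = 1.0, np.empty(n_steps)
for t in range(n_steps):
    drift = 0.5 * c**2 * x**(2 * gamma - 1) * (2 * gamma - 1 + k - k * x)
    x += drift * dt + c * x**gamma * np.sqrt(dt) * rng.standard_normal()
    x = max(x, 1e-10)        # numerical floor away from zero
    path[t] = x
print(path.mean(), path.min(), path.max())   # long-run mean of x hovers near 1
```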

Creating even a rough reflection of observed markets constricts the plausible parameter ranges. Basically:

• k must be of the order of 0.8 to allow a roughly thousand-fold range of relative overnight spreads.

• c must be close to 0.2 to simulate the observed pace of rating migrations.

• γ must be close to 1.1 to allow faster migration at lower ratings without making ratings bunch up at a particular edge.

The parameter that troubles me most is m, the size of the relevant i.i.d. pool. Few sovereign or corporate bonds carry perfectly correlated spreads down to default, yet default independently. Why should they? No credits are perfectly independent images of the US, of Japan, of Russia, of Brazil, or of a telecom in one of those countries. Yet m = 1 effectively rules out correlation across credits.

To acknowledge a range of uncertainty, I will consider two variants. The first sets k = 0.8, c = 0.2, γ = 1.1, and m = 10, as this mix gave the most plausible fit in the previous article. The second expands pool size to m = 100, raises equilibrium shape to k = 1.0 to preserve the range of overnight spreads, and keeps c = 0.2 and γ = 1.1.

Either model understates real-life jumps. The chance of overnight spreads differing by more than 200 bps from their values one year prior is only 1.0 percent in the low-m variant and 2.1 percent in the high-m variant. The chances of more than 300 bps deviation from values one year prior are 0.3 percent and 0.6 percent, respectively. Allowing for stochastic resets and partial correlations will induce sharper fluctuations. However, we shall see that even these relatively tame models bode enormous uncertainty and drawdowns.

As defaults of high-quality credits are especially rare, we need to perform enormous numbers of simulations to be confident in the tail distributions. We need a fine grid of probability transitions to be confident that discretization isn't warping the evolution. In repeated tests to gauge sensitivity, a few million years of biweekly Monte Carlo simulations was more than adequate, with odds of simultaneous defaults rarely exceeding 1 percent. Beliefs must also encompass a broad range of outcomes in order to model occasional turbulence. In testing, a 600-point belief grid with inverse-cosine spacing (which allocated 20 percent of points to the outer 5 percent of long-term outcomes and 10 percent to the outer 1 percent) was also more than adequate. By "more than adequate," I mean that doubling the fineness in smaller samples didn't significantly alter any results. Runtime, including a slew of tests, took a couple of hours each on a Retina MacBook Pro, without invoking parallel processing.
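The exact grid mapping isn't spelled out, but one plausible reading of "inverse-cosine spacing" is a Chebyshev-style grid that clusters points near both edges; the range bound x_max below is a hypothetical stand-in:

```python
import numpy as np

# A Chebyshev-style inverse-cosine grid: dense near both edges of the range.
n, x_max = 600, 20.0
u = (1 - np.cos(np.pi * np.arange(n) / (n - 1))) / 2   # in [0, 1], dense at edges
grid = x_max * u                                       # belief grid over outcomes
print(np.mean((u < 0.05) | (u > 0.95)))   # ~0.29 of points in the outer 5%
print(np.mean((u < 0.01) | (u > 0.99)))   # ~0.13 of points in the outer 1%
# The article's grid puts 20% and 10% there, so its mapping differs in detail.
```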

Fractional defaults

If default risk were known to be stable, the relevant observation period T would continually rise, and gradually squeeze out the surge in mean on default. The reason this doesn't happen is the anticipated risk of regime change. That risk widens κ2, and hence grinds down T. Intuitively, the prospect of regime change makes past evidence less relevant.

If D doesn't shrink when T does, then our biased forgetfulness will make us increasingly pessimistic over time. Conversely, if D shrinks much faster than T, say because no defaults are observed over an extended period, D can fall below one. Indeed, D = T × spread must be less than one whenever a credit with less than a few hundred years of relevant evidence is rated as highly safe.

For gamma priors, a shape D < 1 implies a hybrid between an L-shape and an exponential shape. Density is unbounded at the origin, drops to low values quickly, but allows a significant chance of risk reaching high multiples of the mean. The most studied case has a shape of ½. Better known as chi-squared with one degree of freedom, it depicts the distribution of a standardized normal random variable squared.
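That equivalence is easy to verify by simulation (the sample size and seed below are illustrative):

```python
import numpy as np

# A gamma with shape 1/2 and inverse scale 1/2 matches the square of a
# standardized normal, i.e., chi-squared with one degree of freedom.
rng = np.random.default_rng(2)
z2 = rng.standard_normal(1_000_000) ** 2
g = rng.gamma(shape=0.5, scale=2.0, size=1_000_000)   # scale = 1/(inverse scale)
print(np.quantile(z2, [0.5, 0.9, 0.99]))
print(np.quantile(g, [0.5, 0.9, 0.99]))               # nearly identical rows
```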

Figure 1: Histogram of effective shape for low-m variant


I call D << 1 "polarized gamma" because the density near the mean is much thinner than typically encountered. If polarized gamma risk were clouds, then it would rarely rain, but when it rained it would pour. Polarized gamma can be roughly compared to a two-point distribution, with probability weight D/(D + 1) on the value κ1(1 + D)/D and the remaining weight on 0. This matches the mean and variance of the polarized gamma, but understates the skewness by about half.
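Those moment claims are easy to check from the formulas above; the shape and inverse scale below are illustrative:

```python
import numpy as np

# Moment check of the two-point stand-in for a polarized gamma.
D, T = 0.05, 100.0                        # shape well below one: "polarized"
k1, k2 = D / T, D / T**2                  # gamma mean and variance
v, p = k1 * (1 + D) / D, D / (1 + D)      # two-point: weight p at v, rest at 0
mean2 = p * v
var2 = p * v**2 - mean2**2
m3 = p * (v - mean2)**3 + (1 - p) * (0.0 - mean2)**3
print(mean2, k1)                          # means agree
print(var2, k2)                           # variances agree
print(m3 / var2**1.5, 2 / np.sqrt(D))     # skewness: roughly half the gamma's
```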

The case D < 1 can also be viewed as a reduced form for a mixture of exponential densities – that is, one default per time span, over various time spans. Bernstein (1928) showed that mixtures of exponential densities can map any density whose value and all its derivatives shrink monotonically toward zero.

Figure 1 displays a histogram of effective shape for the low-m variant described above. The Monte Carlo simulation records nearly 1.25 million defaults, and agents are fully informed about every prior default. Nevertheless, not once in five million years does any Bayesian-rational agent behave as if relevant defaults D numbered as many as 16. In 99.9 percent of cases, D is less than 8; in 44 percent of cases, D is less than 2; and in 16 percent of cases D is less than 1.

Figure 2 displays the corresponding histogram for the high-m variant. The simulations record nearly 12.5 million defaults; nevertheless, not one rational agent in five million years behaves as if D exceeds 65. In 80 percent of cases, D is less than 10; in 28 percent of cases, D is less than 5. The 0.8 percent chance that D is less than 1 exceeds the 0.7 percent chance that D exceeds 20.

Clearly, pool size makes a huge impact on effective shape. However, the differences are far less than the ten-fold difference in pool size that m suggests. Effective shape appears to scale roughly with the square root of m. Relative uncertainty is rife.

Differential uncertainty

Due to non-gamma higher cumulants, effective time won't advance perfectly linearly. Due to anticipated regime switching, effective time and defaults can decay as well as advance. The evolution can be turbulent. Nevertheless, it is clear that (a) defaults tend to swell effective shape, while (b) anticipated regime switching without default shrinks effective shape.

Combining these observations, we see that "safe" debt (low mean) tends to be highly "uncertain" (low effective shape). For a cruder path to the same conclusion, imagine that safe debt and risky debt have similar effective periods of observation. To the extent that this is true, effective shape will be roughly linear in the overnight spread.

Orthodox finance misses this because it identifies actual risk with perceived risk. Behavioral finance misses this too because its methods aren't systematic enough to predict differential uncertainty. By rooting perception in rational learning, we gain extra powers of discrimination.

Figure 3 is a scatter plot of effective shape versus overnight spread for the low-m variant, out to a spread of 2,000 bps. Actually, it depicts less than 3 percent of observations, as the plotting engine choked on the full set, but the fits are very close because I sampled more densely at higher spreads. Note how tight the relevant D range is at any given spread, and how tightly correlated D is with overnight spread.

Despite appearances, most observations cluster in the lower left of Figure 3. Slightly more than half of spreads are less than 200 bps. The highest D associated with them is 2.4; the associated mean is 1.3. For the 30 percent of observations with spreads of less than 100 bps, the mean D is 0.9 and the max is 1.7. For the 10 percent of observations with spreads of less than 30 bps, the mean D is 0.5 and the max is 0.8.

Figure 2: Histogram of effective shape for high-m variant

Figure 3: Effective shape versus overnight spread for low-m variant

Figure 4 presents the corresponding scatter plot for the high-m variant out to spreads of 2,200 bps – this time with less than 1 percent sampling of the 100+ million observations. While the effective shapes are larger, the correlations look just as tight. Again, the picture misleads, as the lion's share of observations cluster in the lower left. With 53 percent of spreads under 200 bps, the mean D of those spreads is 4.7 and the max is 8.2. For spreads of less than 100 bps, the mean D is 3.6 and the max is 5.9. For spreads of less than 30 bps, the mean D is 2.1 and the max is 3.4.

Estimated VaR

As readers of this series know, I dislike Value at Risk (VaR) measures. They camouflage floors on risk as ceilings, they're very hard to estimate reliably, and they create perverse incentives when used for regulation. I'm particularly skeptical of the 99.9 percentile used to set Basel-style capital buffers for banks. With apologies for jaundice, I can't help but think that regulators chose something that sounded fashionably safe but wasn't expected to squelch risky lending, as it rounds to three standard deviations under normality.

In fairness, I cannot recommend this dislike, as it hinders lucrative employment in the banking industry. Besides, VaR use is so widespread in finance that it has become a lingua franca. Tell most risk managers that their risk measure has low effective shape, boding high uncertainty for safety, and they'll ignore you for being an incomprehensible egghead. Tell them that their VaR is far higher than it appears, and they'll rage and dismiss you with prejudice. It's always nice to be noticed.

As used in banking, VaR refers only to unexpected losses of capital. (Expected losses, which in percentage terms roughly correspond to the spread, are covered out of separate loss reserves.) Moreover, regulations implicitly assume so much micro-level diversification that random deviations around the conditional mean default rate κ1 are minor. The main unexpected losses come when the perceived κ1 surges – that is, when risks seem a lot more than lenders bargained for.

In a mark-to-market environment, these unexpected losses will show up first and foremost in markdowns on longer-term credits. Our models can proxy these by pricing perpetuities every day at fair market value and recording the drawdown (maximum markdown in price) over the next year. The 99.9 percentile loss becomes our VaR estimate.
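In code, the drawdown bookkeeping might look like this sketch; the price series here is a placeholder random walk, not output of the valuation model:

```python
import numpy as np

# For each day, record the worst percentage markdown over the following year,
# then take the 99.9 percentile of those drawdowns as the VaR estimate.
def one_year_drawdowns(prices, days_per_year=260):
    n = len(prices) - days_per_year
    dd = np.empty(n)
    for t in range(n):
        future = prices[t : t + days_per_year + 1]
        dd[t] = 1.0 - future.min() / prices[t]   # maximum markdown vs. today
    return dd

rng = np.random.default_rng(3)
prices = np.exp(np.cumsum(0.02 * rng.standard_normal(5_000)))  # placeholder path
print(np.quantile(one_year_drawdowns(prices), 0.999))          # the VaR estimate
```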

Let us consider a few possible objections to this procedure:

• Banks' credit holdings are rarely perpetuities. True, and this matters a lot, as shorter-term assets have lower credit spread duration. However, Basel-style capital regulations typically ignore duration or downplay it relative to credit quality. Our proxy takes Basel regulations at their word.

• Drawdowns over the next year aren't the only way to estimate VaR. True, but a one-year drawdown is about as close as we get to a standard metric. After a crisis, banking regulators often wish the drawdowns covered more than a year, as forcing banks with excess losses to deleverage adds to the burdens of recovery.

• Market prices can deviate from fair value, especially in a crisis. True, and this will increase VaR beyond our baseline estimates. But regulators often make valuation exceptions when markets seem panicked, and our baseline estimates are already remarkably high.

In implementing this procedure, the first challenge is calculation of a fair price. Unlike a pure reset process, diffusions of type (3) don't generate valuation equations that are easy to solve by hand. Instead, I set up dynamic programming equalities for a dense grid of θ, as described in the previous article, and iterated assuming risk neutrality. Ten thousand iterations nearly always achieved convergence to within one part in a million.
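The flavor of the iteration can be conveyed with a toy version; here, for brevity, θ is frozen at each gridpoint (no transition dynamics), so the fixed point is the known 1/(r + θ), whereas the article iterates over the full dynamics of (3):

```python
import numpy as np

# Toy value iteration for a perpetuity paying 1/year with zero recovery.
r, dt = 0.05, 1 / 26          # illustrative discount rate and step
theta = np.linspace(0.0, 0.5, 601)
V = np.zeros_like(theta)
for _ in range(10_000):
    V = (dt + (1 - theta * dt) * V) / (1 + r * dt)
print(V[0], 1 / r)            # default-free gridpoint converges to 1/r = 20
print(V[-1], 1 / (r + 0.5))   # riskiest gridpoint converges to 1/(r + theta)
```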

The second and far greater challenge is estimation of the VaR function. By "VaR function," I mean a mapping of overnight spread to the 99.9 percentile (or other confidence level) of markdowns on perpetuities. As Monte Carlo observations rarely generate overnight spreads that exactly match, we need to fit the VaR function through something more than simple counting.

I experimented with several parametric fitting methods. One involves a kind of kernel estimation. Couple each observed overnight spread with the associated one-year drawdown, and sort pairs in order of overnight spread. For every overnight spread j, estimate its "raw" VaR as the 99.9 percentile of drawdowns associated with ranked spreads j − 3000 through j + 3000. By construction, this gives us six exceedances, which make wild estimates exceptional. To make this less unwieldy and reduce potentially misleading overlap, I sampled only one pair per year and one ordered overnight spread per thousand. I then fit a fourth-degree polynomial in overnight spread to the raw logarithmic VaR, using ordinary least squares, and converted back to percentage drawdown.
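A compact sketch of that kernel procedure; the window, stride, and percentile follow the description above, while the input arrays are synthetic placeholders for the Monte Carlo output:

```python
import numpy as np

# Raw VaR: the 99.9 percentile of drawdowns among the 6,001 nearest-ranked
# spreads, sampled at one center per thousand ranks.
def raw_var_curve(spreads, drawdowns, half_window=3000, stride=1000, q=0.999):
    order = np.argsort(spreads)
    s, d = spreads[order], drawdowns[order]
    centers = np.arange(half_window, len(s) - half_window, stride)
    raw = [np.quantile(d[j - half_window : j + half_window + 1], q) for j in centers]
    return s[centers], np.array(raw)

rng = np.random.default_rng(4)
spreads = rng.uniform(0.0001, 0.10, 200_000)                     # synthetic inputs
drawdowns = 0.1 + 2.0 * spreads + 0.05 * rng.standard_normal(200_000) ** 2
s_sub, raw = raw_var_curve(spreads, drawdowns)
coeffs = np.polyfit(s_sub, np.log(raw), deg=4)                   # quartic OLS on log VaR
var_fit = np.exp(np.polyval(coeffs, s_sub))                      # back to percentage terms
print(var_fit[:3])
```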

A second method estimates a VaR function directly using proper scoring rules. A proper scoring rule is a reward tied to prediction and outcome, designed so that the agent always maximizes expected reward by reporting the desired statistic truthfully (see Gneiting (2011) for an excellent overview). The simplest nontrivial scoring rule S for eliciting a 99.9 percentile q given outcome x is

$$S(x, q) = 0.999\,q - \max(q - x, 0).$$

Modeling q as a fourth-degree polynomial in overnight spread and x as the logarithmic drawdown, we can then look for the coefficients that maximize the total score and convert the logarithmic prediction to a percentage prediction.
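A sketch of that fit; the optimizer choice and synthetic inputs are assumptions, and in the article the kernel fit supplies the seed:

```python
import numpy as np
from scipy.optimize import minimize

# Choose quartic coefficients b so that q(spread) maximizes the total score
# S(x, q) = 0.999*q - max(q - x, 0), with x the log drawdown.
def neg_total_score(b, spreads, log_dd):
    q = np.polyval(b, spreads)                        # quartic predictor of log VaR
    return -np.sum(0.999 * q - np.maximum(q - log_dd, 0.0))

rng = np.random.default_rng(5)
spreads = rng.uniform(0.0001, 0.10, 50_000)
log_dd = np.log(0.1 + 2.0 * spreads) + 0.3 * rng.standard_normal(50_000)
b0 = np.polyfit(spreads, log_dd, 4)                   # crude stand-in for the kernel seed
res = minimize(neg_total_score, b0, args=(spreads, log_dd), method="Nelder-Mead")
var_fit = np.exp(np.polyval(res.x, spreads))          # percentage-drawdown prediction
print(var_fit.mean())
```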

Each method has drawbacks. The kernel method overrates the influence of higher spreads; the scoring rule method generates a rather flat optimization surface, which biases results toward the initial seed. Reassuringly, the two methods generated sufficiently similar results that I combined them, by using kernel estimation to seed the scoring rule search.

Figure 4: Effective shape versus overnight spread for high-m variant

Figure 5 depicts the estimated VaR functions for both low-m and high-m variants, for overnight spreads ranging from 1 bp to 1,000 bps. To repeat, VaR is defined here as the 99.9 percentile of one-year percentage drawdown on perpetuities with a given overnight spread. Estimated VaR far exceeds the 1–10 percent range conventionally associated with bank capital buffers. The reason, in a word, is uncertainty. Conventional models presume that rational market perceptions perfectly track actual risk. Instead, they ebb and flow with observations of random outcomes. Fluctuations in perceptions often dwarf the actual changes in risk.

Uncertainty thoroughly dominates risk at the safe end of the spectrum. In the low-m variant, VaR for top-grade credit is 20–28 percent. The corresponding VaR for the high-m variant is about half that (or a third at the 1 bp low), thanks to the extra information that a ten times larger pool provides. Neither estimate comes close to justifying the sub-5 percent buffers that Basel-type regulation assigns to top credits. Nor do they justify assigning the highest buffer to the weakest credit, where high volatility and mean reversion trim drawdowns.

Let me emphasize that I am NOT recommending the use of either VaR function to set banking capital buffers. The estimates are too sensitive to parameters we can't be sure of, the incentives are too perverse, and they don't address response to VaR breaches. Even in the narrow statistical sense, I don't trust the fit, and you won't either once you see Figure 6. It is a scatter plot for the low-m variant of the raw VaR versus the log of overnight spread.

The first thing we notice about Figure 6 is a bunch of blobs. Imagine we're driving through a construction zone in the rain and a big truck driving past splatters our windshield with mud, and our old wipers have trouble scraping it off. Then, Figure 6 might depict the left side of our windshield. Pivoting from the lower right, the wiper leaves a narrow streak at the lower left, stretching from –9 (1 bp) to –7 (9 bps), with just a tiny slant downward. Around –6 (25 bps), the wiper has a conniption; it leaves a residue at height 0.2, which smears out to 0.25 in conjunction with a second streak, skips to another narrow streak at height 0.3, then skips to a thicker streak at 0.35. Now, the downward slope is steeper, and a blob rolls down toward height 0.3 in the –5 (67 bps) to –4 (183 bps) range. But the wiper skips along and pushes blobs above height 0.4, even in the –5 to –4 range. However, once the blade gets to –3 (498 bps), the blobs clearly slide down, with height at –2 (1,353 bps) dropping to 0.2.

Here is one way to translate that description from blob language to credit language:

• AA/AAA credits pose distinctly less drawdown risk than most other rated credits, but the VaR of 20–22 percent is far higher than usually imagined, because investors in very long-dated AA/AAA credits might take fright about the future.

• VaR for the weakest credits approaches AA/AAA levels, partly because much of the stuffing has already been knocked out of them and partly because they’re likely to improve in credit quality if they don’t default.

• B/BB credits generally have the highest VaR, although it is hard to pin down exactly where. Over tens of thousands of observation years, VaR will wander between 0.3 and 0.4, or about 150–200 percent of AA/AAA levels.

• Credits near the cusp of investment versus subinvestment grade are extremely hard to characterize in VaR terms. For tens of thousands of years they may behave like AA credits with lower VaR, or like BB credits with twice the AA VaR.

Figure 5: Quartic fits for estimated one-year 99.9 percentile VaR

Figure 6: Raw VaR versus log (overnight spread) for low-m variant

Figure 7 presents the corresponding scatter plot for the high-m variant. The qualitative interpretation is broadly similar, but the gradient from AA to BB is much steeper. Also, the main region of uncertainty about the VaR threshold shifts from –6 (25 bps) to –9 (1 bp). As for the really safe region below 1 bp, let me caution that such confidence typically hinges on avoiding a single default in thousands of relevant observation years.

Given the raw VaRs, any curve-fitting is suspect. Still, it is useful to map what might be called the local centers of VaR gravity. One simple smoother is the average of an ascending exponentially weighted moving average (EMA) and a descending EMA. The aggregate smooth mimics a double-sided exponential or Laplace distribution, so we might call it a Laplace moving average. For less wriggle with harder calculation, minimize the least-squares error of fit plus a penalty for squared deviations in slope.
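Sketches of both smoothers, under the obvious reading of that description; the smoothing constants and the test series are illustrative:

```python
import numpy as np

def laplace_ma(y, alpha=0.05):
    # Average of an ascending and a descending exponentially weighted MA.
    fwd, bwd = np.empty_like(y), np.empty_like(y)
    fwd[0], bwd[-1] = y[0], y[-1]
    for i in range(1, len(y)):
        fwd[i] = alpha * y[i] + (1 - alpha) * fwd[i - 1]
        bwd[-1 - i] = alpha * y[-1 - i] + (1 - alpha) * bwd[-i]
    return 0.5 * (fwd + bwd)

def penalized_smooth(y, lam=100.0):
    # Minimize sum((s - y)^2) + lam * sum(diff(s)^2); the first-order
    # conditions give the tridiagonal linear system (I + lam * L) s = y.
    n = len(y)
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0            # boundary points have one neighbor
    return np.linalg.solve(np.eye(n) + lam * L, y)

rng = np.random.default_rng(6)
y = np.log(0.2 + 0.1 * np.sin(np.linspace(0, 3, 400))) + 0.05 * rng.standard_normal(400)
print(laplace_ma(y)[:3], penalized_smooth(y)[:3])
```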

Figure 8 uses the second method to smooth the scatter plots for both variants. Despite its unevenness, I prefer it to the quartic fit of Figure 5. It stays truer to the data. It also reminds us that, even with five million simulated observations, defaults are sufficiently rare and market evolution is sufficiently turbulent to make drawdown prediction dicey.

Conclusions

The most important conclusion is that finance analysts need to distinguish risk from perceptions of risk. The servicing record forms a membrane between them. On the one side, objective risk breeds payment or default, regardless (at least in these simple models) of what people think. On the other side, what investors reasonably infer from the servicing record might vastly differ from the true risk.

Adding uncertainty to risk analysis makes it much harder to obtain neat solutions or to interpret messy ones. However, Monte Carlo simulations can help us identify the main drivers and appreciate their significance. Hopefully, some of you readers will apply methods from quantum mechanics or fluid dynamics to unravel more of the mysteries.

In the interim, we have an immediate practical application. Basel-type banking regulation promotes "safe" lending over "risky" lending, by requiring much higher capital buffers for the latter than the former. Our analysis suggests this is way overdone. Even the high-m variant doesn't justify more than a three-to-one differential in capital buffers. In the low-m variant, the maximum differential is so low – barely 1.5 to 1 – that it hardly seems worthwhile for regulators to distinguish safe long-term credits from risky ones.

In my opinion, the salient distinction for banking capital needs is duration, not credit rating. Want to keep your payment system truly safe? Make banks back demand deposits with short-term instruments. Want to promote real investment through, as the buzz phrase goes, "duration transformation"? Stop subsidizing holdings of sovereign debt.

In fairness, the model here is too primitive to justify such sweeping conclusions. Before we rant, it behooves us to make our models more realistic, in particular by allowing both common and idiosyncratic risks. My next article will tackle this challenge.

Figure 7: Raw VaR versus log (overnight spread) for high-m variant

Figure 8: Local smoother fits for estimated one-year 99.9 percentile VaR

REFERENCES

Bernstein, S.N. 1928. Sur les fonctions absolument monotones. Acta Mathematica 52, 1–66.

Gneiting, T. 2011. Making and evaluating point forecasts. Journal of the American Statistical Association 106, 746–762.

Osband, K. 2011. Pandora's Risk: Uncertainty at the Core of Finance. New York, NY: Columbia Business School Press.
