 # portfolioAnalytics documentation

The portfolioAnalytics library involves fairly detailed mathematical operations. They are not very complicated as such (mostly integrations), but they can be quite lengthy.

So far the mathematical side of the documentation has the following entries:

In this topic we’ll collect feedback and discuss how to expand the documentation.


Hi, in your Python code for the Vasicek portfolio model in portfolioAnalytics on GitHub, could you please explain:

1. Why `phi_cum = stats.norm.cdf(arg, loc=0.0, scale=1.0)` is squared in the integrand:

```python
integrant = phi_den * math.pow(phi_cum, 2)
```

2. How the unexpected loss is then obtained from:

```python
result = p / N - p * p + float(N - 1) / float(N) * dz * integral
return N * math.sqrt(result)
```
I have so far been unable to make sense of these lines of code.

Hi @rdsk, welcome to the commons! These lines are, I believe, from the vasicek_base_ul function? UL stands for unexpected loss, which is the standard deviation of the loss distribution, computed from the underlying discrete distribution (first computing the variance, then taking the square root). This is how the square comes in.

If you want to replicate it, start with the definition of variance as an expectation. All the moments of that distribution involve various manipulations of Gaussian integrals. It should be correct, but you never know.

Thanks very much. It makes sense to me now, I think. I assume the `p/N - p*p` is the expected variance of the binomial and the float adjustment is a Bessel correction.
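To make the variance-as-expectation route concrete, here is a minimal sketch of the same quantity from first principles (my own reconstruction, not the library's code; the function name `vasicek_ul_sketch` is made up). Conditional on the systematic factor z, defaults are independent, so the joint default probability of any two entities is the integral of φ(z)·Φ(arg)² — which is where the squared CDF in the integrand comes from:

```python
import math
from statistics import NormalDist

def vasicek_ul_sketch(N, p, rho, grid=2001, zmax=8.0):
    """Sketch: standard deviation (unexpected loss) of the number of
    defaults in the Vasicek base model, reconstructed from first
    principles. Not the library's implementation.

    p2 is the joint default probability of two entities: conditional on
    the systematic factor z the defaults are independent, so we
    integrate phi(z) * Phi(arg)**2 over z.
    """
    nd = NormalDist()
    k = nd.inv_cdf(p)                      # default threshold Phi^{-1}(p)
    dz = 2.0 * zmax / (grid - 1)
    p2 = 0.0
    for i in range(grid):                  # simple trapezoidal rule
        z = -zmax + i * dz
        arg = (k - math.sqrt(rho) * z) / math.sqrt(1.0 - rho)
        w = 0.5 if i in (0, grid - 1) else 1.0
        p2 += w * nd.pdf(z) * nd.cdf(arg) ** 2 * dz
    # Var(D) = N p (1-p) + N (N-1) (p2 - p^2);  UL = sqrt(Var)
    variance = N * p * (1.0 - p) + N * (N - 1) * (p2 - p * p)
    return math.sqrt(variance)
```

For rho = 0 the entities are independent and this collapses to the binomial standard deviation sqrt(N·p·(1−p)), which is a convenient sanity check against the `p/N - p*p` term in the quoted code.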


Yes. There are similar expressions for higher-order moments. They might have some use for testing or for approximating loss distributions, so they could potentially be added to the library as additional functions at some point.
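The "similar expressions" generalise in an obvious way (a sketch on my part; the function name `joint_default_prob` is made up): the k-th order joint default probability just raises the conditional CDF to the k-th power, by conditional independence given the systematic factor:

```python
import math
from statistics import NormalDist

def joint_default_prob(k, p, rho, grid=2001, zmax=8.0):
    """Sketch: probability that k given entities all default in the
    Vasicek model. Conditional on the systematic factor z the defaults
    are independent, so the conditional probability is Phi(arg)**k;
    integrate it against the Gaussian density phi(z)."""
    nd = NormalDist()
    t = nd.inv_cdf(p)
    dz = 2.0 * zmax / (grid - 1)
    total = 0.0
    for i in range(grid):                  # trapezoidal rule
        z = -zmax + i * dz
        arg = (t - math.sqrt(rho) * z) / math.sqrt(1.0 - rho)
        w = 0.5 if i in (0, grid - 1) else 1.0
        total += w * nd.pdf(z) * nd.cdf(arg) ** k * dz
    return total
```

Two useful checks: k = 1 must recover p itself for any rho, and for rho = 0 the result must factorise into p**k.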

Hi, you say the output from vasicek_base_ul is a “standard deviation” type thing. Presumably this reflects the confidence interval put into the `norm.ppf` function.
To get the upper limit from this, should I add it to the expected loss?

I’m puzzled by the results I get, e.g.:

```python
vasicek_base_ul(100, 0.99, 0.1)  # gives 1.03
vasicek_base_ul(100, 0.95, 0.1)  # gives 2.41
```

i.e. the higher severity at 99% gives a lower result than the lower severity at 95%.
Have I misinterpreted this?

It is not a “type” thing; it is the standard deviation computed directly from the distribution.

The signature of the function is:

Vasicek Base Distribution Unexpected Loss:

• param N: The number of entities in the portfolio
• param p: The probability of default (of each entity in the portfolio, assumed homogeneous in the Vasicek model)
• param rho: The asset correlation (not needed here)

When you insert quantile figures such as 0.99 or 0.95 into this function you are not using it correctly: the second argument is the probability of default p, not a confidence level, and the function does not produce a distribution, only a statistical moment of a distribution.
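This also explains the puzzling numbers above (a sketch on my part, treating the second argument as the default probability p): even in the uncorrelated limit, the standard deviation of the number of defaults, sqrt(N·p·(1−p)), shrinks as p approaches 1, because default becomes near-certain and there is little left to vary:

```python
import math

N = 100
for p in (0.99, 0.95):
    # rho = 0 (binomial) standard deviation of the number of defaults
    print(p, math.sqrt(N * p * (1.0 - p)))
```

This prints roughly 0.995 for p = 0.99 and 2.179 for p = 0.95 — the same ordering as the 1.03 and 2.41 reported above, with the asset correlation accounting for the remaining difference.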

If you want to obtain an estimate of stressed defaults based on standard deviation you would do it something like this:

SD = EL + a * UL

where SD is the stressed number of defaults, EL is the expected number of defaults, UL is the standard deviation of defaults, and a is the number of standard deviations (2, 3, etc.).

This approach used to be called the “poor man’s economic capital”. Namely, it is an easy way to get an estimate of “tail losses” (losses that will only happen in a more stressful scenario) by estimating the average and standard deviation of losses over a historical period. The choice of a is by convention (and by analogy with the quantiles of the normal distribution), but it does not imply we assume a normal distribution.
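Put together, the recipe looks like this (a sketch with illustrative numbers; in practice EL and UL would come from the library's expected-loss and unexpected-loss functions):

```python
N, p = 100, 0.1     # portfolio size and default probability (illustrative)
EL = N * p          # expected number of defaults
UL = 3.0            # standard deviation of defaults (illustrative value)
a = 3               # number of standard deviations, chosen by convention
SD = EL + a * UL    # "poor man's" stressed number of defaults
print(SD)           # -> 19.0
```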

Many thanks. Now clear. I next hope to figure out the Granularity Adjustment in the IRB model. How difficult would it be to add this to the Vasicek model? How does the GA change the loss distribution?


Supporting granularity adjustments is a good suggestion for future development. There are now various methodologies for the GA, some more natural to include in this library than others. Feel free to raise an issue as a feature request on GitHub. When it might get implemented is not clear, though.

I created a new issue to improve the overall input data validation and maybe issue some warnings. Using the base Vasicek model for very large portfolios is likely to always hit some overflow limits because of the factorials involved. The binomial function currently used is from sympy, and it’s a bit of a pity it doesn’t issue a warning. Maybe there is an option to do that; need to look into this.

There are in principle some ways to improve the handling of very large factorials, but the use case for this is somewhat limited (if the portfolio is very large, the assumption that all default probabilities are the same is not particularly realistic).
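One standard way to sidestep the overflow (a sketch, not what the library currently does) is to evaluate the binomial pmf in log space via `math.lgamma`, so the huge factorials never materialise:

```python
import math

def binom_pmf_log(n, k, p):
    """Binomial pmf C(n, k) * p**k * (1-p)**(n-k), computed in log
    space so that very large n does not overflow the factorials."""
    log_coeff = (math.lgamma(n + 1)
                 - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    # log1p(-p) is more accurate than log(1 - p) for small p
    return math.exp(log_coeff + k * math.log(p) + (n - k) * math.log1p(-p))
```

For small n this agrees with the direct computation via `math.comb`, and it stays finite for portfolio sizes where n! itself would overflow.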

It would be a nice exercise to compare the base model with the limit models and see whether the results converge adequately. It is essentially a “brute force” estimation of the granularity adjustment.