Value at Risk Based on the Volatility, Skewness and Kurtosis*

David X. Li
RiskMetrics Group
44 Wall Street, New York, NY

March 4, 1999

Abstract

This paper presents a new approach to calculating Value-at-Risk (VaR) in which skewness and kurtosis, as well as the standard deviation or volatility, are used explicitly. Based on the theory of estimating functions in statistics, we construct an approximate confidence interval from the first two moment conditions. The final result shows explicitly how the confidence interval is affected by the standard deviation, skewness and kurtosis. We test the method using ten years of daily observations on twelve different foreign exchange spot rates and find that the new approach captures the extreme tail much better than the standard VaR calculation method used in RiskMetrics™.

* The author thanks his colleagues Chris Finger, Joonwoo Kim, Allan Malz and Jim Ye at RiskMetrics Group, and Peter J. Zangari at Goldman, Sachs & Co. for helpful discussions and suggestions.

1 Introduction

Value at Risk (VaR) has become a popular risk management technique in the last few years. One driving force behind the popularity of this technique is the public release of the JP Morgan RiskMetrics™ Technical Document (1997) and the subsequent BIS adoption of VaR risk reporting for all trading portfolios of financial institutions. The technical document provides a benchmark for VaR calculation based on a statistical confidence interval constructed under the assumption of a normal distribution. However, many empirical studies of time series data show that the rates of return or percentage changes of many financial variables are not normally distributed: these series tend to be skewed and leptokurtic. As shown by Zangari (1996), VaR calculated under the normal assumption underestimates the actual risk, since the distributions of many observed financial return series have tails that are "fatter" than those implied by a conditional normal distribution. How to incorporate these observations into the VaR calculation is an important issue.

There are mainly two ways to construct VaR if we do not assume normality: parametric and non-parametric approaches. In the parametric approach, alternative distributions are explicitly assumed instead of the normal distribution. For example, Hull and White (1998) suggest using alternative distributions, such as a mixture of two normal distributions, to model the return of financial assets, and then using a percentile-to-percentile mapping between this alternative distribution and the normal distribution to obtain the VaR. In the nonparametric approach, no particular distributional assumption is made, and VaR is calculated using the standard theory of order statistics (see Kupiec (1995)) or Monte Carlo simulation. In standard Monte Carlo simulation, it is well known that the precision of the estimated VaR increases with the square root of the number of simulation runs. A large sample is needed to obtain a stable VaR result at the 99% confidence level, which makes the Monte Carlo approach quite expensive in practice.

This paper presents an alternative approach to the construction of confidence intervals based on a semiparametric setting. In general, we need either a fully fledged distributional assumption or a Monte Carlo simulation to build confidence intervals. Since this new semiparametric approach uses only moment conditions up to the fourth order, it allows us to incorporate empirical findings on moments directly.
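As a side illustration of the Monte Carlo cost just mentioned, the following minimal sketch (not part of the paper; the Student-t return model, the sample sizes and the function name are illustrative assumptions) estimates a 1% quantile at two simulation sizes and shows the square-root improvement in precision:

```python
import numpy as np

# Illustration only: Monte Carlo estimates of a 1% return quantile (99% VaR).
# The return model is a hypothetical scaled Student-t; the point is that the
# sampling error of the quantile estimate falls roughly like 1/sqrt(N).
rng = np.random.default_rng(0)
sigma, dof = 0.006, 5          # assumed daily volatility and tail heaviness

def mc_var(n_runs: int) -> float:
    """Estimate the 1% quantile of simulated daily returns from n_runs draws."""
    returns = sigma * rng.standard_t(dof, size=n_runs)
    return np.quantile(returns, 0.01)

for n in (1_000, 100_000):
    estimates = [mc_var(n) for _ in range(200)]    # repeat to measure dispersion
    print(f"N={n:>7}: mean estimate={np.mean(estimates):.5f}, "
          f"std of estimate={np.std(estimates):.5f}")
# The dispersion of the estimate shrinks by roughly a factor of ten when the
# number of runs grows a hundredfold, i.e. precision improves like sqrt(N).
```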
At the same time, it is not as restrictive as the parametric model, nor as expensive as the Monte Carlo simulation approach. Since we do not assume any sampling distribution, the confidence interval we obtain is an approximate one, based on large-sample asymptotic theory.

The proposed approach is consistent with the statistical method of constructing confidence intervals using pivotal quantities. A pivotal quantity, or ancillary statistic, is defined as a function of the data and the parameter that has a fixed distribution, the same for all parameter values. For example, to obtain a confidence interval for the mean parameter μ of a normal distribution with known variance σ², we can take the pivotal quantity (X − μ)/σ, since this statistic follows the standard normal distribution for all values of μ. The pivotal quantity we use here involves a higher-order term in the observation X, so the skewness and the kurtosis enter the final confidence interval expression explicitly. The underlying theory is the theory of estimating functions, which has become a popular statistical theory over the last decade. The theory of estimating functions generalizes and unifies many existing statistical theories and has extensive applications in generalized linear statistical models, sampling theory and biostatistics. For an exposition of the theory of estimating functions we refer to Godambe (1991).

2 The Current RiskMetrics™ Approach

Before we present our new approach, it is beneficial to review the RiskMetrics™ approach. RiskMetrics™ assumes that returns follow a conditional normal distribution. Suppose that the return series is X_t, t = 1, 2, ..., n, and the volatility series of the return is σ_t, t = 1, 2, ..., n. The variable X_t is not normally distributed, but the ratio of the return over the volatility, X_t/σ_t, follows a standard normal distribution. RiskMetrics™ uses exponentially smoothed historical data to estimate the volatility series. This approach has the following two advantages:

1. The unconditional series X_t has a fatter tail than the conditional one, X_t/σ_t.
2. The explicit modeling of the volatility series captures the time-varying, persistent volatility observed in real financial markets.

In summary, we essentially assume that the original return series is not normally distributed, but that a transformed series is. In the RiskMetrics™ framework the transformation is f(x_t) = x_t/σ_t. This approach has the same motivation as the pivotal quantity approach to the construction of confidence intervals. Essentially we need to construct a normally distributed pivotal quantity, which is a function of both the sample data and the parameter to be estimated; we can then solve for the confidence interval of the parameter. In the RiskMetrics™ approach the pivotal quantity is simply the original return series divided by the time-varying volatility. Using the theory of estimating functions we can find an alternative pivotal quantity involving higher-order terms of the return, as follows.

3 The Estimating Function Approach

Suppose we have a random variable X whose mean, variance, skewness and kurtosis are defined as follows:

    \mu = E(X), \qquad \sigma^2 = \mathrm{Var}(X), \qquad
    \gamma_1 = \frac{E(X - \mu)^3}{\sigma^3}, \qquad
    \gamma_2 = \frac{E(X - \mu)^4}{\sigma^4} - 3.        (1)

We consider one sample from the distribution of X using the theory of estimating functions in statistics. For the basic concepts of the theory, see Godambe (1991). For a concise summary we refer to Li and Turtle (1997).
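The four quantities in (1), together with a volatility model such as the one in Section 2, are all the inputs the method needs from the data. A minimal sketch of how they might be estimated from a daily return series is given below; simulated data stand in for an actual FX series, and the function names, the recursive EWMA form and its seeding are our own illustrative choices (the smoothing parameter 0.94 is the RiskMetrics™ value quoted later in the paper).

```python
import numpy as np

def ewma_volatility(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """Exponentially weighted volatility sigma_t in the spirit of RiskMetrics:
    var_t = lam * var_{t-1} + (1 - lam) * X_{t-1}^2, seeded with the sample variance."""
    var = np.empty_like(returns)
    var[0] = np.var(returns)                     # seeding choice is not critical here
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def moments(x: np.ndarray):
    """Sample versions of the quantities in (1): mean, volatility, skewness, excess kurtosis."""
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    gamma1 = np.mean(z ** 3)                     # skewness
    gamma2 = np.mean(z ** 4) - 3.0               # excess kurtosis
    return mu, sigma, gamma1, gamma2

# Example with simulated data standing in for a daily FX return series.
rng = np.random.default_rng(1)
X = 0.006 * rng.standard_t(5, size=2500)         # hypothetical returns, not the paper's data
sigma_t = ewma_volatility(X)
print("raw returns    :", moments(X))
print("de-volatilized :", moments(X / sigma_t))
```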
From the first two moment conditions in (1) we have two basic estimating functions:

    h_1 = X - \mu, \qquad h_2 = (X - \mu)^2 - \sigma^2.

But h_1 and h_2 are not orthogonal to each other. We adopt the orthogonalization procedure in Doob (1953) to produce an estimating function orthogonal to h_1:

    h_3 = (X - \mu)^2 - \sigma^2 - \gamma_1 \sigma (X - \mu).

We then need an optimal linear combination of the estimating functions h_1 and h_3,

    l^*_\mu = \alpha h_1 + \beta h_3.

Godambe and Thompson (1989) show that the optimal coefficients α and β based on the theory of estimating functions are

    \alpha = \frac{E(\partial h_1 / \partial \mu)}{E(h_1^2)} = -\frac{1}{\sigma^2}, \qquad
    \beta  = \frac{E(\partial h_3 / \partial \mu)}{E(h_3^2)} = \frac{\gamma_1 \sigma}{\sigma^4 (\gamma_2 + 2 - \gamma_1^2)}.

In general, l^*_\mu / \sqrt{\mathrm{Var}(l^*_\mu)} can be approximated by a standard normal distribution, so a (1 − α) percent confidence interval would be

    \left| \frac{l^*_\mu}{\sqrt{\mathrm{Var}(l^*_\mu)}} \right| < C_\alpha,        (2)

where C_α is the critical value corresponding to the confidence level α; for example, if α = 0.05 then C_α = 1.96. From the inequality (2) we can solve for a confidence interval for X if all moments are known, i.e. X_L < X < X_U. Some tedious mathematical derivation leads to the following result:

    X_U = \mu + \left[ \frac{\gamma_2 + 2}{2\gamma_1}
          + \sqrt{ \left( \frac{\gamma_2 + 2}{2\gamma_1} \right)^2
          + \frac{C_\alpha \sqrt{(\gamma_2 + 2)(\gamma_2 + 2 - \gamma_1^2)}}{|\gamma_1|} + 1 } \right] \sigma,
                                                                                     (3)
    X_L = \mu + \left[ \frac{\gamma_2 + 2}{2\gamma_1}
          - \sqrt{ \left( \frac{\gamma_2 + 2}{2\gamma_1} \right)^2
          + \frac{C_\alpha \sqrt{(\gamma_2 + 2)(\gamma_2 + 2 - \gamma_1^2)}}{|\gamma_1|} + 1 } \right] \sigma,
    \qquad \gamma_1 \neq 0.

In the case of a normal distribution, γ₁ = γ₂ = 0, the optimal estimating function is

    l^*_\mu = -\frac{X - \mu}{\sigma^2}, \qquad
    \left| \frac{l^*_\mu}{\sqrt{\mathrm{Var}(l^*_\mu)}} \right| = \left| \frac{X - \mu}{\sigma} \right|,

and our approximation approach leads to the same confidence interval as the one constructed under the assumption of a normal distribution.

4 The Properties of the Model

Next we study the properties of this model and investigate how the length of the confidence interval changes with the moment inputs. The length of the confidence interval can be defined as the difference between the upper and lower limits, L = X_U − X_L. Under the estimating function approach we have

    L = \sigma \sqrt{ \left( \frac{\gamma_2 + 2}{\gamma_1} \right)^2
        + 4 \left[ \frac{C_\alpha \sqrt{(\gamma_2 + 2)(\gamma_2 + 2 - \gamma_1^2)}}{|\gamma_1|} + 1 \right] },
    \qquad \gamma_1 \neq 0.        (4)

In the special case of a normal distribution we have L = 2 C_α σ.

We use a numerical example with the parameters σ = 0.0060, γ₁ = −0.2244 and γ₂ = 3.1556, which are the average volatility, skewness and kurtosis of the twelve major currencies we study later on (a short numerical sketch with these inputs appears at the end of this section). We can make the following observations:

• As in the case of a normal distribution, the length of the confidence interval is positively related to the standard deviation or volatility. This is consistent with our intuition, since the standard deviation measures the dispersion of the distribution: the more dispersed the distribution, the longer the confidence interval needed for a given confidence level.

• Unlike in the normal case, the confidence interval is not symmetric around the mean value; it is tilted toward the direction of the skewness. If the skewness is positive, the confidence interval covers more value on the right-hand side of the mean than on the left-hand side; if the skewness is negative, it covers more on the left-hand side than on the right-hand side. Skewness acts as an indicator of which side the confidence interval should be stretched toward so that a given percentage of the underlying distribution is covered.

• The skewness parameter γ₁ characterizes the degree of asymmetry of the distribution around its mean. Positive (negative) skewness indicates an asymmetric tail extending toward the right-hand (left-hand) side. Intuitively, if the skewness is large in absolute value we know toward which side the asymmetric fat tail extends, so the confidence interval can be relatively short. But when the skewness is very small, yet not zero, we know the distribution is not symmetric but cannot be sure on which side the fatter tail lies; in this case we need a longer confidence interval to cover a given percentage of the density. Based on formula (4), the length of the confidence interval is inversely related to the absolute value of the skewness. Figure 1 shows this property: the length of the confidence interval is a bell-shaped function of the skewness, with the minimum attained at zero skewness. There is a point of discontinuity at zero skewness, since the confidence interval in the case of zero skewness differs from that in the case of non-zero skewness.

• The standardized kurtosis measures the relative peakedness or flatness of a given distribution compared with the normal distribution. High kurtosis, or leptokurtosis, indicates more occurrences far away from the mean than predicted by a standard normal distribution. From equation (4) we see that L is positively related to the excess kurtosis. Figure 2 also shows this positive relationship.

Figure 1: The length of the confidence interval vs. skewness.

Figure 2: The length of the confidence interval vs. kurtosis.
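As a quick numerical check, here is a minimal sketch (ours, not part of the original text) that evaluates (3) and (4) with the example parameters above; μ is set to zero purely for illustration and the variable names are our own.

```python
import math

# Worked illustration of the interval limits (3) and the length (4) using the
# example parameters from the text; mu = 0 is an illustrative choice.
mu, sigma = 0.0, 0.0060
gamma1, gamma2 = -0.2244, 3.1556
C = 1.96                       # critical value for alpha = 0.05

A = (gamma2 + 2.0) / (2.0 * gamma1)
B = C * math.sqrt((gamma2 + 2.0) * (gamma2 + 2.0 - gamma1 ** 2)) / abs(gamma1)
R = math.sqrt(A ** 2 + B + 1.0)

X_U = mu + sigma * (A + R)
X_L = mu + sigma * (A - R)
L = X_U - X_L                  # equals sigma * sqrt(((gamma2+2)/gamma1)**2 + 4*(B+1)), i.e. (4)

print(f"upper limit : mu + {(X_U - mu) / sigma:5.2f} sigma")
print(f"lower limit : mu - {(mu - X_L) / sigma:5.2f} sigma")
print(f"length      : {L / sigma:5.2f} sigma (normal case: {2 * C:.2f} sigma)")
# With negative skewness the interval extends much further below the mean than
# above it, i.e. it is tilted toward the direction of the skew.
```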
5 An Empirical Study

Next we backtest our model to see how well it works in practice. To backtest the model we use daily exchange rates for twelve major currencies between February 17, 1989 and February 8, 1999, covering roughly ten years of trading days. We first calculate the daily logarithmic change X_t = ln(S_t/S_{t−1}), where S_t is the spot exchange rate at time t. Then, as in Hull and White (1998), we use two approaches to estimate the volatility of the series:

• We estimate one volatility σ using the entire return series for each currency. We simply call this the constant-variance model.

• We use an exponentially weighted moving average (EWMA) with a smoothing parameter λ = 0.94 and 74 past observations to estimate the time-varying volatility σ_t. This is the standard RiskMetrics™ approach, and we call it the RiskMetrics™ EWMA method.

We then obtain a new time series by dividing the return series by the standard deviation: in the constant-variance approach x_t = X_t/σ, and in the RiskMetrics™ EWMA approach x_t = X_t/σ_t. We calculate the sample moments of the series X_t and x_t. Table 1 shows the skewness and kurtosis of the transformed series x_t for each currency.

Table 1: Skewness and kurtosis of the return series x_t (columns: AUD, BEF, CHF, DEM, DKK, ESP, FRF, GBP, ITL, JPY, NLG, SEK; rows: γ₁ and γ₂ under the constant-volatility model and under the RiskMetrics™ EWMA model).

From Table 1 we have the following observations for the return series when the constant-variance model is used:

• Most currencies have non-zero skewness; for a normal distribution the skewness is zero.

• All currencies exhibit significant excess kurtosis, ranging upward from 1.75 across the twelve currencies; the excess kurtosis is zero for a normal distribution.

From the table we also see the following properties of the data transformed using the RiskMetrics™ EWMA method:

• Each transformed return series still has non-zero skewness and kurtosis. If the RiskMetrics™ assumption were correct, the series x_t should be i.i.d. standard normal.
• Both the skewness and kurtosis are generally reduced by the RiskMetrics™ EWMA transformation, but the skewness of some currencies and the kurtosis of all currencies remain significantly different from zero.

These observations show that the conditional normality assumption of RiskMetrics™ is not consistent with our empirical findings. The majority of the transformed return series are still skewed and leptokurtic, in contrast to the normal distribution assumed in the usual RiskMetrics™ VaR calculation. This also shows that it is important to incorporate the skewness and kurtosis into the VaR calculation.

To backtest our approach we first construct a two-tailed confidence interval based on the volatility alone under the normal assumption. We also construct a confidence interval based on equation (3). Finally, we calculate the actual percentage of observations that fall outside each confidence interval and compare it with the significance level. We choose three significance levels, α = 0.05, 0.03 and 0.01. To study the effect of the transformation, or de-GARCHing, the same procedure is applied to the transformed series x_t under both transformation methods. The results are summarized in Tables 2 and 3. For each significance level, the first row gives the percentage of observations outside the confidence interval based on the normal assumption, and the second row the percentage based on the estimating function approach.

Table 2: Backtesting results when the constant-volatility model is used (exceedance percentages for the twelve currencies at α = 5%, 3% and 1%).

Table 3: Backtesting results when the EWMA model is used (exceedance percentages for the twelve currencies at α = 5%, 3% and 1%).

From Tables 2 and 3 we see that the normal assumption approach always underestimates the number of observations outside the confidence interval. At the 1 percent significance level, there are twice as many observations outside the confidence interval band as are predicted by a confidence interval based on the normal assumption. In contrast, the estimating function approach overestimates at the significance levels of 0.05 and 0.03, and slightly underestimates at the significance level of 0.01. This suggests that our new approach captures the extreme cases better than the normal assumption method.

Comparing Table 2 with Table 3, we also observe the effect of the de-GARCHing performed by the RiskMetrics™ EWMA method. The de-GARCHing procedure does improve the accuracy of the RiskMetrics™ VaR calculation in our backtest, but it has a much smaller impact on the estimating function method. This indicates that our non-linear transformation, which involves the skewness and kurtosis as well as the volatility, produces a series that is closer to normal than the changing-volatility approach used in the RiskMetrics™ EWMA method.

6 Conclusion

We have shown how to incorporate the skewness and kurtosis explicitly into the construction of a confidence interval, based on the theory of estimating functions in statistics. The resulting confidence interval is an explicit function of the skewness and kurtosis as well as the standard deviation or volatility. The length of the confidence interval is positively related to the kurtosis and negatively related to the absolute value of the skewness, consistent with our intuition about the relationship between the confidence interval and the skewness and kurtosis.
The new approach enables us to take direct account of empirical findings on most financial time series. It is a semiparametric approach in which only moments up to the fourth order need to be estimated empirically; no full distributional assumption is required. We backtest the model using ten years of daily exchange rate data for twelve major currencies and find that it captures the extreme situations much better than the normal assumption approach.

This model can be adapted to VaR calculation relatively easily within the general RiskMetrics™ framework. A Taylor series expansion or a Monte Carlo simulation approach can also be adopted, since moments can be readily obtained by these methods; the resulting standard deviation, skewness and kurtosis can then be used to construct an approximate confidence interval.

References

[1] Doob, J. L. (1953), Stochastic Processes, New York: John Wiley and Sons.

[2] Godambe, V. P. (1991), Estimating Functions, Oxford: Oxford University Press.

[3] Godambe, V. P. and M. Thompson (1989), "An Extension of Quasi-Likelihood Estimation (with discussion)," Journal of Statistical Planning and Inference, 22.

[4] Hull, J. and A. White (1998), "Value at Risk When Daily Changes in Market Variables are not Normally Distributed," The Journal of Derivatives, Spring 1998.

[5] Kupiec, P. (1995), "Techniques for Verifying the Accuracy of Risk Measurement Models," The Journal of Derivatives, Winter 1995.

[6] Li, D. X. and H. Turtle (1997), "Semiparametric ARCH Models: An Estimating Function Approach to ARCH Model Estimation," Working Paper 97-6, Global Analytics, Risk Management Division, CIBC/Wood Gundy.

[7] RiskMetrics™ Technical Document (1997), J. P. Morgan.

[8] Zangari, P. (1996), "An Improved Methodology for Measuring VaR," RiskMetrics Monitor, Reuters/J. P. Morgan.