Roi Polanitzer


Feb 10, 2023


*The Vasicek (1977) interest rate model is a single-factor short-rate model used to predict where interest rates will end up at the end of a given period of time. It describes the evolution of an interest rate as driven by market risk, time, and a long-run equilibrium value.*

In order to predict the Bank of Israel interest rate one year ahead using the Vasicek model and Monte Carlo simulation, I chose the Bank of Israel interest rate dataset sourced from the Bank of Israel database. This dataset is based on the benchmark interest rate of the Bank of Israel between March 2020 and February 2023.

Equilibrium models usually start with assumptions about economic variables and derive a process for the short rate, *r*. They then explore what the process for *r* implies about bond prices and option prices.

In a one-factor equilibrium model, the process for *r* involves only one source of uncertainty. Usually the risk-neutral process for the short rate is described by an Itô process of the form

*dr = m(r)dt + s(r)dz*

The instantaneous drift, *m*, and instantaneous standard deviation, *s*, are assumed to be functions of *r*, but are independent of time. The assumption of a single factor is not as restrictive as it might appear. A one-factor model implies that all rates move in the same direction over any short time interval, but not that they all move by the same amount. The shape of the zero curve can therefore change with the passage of time.

In the Rendleman and Bartter model (1980), the risk-neutral process for *r* is

*dr = μr dt + σr dz*

where *μ* and *σ* are constants. This means that *r* follows geometric Brownian motion. The process for *r* is of the same type as that assumed for a stock price.
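As a quick illustration (not from the article), the Rendleman and Bartter assumption can be simulated with the standard exact step for geometric Brownian motion; the values of *μ*, *σ*, and the starting rate below are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
μ, σ = 0.02, 0.15          # illustrative drift and volatility
r0, T, N = 0.03, 1.0, 12   # starting rate, horizon in years, monthly steps
dt = T / N

r = np.empty(N + 1)
r[0] = r0
for i in range(N):
    z = rng.standard_normal()
    # exact GBM step: log-normal increments keep r strictly positive
    r[i + 1] = r[i] * np.exp((μ - 0.5 * σ**2) * dt + σ * np.sqrt(dt) * z)
```

A path generated this way stays strictly positive but shows no pull toward any long-run level, which is exactly the limitation discussed next.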

This class of models is known as equilibrium models. They start with some assumptions about economic variables and imply a process for the short-term interest rate *r* . These models generate a predicted term structure, whose shape depends on the model parameters and the initial short rate.

The problem with these models, however, is that they are not flexible enough to provide a good fit to today’s term structure. This can be viewed as unsatisfactory, especially by practitioners who argue they cannot rely on a model that cannot be trusted to price today’s bonds.

The geometric Brownian motion process is widely used for stock prices and currencies. Interest rates are another matter, however: they display long-term reversion to their long-run mean.

Such a process is inconsistent with geometric Brownian motion, which displays no such mean reversion. Similarly, commodities often display mean reversion.

These features can be taken into account by modeling interest rates directly in a first step. In the next step, bond prices are constructed from the value of interest rates and a pricing function.

The assumption that the short-term interest rate behaves like a stock price is a natural starting point but is less than ideal. One important difference between interest rates and stock prices is that interest rates appear to be pulled back to some long-run average level over time.

This phenomenon is known as mean reversion. When *r* is high, mean reversion tends to cause it to have a negative drift; when *r* is low, mean reversion tends to cause it to have a positive drift. Mean reversion is illustrated in the figure below.

The Rendleman and Bartter model, being based on geometric Brownian motion, does not incorporate mean reversion. Yet there are compelling economic arguments in favor of mean reversion.

When rates are high, the economy tends to slow down and there is low demand for funds from borrowers. As a result, rates decline. When rates are low, the economy tends to speed up and there is high demand for funds on the part of borrowers. As a result, rates rise.

One of the one-factor equilibrium models is Oldřich Vasicek’s (1977) model. In Vasicek’s model, the risk-neutral process for *r* is

*dr = a(b - r)dt + σdz*

where *a*, *b*, and *σ* are constants. This model incorporates mean reversion. The short rate is pulled toward a level *b* at rate *a*. Superimposed upon this “pull” is a normally distributed stochastic term *σdz*.

It is now appropriate to discuss how the parameters in the model we have been considering are estimated from historical data. The approach used is the maximum likelihood method. It involves choosing values for the parameters that maximize the chance (or likelihood) of the data occurring.

To illustrate the method, we start with a very simple example. Suppose that we sample ten stocks at random on a certain day and find that the price of one of them declined on that day and the prices of the other nine either remained the same or increased.

What is our best estimate of the proportion of all stocks with price declines? The natural answer is 10%. Let us see if this is what the maximum likelihood method gives.

Suppose that the proportion of stocks with price declines is *p*. The probability that one particular stock declines in price and the other nine do not is *p*(1 - *p*)⁹. Using the maximum likelihood approach, the best estimate of *p* is the one that maximizes *p*(1 - *p*)⁹.

Differentiating this expression with respect to *p* and setting the result equal to zero, we find that *p* = 0.1 maximizes the expression. This shows that the maximum likelihood estimate of *p* is 10%, as expected.
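The same maximization can be checked numerically by evaluating *p*(1 - *p*)⁹ on a fine grid:

```python
import numpy as np

# Numerical check that p = 0.1 maximizes the likelihood p(1-p)^9.
def likelihood(p):
    return p * (1 - p) ** 9

grid = np.linspace(0.001, 0.999, 99_901)   # fine grid over (0, 1)
p_hat = grid[np.argmax(likelihood(grid))]
print(round(p_hat, 3))  # 0.1
```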

We now consider how the maximum likelihood method can be used to estimate the parameters when Vasicek or some other equilibrium model is used. Define *wi* as the difference between [ *r(i)* - *r(i-1)* ] and *a*(*b* - *r(i-1)*) for month *i*. We assume that, conditional on the information available at month *i-1*, *wi* is normally distributed with mean zero and standard deviation *σ*. For each *wi* we calculate the corresponding normal probability density, *xi*:

*xi = exp(-wi²/2σ²) / (σ√(2π))*

Taking logarithms, we get

*yi = ln(xi) = -ln(σ√(2π)) - wi²/2σ²*

The best parameters for *a*, *b*, and *σ* are the ones that maximize

*Σ yi = Σ ln(xi)*

It is necessary to search iteratively to find the parameters in the model that maximize the expression in objective function above.

The dataframe below indicates how the calculations could be organized for the Vasicek model. The dataframe analyzes data on the Bank of Israel interest rate between March 1, 2020 and February 1, 2023.

The numbers in the dataframe are based on trial estimates of the three Vasicek parameters: *a*, *b*, and *σ*. The first column in the dataframe records the date. The second column counts the months.

The third column shows the interest rate, *ri*, at the beginning of month *i*. The fourth column shows the change in the interest rate between the beginning of month *i-1* and the beginning of month *i*. This is *ui*, where

*ui = ri - ri-1*

The fifth column shows *vi*, the estimate of the mean reversion term for month *i* made at the beginning of month *i-1*, where

*vi = a(b - ri-1)*

The sixth column shows *wi*, which is the difference between *ui* and *vi*

*wi = ui - vi*

The seventh column shows *xi*, which is the normal probability density function of *wi* (for a normal distribution with mean 0 and standard deviation *σ*, our trial estimate).

The eighth column tabulates *yi*, which is the likelihood measure, ln(*xi*).

The values in the fifth, sixth, seventh and eighth columns are based on the current trial estimates of *a*, *b*, and *σ*. We are interested in choosing *a*, *b*, and *σ* to maximize the sum of the numbers in the eighth column.
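A sketch of how those columns might be built in pandas. The rates and trial parameter values below are made-up placeholders, not the actual Bank of Israel series or the fitted values:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly short rates standing in for the Bank of Israel series.
rates = np.array([0.0010, 0.0010, 0.0035, 0.0075, 0.0125, 0.0200, 0.0275, 0.0325])

# Trial estimates of the Vasicek parameters (placeholders).
a, b, σ = -0.14, -0.002, 0.0019

df = pd.DataFrame({"r": rates})
df["u"] = df["r"].diff()              # u_i = r_i - r_{i-1}
df["v"] = a * (b - df["r"].shift(1))  # v_i = a(b - r_{i-1})
df["w"] = df["u"] - df["v"]           # w_i = u_i - v_i
df["x"] = np.exp(-df["w"]**2 / (2 * σ**2)) / (σ * np.sqrt(2 * np.pi))  # normal pdf
df["y"] = np.log(df["x"])             # log-likelihood terms

objective = df["y"].sum()             # the quantity to maximize over a, b, σ
```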

This involves an iterative search procedure. A general-purpose algorithm such as Solver in Microsoft Excel is liable to find a local rather than global maximum of the likelihood function. A special-purpose algorithm, such as Levenberg-Marquardt, should ideally be used.

In our example, the optimal values of the parameters turn out to be *a* = -0.137147, *b* = -0.001790, and *σ* = 0.001866,

and the maximum value of the objective function is 170.26611. The numbers shown in the dataframe were calculated on the final iteration of the search for the optimal *a*, *b*, and *σ*.
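In Python, the same search can be sketched with `scipy.optimize.minimize`, minimizing the negative log-likelihood. The rates below are made-up stand-ins for the actual series, so the fitted values will differ from those quoted above:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical monthly short rates standing in for the Bank of Israel series.
rates = np.array([0.0010, 0.0010, 0.0035, 0.0075, 0.0125, 0.0200, 0.0275, 0.0325])

def neg_log_likelihood(params, r):
    a, b, σ = params
    if σ <= 0:                                # keep the volatility positive
        return np.inf
    w = (r[1:] - r[:-1]) - a * (b - r[:-1])   # residuals w_i
    n = len(w)
    # negative of Σ ln x_i for an N(0, σ²) density
    return n * np.log(σ * np.sqrt(2 * np.pi)) + np.sum(w**2) / (2 * σ**2)

# Nelder-Mead is derivative-free; start from rough guesses.
res = minimize(neg_log_likelihood, x0=[0.1, 0.01, 0.01],
               args=(rates,), method="Nelder-Mead")
a_hat, b_hat, σ_hat = res.x
```

As the article notes, such a general-purpose search can stop at a local maximum, so in practice one would restart it from several initial guesses.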

We now move on to discuss Monte Carlo simulation. This uses the risk-neutral valuation result. The expected interest rate, *r*, in a risk-neutral world is calculated using a sampling procedure.

Consider the short rate *r*, whose level at time *T* we wish to find. Assuming that *a*, *b*, and *σ* are constant, we can model the evolution of the short rate as follows:

- Sample a random path for *r* in a risk-neutral world.
- Repeat step 1 to get many sample values of the interest rate at time *T* in a risk-neutral world.
- Calculate the mean of the sampled interest rates to get an estimate of the expected interest rate in a risk-neutral world.

Suppose that the process followed by the underlying market variable in a risk-neutral world is

*dr = a(b - r)dt + σdz*

where *dz* is a Wiener process, *a*(*b* - *r*) is the mean reversion term in a risk-neutral world, and *σ* is the volatility. To simulate the path followed by *r*, we divide the life of the forecast into *N* short intervals of length *δt* and approximate the equation as

*r(t + δt) - r(t) = a(b - r(t))δt + σε√δt*

where *r(t)* denotes the value of *r* at time *t*, and *ε* is a random sample from a normal distribution with mean zero and standard deviation 1.0. This enables the value of *r* at time *δt* to be calculated from the initial value of *r*, the value at time 2*δt* to be calculated from the value at time *δt*, and so on.

One simulation trial involves constructing a complete path for *r* using *N* random samples from a normal distribution. As seen before, the coefficients *a*, *b*, and *σ* are estimated using a maximum-likelihood (ML) procedure.

In order to implement the Vasicek model for this prediction, we used the Monte Carlo method to estimate the Bank of Israel interest rate one year ahead. The Monte Carlo method is widely used in situations where there is no analytical solution and other approaches (such as binomial trees) are less appropriate, for example complex path-dependent options (our case).

This model uses a simulation of risk factors according to an assumed probability distribution and then calculates the interest rate at the end of each path separately. All the interest rates are then averaged.

The average value is referred to as the prediction value. The standard error of the sample mean typically serves as an indication of the level of precision of the model. Since this standard error declines very slowly with the number of paths, one typically needs tens of thousands (or even more) of sample paths in order to arrive at an acceptable level of accuracy.

We will use 50,000 random paths in this calculation.

First, let’s import the required modules and set the optimal values which we found above.

```python
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from numpy import random as rn

%matplotlib inline

a = -0.137147124953583
b = -0.00179029708250429
σ = 0.001866047835164
```

Second, the spot rate for the Bank of Israel interest rate for today (01/02/2023) is 3.75%.

`r0 = 0.0375`

Third, the forecast horizon is one-year from today.

`T = 1`

Fourth, let’s set the number of short intervals on 12 (i.e., twelve months).

`N = 12`

So, the length of each interval is

```python
δt = T/N
δt
```

*0.08333333333333333*

Fifth, we choose to use 50,000 simulations

```python
M = int(5*1e4)
M
```

*50000*

Sixth, we generate a matrix for random variables which are standard normally distributed with a mean of 0 and a standard deviation of 1.0.

`dz = rn.randn(M,N)`

Seventh, we generate a matrix for the sample interest rates

```python
r = r0*np.ones((M,N+1))
for i in range(0,N):
    r[:,i+1] = r[:,i] + a*(b-r[:,i])*δt + σ*dz[:,i]*np.sqrt(δt)
```

Eighth, let’s visualize the evolution of the interest rate over the next year at monthly intervals

```python
plt.figure(figsize=(17,10))
paths = [rn.randint(0,M) for j in range(1,30)]  # indices of randomly chosen trials
for runner in paths:
    plt.plot(np.arange(0,T+δt,δt), r[runner])
```

Ninth, let’s get the result

```python
V = r[:,-1]
print("The expected value for the interest rate is:","{:.3%}".format(np.mean(V)))
print("The standard error of sample mean is:", "{:.4%}".format(np.std(V)/np.sqrt(M)))
```

*The expected value for the interest rate is: 4.325%*

*The standard error of sample mean is: 0.0009%*

Tenth, let’s construct a 95% confidence interval for the expected interest rate

```python
from scipy.stats import norm

def normsinv(x):
    return norm.ppf(x)

z = normsinv(0.975)
μ = np.mean(V)
SE = np.std(V)/np.sqrt(M)

print("Lower 95% is:","{:.3%}".format(μ - z*SE))
print("Upper 95% is:","{:.3%}".format(μ + z*SE))
```

*Lower 95% is: 4.323%*

*Upper 95% is: 4.327%*

**Results.** We estimate that the Bank of Israel interest rate one year ahead is between 4.323% and 4.327%. The average value is 4.325% as of the valuation date.

The Vasicek (1977) model is a yield-based one-factor equilibrium model that assumes that the interest rate is normally distributed. The model incorporates mean reversion and is popular in the academic community — mainly due to its analytic tractability.

The model is widely used by actuaries in the U.S. to model the evolution of interest rates in order to price life insurance contracts.

The model specifies that the instantaneous interest rate follows the stochastic differential equation:

*dr = a(b - r)dt + σdz*

where *dz* is a Wiener process modeling the random market risk factor, *σ* determines the volatility of the interest rate, *a* is the speed of the mean reversion, and *b* is the mean reversion level.
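One payoff of that analytic tractability is a closed-form conditional mean: under this SDE, E[*r(T)*] = *b* + (*r0* - *b*)e^(-*aT*). Plugging in the fitted parameters and the 3.75% spot rate from the simulation above gives a quick sanity check on the Monte Carlo estimate:

```python
import numpy as np

# Fitted Vasicek parameters and spot rate from the article.
a = -0.137147124953583
b = -0.00179029708250429
r0 = 0.0375
T = 1

# Closed-form mean of the Vasicek short rate at horizon T.
expected_r = b + (r0 - b) * np.exp(-a * T)
print("{:.3%}".format(expected_r))  # 4.328%
```

The small gap versus the simulated 4.325% comes from the monthly Euler discretization; the closed form uses the continuous-time dynamics.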

Monte Carlo method is a numerical method that is useful in many situations when no closed-form solution is available. The Monte Carlo method can be used to simulate a wide range of stochastic processes and is thus very general.

To illustrate the use of Monte Carlo simulation, we used the Vasicek model as the stochastic process that the underlying interest rate follows. The process governing the interest rate *r* is then given by

*dr = a(b - r)dt + σdz*

where *dz* is a Wiener process with standard deviation 1 and mean 0. To simulate the process, we consider its values at given discrete time intervals, *Δt* apart:

*Δr = a(b - r)Δt + σϵt√Δt*

where *Δr* is the change in *r* in the chosen time interval *Δt*, and *ϵt* is a random drawing from a standard normal distribution.

The main drawback of Monte Carlo simulation is that it is computer-intensive. A minimum of 10,000 simulations are typically necessary to achieve a prediction with satisfactory accuracy.

The standard error in the estimated value from standard Monte Carlo simulation declines with the square root of the number of simulations. More precisely, if *σ* is the standard deviation of the expected interest rates from *n* simulations, then the standard error is given by

*SE = σ/√n*

This means that to double the accuracy, we will need to quadruple the number of simulations. So if we want to double the accuracy from 10,000 simulations, we will need 40,000 simulations, and so on. Several techniques are available to speed up the Monte Carlo simulation.
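The square-root relationship is easy to verify empirically; the *σ* below is the fitted value, used here only to scale the simulated draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard error of the sample mean for n simulated draws.
def standard_error(n, σ=0.001866047835164):
    draws = σ * rng.standard_normal(n)
    return np.std(draws) / np.sqrt(n)

se_10k = standard_error(10_000)
se_40k = standard_error(40_000)
# Quadrupling the number of simulations roughly halves the standard error.
print(round(se_10k / se_40k, 1))
```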

The steps in standard Monte Carlo simulation are simply to simulate *n* paths of the interest rate. The value is then given as the average over the simulated paths. In our case, we are only interested in the end value of the interest rate.

Hopefully, this article gives you a good idea of what predicting future interest rates using Monte Carlo simulation looks like. As you can see, much of the work is in the data wrangling and preparation steps, and these procedures consume most of the time spent on prediction.

Now it’s time to get out there and start exploring your data. Try two or three other equilibrium models, and let me know how it goes.

I would be pleased to receive feedback or questions on any of the above.

**Roi Polanitzer, MRA, QFV, FEM, F.IL.A.V.F.A., FRM, CRM, PDS**, is a well-known authority in Israel in the field of financial risk actuarial science and has written hundreds of papers that articulate many of the concepts used in modern financial risk actuarial science around the world. Mr. Polanitzer is the Owner and Chief Risk Actuary of Intrinsic Value — Independent Business Appraisers, a financial risk actuarial science consulting firm headquartered in Rishon LeZion, Israel. He is also the Owner and Chief Data Scientist of Prediction Consultants, a consulting firm that specializes in advanced analysis and model development.

Over more than 17 years, he has performed market risk actuarial science consulting engagements in the areas of: Value-at-Risk (VaR) (applied to stock, currencies, and commodities, applied to linear and non-linear derivatives, applied to fixed income securities with embedded options, structured Monte Carlo, stress testing, and scenario analysis, limitations as a risk measure, coherent risk measures, and volatility models), Option valuation (pricing options using binomial trees, the Black-Scholes-Merton model, and the “Greeks”), fixed income valuation (discount factors, spot rates, forward rates, and yield to maturity, arbitrage and the law of one price, one factor measures of price sensitivity, Duration, DV01, and convexity, key rate exposures, and hedging and immunization), fixed income securities (risk-neutral pricing and term structure models), mortgages and mortgage-backed securities (MBS) (structure, markets, and valuation), VaR and other risk measures (VaR mapping, backtesting VaR, expected shortfall (ES) and other coherent risk measures, parametric and non-parametric methods of estimation, modeling dependence: correlations and copulas, and extreme value theory (EVT)), volatility: smiles and term structures, and exotic options.

Mr. Polanitzer holds an undergraduate degree in economics and a graduate degree in business administration, majoring in finance, both from the Ben-Gurion University of the Negev. He is a Full Actuary (Fellow), a Market Risk Actuary (MRA), a Quantitative Finance Valuator (QFV) and a Financial and Economic Modeler (FEM) from the Israel Association of Valuators and Financial Actuaries (IAVFA). Mr. Polanitzer is the Founder of the IAVFA and currently serves as its chairman.

Mr. Polanitzer’s professional recognitions include being designated a Financial Risk Manager (FRM) by the Global Association of Risk Professionals (GARP), a Certified Risk Manager (CRM) by the Israel Association of Risk Managers (IARM), as well as being designated a Python Data Analyst (PDA), a Machine Learning Specialist (MLS), an Accredited in Deep Learning (ADL) and a Professional Data Scientist (PDS) by the Professional Data Scientists’ Israel Association (PDSIA). Mr. Polanitzer is the Founder of the PDSIA and currently serves as its CEO.

He is the editor of IAVFA’s weekly newsletter since its inception (primarily for the professional appraisal community in Israel).

Mr. Polanitzer develops and teaches market risk actuarial science professional trainings and courses for the Israel Association of Valuators and Financial Actuaries, and frequently speaks on market risk actuarial science at professional meetings and conferences in Israel. He also developed IAVFA’s certification programs in the field of actuarial science and he is responsible for writing the IAVFA’s statement of financial valuation standards.