Crank-Nicholson (Finite Difference) with Black-Scholes (with code)

Antoni Smolski
Dec 30, 2023


[Interactive plot of computed put prices]

In this article, I’m diving into pricing options under the Black-Scholes equation using the Crank-Nicholson finite difference method. The method is more accurate than the fully implicit scheme and far more stable than the explicit one: discretization errors are not amplified in later computations the way they can be with the explicit method. This robustness is desirable, and the extra time spent on the implementation will pay off. To get the idea, however, I advise you to read my previous post about the explicit method first.

First, I will present the conceptual framework, then move on to the code implementation in Python. I’ll try to make it as clear as possible, since the amount of mathematical formulas can sometimes be daunting. I won’t go through the derivations, as there are great sources out there that cover them. Instead, I’ll focus on the code implementation in Python, which is more practical and for which digestible sources are harder to find online. Let’s dive right into it!

Graphical representation

To visually comprehend the differences and similarities between the explicit, implicit, and Crank-Nicholson methods, let’s see them side by side. Graphically, it becomes clear why the explicit and implicit methods are named the way they are. Crank-Nicholson is an average of the two: the time derivative is matched against the arithmetic mean of the spatial terms of the explicit and implicit schemes, which is how the method blends the two numerical approaches.

Comparison of three methods

It averages the approaches of both explicit and implicit methods:

  • From the explicit method, it borrows the idea of using the known values at the current time step in the update. This aspect contributes to the method’s intuitiveness and easier implementation.
  • From the implicit method, it inherits stability and accuracy even for larger time steps, by also involving the unknown values at the next time step in its calculations.

In mathematical terms, the Crank-Nicholson method can be represented as a difference equation:

(V^(n+1) − V^n) / Δt = ½ · [ f(V^n) + f(V^(n+1)) ]

Here, the left side is built from the option values at the next and current time steps, and f(V) represents the function defining the change in option value (the spatial terms of the Black-Scholes equation). The right side is an average of f(V) evaluated at the current and next time steps, illustrating the “averaging” aspect of the Crank-Nicholson method: f(V^n) is the explicit part (known values) and f(V^(n+1)) is the implicit part (unknown values).

Here’s a glimpse of the Crank-Nicholson formula. Don’t be daunted by its complexity; our primary focus is on its practical application.

A formula you should not care about right now
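For reference, one way to write the discretized equation out, consistent with the coefficients alpha, beta, and gamma computed in the code below, is the following. Using t for the time step being solved for and t+1 for the known (later) one, each interior grid node i satisfies:

−α_i·V_{i−1}^t + (1 − β_i)·V_i^t − γ_i·V_{i+1}^t = α_i·V_{i−1}^{t+1} + (1 + β_i)·V_i^{t+1} + γ_i·V_{i+1}^{t+1}

with α_i = ¼·Δt·(σ²·i² − r·i), β_i = −½·Δt·(σ²·i² + r), and γ_i = ¼·Δt·(σ²·i² + r·i). Treat this as a sketch of the structure rather than something to memorize; the code builds exactly these coefficients.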

In matrix form, Crank-Nicholson can be expressed as follows:

Notice the highlighted green parts, representing the areas in the matrix we’ll modify to accommodate our boundary conditions, transforming these into square matrices for computation.

This can be now written as:
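Schematically, using the names that appear in the code below, the square system solved at every time step has the form

ML · V_t = MR · V_{t+1} + boundary_t

where ML is tridiagonal with diagonals (−α, 1 − β, −γ), MR is tridiagonal with diagonals (α, 1 + β, γ), V_t holds the unknown interior values at the current time step, V_{t+1} the known values at the next one, and boundary_t collects the contributions of the lower and upper boundary nodes. This is a sketch of the structure; the exact matrices are constructed with scipy.sparse.diags further down.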

Implementation

The implementation of the Crank-Nicholson method requires setting up a grid, similar to what we did in the explicit method. However, the computation of values within this grid differs.

We start by initializing our grid and parameters. This includes the asset array, time steps, and the initial conditions for our option. The code snippet looks like this:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

S_min = strike / 3                      # lowest asset price on the grid
S_max = strike * 2                      # highest asset price on the grid

dS = (S_max - S_min) / NAS              # asset price step
dt = expiration / NTS                   # time step

S = np.arange(0, NAS + 1) * dS + S_min  # asset price grid
V = np.zeros((NAS + 1, NTS + 1))        # option value grid (asset x time)

Not a lot to explain here, just basic parameters to set up a grid for further computations. They are the same as with the explicit method, so if you need more explanation, visit my previous article about the Explicit Method.
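The snippet also assumes that the basic Black-Scholes inputs and grid sizes are already defined. The exact values below are not from the article; they are just an illustration of the names used throughout the code:

strike = 100.0      # strike price K (illustrative value)
expiration = 1.0    # time to expiry in years (illustrative)
r = 0.05            # risk-free interest rate (illustrative)
sigma = 0.2         # volatility (illustrative)
NAS = 200           # number of asset steps (illustrative)
NTS = 200           # number of time steps (illustrative)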

Next, we populate the grid with our boundary conditions. These conditions are crucial for our computations, so it is important to get them right to reduce the likelihood of numerical instabilities. Note that we populate three edges (upper, lower, and right) because we solve the equations backward in time, and the values along these edges are known.

payoff = np.maximum(strike - S, 0)  # put payoff across the asset grid
V[:, -1] = payoff                   # right edge: value at expiration
V[-1, :] = 0                        # upper edge: a put is worthless for very large S
V[0, :] = np.maximum(strike - S_min, 0) * np.exp(-r * np.linspace(0, expiration, NTS + 1)[::-1])  # lower edge: discounted intrinsic value

The Crank-Nicholson method involves averaging the explicit and implicit finite difference schemes. When you discretize the Black-Scholes PDE using this method, you end up with a system of linear equations that can be written in matrix form. The coefficients alpha, beta, and gamma are part of this matrix formulation. These are our As, Bs, and Cs in the matrices presented before.

The coefficients alpha, beta, and gamma multiply option values at both the current and the next time step, integrating the central difference approximations of the derivatives into the overall stepping scheme of the Crank-Nicholson method. This is why it is an average of the implicit and explicit methods.

I = np.arange(0, NAS + 1)                        # grid node indices
alpha = 0.25 * dt * (sigma**2 * I**2 - r * I)    # sub-diagonal coefficient
beta = -0.5 * dt * (sigma**2 * I**2 + r)         # main-diagonal coefficient
gamma = 0.25 * dt * (sigma**2 * I**2 + r * I)    # super-diagonal coefficient

And now let’s create the ML and MR matrices. The ML matrix acts on the unknown option values at the current time step (the one we are solving for), while the MR matrix acts on the known values at the next time step. This setup allows for a stable transition between time steps in the grid.

The use of sparse matrices in Python, particularly through scipy.sparse.diags, is efficient in both memory and computation time. Sparse matrices store only the non-zero elements, which is desirable given the tridiagonal nature of the ML and MR matrices in the Crank-Nicholson scheme.

ML = sparse.diags([-alpha[2:], 1 - beta[1:], -gamma[1:]], [-1, 0, 1], shape=(NAS - 1, NAS - 1)).tocsc()  # multiplies the unknown values at time t
MR = sparse.diags([alpha[2:], 1 + beta[1:], gamma[1:]], [-1, 0, 1], shape=(NAS - 1, NAS - 1)).tocsc()    # multiplies the known values at time t + 1

To arrive at the option prices, we solve this system at every time step, iterating backward through the grid. Note the boundary_t array: it carries the boundary conditions over to the right-hand side of the system.

for t in range(NTS - 1, -1, -1):                      # step backward in time from expiry
    boundary_t = np.zeros(NAS - 1)
    # contribution of the lower boundary node to the first interior equation
    boundary_t[0] = alpha[1] * (V[0, t] + V[0, t + 1]) - alpha[0] * V[0, t + 1]
    # contribution of the upper boundary node to the last interior equation
    boundary_t[-1] = gamma[NAS - 1] * (V[NAS, t] + V[NAS, t + 1])
    b = MR.dot(V[1:NAS, t + 1]) + boundary_t          # known right-hand side
    V[1:NAS, t] = spsolve(ML, b)                      # solve for the interior values at time t
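Once the loop finishes, the first column V[:, 0] holds today’s put values across the whole asset grid. As a usage sketch (the spot level and the interpolation step are my own illustration, not part of the article), a single price can be read off like this:

S0 = 100.0                              # hypothetical spot price
put_price = np.interp(S0, S, V[:, 0])   # interpolate today's values at S0
print(f"Crank-Nicholson put price at S0 = {S0}: {put_price:.4f}")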

Extensions

The Crank-Nicholson method isn’t limited to plain vanilla options. In subsequent articles, I will explore its adaptability to exotic options, like Asian and barrier options. We’ll discuss the specific challenges posed by these options, such as path dependency and barrier features, and how the Crank-Nicholson method can be modified to tackle them.

Conclusion

The Crank-Nicholson method, while a bit more complex to implement, provides a robust framework for option pricing. It has greater accuracy and stability, which are crucial when pricing options.

I would also add that the way I did the computations is not the only one; you can handle the boundary conditions differently, which matters, for example, when pricing barrier options whose barrier does not coincide with the grid points.

I hope you enjoyed this article. If you have any questions, feel free to comment; I’m happy to help.

Link to the notebook:
https://github.com/antek0308/Volatility_notebooks/blob/main/Medium/Crank_Nicholson.ipynb
