1D Diffusion Simulation

The Problem

Simulate one-dimensional diffusion of an initial Gaussian concentration pulse on a unit-length domain with an explicit finite-difference scheme, and verify the result against the analytical spreading law.

Setting Up the Grid

# Spatial grid: 50 points give dx ≈ 0.02, fine enough to resolve a Gaussian
# pulse of width 0.05 (about 2.5 grid cells per standard deviation).
GRID_POINTS = 50
LENGTH = 1.0
DX = LENGTH / (GRID_POINTS - 1)

# Diffusion coefficient in arbitrary units.
DIFFUSIVITY = 0.1

# The explicit finite-difference scheme is stable only when the stability
# ratio r = D*dt/dx^2 <= 0.5.
# Using STABILITY_RATIO = 0.4 gives a 20% safety margin.
STABILITY_RATIO = 0.4
DT = STABILITY_RATIO * DX**2 / DIFFUSIVITY

# Run for 500 steps; save a snapshot every 100 steps.
N_STEPS = 500
SNAPSHOT_INTERVAL = 100

# Initial Gaussian pulse centered in the domain.  Width 0.05 keeps the tails
# more than 4 standard deviations from each boundary early in the run (through
# the 20-step analytical comparison below); by the late snapshots the
# spreading pulse does feel the walls.
PULSE_CENTER = 0.5
PULSE_WIDTH = 0.05
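The derived quantities can be sanity-checked directly. This is a quick sketch that mirrors the constants above; the printed values follow from the formulas:

```python
GRID_POINTS = 50
LENGTH = 1.0
DIFFUSIVITY = 0.1
STABILITY_RATIO = 0.4
N_STEPS = 500

DX = LENGTH / (GRID_POINTS - 1)             # 1/49, about 0.0204
DT = STABILITY_RATIO * DX**2 / DIFFUSIVITY  # about 1.67e-3
r = DIFFUSIVITY * DT / DX**2                # recovers STABILITY_RATIO

print(f"dx = {DX:.4f}")
print(f"dt = {DT:.6f}")
print(f"r  = {r:.3f}")
print(f"total simulated time = {N_STEPS * DT:.3f}")
```

Note that the total simulated time, about 0.83 time units, is fixed once the grid and the stability ratio are chosen.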

The initial concentration is a Gaussian pulse centered in the domain:

import numpy as np


def make_initial():
    """Return spatial grid and Gaussian initial concentration profile."""
    x = np.linspace(0, LENGTH, GRID_POINTS)
    c = np.exp(-((x - PULSE_CENTER) ** 2) / (2 * PULSE_WIDTH**2))
    return x, c

Deriving the Update Rule

$$\frac{c_i^{n+1} - c_i^n}{\Delta t} = \frac{D}{\Delta x^2}\left(c_{i+1}^n - 2c_i^n + c_{i-1}^n\right)$$

$$c_i^{n+1} = c_i^n + r\left(c_{i+1}^n - 2c_i^n + c_{i-1}^n\right)$$

where $r = D\,\Delta t / \Delta x^2$ is called the stability ratio.
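To make the stencil concrete, the update can be written as an explicit loop over the interior points. This is an illustrative sketch, not part of the module (`step_loop` is a hypothetical name); the vectorized `step` below handles the boundaries with ghost cells:

```python
import numpy as np


def step_loop(c, r):
    """One explicit update written as a loop over interior points,
    mirroring the discretized equation term by term.  Boundary points
    are left unchanged here; the real implementation uses ghost cells."""
    c_new = c.copy()
    for i in range(1, len(c) - 1):
        c_new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
    return c_new
```

The loop form is slower but maps one-to-one onto the equation above, which makes it useful for checking the vectorized version.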

Stability

DT is computed as STABILITY_RATIO * DX**2 / DIFFUSIVITY, so r = STABILITY_RATIO on every run. Which single change would push r above 0.5 and make the simulation produce meaningless results?

Setting STABILITY_RATIO = 0.6
Correct: r equals STABILITY_RATIO by construction, so r = 0.6 > 0.5 and the scheme will blow up.
Doubling GRID_POINTS
Wrong: a finer grid halves DX, but DT is recalculated from DX**2, so r = STABILITY_RATIO is unchanged.
Multiplying DIFFUSIVITY by 10
Wrong: DT is divided by DIFFUSIVITY, so increasing DIFFUSIVITY shrinks DT and keeps r = STABILITY_RATIO.
Running 10 000 steps instead of 500
Wrong: the number of steps does not affect r — it only determines how long the simulation runs.
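The quiz answers can be verified numerically. `stability_ratio_for` is a hypothetical helper, not part of the module; it just recomputes r the way the module derives DT from DX:

```python
def stability_ratio_for(grid_points, diffusivity,
                        stability_ratio=0.4, length=1.0):
    """Recompute r = D*dt/dx^2 with dt derived from dx, as in the module."""
    dx = length / (grid_points - 1)
    dt = stability_ratio * dx**2 / diffusivity
    return diffusivity * dt / dx**2


# Doubling the grid or scaling D leaves r pinned at STABILITY_RATIO:
for gp, d in [(50, 0.1), (100, 0.1), (50, 1.0)]:
    print(gp, d, round(stability_ratio_for(gp, d), 12))
```

Only changing STABILITY_RATIO itself moves r, because every other parameter cancels out of the ratio.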

Boundary Conditions

def step(c):
    """Return concentration after one explicit finite-difference time step.

    Prepend and append ghost cells equal to the boundary values to enforce
    zero-flux boundary conditions: no material enters or leaves the domain.
    This makes the total mass exactly conserved each step.
    """
    r = DIFFUSIVITY * DT / DX**2
    padded = np.concatenate([[c[0]], c, [c[-1]]])
    return c + r * (padded[2:] - 2 * padded[1:-1] + padded[:-2])

Put these operations in the correct order for one explicit time step with no-flux boundary conditions.

  1. Prepend c[0] and append c[-1] to form a padded array
  2. Compute the centred second difference from the padded array
  3. Multiply the second difference by the stability ratio r
  4. Add the result to the current concentration array c

The ghost-cell implementation conserves total mass exactly. Which boundary condition produces this behaviour?

Dirichlet (absorbing): c = 0 at both ends
Wrong: absorbing boundaries remove mass from the domain; total mass decreases over time.
Neumann (no-flux): zero gradient at both ends
Correct: zero-flux boundaries prevent material from entering or leaving, so mass is conserved at every step.
Periodic: c[0] = c[-1] at every step
Wrong: periodic boundaries also conserve mass, but they are implemented differently — the concentration wraps around, not reflects.
No boundary condition is needed — mass is conserved automatically
Wrong: without an explicit boundary condition the update formula has no information about what happens at the grid edges.
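The mass-conservation claim is easy to check numerically. This self-contained sketch re-implements the ghost-cell step inline and measures the relative drift in total mass over a full run:

```python
import numpy as np


def step_noflux(c, r):
    # Ghost cells equal to the boundary values => zero-flux boundaries.
    padded = np.concatenate([[c[0]], c, [c[-1]]])
    return c + r * (padded[2:] - 2 * padded[1:-1] + padded[:-2])


x = np.linspace(0.0, 1.0, 50)
c = np.exp(-((x - 0.5) ** 2) / (2 * 0.05**2))
mass0 = c.sum()
for _ in range(500):
    c = step_noflux(c, 0.4)
drift = abs(c.sum() - mass0) / mass0
print(f"relative mass drift after 500 steps: {drift:.2e}")
```

The drift comes only from floating-point rounding and lands comfortably below the 1e-10 tolerance used in the test suite.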

Running the Simulation

import polars as pl


def simulate():
    """Run the simulation and return concentration snapshots as a DataFrame."""
    x, c = make_initial()
    records = _make_records(x, c, 0.0)
    for i in range(1, N_STEPS + 1):
        c = step(c)
        if i % SNAPSHOT_INTERVAL == 0:
            records += _make_records(x, c, round(i * DT, 6))
    return pl.DataFrame(records)


def _make_records(x, c, time):
    return [
        {"x": float(xi), "concentration": float(ci), "time": time}
        for xi, ci in zip(x, c)
    ]
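With these constants, simulate records six profiles in total: the initial condition plus one snapshot every 100 steps. The snapshot times follow directly from DT (a quick check, with DT spelled out inline):

```python
DX = 1.0 / 49                 # LENGTH / (GRID_POINTS - 1)
DT = 0.4 * DX**2 / 0.1        # STABILITY_RATIO * DX**2 / DIFFUSIVITY
snapshot_times = [0.0] + [round(i * DT, 6) for i in range(100, 501, 100)]
print(len(snapshot_times), snapshot_times)
```

The last snapshot lands at t ≈ 0.833, the total simulated time.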

Visualizing the Results

import altair as alt


def plot(df):
    """Return an Altair line chart of concentration profiles over time."""
    return (
        alt.Chart(df)
        .mark_line()
        .encode(
            x=alt.X("x", title="Position"),
            y=alt.Y("concentration", title="Concentration"),
            color=alt.Color("time:O", title="Time (arb. units)"),
        )
        .properties(width=400, height=300)
    )
Six concentration profiles at t = 0 through t ≈ 0.83, showing a Gaussian pulse flattening and broadening over time.
Figure 1: Concentration profiles at six equally spaced snapshots. The pulse spreads and flattens; the area under each curve (total mass) stays constant.

Testing

Tests validate that the simulation behaves correctly both physically and numerically.

Stability check

r = DIFFUSIVITY * DT / DX**2
assert r <= 0.5

Mass conservation

Symmetry

Monotone peak decrease

Comparison to analytical solution

$$\sigma^2(t) = \sigma_0^2 + 2Dt$$

Choosing the tolerance

import numpy as np
from diffusion import (
    DIFFUSIVITY,
    DT,
    DX,
    N_STEPS,
    PULSE_CENTER,
    PULSE_WIDTH,
    SNAPSHOT_INTERVAL,
    make_initial,
    step,
)


def test_stability_ratio():
    # The explicit scheme is stable only when r = D*dt/dx^2 <= 0.5.
    # This test catches accidental changes to DIFFUSIVITY, DT, or DX
    # that would break the simulation.
    r = DIFFUSIVITY * DT / DX**2
    assert r <= 0.5, f"stability ratio {r:.4f} exceeds 0.5"


def test_mass_conservation():
    # With zero-flux boundaries, the total amount of substance is conserved.
    # The ghost-cell implementation makes the correction term sum to zero in
    # exact arithmetic.  Floating-point rounding accumulates at roughly
    # N_STEPS * machine_epsilon, so a relative tolerance of 1e-10 is safe.
    _, c = make_initial()
    initial_mass = c.sum() * DX
    for _ in range(N_STEPS):
        c = step(c)
    final_mass = c.sum() * DX
    assert abs(final_mass - initial_mass) / initial_mass < 1e-10


def test_symmetry():
    # The Gaussian initial condition is symmetric about x = 0.5 and the
    # boundary conditions are identical at both ends, so the profile must
    # remain symmetric at every step.  Tolerance is 1e-12 (floating-point
    # symmetry of the arithmetic operations).
    _, c = make_initial()
    for _ in range(N_STEPS):
        c = step(c)
        assert np.max(np.abs(c - c[::-1])) < 1e-12


def test_peak_decreases():
    # As the pulse spreads out, its peak concentration must fall monotonically.
    _, c = make_initial()
    peaks = [c.max()]
    for i in range(1, N_STEPS + 1):
        c = step(c)
        if i % SNAPSHOT_INTERVAL == 0:
            peaks.append(c.max())
    assert all(peaks[i] > peaks[i + 1] for i in range(len(peaks) - 1))


def test_matches_analytical():
    # On an infinite domain a Gaussian pulse with standard deviation sigma_0
    # spreads so that sigma^2(t) = sigma_0^2 + 2*D*t (Crank 1975).  After
    # 20 steps the pulse tails are more than 4 sigma from each wall, so
    # boundary effects are negligible.
    # The scheme has truncation error O(DT + DX^2) ~ 2e-3 with our parameters.
    # The measured maximum error at n=20 is ~3e-3 (the prefactor exceeds 1).
    # A tolerance of 5e-3 gives a safety factor of ~1.7 over the measured error.
    n_check = 20
    x, c = make_initial()
    for _ in range(n_check):
        c = step(c)
    t = n_check * DT
    sigma2 = PULSE_WIDTH**2 + 2 * DIFFUSIVITY * t
    analytical = (PULSE_WIDTH / np.sqrt(sigma2)) * np.exp(
        -((x - PULSE_CENTER) ** 2) / (2 * sigma2)
    )
    assert np.max(np.abs(c - analytical)) < 5e-3

Using the analytical formula $\sigma^2(t) = \sigma_0^2 + 2Dt$, compute $\sigma^2$ after N_STEPS = 500 steps. Use PULSE_WIDTH = 0.05 for $\sigma_0$, DIFFUSIVITY = 0.1, GRID_POINTS = 50, LENGTH = 1.0, and note that DT = STABILITY_RATIO * DX**2 / DIFFUSIVITY. Give your answer to three decimal places.

Exercises

Step-function initial condition

Replace the Gaussian initial condition in make_initial with a step function: $c(x) = 1$ for $x < 0.5$ and $c(x) = 0$ for $x \ge 0.5$. Run the simulation and plot the results. Does test_mass_conservation still pass? Does test_symmetry pass or fail, and why?

Absorbing boundaries

Change the boundary conditions from zero-flux to absorbing: set c[0] = 0 and c[-1] = 0 after each update instead of using ghost cells. Run the simulation and observe how the total mass changes over time. Modify test_mass_conservation to check that mass decreases monotonically rather than remaining constant.

Demonstrating instability

Set STABILITY_RATIO = 0.6 (above the stability limit of 0.5) and run the simulation for 20 steps. Describe what happens to the concentration values. Confirm that test_stability_ratio catches this before any time-stepping occurs.

Grid refinement and convergence

Double GRID_POINTS to 100 (leaving STABILITY_RATIO unchanged so DT is recalculated automatically). Measure the new maximum error in test_matches_analytical at $n = 20$ steps. The truncation error scales as $O(\Delta t + \Delta x^2)$; with twice as many points, $\Delta x$ halves and $\Delta t$ (derived from $\Delta x^2$) quarters. Is the observed reduction in error consistent with this prediction?