Chapter 8: Market Regime-Switching Model Practice

Haiyue

Learning Objectives
  • Implement Markov regime-switching models
  • Identify bull and bear market state transitions
  • Use EM algorithm for parameter estimation
  • Build regime-switching based investment strategies

Knowledge Summary

1. Markov Switching Autoregressive Model (MS-AR)

Basic Form: $r_t = \mu_{s_t} + \phi_{s_t} r_{t-1} + \sigma_{s_t}\epsilon_t$

where:

  • $s_t \in \{1, 2, \ldots, k\}$ is the regime state
  • $\mu_{s_t}, \phi_{s_t}, \sigma_{s_t}$ are state-dependent parameters
  • $\epsilon_t \sim N(0,1)$
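
The two-regime case is easiest to see in a short simulation. This is a minimal sketch with hand-picked (hypothetical) parameter values, showing how the mean, AR coefficient, and volatility all switch with the hidden Markov state:

```python
import numpy as np

# Minimal sketch: simulate a two-state MS-AR(1) path.
# All parameter values below are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
mu    = np.array([0.001, -0.002])   # state-dependent intercepts
phi   = np.array([0.10,   0.30])    # state-dependent AR(1) coefficients
sigma = np.array([0.01,   0.03])    # state-dependent volatilities
P = np.array([[0.98, 0.02],         # P[i, j] = P(s_{t+1} = j | s_t = i)
              [0.05, 0.95]])

T = 1000
s = np.zeros(T, dtype=int)
r = np.zeros(T)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t-1]])                                # Markov transition
    r[t] = mu[s[t]] + phi[s[t]] * r[t-1] + sigma[s[t]] * rng.standard_normal()

# Sample volatility should differ noticeably across the two regimes
print(r[s == 0].std(), r[s == 1].std())
```

Because state 1 carries three times the noise scale of state 0, the realized path shows the volatility clustering that regime-switching models are designed to capture.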

2. EM Algorithm Parameter Estimation

E Step: Compute the smoothed state probabilities $\xi_t(i,j) = P(s_t = i, s_{t+1} = j \mid r_{1:T}, \theta^{(k)})$ and $\gamma_t(i) = P(s_t = i \mid r_{1:T}, \theta^{(k)})$

M Step: Update the parameters, e.g. the state-dependent mean $\mu_i^{(k+1)} = \frac{\sum_{t=1}^T \gamma_t(i)(r_t - \phi_i r_{t-1})}{\sum_{t=1}^T \gamma_t(i)}$
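
As a sanity check on the mean update, here is a tiny numeric example with made-up smoothing weights $\gamma_t(i)$ and a fixed $\phi_i$; the update is just a weighted average of the AR residuals:

```python
import numpy as np

# Toy numeric check of the M-step mean update with phi_i fixed at 0.5:
# mu_i = sum_t gamma_t(i) * (r_t - phi_i * r_{t-1}) / sum_t gamma_t(i)
gamma_i = np.array([0.9, 0.2, 0.8, 0.7])        # hypothetical smoothed weights for state i
r       = np.array([0.02, -0.01, 0.03, 0.01])   # returns r_1 .. r_4
r_lag   = np.array([0.01, 0.02, -0.01, 0.03])   # lagged returns r_0 .. r_3
phi_i = 0.5

mu_i = np.sum(gamma_i * (r - phi_i * r_lag)) / np.sum(gamma_i)
print(mu_i)
```

Observations where $\gamma_t(i)$ is near zero barely move the estimate, which is how the EM update "assigns" each data point to the regime most likely to have generated it.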

Example Code

import numpy as np
import pandas as pd
from scipy.optimize import minimize
from scipy.special import logsumexp
import matplotlib.pyplot as plt

class MarkovSwitchingAR:
    """Markov Regime-Switching Autoregressive Model"""

    def __init__(self, n_states=2, order=1):
        self.n_states = n_states
        self.order = order
        self.params = {}

    def fit(self, data, max_iter=100, tol=1e-6):
        """Estimate parameters using EM algorithm"""
        # Initialize parameters
        self._initialize_parameters(data)

        log_likelihood_old = -np.inf

        for iteration in range(max_iter):
            # E step: Calculate state probabilities
            gamma, xi, log_likelihood = self._e_step(data)

            # M step: Update parameters
            self._m_step(data, gamma, xi)

            # Check convergence
            if abs(log_likelihood - log_likelihood_old) < tol:
                print(f"EM algorithm converged after {iteration+1} iterations")
                break

            log_likelihood_old = log_likelihood

        return self

    def _initialize_parameters(self, data):
        """Initialize parameters"""
        # Simple initialization
        # Data-driven initialization: center scales on the sample moments so
        # the EM iterations start in a plausible region of the parameter space
        self.params['mu'] = np.mean(data) + np.random.normal(0, 0.01, self.n_states)
        self.params['phi'] = np.random.uniform(0.0, 0.3, (self.n_states, self.order))
        self.params['sigma'] = np.std(data) * np.random.uniform(0.5, 1.5, self.n_states)
        self.params['transition'] = np.random.dirichlet(np.ones(self.n_states) * 10, self.n_states)

    def _e_step(self, data):
        """E step: forward-backward (Hamilton filter and smoother) in log space"""
        y, y_lag = data[1:], data[:-1]
        T, k = len(y), self.n_states
        mu = self.params['mu']
        phi = self.params['phi'][:, 0]
        sigma = self.params['sigma']
        log_P = np.log(self.params['transition'])

        # Log density of each observation under each state (AR(1) conditional mean)
        log_b = np.stack([
            -0.5 * np.log(2 * np.pi * sigma[j] ** 2)
            - 0.5 * ((y - mu[j] - phi[j] * y_lag) / sigma[j]) ** 2
            for j in range(k)], axis=1)

        # Forward pass (uniform initial state distribution)
        log_alpha = np.zeros((T, k))
        log_alpha[0] = -np.log(k) + log_b[0]
        for t in range(1, T):
            log_alpha[t] = log_b[t] + logsumexp(log_alpha[t - 1][:, None] + log_P, axis=0)

        # Backward pass
        log_beta = np.zeros((T, k))
        for t in range(T - 2, -1, -1):
            log_beta[t] = logsumexp(log_P + log_b[t + 1] + log_beta[t + 1], axis=1)

        log_likelihood = logsumexp(log_alpha[-1])
        gamma = np.exp(log_alpha + log_beta - log_likelihood)
        xi = np.exp(log_alpha[:-1, :, None] + log_P[None] + log_b[1:, None, :]
                    + log_beta[1:, None, :] - log_likelihood)
        return gamma, xi, log_likelihood

    def _m_step(self, data, gamma, xi):
        """M step: weighted-regression updates of (mu, phi, sigma) and transitions"""
        y, y_lag = data[1:], data[:-1]
        X = np.column_stack([np.ones_like(y_lag), y_lag])
        for j in range(self.n_states):
            w = gamma[:, j]
            WX = X * w[:, None]
            beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares
            self.params['mu'][j] = beta[0]
            self.params['phi'][j, 0] = beta[1]
            resid = y - X @ beta
            self.params['sigma'][j] = np.sqrt((w * resid ** 2).sum() / w.sum())
        counts = xi.sum(axis=0)
        self.params['transition'] = counts / counts.sum(axis=1, keepdims=True)

    def predict_regime(self, data):
        """Predict regime states via smoothed probabilities"""
        gamma, _, _ = self._e_step(data)
        states = np.argmax(gamma, axis=1)
        # gamma covers t = 1..T-1 (one observation is lost to the lag),
        # so repeat the first smoothed state for t = 0
        return np.concatenate([states[:1], states])

# Example: Building investment strategy
def regime_switching_strategy(returns, regimes):
    """Regime-switching based investment strategy"""
    positions = np.zeros_like(returns)

    # Long in bull market, short or cash in bear market
    bull_regime = 0  # Assume state 0 is bull market
    bear_regime = 1  # Assume state 1 is bear market

    for t in range(len(returns)):
        if regimes[t] == bull_regime:
            positions[t] = 1.0  # Full position long
        elif regimes[t] == bear_regime:
            positions[t] = -0.5  # Half position short
        else:
            positions[t] = 0.0  # Cash

    strategy_returns = positions[:-1] * returns[1:]
    return strategy_returns, positions
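
The strategy above assumes state 0 is the bull regime, but EM state labels are arbitrary: nothing forces the high-mean regime to come out as state 0. A small sketch of relabeling states by their estimated means before mapping them to positions (`mu_hat` and `regimes` here are hypothetical fitted outputs):

```python
import numpy as np

def relabel_by_mean(regimes, mu_hat):
    """Relabel states so that state 0 is the highest-mean ('bull') regime."""
    order = np.argsort(mu_hat)[::-1]              # states sorted by mean, descending
    mapping = {old: new for new, old in enumerate(order)}
    return np.array([mapping[s] for s in regimes])

mu_hat = np.array([-0.012, 0.018])                # hypothetical estimated state means
regimes = np.array([0, 0, 1, 1, 0])
print(relabel_by_mean(regimes, mu_hat))           # state 1 (higher mean) becomes 0
```

After relabeling, the bull/bear mapping in `regime_switching_strategy` is stable across different random initializations of the EM algorithm.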

# Generate example data and apply model
np.random.seed(42)
T = 500

# Simulate two-regime data with persistent (Markov) states,
# matching the switching structure the model assumes
P_true = np.array([[0.95, 0.05],
                   [0.10, 0.90]])
true_regimes = np.zeros(T, dtype=int)
for t in range(1, T):
    true_regimes[t] = np.random.choice(2, p=P_true[true_regimes[t - 1]])

mu_true = [0.02, -0.01]
sigma_true = [0.15, 0.25]

returns = np.zeros(T)
for t in range(T):
    regime = true_regimes[t]
    returns[t] = np.random.normal(mu_true[regime], sigma_true[regime])

# Fit model
ms_model = MarkovSwitchingAR(n_states=2)
ms_model.fit(returns)

# Predict regimes
predicted_regimes = ms_model.predict_regime(returns)

# Build strategy
strategy_rets, positions = regime_switching_strategy(returns, predicted_regimes)

print(f"Strategy annualized return: {np.mean(strategy_rets) * 252:.2%}")
print(f"Strategy annualized volatility: {np.std(strategy_rets) * np.sqrt(252):.2%}")
print(f"Benchmark annualized return: {np.mean(returns) * 252:.2%}")
print(f"Benchmark annualized volatility: {np.std(returns) * np.sqrt(252):.2%}")
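
A single risk-adjusted number makes the comparison above easier to read. This is a small sketch (zero risk-free rate assumed; the `strategy_rets` and `benchmark_rets` series below are synthetic placeholders, not outputs of the fitted model):

```python
import numpy as np

def sharpe(daily_returns, periods=252):
    """Annualized Sharpe ratio, zero risk-free rate assumed."""
    return np.mean(daily_returns) / np.std(daily_returns) * np.sqrt(periods)

rng = np.random.default_rng(3)
strategy_rets = rng.normal(0.0006, 0.010, 1000)    # hypothetical daily series
benchmark_rets = rng.normal(0.0004, 0.015, 1000)   # hypothetical daily series
print(sharpe(strategy_rets), sharpe(benchmark_rets))
```

The same function can be applied directly to the `strategy_rets` and `returns` arrays produced earlier in this chapter.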

Theoretical Analysis

Hamilton Filter

Filtering Probability: $P(s_t = j \mid r_{1:t}) = \frac{f(r_t \mid s_t = j, r_{1:t-1})\, P(s_t = j \mid r_{1:t-1})}{\sum_{i=1}^k f(r_t \mid s_t = i, r_{1:t-1})\, P(s_t = i \mid r_{1:t-1})}$

Prediction Probability: $P(s_{t+1} = j \mid r_{1:t}) = \sum_{i=1}^k P_{ij}\, P(s_t = i \mid r_{1:t})$
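
The two recursions above alternate at each time step. A minimal sketch of the filter for a k-state Gaussian model with state-dependent means and volatilities (`mu`, `sigma`, `P` below are hypothetical inputs; the AR term is omitted to keep the recursion visible):

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(r, mu, sigma, P):
    """One forward pass: filtering + prediction probabilities and the log-likelihood."""
    k, T = len(mu), len(r)
    filtered = np.zeros((T, k))
    pred = np.full(k, 1.0 / k)                     # P(s_0 = j), uniform prior
    loglik = 0.0
    for t in range(T):
        lik = norm.pdf(r[t], loc=mu, scale=sigma)  # f(r_t | s_t = j, r_{1:t-1})
        joint = lik * pred
        denom = joint.sum()
        filtered[t] = joint / denom                # filtering probability (Bayes step)
        loglik += np.log(denom)                    # accumulate the log-likelihood
        pred = filtered[t] @ P                     # prediction probability for t+1
    return filtered, loglik

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.02, size=50)                 # synthetic return series
mu = np.array([0.01, -0.01])
sigma = np.array([0.01, 0.03])
P = np.array([[0.9, 0.1], [0.2, 0.8]])
filt, ll = hamilton_filter(r, mu, sigma, P)
print(filt[-1], ll)
```

Each row of `filtered` sums to one, and the accumulated `loglik` is exactly the quantity that EM (or direct numerical maximization) optimizes.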

Model Selection

Information Criteria:

  • AIC: $-2\log L + 2p$
  • BIC: $-2\log L + p\log T$
  • HQ: $-2\log L + 2p\log\log T$
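
Applying the three criteria to choose the number of states is a one-liner each; the log-likelihood values below are hypothetical, and the parameter count assumes an MS-AR(1) with $\mu$, $\phi$, $\sigma$ per state plus the free transition probabilities:

```python
import numpy as np

def info_criteria(log_l, p, T):
    """Return (AIC, BIC, HQ) for log-likelihood log_l, p parameters, T observations."""
    aic = -2 * log_l + 2 * p
    bic = -2 * log_l + p * np.log(T)
    hq  = -2 * log_l + 2 * p * np.log(np.log(T))
    return aic, bic, hq

# A k-state MS-AR(1) has p = 3k + k(k-1) free parameters:
# (mu, phi, sigma) per state plus the off-diagonal transition probabilities
for k, log_l in [(2, 812.4), (3, 816.1)]:          # hypothetical fitted log-likelihoods
    p = 3 * k + k * (k - 1)
    print(k, info_criteria(log_l, p, T=500))
```

BIC penalizes parameters more heavily than AIC for any $T > e^2 \approx 7.4$, so it tends to select fewer states; in practice comparing all three guards against over-fitting the regime count.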

Mathematical Formula Summary

  1. Regime-switching AR model: $r_t = \mu_{s_t} + \sum_{i=1}^p \phi_{i,s_t} r_{t-i} + \sigma_{s_t}\epsilon_t$

  2. Transition probability: $P_{ij} = P(s_{t+1} = j \mid s_t = i)$

  3. EM algorithm update:

    • $\hat{\mu}_i = \frac{\sum_t \gamma_t(i) y_t}{\sum_t \gamma_t(i)}$
    • $\hat{\sigma}_i^2 = \frac{\sum_t \gamma_t(i)(y_t - \hat{\mu}_i)^2}{\sum_t \gamma_t(i)}$
  4. Log-likelihood: $\ell = \sum_{t=1}^T \log\sum_{j=1}^k \alpha_t(j)$
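
The EM update the summary leaves implicit is the transition matrix itself, $\hat{P}_{ij} = \sum_t \xi_t(i,j) / \sum_t \gamma_t(i)$. A small sketch with synthetic $\xi_t$ arrays standing in for real E-step output:

```python
import numpy as np

# Transition-matrix update: P_hat[i, j] = sum_t xi_t(i, j) / sum_t gamma_t(i).
# The xi values below are random placeholders for genuine E-step output.
rng = np.random.default_rng(2)
T, k = 100, 2
xi = rng.random((T - 1, k, k))
xi /= xi.sum(axis=(1, 2), keepdims=True)   # each xi_t is a joint distribution over (i, j)
gamma = xi.sum(axis=2)                     # gamma_t(i) = sum_j xi_t(i, j)

P_hat = xi.sum(axis=0) / gamma.sum(axis=0)[:, None]
print(P_hat)                               # rows sum to 1 by construction
```

Because $\gamma_t(i) = \sum_j \xi_t(i,j)$, each row of $\hat{P}$ sums to one automatically, so the update always yields a valid stochastic matrix.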

Application Considerations
  • Regime identification has inherent lag
  • Need sufficient sample to estimate parameters
  • Model selection (number of states) is important
  • Regime characteristics may change over time