Chapter 2: Introduction to Dynamic Systems and State Estimation


Learning Objectives
  • Understand the basic concepts and mathematical descriptions of dynamic systems
  • Master the definitions of state variables and observation variables
  • Learn about noise models and uncertainty description methods

Knowledge Summary

1. Basic Concepts of Dynamic Systems

Mathematical Description of Systems

A dynamic system is one whose state changes over time, mathematically represented as:

$$\mathbf{x}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k, \mathbf{w}_k)$$
$$\mathbf{y}_k = h(\mathbf{x}_k, \mathbf{v}_k)$$

Where:

  • $\mathbf{x}_k \in \mathbb{R}^n$: State vector
  • $\mathbf{u}_k \in \mathbb{R}^m$: Control input
  • $\mathbf{y}_k \in \mathbb{R}^p$: Observation vector
  • $\mathbf{w}_k$: Process noise
  • $\mathbf{v}_k$: Observation noise
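
To make the notation concrete, here is a minimal sketch of how such a model is stepped forward in code. The particular choices of f and h below (a tanh transition and a squared observation) are illustrative placeholders, not models used later in this chapter.

import numpy as np

# Illustrative (hypothetical) system functions
def f(x, u, w):
    # Nonlinear state transition: saturating dynamics plus control and process noise
    return np.tanh(x) + u + w

def h(x, v):
    # Nonlinear observation: measure the squared state, corrupted by observation noise
    return x**2 + v

rng = np.random.default_rng(0)
x = np.array([0.5])                   # initial state
for k in range(5):
    u_k = np.array([0.1])             # control input
    w_k = rng.normal(0, 0.1, size=1)  # process noise
    v_k = rng.normal(0, 0.05, size=1) # observation noise
    x = f(x, u_k, w_k)                # state equation
    y = h(x, v_k)                     # observation equation
    print(k, x, y)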

Linear Dynamic Systems

When the system functions are linear, they can be expressed as:

$$\mathbf{x}_{k+1} = F_k \mathbf{x}_k + B_k \mathbf{u}_k + \mathbf{w}_k$$
$$\mathbf{y}_k = H_k \mathbf{x}_k + \mathbf{v}_k$$
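
As a quick numerical illustration (a sketch with arbitrarily chosen matrices, not the chapter's worked example), a single step of the linear model looks like this; it includes the control term $B_k \mathbf{u}_k$, which the longer simulation in the Example Code section omits.

import numpy as np

rng = np.random.default_rng(1)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # state transition (position-velocity, dt = 0.1)
B = np.array([[0.0],
              [0.1]])                 # control matrix: acceleration input acts on velocity
H = np.array([[1.0, 0.0]])            # observe position only

x_k = np.array([[0.0], [1.0]])        # current state: position 0, velocity 1
u_k = np.array([[0.5]])               # illustrative control input (acceleration)
w_k = rng.multivariate_normal([0, 0], 0.01 * np.eye(2)).reshape(2, 1)
v_k = rng.normal(0, 0.1, size=(1, 1))

x_next = F @ x_k + B @ u_k + w_k      # state equation
y_k = H @ x_k + v_k                   # observation equation
print("x_{k+1} =", x_next.ravel(), " y_k =", y_k.ravel())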

2. State Variables and Observation Variables

Principles for Selecting State Variables

  • Minimality: State vector should contain minimal information needed to describe system dynamics
  • Observability: States should be inferable from observations
  • Physical meaning: State variables should have clear physical or economic significance

Characteristics of Observation Variables

  • Partial observability: Usually only part of the system state is observable
  • Noise contamination: Observation data contains measurement errors
  • Sampling frequency: Observations may arrive less frequently than the underlying state changes (see the sketch after this list)
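
A minimal sketch of these three characteristics, assuming an arbitrary two-component random-walk state: only the first component is measured, measurements are noisy, and they arrive only every few steps.

import numpy as np

rng = np.random.default_rng(2)
T, obs_every = 50, 5                   # 50 state transitions, one observation every 5 steps
x = np.zeros(2)                        # hidden state with two components
H = np.array([1.0, 0.0])               # partial observability: only the first component is measured

observations = []                      # (time index, noisy measurement) pairs
for k in range(T):
    x = x + rng.normal(0, 0.1, size=2)        # the state evolves at every step
    if k % obs_every == 0:                    # ...but is sampled less often
        y = H @ x + rng.normal(0, 0.2)        # noise-contaminated observation
        observations.append((k, y))

print(f"{T} state transitions, {len(observations)} observations")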

3. Noise Models and Uncertainty

Process Noise

Represents uncertainty in the system model:

  • Model error: Errors from simplifying assumptions
  • Unmodeled dynamics: System behavior omitted from the model
  • External disturbances: Environmental changes and other external factors

Observation Noise

Represents uncertainty in the measurement process:

  • Sensor error: Equipment precision limitations
  • Quantization error: Rounding errors introduced when measurements are digitized (see the sketch after this list)
  • Environmental interference: Effects of measurement environment
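
As one concrete illustration of quantization error (a sketch assuming a hypothetical $0.01 tick size, not a value given in this chapter), rounding a 'true' price to the nearest tick introduces a bounded measurement error:

import numpy as np

rng = np.random.default_rng(3)
tick = 0.01                                            # assumed minimum price increment (hypothetical)
true_prices = 100 + np.cumsum(rng.normal(0, 0.02, size=10))
quoted_prices = np.round(true_prices / tick) * tick    # quantized (observed) prices
quantization_error = quoted_prices - true_prices       # bounded by +/- tick / 2

print("max |error|:", np.max(np.abs(quantization_error)))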

Statistical Properties of Noise

Noise is typically assumed to have the following properties:

  • Zero mean: $E[\mathbf{w}_k] = 0$, $E[\mathbf{v}_k] = 0$
  • Independence: Noise at different times is mutually independent
  • Normal distribution: $\mathbf{w}_k \sim \mathcal{N}(0, Q_k)$, $\mathbf{v}_k \sim \mathcal{N}(0, R_k)$
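
These assumptions can be checked empirically on simulated noise. The sketch below uses an arbitrary Q; sample statistics only approximate the true moments.

import numpy as np

rng = np.random.default_rng(4)
Q = np.array([[0.01, 0.0],
              [0.0, 0.02]])                        # assumed process noise covariance
w = rng.multivariate_normal([0, 0], Q, size=10_000)

print("sample mean:", w.mean(axis=0))              # close to zero (zero mean)
print("sample covariance:\n", np.cov(w.T))         # close to Q (Gaussian with covariance Q)
# Independence across time: consecutive draws should be (nearly) uncorrelated
print("lag-1 correlation:", np.corrcoef(w[:-1, 0], w[1:, 0])[0, 1])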

Example Code

Simple Linear Dynamic System Simulation

import numpy as np
import matplotlib.pyplot as plt

# Define a simple linear dynamic system
# State: [position, velocity]^T
# Observation: position

def simulate_linear_system(T=100):
    """
    Simulate one-dimensional motion system
    State equation: x_{k+1} = F * x_k + w_k
    Observation equation: y_k = H * x_k + v_k
    """
    dt = 0.1  # Time step

    # State transition matrix (position-velocity model)
    F = np.array([[1, dt],
                  [0, 1]])

    # Observation matrix (observe position only)
    H = np.array([[1, 0]])

    # Noise covariance matrices
    Q = np.array([[0.01, 0],      # Process noise covariance
                  [0, 0.01]])
    R = np.array([[0.1]])         # Observation noise covariance

    # Initial state
    x_true = np.array([[0],       # Initial position
                       [1]])      # Initial velocity

    # Store true states and observations
    states_true = np.zeros((2, T))
    observations = np.zeros((1, T))

    for k in range(T):
        # State update
        w_k = np.random.multivariate_normal([0, 0], Q).reshape(2, 1)
        x_true = F @ x_true + w_k
        states_true[:, k] = x_true.flatten()

        # Generate observation
        v_k = np.random.multivariate_normal([0], R).reshape(1, 1)
        y_k = H @ x_true + v_k
        observations[:, k] = y_k.flatten()

    return states_true, observations, F, H, Q, R

# Run simulation
states_true, observations, F, H, Q, R = simulate_linear_system()

# Plot results
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))

# Position comparison
ax1.plot(states_true[0, :], 'b-', label='True Position', linewidth=2)
ax1.plot(observations[0, :], 'r.', label='Observed Position', alpha=0.6, markersize=4)
ax1.set_ylabel('Position')
ax1.set_title('State vs Observation: Position')
ax1.legend()
ax1.grid(True)

# Velocity (unobservable)
ax2.plot(states_true[1, :], 'g-', label='True Velocity', linewidth=2)
ax2.set_ylabel('Velocity')
ax2.set_xlabel('Time Step')
ax2.set_title('Hidden State: Velocity (Not Directly Observable)')
ax2.legend()
ax2.grid(True)

plt.tight_layout()
plt.show()

print("System matrices:")
print(f"State transition matrix F:\n{F}")
print(f"Observation matrix H:\n{H}")
print(f"Process noise covariance Q:\n{Q}")
print(f"Observation noise covariance R:\n{R}")

Financial Application: Stock Price Dynamics Modeling

# Stock price dynamic system modeling example
def stock_price_model(T=252):
    """
    State-space model for stock price
    State: [log_price, drift]^T
    Observation: log_price (with noise)
    """

    # State transition matrix
    # log_price_{t+1} = log_price_t + drift_t + w1_t
    # drift_{t+1} = drift_t + w2_t
    F = np.array([[1, 1],
                  [0, 1]])

    # Observation matrix (observe log price)
    H = np.array([[1, 0]])

    # Process noise covariance (uncertainty in price movement and drift change)
    sigma_price = 0.02    # Daily price volatility
    sigma_drift = 0.001   # Drift change
    Q = np.array([[sigma_price**2, 0],
                  [0, sigma_drift**2]])

    # Observation noise covariance (measurement error)
    R = np.array([[0.0001]])

    # Initial state
    x_true = np.array([[4.6],      # log(100) ≈ 4.6
                       [0.0004]])  # Daily drift ≈ 0.0004 (about 10% annualized return)

    # Simulation
    states_true = np.zeros((2, T))
    observations = np.zeros((1, T))

    for k in range(T):
        # State update
        w_k = np.random.multivariate_normal([0, 0], Q).reshape(2, 1)
        x_true = F @ x_true + w_k
        states_true[:, k] = x_true.flatten()

        # Generate observation
        v_k = np.random.multivariate_normal([0], R).reshape(1, 1)
        y_k = H @ x_true + v_k
        observations[:, k] = y_k.flatten()

    # Convert to prices
    true_prices = np.exp(states_true[0, :])
    observed_prices = np.exp(observations[0, :])

    return true_prices, observed_prices, states_true, F, H, Q, R

# Run stock price simulation
np.random.seed(42)
true_prices, observed_prices, states, F, H, Q, R = stock_price_model()

# Plot stock price dynamics
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(12, 10))

# Price trajectory
ax1.plot(true_prices, 'b-', label='True Price', linewidth=2)
ax1.plot(observed_prices, 'r-', label='Observed Price', alpha=0.7)
ax1.set_ylabel('Price')
ax1.set_title('Stock Price Dynamic System Simulation')
ax1.legend()
ax1.grid(True)

# Log price
ax2.plot(states[0, :], 'b-', label='True Log Price', linewidth=2)
ax2.plot(np.log(observed_prices), 'r-', label='Observed Log Price', alpha=0.7)
ax2.set_ylabel('Log Price')
ax2.set_title('Log Price Dynamics')
ax2.legend()
ax2.grid(True)

# Drift term (hidden state)
ax3.plot(states[1, :], 'g-', label='Price Drift', linewidth=2)
ax3.set_ylabel('Drift')
ax3.set_xlabel('Time (Trading Days)')
ax3.set_title('Price Drift (Hidden State)')
ax3.legend()
ax3.grid(True)

plt.tight_layout()
plt.show()

# Calculate statistical features
price_returns = np.diff(np.log(true_prices))
print(f"Annualized return: {np.mean(price_returns) * 252:.3f}")
print(f"Annualized volatility: {np.std(price_returns) * np.sqrt(252):.3f}")

System Observability Analysis

def check_observability(F, H):
    """
    Check observability of linear system
    Observability matrix: O = [H; HF; HF²; ...; HF^(n-1)]
    """
    n = F.shape[0]  # State dimension

    # Construct observability matrix
    O = H.copy()
    HF_power = H.copy()

    for i in range(1, n):
        HF_power = HF_power @ F
        O = np.vstack([O, HF_power])

    # Calculate rank
    rank_O = np.linalg.matrix_rank(O)
    is_observable = (rank_O == n)

    return O, rank_O, is_observable

# Test observability. Note: F and H here are those returned by stock_price_model();
# the position-velocity system has the same H and an analogous F, so the conclusion is identical.
O, rank_O, is_observable = check_observability(F, H)

print("Observability analysis:")
print(f"State dimension: {F.shape[0]}")
print(f"Observability matrix rank: {rank_O}")
print(f"System observable: {is_observable}")
print(f"Observability matrix:\n{O}")

# Can the hidden state (drift, or velocity in the position-velocity model) be inferred
# when only the price / position is observed?
if is_observable:
    print("✓ The hidden component can be inferred from the observations")
else:
    print("✗ The system state cannot be fully determined from the observations alone")

Dynamic System Classification

(Diagram: classification of dynamic systems.)

Noise Model Comparison

| Noise Type | Mathematical Representation | Physical Meaning | Financial Application |
|---|---|---|---|
| Process Noise | $\mathbf{w}_k \sim \mathcal{N}(0, Q_k)$ | Model uncertainty | Market microstructure noise |
| Observation Noise | $\mathbf{v}_k \sim \mathcal{N}(0, R_k)$ | Measurement error | Quote errors, data delays |
| Colored Noise | $w_k = \rho w_{k-1} + \epsilon_k$ | Correlated noise | Volatility clustering |
| Heavy-Tailed Noise | $w_k \sim t_\nu(0, \Sigma)$ | Extreme events | Financial crises, black swans |
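
The last two rows go beyond the standard Gaussian assumptions of this chapter. A minimal sketch of how such noise can be generated (parameter values are illustrative only):

import numpy as np

rng = np.random.default_rng(5)
T = 1000

# Colored noise: AR(1) process w_k = rho * w_{k-1} + eps_k
rho, sigma_eps = 0.8, 0.1
w_colored = np.zeros(T)
for k in range(1, T):
    w_colored[k] = rho * w_colored[k - 1] + rng.normal(0, sigma_eps)

# Heavy-tailed noise: scaled Student-t (nu = 5 degrees of freedom, chosen for illustration)
nu, scale = 5, 0.1
w_heavy = scale * rng.standard_t(nu, size=T)

print("lag-1 autocorrelation (colored):", np.corrcoef(w_colored[:-1], w_colored[1:])[0, 1])
excess_kurtosis = ((w_heavy - w_heavy.mean())**4).mean() / w_heavy.var()**2 - 3
print("excess kurtosis (heavy-tailed):", excess_kurtosis)   # positive, unlike a Gaussian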

Challenges in State Estimation Problems

Essence of Estimation Problems

The core challenge of state estimation is inferring hidden states from noisy observations:

  • Incomplete information: Only partial state is observable
  • Noise interference: Observation data contains random errors
  • Dynamics: State continuously changes over time
  • Real-time requirement: Need online estimation updates

Optimal Estimation Criteria

  1. Minimum Mean Square Error (MMSE): $\hat{\mathbf{x}}_k = \arg\min E[\|\mathbf{x}_k - \hat{\mathbf{x}}_k\|^2]$

  2. Maximum A Posteriori (MAP): $\hat{\mathbf{x}}_k = \arg\max p(\mathbf{x}_k \mid \mathbf{y}_{1:k})$

  3. Maximum Likelihood Estimation (MLE): $\hat{\mathbf{x}}_k = \arg\max p(\mathbf{y}_{1:k} \mid \mathbf{x}_k)$
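
For a scalar linear-Gaussian problem these criteria agree, which is one reason the Kalman filter is optimal in that setting. The sketch below uses arbitrary prior and noise values: the closed-form Gaussian posterior mean (which is both the MMSE and the MAP estimate) is compared against a Monte Carlo approximation of E[x | y].

import numpy as np

rng = np.random.default_rng(6)

# Scalar model: x ~ N(mu0, sigma0_sq), observation y = x + v with v ~ N(0, r)
mu0, sigma0_sq, r = 0.0, 1.0, 0.5
x_true = rng.normal(mu0, np.sqrt(sigma0_sq))
y = x_true + rng.normal(0, np.sqrt(r))

# Closed-form Gaussian posterior: its mean is both the MMSE and the MAP estimate
gain = sigma0_sq / (sigma0_sq + r)
post_mean = mu0 + gain * (y - mu0)

# Monte Carlo MMSE estimate E[x | y] via self-normalized importance sampling
# (prior as proposal, likelihood as weight)
samples = rng.normal(mu0, np.sqrt(sigma0_sq), size=200_000)
weights = np.exp(-(y - samples)**2 / (2 * r))
mmse_mc = np.sum(weights * samples) / np.sum(weights)

print(f"closed-form posterior mean: {post_mean:.4f}")
print(f"Monte Carlo MMSE estimate:  {mmse_mc:.4f}")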

Practical Application Considerations
  • Model assumptions: Linear assumptions are often approximations in practice
  • Noise characteristics: Noise in financial data is typically not normally distributed (heavy tails, volatility clustering)
  • Parameter time-variation: System parameters may change over time
  • Computational complexity: Computational burden for high-dimensional systems

Chapter Summary

This chapter introduced fundamental concepts of dynamic systems and state estimation:

  1. Dynamic system modeling: Learn to describe systems using state-space equations
  2. States and observations: Understand observability and incomplete information
  3. Noise modeling: Master mathematical description methods for uncertainty
  4. Estimation challenges: Recognize complexities in practical applications

These concepts lay the foundation for understanding Kalman filtering algorithms, especially in financial applications where we often face complex dynamic systems and multi-source noise challenges.
