A Python tool for analyzing absorbing Markov chains by computing expected steps to absorption from transient states.
This tool computes the fundamental matrix and expected number of steps to reach an absorbing state for each transient state in an absorbing Markov chain. The implementation uses the standard mathematical approach:
- Fundamental Matrix: $N = (I - Q)^{-1}$
- Expected Steps: $\mathbf{t} = N \times \mathbf{1}$ (where $\mathbf{1}$ is the vector of ones)
The diagram above shows an example absorbing Markov chain with three transient states (State 0, State 1, State 2) and one absorbing state. The arrows indicate transition probabilities between states.
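As a rough illustration of the two formulas above, the following NumPy snippet (a minimal sketch, not the tool's actual `main.py`) computes $N$ and $\mathbf{t}$ for the example Q matrix used throughout this README:

```python
import numpy as np

# Transient-to-transient transition probabilities; rows may sum to < 1,
# with the remainder being the per-step probability of absorption.
Q = np.array([
    [0.5, 0.3, 0.1],
    [0.2, 0.4, 0.1],
    [0.3, 0.3, 0.1],
])

I = np.eye(Q.shape[0])
N = np.linalg.inv(I - Q)        # fundamental matrix N = (I - Q)^(-1)
t = N @ np.ones(Q.shape[0])     # expected steps to absorption, t = N * 1

print("Fundamental matrix N:\n", N)
print("Expected steps to absorption:", t)
```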
Features:

- Load Q matrix from a JSON configuration file
- Automatic validation of Q matrix (ensures row sums ≤ 1)
- Compute fundamental matrix N
- Calculate expected steps to absorption for each transient state
- Support for custom state names
This project uses uv for dependency management. Install dependencies with:
```bash
uv sync
```

Edit `config.json` to define your Q matrix (transient-to-transient transitions):
```json
{
  "Q_matrix": [
    [0.5, 0.3, 0.1],
    [0.2, 0.4, 0.1],
    [0.3, 0.3, 0.1]
  ],
  "state_names": ["State 0", "State 1", "State 2"]
}
```

Important:
- The Q matrix represents transitions between transient states only
- Each row must sum to ≤ 1.0 (the remainder represents probability of transitioning to absorbing states)
- The `state_names` field is optional
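A minimal sketch of how such a configuration could be loaded and validated (the `load_q_matrix` helper below is illustrative, not necessarily the tool's internal API):

```python
import json
import numpy as np

def load_q_matrix(path="config.json"):
    """Load the Q matrix and optional state names from a JSON config file."""
    with open(path) as f:
        config = json.load(f)

    Q = np.array(config["Q_matrix"], dtype=float)
    names = config.get("state_names", [f"State {i}" for i in range(Q.shape[0])])

    # Basic validation: square matrix, non-negative entries, row sums <= 1.
    if Q.shape[0] != Q.shape[1]:
        raise ValueError("Q matrix must be square")
    if (Q < 0).any():
        raise ValueError("Transition probabilities must be non-negative")
    if (Q.sum(axis=1) > 1.0 + 1e-9).any():
        raise ValueError("Each row of Q must sum to at most 1")

    return Q, names
```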
Run the analysis with:

```bash
uv run main.py
```

Example output:

```
Q matrix (transient to transient transitions):
[[0.5 0.3 0.1]
 [0.2 0.4 0.1]
 [0.3 0.3 0.1]]

Fundamental matrix N = (I - Q)^(-1):
[[3.0357 1.7857 0.5357]
 [1.25   2.5    0.4167]
 [1.4286 1.4286 1.4286]]

Expected number of steps to absorption for each transient state:
State 0: 5.3571 steps
State 1: 4.1667 steps
State 2: 4.2857 steps
```
The classic "drunkard's walk" (or random walk) problem, where a drunk person randomly walks between a bar and the sea, provides an intuitive analogy for understanding equipment degradation in predictive maintenance.
The Analogy:
- The Bar represents a perfect operating state (good health)
- The Sea represents complete failure (absorbing state)
- Random steps represent the stochastic nature of equipment deterioration over time
In manufacturing and predictive maintenance, equipment doesn't typically fail instantaneously. Instead, it progressively deteriorates through intermediate states (transient states) before reaching a failed state (absorbing state). Just as the drunkard randomly moves between positions, equipment can improve slightly (through minor repairs or favorable conditions) or degrade (through wear, stress, or adverse conditions).
Key Insights for Predictive Maintenance:
- Expected Time to Failure: The fundamental matrix allows us to compute the expected number of time steps (operating hours, cycles, etc.) before equipment transitions from a current health state to failure
- Intervention Planning: By knowing expected steps to absorption, maintenance teams can schedule interventions before critical failure occurs
- Cost Optimization: Understanding transition probabilities helps balance preventive maintenance costs against failure costs
- Risk Assessment: Different starting states yield different expected times to failure, enabling risk-based maintenance prioritization
This mathematical framework transforms qualitative notions of "equipment health" into quantitative predictions, enabling data-driven maintenance strategies.
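As a purely hypothetical illustration (the health states, transition probabilities, and daily time step below are invented for this example, not taken from any real equipment data), a small degradation chain could be analyzed like this:

```python
import numpy as np

# Hypothetical degradation model: three transient health states and one
# implicit absorbing "Failed" state; one time step = one operating day.
states = ["Good", "Worn", "Degraded"]
Q = np.array([
    [0.90, 0.08, 0.01],   # Good: mostly stays good, small chance of wear
    [0.05, 0.85, 0.08],   # Worn: slight chance of recovery, larger chance of degrading
    [0.00, 0.05, 0.80],   # Degraded: 15% chance per day of outright failure
])

N = np.linalg.inv(np.eye(len(states)) - Q)
t = N @ np.ones(len(states))

for name, steps in zip(states, t):
    print(f"{name}: expected {steps:.1f} days until failure")
```

Comparing the three values makes the risk ranking explicit: equipment already in the "Degraded" state is expected to fail much sooner than equipment still in the "Good" state, which is exactly the information needed to prioritize interventions.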
An absorbing Markov chain has:
- Transient states: states that the chain can eventually leave (e.g., various degradation levels)
- Absorbing states: States that, once entered, cannot be left (e.g., complete failure)
The transition matrix can be written in canonical form:

$P = \begin{pmatrix} Q & R \\ \mathbf{0} & I \end{pmatrix}$

Where:

- $Q$: transient-to-transient transitions (the Q matrix)
- $R$: transient-to-absorbing transitions
- $I$: identity matrix (absorbing states stay in themselves)
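If your data comes as the full transition matrix $P$ rather than $Q$, the Q block can be sliced out once the states are ordered with transient states first (a sketch under that ordering assumption, using an invented 4-state example):

```python
import numpy as np

# Hypothetical full transition matrix in canonical form: the first three
# states are transient, the last one is absorbing.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.4, 0.1, 0.3],
    [0.3, 0.3, 0.1, 0.3],
    [0.0, 0.0, 0.0, 1.0],
])

n_transient = 3
Q = P[:n_transient, :n_transient]   # transient-to-transient block
R = P[:n_transient, n_transient:]   # transient-to-absorbing block
```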
The fundamental matrix is $N = (I - Q)^{-1}$, where:

- $N_{ij}$ = expected number of times the chain visits transient state $j$, starting from transient state $i$
Multiplying the fundamental matrix by the vector of ones gives the total expected steps:

$\mathbf{t} = N \times \mathbf{1}$

- $t_i$ = expected number of steps to reach an absorbing state, starting from transient state $i$
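For intuition, consider the simplest case of a single transient state with self-loop probability $p$, so $Q = [p]$. Then $N = \frac{1}{1-p}$ and $t = \frac{1}{1-p}$, the mean of a geometric distribution: the expected number of steps until the first transition out of the state.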
Requirements:

- Python >= 3.13
- NumPy >= 2.0.0
If you use this tool in your research or project, please cite it as:
```bibtex
@software{absorbing_markov_chain,
  title  = {Absorbing Markov Chain Analysis},
  author = {Vachier, J.},
  year   = {2025},
  url    = {https://github.com/jvachier/Absorbing-Markov-chain}
}
```

For the underlying mathematical theory, see:
- Kemeny, J. G., & Snell, J. L. (1976). Finite Markov Chains. Springer-Verlag.
- Norris, J. R. (1998). Markov Chains. Cambridge University Press.
MIT License - see the LICENSE file for details.

