**Summary**

In this kick-off meeting we reviewed the various classes of Markov models arising in our research, and general problems encountered in their study.

**Classes of Markov models**

Finite-state Markov processes operate in *discrete* or *continuous* time.

- Discrete-time models are determined by a *transition matrix* A satisfying A_{ij} ≥ 0 and ∑_{i} A_{ij} = 1.
- Continuous-time models are determined by a *rate matrix* A satisfying A_{ij} ≥ 0 for i≠j and ∑_{i} A_{ij} = 0.
- Extensions to Markov processes studied at DKE include:

  - *Time-dependent Markov processes*, in which the rate matrix depends on time: A = A(t). These include:
    - *periodic* time-dependent models, A(T+t) = A(t);
    - *state-dependent* models in ion-channel gating, A = A(V(t)), where the action potential V depends on the Markov state via dV/dt = f(V,x,…);
    - *Cox processes*, A(t) = k(t) A, where k(t) is a stochastic process.
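As a minimal illustration of a time-dependent model, the sketch below integrates dx/dt = A(t) x for a periodic rate matrix A(t+T) = A(t); the particular rate function, step size, and horizon are illustrative assumptions, not part of the models above.

```python
import numpy as np

# Periodic time-dependent 2-state rate matrix, A(t+T) = A(t).
# The rate function k(t) below is an illustrative assumption.
T = 1.0
def A(t):
    k = 2.0 + np.sin(2 * np.pi * t / T)   # positive, T-periodic rate
    return np.array([[-k, 1.0],
                     [ k, -1.0]])

# Forward-Euler integration of dx/dt = A(t) x.  Since the columns of
# A(t) sum to 0, total probability is conserved exactly by each step.
x = np.array([1.0, 0.0])
dt = 1e-3
for n in range(int(5 * T / dt)):
    x = x + dt * (A(n * dt) @ x)
print(x, x.sum())   # total probability mass stays 1
```

A simple explicit scheme suffices here; for stiff or highly oscillatory A(t) the Magnus-type expansions mentioned later would be more appropriate.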

  - *Markov games*, in which the transition matrix depends on choices q_{k} made by multiple players: A = A(q_{1},…,q_{n}). Each player tries to maximise rewards r_{k}(x,q_{1},…,q_{n}). With a single player, these are *Markov decision processes*.

**Dynamics of Markov models**

- The fundamental problem for Markov processes is to compute the probability distribution at a given time in the future from a given initial distribution. The equations are

  - *Discrete time*: x_{n} = A^{n} x_{0}
  - *Continuous time*: x(t) = e^{At} x_{0}
- Another important problem is to compute the *stationary distributions*, which satisfy

  - *Discrete time*: A x = x
  - *Continuous time*: A x = 0
- For Markov games, the fundamental problem is to characterise and compute equilibrium strategies.

**Special structural properties**

- Markov models may have structural properties which make their evolution easier or harder to compute.

*Multiple time-scales*: The process has states which change very quickly compared to other states. Typically, the rate matrix has the block form A = [ (1/ε) A_{ff}, A_{fs} ; A_{sf}, A_{ss} ], where x_{f} and x_{s} are the fast and slow variables. Computing the evolution of systems with multiple time scales is especially challenging.
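The time-scale separation shows up directly in the spectrum of the rate matrix. A minimal numpy sketch (the 3-state chain and the value of ε are illustrative assumptions):

```python
import numpy as np

eps = 1e-3
# 3-state continuous-time chain: states 0 <-> 1 exchange at fast rates 1/eps,
# states 1 <-> 2 exchange at slow rates 1.  Columns of the rate matrix sum to 0.
A = np.array([
    [-1/eps,  1/eps,        0.0],
    [ 1/eps, -1/eps - 1.0,  1.0],
    [ 0.0,    1.0,         -1.0],
])

evals = np.sort(np.linalg.eigvals(A).real)
print(evals)  # one eigenvalue of size ~2/eps, one of size O(1), and 0
```

The eigenvalue of magnitude ~2/ε is what makes explicit time-stepping stiff: the step size is constrained by ε even though the interesting dynamics happen on the O(1) scale.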

*Modular systems*: The states of the process partition into subsets, with transitions between states in different subsets occurring very slowly. The transition/rate matrix has the block form [ A_{11}, ε A_{12} ; ε A_{21}, A_{22} ]. Modularity should help in computing the evolution, especially for large systems.
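A sketch of how modularity shows up spectrally, assuming an illustrative two-module column-stochastic matrix: the subdominant eigenvalue 1 − 2ε reflects the slow exchange between modules.

```python
import numpy as np

eps = 1e-3
# Fast mixing within each 2-state module; transitions between modules
# occur with small probability eps.  Columns sum to 1, matching the
# column-stochastic convention x_{n+1} = A x_{n}.
B = np.array([[0.5, 0.5],
              [0.5, 0.5]])
A = np.block([[(1 - eps) * B, eps * B],
              [eps * B, (1 - eps) * B]])

evals = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print(evals)  # 1, then 1 - 2*eps: an eigenvalue near 1 marks slow inter-module mixing
```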

*Regressive/delay models*: The evolution depends on the current state and on states up to k steps in the past: x_{n+1} = A [x_{n}; x_{n-1}; … ; x_{n-k}]. These can be reformulated as standard Markov models on an enlarged state space.

**Research Problems**

There are a number of unsolved problems related to Markov processes:

**Evolution**

- Good ways of computing the evolution using the matrix exponential exp(At) are still lacking, especially for systems with multiple time scales. We would also like to find *rigorous* bounds on the probability distribution vectors.
- Existing techniques include Taylor series, domain reduction exp(At) = exp(At/2)^{2}, similarity transformations and eigenvalue decompositions exp(At) = P exp(Dt) P^{-1}, and Padé approximants. The Baker-Campbell-Hausdorff / Zassenhaus expansions may be useful for modular systems, and the Magnus expansion for time-dependent systems.

**Stationary distribution**

- Computing the stationary distribution reduces to solving a system of linear equations, but for large systems this is still a challenging and relevant problem (e.g. internet page ranking). Multigrid/reduction methods have been developed by Chun Wen.

**Prediction/Workflow**

- The basic workflow for Markov modelling of physical systems is:
  Data -> [Identification] -> (Physical Model) -> [Reduction] -> (Computational Model) -> [Analysis] -> Prediction
- The *identification* problem is to find the best model fitting the available experimental data, or a set of plausible system models.
- The *experiment design* problem is to find a good experimental protocol for identifying the system; in particular, we want to identify the parameter combinations which are most important for the system behaviour.
- The *model-order reduction* problem is to find a lower-dimensional model with approximately the same external/observable behaviour. This is still unsolved for time/state-dependent models. A subproblem is to *simplify* the functional form of nonlinear dependencies.
- Ultimately, we are often interested in making predictions, and explicit models are then only a means to this end. An interesting question is whether we can make predictions directly from data without an explicit model, or with only a partial model.

**Stochastic/Deterministic**

- Markov models are inherently stochastic, but when sufficiently many Markov processes are aggregated, they act as a deterministic model with evolution given by the probability density vector. For moderate numbers of processes, stochastic fluctuations may still be non-negligible. When is the continuum limit valid, and how do we take stochastic fluctuations into account when they are important?

**Equilibrium strategies**

- For Markov decision processes without discounting, and for Markov games, there are no good algorithms for computing optimal/equilibrium strategies. We might find good algorithms by formulating and solving necessary conditions for equilibria as linear complementarity problems.
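As a concrete illustration of the evolution and stationary-distribution computations discussed above, a minimal numpy sketch using the eigenvalue decomposition exp(At) = P exp(Dt) P^{-1} (the 2-state rate matrix is an illustrative assumption):

```python
import numpy as np

# Illustrative 2-state continuous-time model; columns of A sum to 0
# (the rates are assumptions for the sketch, not from a real system).
A = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])
x0 = np.array([1.0, 0.0])
t = 1.0

# Evolution via the eigenvalue decomposition exp(At) = P exp(Dt) P^{-1},
# one of the techniques listed under "Evolution".
d, P = np.linalg.eig(A)
x_t = (P @ np.diag(np.exp(d * t)) @ np.linalg.inv(P) @ x0).real
print(x_t, x_t.sum())   # probability mass is conserved: the sum stays 1

# Stationary distribution: solve A x = 0 together with sum(x) = 1
# as one (overdetermined but consistent) linear system.
M = np.vstack([A, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
x_inf, *_ = np.linalg.lstsq(M, b, rcond=None)
print(x_inf)            # approximately [1/3, 2/3]
```

This direct approach works for small, well-conditioned A; the open problems above concern exactly the cases where it fails — large systems, multiple time scales, and the need for rigorous error bounds.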