*  Ancestral reconstruction
For discrete-valued traits (such as "pollinator type"), this process is typically taken to be a Markov chain; for continuous- ... Huelsenbeck and Bollback first proposed a hierarchical Bayes method for ancestral reconstruction using Markov chain Monte ... The typical means of modelling the evolution of this trait is a continuous-time Markov chain, which may be briefly described as ... Asymmetrical Markov k-state 2-parameter model (Figure 4): in this model, the state space is ordered (so that ...
*  Markov chain
Hidden Markov model · Markov blanket · Markov chain geostatistics · Markov chain mixing time · Markov chain Monte Carlo · Markov decision process · Markov information source · Markov random field · Quantum Markov chain · Telescoping Markov chain · Variable-order Markov model ... A beautiful visual explanation of Markov Chains · Chapter 5: Markov Chain Models · Making Sense and Nonsense of Markov Chains ... A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random ...
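The Markov property described above, where the next state depends only on the current one, can be sketched as a short simulation. This is a minimal illustration; the two-state chain and its transition probabilities are invented for the example:

```python
import random

# Transition matrix of a hypothetical 2-state chain (illustrative numbers only):
# row i gives the distribution of the next state given current state i.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def simulate(P, start, steps, rng=random.Random(0)):
    """Generate a sample path: each next state depends only on the current one."""
    path = [start]
    for _ in range(steps):
        state = path[-1]
        path.append(rng.choices(range(len(P)), weights=P[state])[0])
    return path

path = simulate(P, start=0, steps=10)
```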
*  Examples of Markov chains
A game of snakes and ladders, or any other game whose moves are determined entirely by dice, is a Markov chain, indeed an absorbing Markov chain. This is in contrast to card games such as ... Google's PageRank algorithm is essentially a Markov chain over the graph of the Web. Mark V. Shaney ... Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: ... ISBN 978-1-119-38755-8. ... This page contains examples of Markov chains in action.
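The PageRank example above can be sketched as power iteration on a tiny hypothetical link graph. The graph, the dangling-page handling, and the function name are illustrative, not Google's implementation:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power iteration on the PageRank Markov chain.

    links[i] lists the pages that page i links to (toy graph only).
    """
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for i, outs in enumerate(links):
            if outs:  # distribute this page's rank along its outgoing links
                share = damping * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:     # dangling page: spread its rank uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Toy web: 0 -> 1, 1 -> 0 and 2, 2 -> 0.
ranks = pagerank([[1], [0, 2], [0]])
```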
*  Additive Markov chain
In probability theory, an additive Markov chain is a Markov chain with an additive conditional probability function. Here the ... A binary additive Markov chain is one where the state space of the chain consists of two values only, Xn ∈ {x1, x2}. For example ... Examples of Markov chains · S.S. Melnyk, O.V. Usatenko, and V.A. Yampol'skii (2006). "Memory functions of the additive Markov ... An additive Markov chain of order m is a sequence of random variables X1, X2, X3, ..., possessing the following property: the ...
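The additive conditional-probability property can be illustrated with a toy binary chain of order m, where each of the last m symbols contributes an additive term to the probability of the next symbol. The base rate and the weights playing the role of the memory function are invented for the sketch:

```python
import random

def additive_step(history, base, memory, rng):
    """One step of a binary additive Markov chain of order m = len(memory).

    P(X_n = 1 | past) = base + sum_r memory[r] * x_{n-1-r}: each of the
    last m symbols contributes an additive term (toy weights).
    """
    recent = reversed(history[-len(memory):])  # most recent symbol first
    p = base + sum(w * s for w, s in zip(memory, recent))
    p = min(max(p, 0.0), 1.0)  # clamp so it stays a valid probability
    return 1 if rng.random() < p else 0

rng = random.Random(1)
memory = [0.2, 0.1, 0.05]   # illustrative memory function F(r)
seq = [0, 1, 1]             # initial m = 3 symbols
for _ in range(500):
    seq.append(additive_step(seq, base=0.3, memory=memory, rng=rng))
```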
*  Absorbing Markov chain
In an absorbing Markov chain, a state that is not absorbing is called transient. Let an absorbing Markov chain with transition ... 3: Absorbing Markov Chains". In Gehring, F. W.; Halmos, P. R. Finite Markov Chains (Second ed.). New York Berlin Heidelberg ... Like general Markov chains, there can be continuous-time absorbing Markov chains with an infinite state space. However, this ... In the mathematical theory of probability, an absorbing Markov chain is a Markov chain in which every state can reach an ...
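The transient/absorbing split above leads to the fundamental-matrix computation, sketched here on a fair gambler's-ruin walk on {0, 1, 2, 3}, where 0 and 3 absorb and 1 and 2 are transient (a hand-worked example, no linear-algebra library):

```python
# Canonical form P = [[Q, R], [0, I]]; the fundamental matrix N = (I - Q)^(-1)
# gives expected visits, t = N·1 the expected steps to absorption, and
# B = N·R the absorption probabilities.
Q = [[0.0, 0.5],   # transient-to-transient transitions (states 1 and 2)
     [0.5, 0.0]]
R = [[0.5, 0.0],   # transient-to-absorbing transitions (states 0 and 3)
     [0.0, 0.5]]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

N = inv2([[1 - Q[0][0], -Q[0][1]],
          [-Q[1][0], 1 - Q[1][1]]])
t = [sum(row) for row in N]                        # expected steps to absorption
B = [[sum(N[i][k] * R[k][j] for k in range(2))     # absorption probabilities
      for j in range(2)] for i in range(2)]
```

From state 1 the walk is absorbed after 2 steps on average, ending at 0 with probability 2/3.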
*  Quantum Markov chain
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical ... More precisely, a quantum Markov chain is a pair (E, ρ) with ρ a density ... Very roughly, the theory of a quantum Markov chain resembles that of a measure-many automaton, with some important ... "Quantum Markov chains." Journal of Mathematical Physics 49.7 (2008): 072105. ...
*  Telescoping Markov chain
In probability theory, a telescoping Markov chain (TMC) is a vector-valued stochastic process that satisfies a Markov property ... is a Markov chain with transition probability matrix Λ^1, P(θ_k^1 = s | θ_{k−1}^1 = r) = Λ^1(s, ... satisfies a Markov property with a transition kernel that can be written in terms of the Λ's, P(θ_k ...
*  Markov chain geostatistics
A Markov chain random field is still a single spatial Markov chain. The spatial Markov chain moves or jumps in a space and ... Markov chain geostatistics uses Markov chain spatial models, simulation algorithms and associated spatial correlation measures ... (e.g., the transiogram) based on Markov chain random field theory, which extends a single Markov chain into a multi-dimensional ... is proposed as the accompanying spatial measure of Markov chain random fields. Li, W. (2007). Markov chain random fields for ...
*  Markov chain mixing time
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state ... More precisely, a fundamental result about Markov chains is that a finite-state irreducible aperiodic chain has a unique ... Such problems can, for a sufficiently large number of colors, be answered using the Markov chain Monte Carlo method and showing ... See Mixing (mathematics) for a formal definition of mixing. Aldous, David; Fill, Jim. Reversible Markov Chains and Random Walks on ...
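"Close" to the steady state is usually measured in total variation distance; the mixing time is then the first step at which that distance drops below a threshold. A brute-force sketch for a small chain (the two-state lazy walk and the threshold eps = 0.25 are illustrative):

```python
def step_distribution(dist, P):
    """Push a distribution one step through transition matrix P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def mixing_time(P, start, pi, eps=0.25, max_steps=10_000):
    """Smallest t with total-variation distance to the stationary
    distribution pi below eps, starting from a point mass at `start`."""
    dist = [1.0 if i == start else 0.0 for i in range(len(P))]
    for t in range(max_steps + 1):
        tv = 0.5 * sum(abs(d - p) for d, p in zip(dist, pi))
        if tv < eps:
            return t
        dist = step_distribution(dist, P)
    return None

# Lazy walk on two states; its stationary distribution is uniform.
P = [[0.75, 0.25],
     [0.25, 0.75]]
t_mix = mixing_time(P, start=0, pi=[0.5, 0.5])
```

Here the distance halves each step (0.5, 0.25, 0.125, ...), so the threshold is first beaten at t = 2.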
*  Markov chain approximation method
If needed, one must also approximate the cost function by one that matches the Markov chain chosen to approximate ... F. B. Hanson, "Markov Chain Approximation", in C. T. Leondes, ed., Stochastic Digital Control System Techniques, Academic Press ... In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) belongs to the several ... The basic idea of the MCAM is to approximate the original controlled process by a chosen controlled Markov process on a finite ...
*  Markov chain Monte Carlo
These interacting Markov chain Monte Carlo samplers can be interpreted as a way to run in parallel a sequence of Markov chain ... In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler. ... Random walk Monte Carlo methods make up a large subclass of Markov chain Monte Carlo methods. Markov chain Monte Carlo methods ... In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain ...
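A random-walk Metropolis sampler, one of the random walk Monte Carlo methods mentioned above, can be sketched in a few lines. The target density and tuning constants are illustrative:

```python
import math
import random

def metropolis(log_target, steps, start=0.0, scale=1.0, rng=random.Random(0)):
    """Random-walk Metropolis: a Markov chain whose stationary distribution
    is the (unnormalised) target density. A minimal sketch."""
    x, samples = start, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, known only up to a normalising constant.
samples = metropolis(lambda x: -0.5 * x * x, steps=20_000)
mean = sum(samples) / len(samples)
```

The sample mean should be near 0, the mean of the target, though MCMC estimates carry autocorrelation and are only asymptotically exact.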
*  Nearly completely decomposable Markov chain
In probability theory, a nearly completely decomposable (NCD) Markov chain is a Markov chain where the state space can be ... A Markov chain with transition matrix P = (1/2 1/2 0 0; 1/2 1/2 0 0; 0 0 1/2 1/2; 0 0 1/2 1/2) + ϵ (−1/2 0 1/2 0; 0 −1/2 0 1/2 ... Example 1.1 from Yin, George; Zhang, Qing (2005). Discrete-time Markov chains: two-time-scale methods and applications. ... Particularly efficient algorithms exist to compute the stationary distribution of Markov chains with this property. Ando and ...
*  Lempel-Ziv-Markov chain algorithm
LZMA uses Markov chains, as implied by the "M" in its name. The binary tree approach follows the hash chain approach, except that ... The Lempel-Ziv-Markov chain algorithm (LZMA) is an algorithm used to perform lossless data compression. It has been under ... the search stops after a pre-defined number of hash chain nodes has been traversed, or when the hash chains "wrap around", ... Chaining is achieved by an additional array which stores, for every dictionary position, the last seen previous position whose ...
*  Markov chains on a measurable state space
A Markov chain on a measurable state space is a discrete-time, time-homogeneous Markov chain with a measurable space as state space. ... The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic ... Sean Meyn and Richard L. Tweedie: Markov Chains and Stochastic Stability. 2nd edition, 2009. Daniel Revuz: Markov Chains. 2nd ... denotes the Markov chain according to a Markov kernel p with stationary measure μ, then ...
*  Reversible-jump Markov chain Monte Carlo
In computational statistics, reversible-jump Markov chain Monte Carlo is an extension to standard Markov chain Monte Carlo ( ... Green, P.J. (1995). "Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination". Biometrika. 82 (4 ...
*  Construction of an irreducible Markov chain in the Ising model
Constructing an irreducible Markov chain in the Ising model is the first step in overcoming a computational obstruction encountered when a Markov chain ... So we can get the irreducibility of the Markov chain based on simple swaps for the 1-dimensional Ising model. Even though we just ... Then the Markov basis in the Ising model can be defined as: a Markov basis for the Ising model is a set Z̃ ⊂ Z^{N_1 × ⋯ × N_d} ... Thus in the following we will show how to modify the algorithm mentioned in the paper to get the irreducible Markov chain in ...
*  Fork-join queue
"Markov Chains". Basics of Applied Stochastic Processes. Probability and Its Applications. doi:10.1007/978-3-540-89332-5_1. ISBN ...
*  Transition rate matrix
Norris, J. R. (1997). "Markov Chains". doi:10.1017/CBO9780511810633. ISBN 9780511810633. ... Passage Times for Markov Chains. IOS Press. doi:10.3233/978-1-60750-950-9-i. ISBN 90-5199-060-X. Asmussen, S. R. (2003). " ... weighted graph whose vertices correspond to the Markov chain's states. An M/M/1 queue, a model which counts the number of jobs ... is an array of numbers describing the rates at which a continuous-time Markov chain moves between states. In a transition rate matrix Q ( ...
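The structure of a transition rate matrix, off-diagonal jump rates with each diagonal chosen so its row sums to zero, can be sketched for the M/M/1 queue mentioned above. The queue is truncated at a finite capacity so the matrix is finite, and the rates are illustrative:

```python
def mm1_rate_matrix(lam, mu, cap):
    """Transition rate matrix Q of an M/M/1 queue truncated at `cap` jobs:
    off-diagonal entries are jump rates, diagonals make each row sum to 0."""
    n = cap + 1
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i < cap:
            Q[i][i + 1] = lam   # arrival: one more job in the system
        if i > 0:
            Q[i][i - 1] = mu    # service completion: one job leaves
        Q[i][i] = -sum(Q[i])    # diagonal balances the row
    return Q

Q = mm1_rate_matrix(lam=1.0, mu=2.0, cap=4)
row_sums = [sum(row) for row in Q]
```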
*  Time reversibility
Kolmogorov's criterion defines the condition for a Markov chain or continuous-time Markov chain to be time-reversible. Time ... Markov chains, and piecewise deterministic Markov processes. The time reversal method works based on the linear reciprocity of the ... Norris, J. R. (1998). Markov Chains. Cambridge University Press. ISBN 0521633966. Löpker, A.; Palmowski, Z. (2013). "On time ... Markov processes can only be reversible if their stationary distributions have the property of detailed balance: p(x_t = i, ...
*  Balance equation
In probability theory, a balance equation is an equation that describes the probability flux associated with a Markov chain in ... of a Markov chain, when such a distribution exists. For a continuous-time Markov chain with state space S, transition rate from ... For a continuous-time Markov chain (CTMC) with transition rate matrix Q, if π_i can be found such that ... For a discrete-time Markov chain with transition matrix Q and equilibrium distribution π, the global ...
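The global balance condition for a CTMC, πQ = 0, can be checked by hand on a two-state chain. The rates a and b are illustrative:

```python
# Two-state continuous-time chain: rate a from state 0 to 1, rate b back.
a, b = 3.0, 1.0
Q = [[-a, a],
     [b, -b]]

# For this chain the stationary distribution is proportional to (b, a);
# global balance says the net probability flux into every state is zero.
pi = [b / (a + b), a / (a + b)]
flux = [sum(pi[i] * Q[i][j] for i in range(2)) for j in range(2)]
```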
*  James R. Norris
Norris, J. R. (1997). Markov Chains. Cambridge University Press. "James Norris's homepage at Cambridge University". "James ...
*  Foster's theorem
It uses the fact that positive recurrent Markov chains exhibit a notion of "Lyapunov stability" in terms of returning to any ... Consider an irreducible discrete-time Markov chain on a countable state space S having a transition probability matrix P with ... Brémaud, P. (1999). "Lyapunov Functions and Martingales". Markov Chains. p. 167. doi:10.1007/978-1-4757-3124-8_5. ISBN 978-1- ... Foster's theorem states that the Markov chain is positive recurrent if and only if there exists a Lyapunov function V : S → R ...
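The Lyapunov-drift condition in Foster's theorem can be illustrated on a reflected random walk. The chain and the linear Lyapunov function V(x) = x are a textbook-style sketch, not taken from the cited sources:

```python
def drift(x, p, V=lambda s: float(s)):
    """Expected one-step change E[V(X1) - V(X0) | X0 = x] for a reflected
    random walk on {0, 1, 2, ...}: up with prob p, down with prob 1 - p
    (the down move is blocked at 0)."""
    if x == 0:
        return p * V(1) - V(0)
    return p * V(x + 1) + (1 - p) * V(x - 1) - V(x)

# With p = 0.4 the drift is 2p - 1 = -0.2 for every x >= 1, so Foster's
# criterion (uniformly negative drift outside the finite set {0})
# certifies positive recurrence.
drifts = [drift(x, p=0.4) for x in range(1, 50)]
```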
*  Kemeny's constant
... required for a Markov chain to transition from a starting state i to a random destination state sampled from the Markov chain's ... It is in that sense a constant, although it is different for different Markov chains. When first published by John Kemeny in ... For a finite ergodic Markov chain with transition matrix P and invariant distribution π, write mij for the mean first passage ... Kemeny, J. G.; Snell, J. L. (1960). Finite Markov Chains. Princeton, NJ: D. Van Nostrand. (Corollary 4.3.6) Catral, M.; ...
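That K = Σ_j π_j m_ij does not depend on the start state i can be verified on a two-state chain whose mean first passage times are known in closed form. The parameters are illustrative:

```python
# Two-state chain P = [[1-a, a], [b, 1-b]] with stationary distribution
# proportional to (b, a); hand-worked mean first passage times are
# m_01 = 1/a, m_10 = 1/b, with the convention m_ii = 0.
a, b = 0.2, 0.5
pi = [b / (a + b), a / (a + b)]
m = [[0.0, 1 / a],
     [1 / b, 0.0]]

K_from_0 = sum(pi[j] * m[0][j] for j in range(2))
K_from_1 = sum(pi[j] * m[1][j] for j in range(2))
```

Both sums equal 1/(a + b), the same constant from either start state.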
*  Hydrological modelling
Markov chains are a mathematical technique for determining the probability of a state or event based on a previous state or event ... Markov chains were first used to model rainfall event length in days in 1976, and continue to be used for flood risk ... "Markov Chains explained visually". Explained Visually. Retrieved 2017-04-21. Haan, C. T.; Allen, D. M.; Street, J. O. (1976-06- ... "A Markov Chain Model of daily rainfall". Water Resources Research. 12 (3): 443-449. Bibcode:1976WRR....12..443H. doi:10.1029/ ...
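A two-state wet/dry daily rainfall chain in the spirit of the 1976 model can be sketched as follows. The transition probabilities are invented, not fitted values from the paper:

```python
import random

# Illustrative persistence probabilities: wet days tend to follow wet days.
P_WET_GIVEN_WET = 0.6
P_WET_GIVEN_DRY = 0.2

def simulate_days(n, rng=random.Random(42)):
    """Simulate n days; each day's state depends only on the previous day."""
    days, wet = [], False
    for _ in range(n):
        p = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = rng.random() < p
        days.append(wet)
    return days

days = simulate_days(10_000)
wet_fraction = sum(days) / len(days)
```

The long-run wet fraction should settle near the stationary value 0.2 / (1 - 0.6 + 0.2) = 1/3.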
*  Entropy (information theory)
See Markov chain. Entropy is one of several ways to measure diversity; specifically, Shannon entropy is the logarithm of ¹D, ... A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected ... For a first-order Markov source (one in which the probability of selecting a character depends only on the immediately ... For a second-order Markov source, the entropy rate is H(S) = −Σ_i p_i Σ_j p_i(j) Σ_k p_{i,j}(k) log₂ p_{i,j}(k) ...
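The analogous entropy rate for a first-order Markov source, H = −Σ_i π_i Σ_j p_ij log₂ p_ij, can be computed directly. The symmetric two-state source is an illustrative example:

```python
import math

def entropy_rate(P, pi):
    """Entropy rate of a first-order Markov source, in bits per symbol:
    H = -sum_i pi_i * sum_j P[i][j] * log2(P[i][j])."""
    H = 0.0
    for i, row in enumerate(P):
        for p in row:
            if p > 0:  # 0 * log 0 is taken as 0
                H -= pi[i] * p * math.log2(p)
    return H

# Symmetric two-state source (stationary distribution uniform): the rate
# equals the binary entropy of the flip probability, H(0.1) ~ 0.469 bits.
P = [[0.9, 0.1],
     [0.1, 0.9]]
rate = entropy_rate(P, pi=[0.5, 0.5])
```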
*  Uncertainty quantification
Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive. The fully Bayesian ...
*  Probabilistic bisimulation
Finite Markov Chains (Second ed.). New York Berlin Heidelberg Tokyo: Springer-Verlag. p. 224. ISBN 978-0-387-90192-3. Oliveira ... When applied to Markov chains, probabilistic bisimulation is the same concept as lumpability. Probabilistic bisimulation ...