What is the difference between a (dynamic) Bayes network and an HMM?

I have read that HMMs, particle filters, and Kalman filters are special cases of dynamic Bayes networks. However, I only know HMMs, and I do not see the difference from dynamic Bayes networks.

Could somebody explain?

It would be nice if your answer could be similar to the following, but for Bayes networks:

Hidden Markov models

A hidden Markov model (HMM) is a 5-tuple $\lambda = (S, O, A, B, \Pi)$:

  • $S$: a set of states (e.g. "beginning of phoneme", "middle of phoneme", "end of phoneme")
  • $O$: a set of possible observations (audio signals)
  • $A \in \mathbb{R}^{|S| \times |S|}$: a stochastic matrix giving the probabilities $a_{ij}$ of going from state $i$ to state $j$.
  • $B \in \mathbb{R}^{|S| \times |O|}$: a stochastic matrix giving the probabilities $b_{kl}$ of getting observation $l$ in state $k$.
  • $\Pi \in \mathbb{R}^{|S|}$: the initial distribution over the starting states.
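As a concrete illustration, here is a minimal sketch of such a 5-tuple in Python with NumPy. All states, observation symbols, and probabilities below are invented toy values, not taken from any real model:

```python
import numpy as np

# Toy HMM lambda = (S, O, A, B, Pi); all numbers are invented for illustration.
S = ["phoneme_start", "phoneme_middle", "phoneme_end"]  # hidden states
O = ["low", "mid", "high"]                              # observation symbols

# A[i, j] = P(next state = S[j] | current state = S[i]); each row sums to 1.
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

# B[k, l] = P(observing O[l] | state = S[k]); each row sums to 1.
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])

# Pi[i] = P(first state = S[i]).
Pi = np.array([1.0, 0.0, 0.0])
```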

It is usually displayed as a directed graph, where each node corresponds to one state $s \in S$ and the transition probabilities are denoted on the edges.

Hidden Markov models are called "hidden" because the current state is hidden: the algorithms have to infer it from the observations and the model itself. They are called "Markov" because only the current state matters for the next state.

For HMMs, you give a fixed topology (number of states, possible edges). Then there are three possible tasks:

  • Evaluation: given an HMM $\lambda$, how likely is it to get the observations $o_1, \dots, o_t$? (Forward algorithm; see the sketch after this list)
  • Decoding: given an HMM $\lambda$ and observations $o_1, \dots, o_t$, what is the most likely sequence of states $s_1, \dots, s_t$? (Viterbi algorithm)
  • Learning: learn $A$, $B$, $\Pi$ with the Baum-Welch algorithm, a special case of Expectation Maximization.
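For instance, the evaluation task is only a few lines of dynamic programming. Below is a minimal sketch of the forward algorithm; it reuses the toy A, B, Pi from the sketch above and is not meant as a reference implementation:

```python
import numpy as np

def forward(A, B, Pi, obs):
    """Return P(o_1, ..., o_t | lambda) via the forward algorithm.
    obs is a sequence of observation indices into O."""
    alpha = Pi * B[:, obs[0]]        # alpha_1(i) = Pi_i * b_i(o_1)
    for o in obs[1:]:
        # alpha_{t+1}(j) = (sum_i alpha_t(i) * a_ij) * b_j(o_{t+1})
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()               # marginalize over the final state

# Likelihood of observing "low", "mid", "high" (indices 0, 1, 2):
# print(forward(A, B, Pi, [0, 1, 2]))
```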

Bayes networks

Bayes networks are directed acyclic graphs (DAGs) $G = (\mathcal{X}, E)$. The nodes represent random variables $X \in \mathcal{X}$. For every $X$, there is a probability distribution conditioned on the parents of $X$:

$P(X \mid \text{parents}(X))$
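To make this concrete, here is a hedged sketch of a tiny Bayes network (the classic rain/sprinkler/wet-grass example; all numbers are invented) where each node stores exactly such a conditional distribution, and the joint probability is the product of the per-node factors:

```python
# Toy Bayes network: Rain -> Wet <- Sprinkler (all numbers invented).
# Each conditional distribution maps a parent assignment to P(X = True | parents).
P_rain = 0.2                                         # P(Rain), no parents
P_sprinkler = 0.4                                    # P(Sprinkler), no parents
P_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.80, (False, False): 0.05}  # P(Wet | Rain, Sprinkler)

def joint(rain, sprinkler, wet):
    """P(Rain, Sprinkler, Wet) = product over nodes of P(X | parents(X))."""
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler if sprinkler else 1 - P_sprinkler
    pw = P_wet[(rain, sprinkler)]
    return p * (pw if wet else 1 - pw)

# Example: P(rain, no sprinkler, wet grass)
# print(joint(True, False, True))  # 0.2 * 0.6 * 0.9 = 0.108
```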

There seem to be (please clarify) two tasks:

  • Inference: given the values of some variables, infer the most likely values of the other variables. Exact inference is NP-hard; approximate inference can be done with MCMC.
  • Learning: how you learn those distributions depends on the exact problem (source):

    • known structure, fully observable: maximum likelihood estimation (MLE); see the counting sketch after this list
    • known structure, partially observable: Expectation Maximization (EM) or Markov Chain Monte Carlo (MCMC)
    • unknown structure, fully observable: search through model space
    • unknown structure, partially observable: EM + search through model space
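For the first case (known structure, fully observable), MLE reduces to counting how often each child value occurs for each parent assignment. A minimal sketch, assuming binary variables and complete toy data:

```python
from collections import Counter

def mle_cpt(data, child, parents):
    """Estimate P(child = True | parents) by counting complete samples.
    data: list of dicts mapping variable name -> bool (toy format)."""
    total, true = Counter(), Counter()
    for row in data:
        key = tuple(row[p] for p in parents)
        total[key] += 1
        true[key] += row[child]
    return {key: true[key] / total[key] for key in total}

# Toy fully observed samples for the Rain -> Wet <- Sprinkler network above:
data = [{"Rain": True, "Sprinkler": False, "Wet": True},
        {"Rain": False, "Sprinkler": True, "Wet": True},
        {"Rain": False, "Sprinkler": False, "Wet": False}]
# print(mle_cpt(data, "Wet", ["Rain", "Sprinkler"]))
```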

Dynamic Bayes networks

I guess dynamic Bayes networks (DBNs) are also directed probabilistic graphical models. The variability seems to come from the network changing over time. However, it seems to me that this is equivalent to just copying the same network and connecting every node at time $t$ with the corresponding node at time $t+1$. Is that the case?
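As an illustration of the "copying over time" idea, here is a hedged sketch that unrolls a two-slice DBN template into one static network; the template below (a single State node chained over time, with an Obs node per slice) is invented and happens to reproduce an HMM:

```python
# Unroll a DBN template into a static BN over T time slices.
# intra: edges within one slice; inter: edges from slice t to slice t+1.
intra = [("State", "Obs")]      # within each slice: State_t -> Obs_t
inter = [("State", "State")]    # across slices: State_t -> State_{t+1}

def unroll(intra, inter, T):
    """Copy the slice template T times and chain the inter-slice edges."""
    edges = []
    for t in range(T):
        edges += [(f"{u}_{t}", f"{v}_{t}") for u, v in intra]
        if t + 1 < T:
            edges += [(f"{u}_{t}", f"{v}_{t+1}") for u, v in inter]
    return edges

# print(unroll(intra, inter, 3))
# [('State_0', 'Obs_0'), ('State_0', 'State_1'), ('State_1', 'Obs_1'), ...]
```

Note that only the nodes listed in `inter` are connected across slices, not every node.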

Martin Thoma
1. You can also learn the topology of an HMM. 2. When doing inference with BNs, besides asking for maximum likelihood estimates, you can also sample from the distributions, estimate probabilities, or do whatever else probability theory lets you. 3. A DBN is just a BN copied over time, with some (not necessarily all) nodes chained from the past to the future. In this sense, an HMM is a simple DBN with just two nodes in each time slice and one of the nodes chained over time.
KT.
I asked someone about this and they said: "HMMs are just special cases of dynamic Bayes nets, with each time slice containing one latent variable, dependent on the previous one to give a Markov chain, and one observation dependent on each latent variable. DBNs can have any structure that evolves over time."
ashley

Answers:

From a similar Cross Validated question, here is @jerad's answer:

HMMs are not equivalent to DBNs; rather, they are a special case of DBNs in which the entire state of the world is represented by a single hidden state variable. Other models within the DBN framework generalize the basic HMM, allowing for more hidden state variables (see the second paper above for the many varieties).

Finally, no, DBNs are not always discrete. For example, linear Gaussian state models (Kalman Filters) can be conceived of as continuous valued HMMs, often used to track objects in space.
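As an aside on that last point, here is a minimal sketch of one Kalman filter predict/update step, i.e. the linear Gaussian analogue of one HMM time step; all matrices below are toy values:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update step of a Kalman filter.
    x, P: current state mean and covariance; z: new observation."""
    # Predict: push the Gaussian belief through the linear dynamics F.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: condition on the observation z (linear Gaussian emission H).
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D tracking: identity dynamics, unit process and observation noise.
x, P = np.zeros(1), np.eye(1)
F = H = Q = R = np.eye(1)
x, P = kalman_step(x, P, np.array([0.5]), F, H, Q, R)
```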

I'd recommend looking through these two excellent review papers:

xboard