Given that the posterior estimate of $\sigma^2$ for a normal likelihood with an inverse gamma prior on $\sigma^2$ is:

$$\sigma^2 \mid x_1,\ldots,x_n \sim \mathrm{IG}\!\left(\alpha + \frac{n}{2},\; \beta + \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2\right) \tag{1}$$

which is equivalent to

$$\sigma^{-2} \mid x_1,\ldots,x_n \sim \Gamma\!\left(\alpha + \frac{n}{2},\; \beta + \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2\right) \tag{2}$$

since a weak prior on $\sigma^2$ removes $\alpha$ and $\beta$ from eqn 1:

$$\sigma^{-2} \mid x_1,\ldots,x_n \sim \Gamma\!\left(\frac{n}{2},\; \frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2\right) \tag{3}$$
It is apparent that the posterior estimate of $\sigma^2$ is a function of the sample size and of the sum of squares from the likelihood. But what does that mean? There is a derivation on Wikipedia that I don't quite follow.

I have the following questions:

- Can I arrive at this second equation without invoking Bayes' rule? I am curious whether there is something inherent in the parameters of an IG relating to the mean and variance, independent of the normal likelihood.
- Can I use the sample size and standard deviation from a previous study to build an informed prior on $\sigma^2$ and then update that prior with new data? This seems straightforward, but I cannot find examples of how to do it, or justification that it is a legitimate approach, beyond what can be read off the posterior.
- Is there a popular probability or statistics textbook I can consult for further explanation?

Answers:
I think it is more correct to speak of the posterior distribution of your parameter $\sigma'^2$ rather than of its posterior estimate. For notational clarity, I will drop the prime on $\sigma'^2$ in what follows.

Suppose that $X$ is distributed as $N(0,\sigma^2)$ (I drop $\mu$ for now to make a heuristic example) and that $1/\sigma^2 = \sigma^{-2}$ is distributed as $\Gamma(\alpha,\beta)$ and is independent of $X$.
The pdf of $X$ given $\sigma^{-2}$ is Gaussian, i.e.

$$f(x \mid \sigma^{-2}) = \frac{(\sigma^{-2})^{1/2}}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2\sigma^{-2}}{2}\right).$$

The joint pdf of $(X,\sigma^{-2})$, $f(x,\sigma^{-2})$, is obtained by multiplying $f(x \mid \sigma^{-2})$ by $g(\sigma^{-2})$, the pdf of $\sigma^{-2}$. This comes out as

$$f(x,\sigma^{-2}) = \frac{(\sigma^{-2})^{1/2}}{\sqrt{2\pi}}\, e^{-\frac{x^2\sigma^{-2}}{2}} \cdot \frac{\beta^\alpha}{\Gamma(\alpha)}\, (\sigma^{-2})^{\alpha-1} e^{-\beta\sigma^{-2}}.$$

We can group similar terms and rewrite this as follows:

$$f(x,\sigma^{-2}) = \frac{\beta^\alpha}{\sqrt{2\pi}\,\Gamma(\alpha)}\, (\sigma^{-2})^{\alpha+\frac{1}{2}-1}\, e^{-\left(\beta+\frac{x^2}{2}\right)\sigma^{-2}}.$$
The posterior distribution of $\sigma^{-2}$ is by definition the pdf of $\sigma^{-2}$ given $x$, which is $f(x,\sigma^{-2})/f(x)$ by Bayes' formula. To answer your question 1: I don't think there is a way to express $f(\sigma^{-2} \mid x)$ from $f(x,\sigma^{-2})$ without using Bayes' formula. On with the computation: we recognize in the formula above something that looks like a $\Gamma$ function, so integrating $\sigma^{-2}$ out to get $f(x)$ is fairly easy:
$$f(x) = \int_0^\infty f(x,\sigma^{-2})\, d\sigma^{-2} = \frac{\beta^\alpha}{\sqrt{2\pi}\,\Gamma(\alpha)} \cdot \frac{\Gamma\!\left(\alpha+\frac{1}{2}\right)}{\left(\beta+\frac{x^2}{2}\right)^{\alpha+\frac{1}{2}}},$$

so by dividing we get

$$f(\sigma^{-2} \mid x) = \frac{\left(\beta+\frac{x^2}{2}\right)^{\alpha+\frac{1}{2}}}{\Gamma\!\left(\alpha+\frac{1}{2}\right)}\, (\sigma^{-2})^{\alpha+\frac{1}{2}-1}\, e^{-\left(\beta+\frac{x^2}{2}\right)\sigma^{-2}}.$$
And here in the last formula we recognize a $\Gamma$ distribution with parameters $\left(\alpha+\frac{1}{2},\, \beta+\frac{x^2}{2}\right)$.
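If it helps to see this conjugate update numerically, here is a minimal sanity check, assuming NumPy and SciPy are available; the observed value, rejection window, and sample count are arbitrary choices of mine. It approximates the conditional distribution of $\sigma^{-2}$ given $x$ by simple rejection and compares its moments with the claimed $\Gamma\!\left(\alpha+\frac{1}{2}, \beta+\frac{x^2}{2}\right)$:

```python
# Monte Carlo sanity check of the single-observation conjugate update:
#   sigma^{-2} ~ Gamma(alpha, beta)  (shape/rate),  x | sigma^{-2} ~ N(0, sigma^2)
#   =>  sigma^{-2} | x ~ Gamma(alpha + 1/2, beta + x^2/2)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta = 2.0, 3.0
x_obs = 1.4                       # a single observed data point (arbitrary)

# Draw many (sigma^{-2}, x) pairs from the joint distribution.
prec = rng.gamma(shape=alpha, scale=1.0 / beta, size=2_000_000)  # rate beta -> scale 1/beta
x = rng.normal(loc=0.0, scale=1.0 / np.sqrt(prec))

# Keep the precisions whose x landed near x_obs: a crude approximation
# of the conditional distribution of sigma^{-2} given x = x_obs.
keep = prec[np.abs(x - x_obs) < 0.01]

# Compare empirical moments with the claimed Gamma(alpha + 1/2, beta + x^2/2).
post = stats.gamma(a=alpha + 0.5, scale=1.0 / (beta + x_obs**2 / 2))
print("empirical mean:", keep.mean(), " theoretical:", post.mean())
print("empirical var: ", keep.var(), " theoretical:", post.var())
```

The empirical and theoretical moments should agree up to Monte Carlo noise.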
If you have an IID sample $\left((x_1,\sigma_1^{-2}),\ldots,(x_n,\sigma_n^{-2})\right)$, then by integrating out all the $\sigma_i^{-2}$ you would get $f(x_1,\ldots,x_n)$, and then $f(\sigma_1^{-2},\ldots,\sigma_n^{-2} \mid x_1,\ldots,x_n)$ as a product of the following terms:

$$f(\sigma_1^{-2},\ldots,\sigma_n^{-2} \mid x_1,\ldots,x_n) = \prod_{i=1}^{n} \frac{\left(\beta+\frac{x_i^2}{2}\right)^{\alpha+\frac{1}{2}}}{\Gamma\!\left(\alpha+\frac{1}{2}\right)}\, (\sigma_i^{-2})^{\alpha+\frac{1}{2}-1}\, e^{-\left(\beta+\frac{x_i^2}{2}\right)\sigma_i^{-2}}.$$
This is a product of $\Gamma$ variables, and we are stuck here because of the multiplicity of the $\sigma_i^{-2}$. Besides, the distribution of the mean of those independent $\Gamma$ variables is not straightforward to compute.
However, if we assume that all the observations $x_i$ share the same value of $\sigma^{-2}$ (which seems to be your case), i.e. that the value of $\sigma^{-2}$ was drawn only once from a $\Gamma(\alpha,\beta)$ and that all the $x_i$ were then drawn with that value of $\sigma^{-2}$, we obtain

$$f(x_1,\ldots,x_n,\sigma^{-2}) = \frac{\beta^\alpha}{(2\pi)^{n/2}\,\Gamma(\alpha)}\, (\sigma^{-2})^{\alpha+\frac{n}{2}-1}\, e^{-\left(\beta+\frac{1}{2}\sum_{i=1}^{n} x_i^2\right)\sigma^{-2}},$$
from which we derive the posterior distribution of $\sigma^{-2}$ as your equation 1 by applying Bayes' formula:

$$\sigma^{-2} \mid x_1,\ldots,x_n \sim \Gamma\!\left(\alpha+\frac{n}{2},\; \beta+\frac{1}{2}\sum_{i=1}^{n} x_i^2\right).$$
The posterior distribution of $\sigma^{-2}$ is a $\Gamma$ that depends on $\alpha$ and $\beta$, your prior parameters, the sample size $n$, and the observed sum of squares. The prior mean of $\sigma^{-2}$ is $\alpha/\beta$ and its variance is $\alpha/\beta^2$, so if $\alpha=\beta$ and their value is very small, the prior carries very little information about $\sigma^{-2}$ because the variance becomes huge. The values being small, you can drop them from the above equations and you end up with your equation 3.
In that case the posterior distribution becomes independent of the prior. This formula says that the inverse of the variance has a $\Gamma$ distribution that depends only on the sample size and the sum of squares. You can show that for Gaussian variables of known mean, $S^2$, the estimator of the variance, has the same distribution, except that it is a function of the sample size and the true value of the parameter $\sigma^2$. In the Bayesian case this is the distribution of the parameter; in the frequentist case it is the distribution of the estimator.
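To make the $n$-observation case concrete, here is a short sketch (assuming NumPy/SciPy; the helper name `precision_posterior` and all numbers are mine) that computes the posterior $\Gamma\!\left(\alpha+\frac{n}{2}, \beta+\frac{1}{2}\sum_i x_i^2\right)$ and shows that with $\alpha=\beta$ tiny the answer depends only on the sample size and the sum of squares, as in equation 3:

```python
# Posterior of the precision sigma^{-2} under a Gamma(alpha, beta) prior,
# for n observations sharing one sigma^{-2} (known mean, here 0):
#   sigma^{-2} | x_1..x_n ~ Gamma(alpha + n/2, beta + sum((x_i - mu)^2)/2)
import numpy as np
from scipy import stats

def precision_posterior(x, alpha, beta, mu=0.0):
    """Hypothetical helper: conjugate Gamma update for the precision."""
    n = len(x)
    ss = np.sum((np.asarray(x) - mu) ** 2)
    return stats.gamma(a=alpha + n / 2, scale=1.0 / (beta + ss / 2))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=50)      # true sigma = 2, so sigma^{-2} = 0.25

informative = precision_posterior(x, alpha=5.0, beta=5.0)
weak = precision_posterior(x, alpha=1e-6, beta=1e-6)   # alpha = beta, tiny

# With alpha = beta ~ 0 the posterior reduces to Gamma(n/2, sum(x_i^2)/2),
# which depends only on the sample size and the sum of squares (equation 3).
print("posterior mean of sigma^{-2}, informative prior:", informative.mean())
print("posterior mean of sigma^{-2}, weak prior:       ", weak.mean())
print("1 / sample second moment:", 1.0 / np.mean(x**2))
```

Under the weak prior, the posterior mean matches $n/\sum_i x_i^2$ to numerical precision, which is the frequentist-looking quantity discussed above.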
Regarding your question 2: you can of course use the values obtained in a previous experiment as your priors. Because we established a parallel between the Bayesian and frequentist interpretations above, we can elaborate and say that it is like computing a variance from a small sample size and then collecting more data points: you would update your estimate of the variance rather than throw away the first data points.
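As a sketch of that workflow (the moment-matching choice $\alpha_0 = n_0/2$, $\beta_0 = \mathrm{SS}_0/2$ is my assumption of what "use the previous experiment as prior" means here, not something stated in the answer), updating an informed prior built from an old study with new data gives exactly the same posterior as pooling all the data under a vague prior:

```python
# Sketch: treat the posterior from an old study as the prior for a new one.
# Assumption (mine, not from the answer): starting from a vague prior, an
# old study with n0 points and sum of squares ss0 leaves Gamma(n0/2, ss0/2),
# so we take alpha0 = n0/2, beta0 = ss0/2 as the informed prior.
import numpy as np

rng = np.random.default_rng(2)
old = rng.normal(0.0, 1.5, size=30)   # earlier experiment
new = rng.normal(0.0, 1.5, size=20)   # newly collected data

n0, ss0 = len(old), np.sum(old**2)
alpha0, beta0 = n0 / 2, ss0 / 2       # informed prior from the old study

# Updating the informed prior with the new data...
alpha_seq = alpha0 + len(new) / 2
beta_seq = beta0 + np.sum(new**2) / 2

# ...is the same as pooling all the data under the vague prior:
pooled = np.concatenate([old, new])
alpha_pool, beta_pool = len(pooled) / 2, np.sum(pooled**2) / 2

print(alpha_seq == alpha_pool, np.isclose(beta_seq, beta_pool))  # True True
```

This is the conjugate-family version of "update your estimate rather than throw away the first data points."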
Regarding your question 3: I like Introduction to Mathematical Statistics by Hogg, McKean and Craig, which usually gives the details of how to derive these equations.
source
For question 1, the second equation follows from Bayes' rule as you point out, and I don't see how to avoid that.
For question 2, yes, you can do this. Just use a prior of the same form as your second equation.
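A hedged sketch of that choice (my reading, not spelled out in the answers): if the previous study had $n_0$ observations with standard deviation $s_0$ about the known mean, you can moment-match the prior to what those data would have left behind under a vague prior,

$$\alpha_0 = \frac{n_0}{2}, \qquad \beta_0 = \frac{n_0 s_0^2}{2},$$

so that updating with $n$ new points gives $\sigma^{-2} \mid x_1,\ldots,x_n \sim \Gamma\!\left(\alpha_0 + \frac{n}{2},\; \beta_0 + \frac{1}{2}\sum_{i=1}^{n} x_i^2\right)$, exactly as if both data sets had been pooled.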
For question 3, I would look for something about exponential families. Maybe someone will recommend a good resource.
source