Estimator for a binomial distribution


How do we define an estimator for data coming from a binomial distribution? For a Bernoulli, I can think of an estimator estimating a parameter $p$, but for the binomial I cannot see which parameters to estimate when we have $n$ characterizing the distribution.

Update:

By an estimator, I mean a function of the observed data. An estimator is used to estimate the parameters of the distribution that generates the data.

Rohit Banga
What is your understanding of an "estimator"? I ask because estimators do not have "parameters." I am concerned that you are not communicating your question clearly. Perhaps you could give a concrete example of a real situation you are considering.
whuber
@whuber I added more information. Let me know if you would like me to add more details, or whether my understanding is incorrect.
Rohit Banga
The edit is correct, but a concrete example would still help. In many applications of the binomial distribution, $n$ is not a parameter: it is given, and $p$ is the only parameter to be estimated. For example, the count $k$ of successes in $n$ independent, identically distributed Bernoulli trials has a Binomial$(n,p)$ distribution, and an estimator of the sole parameter $p$ is $k/n$.
whuber
I would love to see an example, even an artificial one, of estimating $n$ and $p$ (in a frequentist setting). Think about it: you observe a single count, $k$, say $k=5$. We expect $k$ approximately to equal $np$. So do we estimate $n=10$, $p=0.5$? Or maybe $n=5000$, $p=0.001$? Or almost anything else? :-) Or are you suggesting you might have a series of independent observations $k_1,k_2,\ldots,k_m$, all from a common Binomial$(n,p)$ distribution with both $p$ and $n$ unknown?
whuber
I am suggesting the latter: both $p$ and $n$ are unknown. I want an estimator for both $n$ and $p$ as a function of the $N$ observed data points.
Rohit Banga

Answers:


Every distribution has some unknown parameter(s). For example, the Bernoulli distribution has one unknown parameter, the probability of success $p$. Likewise, the binomial distribution has two unknown parameters, $n$ and $p$. Which unknown parameter you want to estimate depends on your objective; you can fix one parameter and estimate the other. For more information see this.

love-stats
What if I want to estimate both parameters?
Rohit Banga
For maximum likelihood estimation, you take the derivative of the likelihood function with respect to the parameter(s) of interest, set that derivative equal to zero, and solve the equation (sketched below). That is, the procedure is the same as when you estimated $p$; you have to do the same with $n$. Check this one: www.montana.edu/rotella/502/binom_like.pdf
love-stats
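For reference, here is a sketch of that procedure for $p$ with $n$ treated as known (the case the linked notes cover). Write the log-likelihood for i.i.d. counts $k_1,\ldots,k_m$,

$$\ell(p)=\sum_{i=1}^{m}\left[\log\binom{n}{k_i}+k_i\log p+(n-k_i)\log(1-p)\right],$$

then set its derivative to zero and solve:

$$\frac{d\ell}{dp}=\frac{\sum_i k_i}{p}-\frac{mn-\sum_i k_i}{1-p}=0 \quad\Longrightarrow\quad \hat{p}=\frac{\sum_i k_i}{mn}=\frac{\bar{k}}{n}.$$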
@love Your reference estimates only $p$, taking $N$ as fixed.
whuber
-1 @love-stats For an example of a situation where taking the derivative of the likelihood function, equating it to 0, etc. does not work, see this attempt and the correct solution
Dilip Sarwate

Say you have data $k_1,\ldots,k_m \sim$ i.i.d. Binomial$(n,p)$.

You could easily derive method-of-moments estimators by setting $\bar{k}=\hat{n}\hat{p}$ and $s_k^2=\hat{n}\hat{p}(1-\hat{p})$ and solving for $\hat{n}$ and $\hat{p}$.
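Solving those two equations gives $\hat{p}=(\bar{k}-s_k^2)/\bar{k}$ and $\hat{n}=\bar{k}/\hat{p}$. A minimal R sketch of this (the data vector is hypothetical):

    # Method-of-moments estimates for Binomial(n, p); note they are
    # undefined whenever the sample variance exceeds the sample mean.
    k <- c(8, 11, 9, 12, 10, 9, 11, 10)  # hypothetical observed counts

    kbar <- mean(k)
    s2k  <- var(k)

    p.hat <- (kbar - s2k) / kbar  # estimate of p
    n.hat <- kbar / p.hat         # estimate of n
    c(n.hat = n.hat, p.hat = p.hat)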

Or you could calculate MLEs (perhaps just numerically), e.g. using optim in R.
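Since $n$ is integer-valued, one simple numeric variant is to profile the log-likelihood over candidate values of $n$, using the fact that for fixed $n$ the MLE of $p$ is $\bar{k}/n$. A sketch (the data and the search range are hypothetical):

    # Profile log-likelihood over integer n; for fixed n the MLE of p is mean(k)/n.
    k <- c(8, 11, 9, 12, 10, 9, 11, 10)  # hypothetical observed counts

    profile.loglik <- function(n) {
      sum(dbinom(k, size = n, prob = mean(k) / n, log = TRUE))
    }

    n.grid <- max(k):(100 * max(k))  # crude search range for n
    n.hat  <- n.grid[which.max(sapply(n.grid, profile.loglik))]
    p.hat  <- mean(k) / n.hat
    c(n.hat = n.hat, p.hat = p.hat)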

Karl
It turns out the MLEs are really horrible for $p<1/2$: they are biased and hugely variable, even with large samples. I haven't studied the MM estimators, in part because they're frequently not even defined (whenever $s^2/\bar{k}>1$, which happens).
whuber
@whuber - he didn't ask for a good estimator. ;)
Karl
Why not just propose $\hat{n}=17$ and $\hat{p}=1/2$ no matter what, then? :-) But you have a point: the question doesn't even specify what is to be estimated. If we only need an estimator for $np$, then there's an obvious good one available.
whuber
@whuber - Indeed. And I wouldn't be surprised to find $\hat{n}=\max_i k_i$ for the MLE.
Karl
That's correct: especially when $p$ is close to 1, the max of the counts is the MLE. It works pretty well in such cases, as you might imagine. For smaller $p$, even with lots of data it's hard to distinguish this from a Poisson distribution, for which $n$ is effectively infinite, leading to an enormous uncertainty in the estimate of $n$.
whuber

I think we could use method-of-moments estimation to estimate the parameters of the binomial distribution from the sample mean and the sample variance.

Using the method of moments to estimate the parameters $p$ and $m$:

$$\hat{p}_n=\frac{\overline{X}-S^2}{\overline{X}}, \qquad \hat{m}_n=\frac{\overline{X}^2}{\overline{X}-S^2}.$$

Proof. The method-of-moments estimators of $m$ and $p$ are the solutions of the system of equations

$$mp=\overline{X}, \qquad mp(1-p)=S^2.$$

Simple arithmetic shows

$$S^2 = mp(1-p) = \overline{X}(1-p) = \overline{X}-\overline{X}p,$$

so

$$\overline{X}p=\overline{X}-S^2, \quad\text{therefore}\quad \hat{p}=\frac{\overline{X}-S^2}{\overline{X}}.$$

Then, substituting into $\overline{X}=mp$,

$$\overline{X}=m\left(\frac{\overline{X}-S^2}{\overline{X}}\right), \quad\text{so}\quad \hat{m}=\frac{\overline{X}^2}{\overline{X}-S^2}.$$
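As a quick sanity check (the simulation parameters are mine), one can simulate binomial data in R and apply these two formulas:

    # Simulate from Binomial(m = 20, p = 0.3) and apply the MoM formulas.
    set.seed(42)
    x <- rbinom(1000, size = 20, prob = 0.3)

    xbar <- mean(x)
    s2   <- var(x)
    p.hat <- (xbar - s2) / xbar     # should be near 0.3
    m.hat <- xbar^2 / (xbar - s2)   # should be near 20
    c(m.hat = m.hat, p.hat = p.hat)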

salma
It would be good if you could expand on this, for example, by writing the formula for the MoM estimator. Otherwise the answer is not self-contained; others (who don't already know the answer) will have to search online for "method of moments" etc. until they find the real answer.
jbowman
Is there a way to render the math here correctly?
David Refaeli