Expectation of the reciprocal of a variable

Answers:

27

Can it be $1/E(X)$?

No, in general it cannot. Jensen's inequality tells us that if $X$ is a random variable and $\varphi$ is a convex function, then $\varphi(E[X]) \le E[\varphi(X)]$. If $X$ is strictly positive, then $1/X$ is convex, so $E[1/X] \ge 1/E[X]$; and for a strictly convex function, equality holds only if $X$ has zero variance... so in the cases we tend to be interested in, the two are generally unequal.

Assuming we are dealing with a positive variable, if it is clear to you that $X$ and $1/X$ are inversely related ($\operatorname{Cov}(X, 1/X) \le 0$), this would imply $E(X \cdot 1/X) - E(X)\,E(1/X) \le 0$, which implies $E(X)\,E(1/X) \ge 1$, so $E(1/X) \ge 1/E(X)$.
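Both claims are easy to see in a quick simulation (a sketch, assuming a lognormal $X$ for illustration; any strictly positive, non-degenerate variable would do):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # strictly positive X

e_inv = (1 / x).mean()        # Monte Carlo estimate of E[1/X]
inv_e = 1 / x.mean()          # 1 / E[X]
cov = np.cov(x, 1 / x)[0, 1]  # sample Cov(X, 1/X)

print(e_inv, inv_e, cov)
# E[1/X] exceeds 1/E[X], as Jensen predicts, and Cov(X, 1/X) is negative
assert e_inv > inv_e and cov < 0
```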

I am confused about applying the expectation in the denominator.

Use the law of the unconscious statistician:

$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx$$

(in the continuous case)

so when $g(X) = \frac{1}{X}$, we get $E\left[\frac{1}{X}\right] = \int_{-\infty}^{\infty} \frac{f(x)}{x}\, dx$.

In some cases the expectation can be evaluated by inspection (for example, with gamma random variables), or by deriving the distribution of the inverse, or by other means.
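As a concrete check of the LOTUS integral (a sketch, assuming a Gamma variable with shape $k>1$ and scale 1, for which $E[1/X] = 1/(k-1)$ is known in closed form):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

k = 3.0  # Gamma shape; with scale 1, E[1/X] = 1/(k-1) for k > 1

# LOTUS: E[1/X] = integral of (1/x) * f(x) over the support (0, inf)
lotus, _ = quad(lambda x: (1 / x) * stats.gamma.pdf(x, a=k), 0, np.inf)

print(lotus, 1 / (k - 1))
assert abs(lotus - 1 / (k - 1)) < 1e-6
```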

Glen_b -Reinstate Monica
source
14

As Glen_b says, this is probably wrong, because the reciprocal is a nonlinear function. If you want an approximation of $E(1/X)$, maybe you can use a Taylor expansion around $E(X)$:

$$E\left(\frac{1}{X}\right) \approx E\left(\frac{1}{E(X)} - \frac{1}{E(X)^2}\,(X - E(X)) + \frac{1}{E(X)^3}\,(X - E(X))^2\right) = \frac{1}{E(X)} + \frac{1}{E(X)^3}\operatorname{Var}(X)$$
so you just need the mean and variance of $X$, and if the distribution of $X$ is symmetric this approximation can be very accurate.
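As a quick numerical check of the approximation (a sketch, assuming a concentrated, nearly symmetric positive variable, here a Gamma with a large shape parameter, whose exact $E(1/X)$ is known):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = 50.0                      # large shape -> concentrated, nearly symmetric
x = rng.gamma(shape, 1.0, size=1_000_000)

mc = (1 / x).mean()                               # Monte Carlo E(1/X)
taylor = 1 / x.mean() + x.var() / x.mean() ** 3   # 1/E(X) + Var(X)/E(X)^3
exact = 1 / (shape - 1)                           # closed form for Gamma(shape, 1)

print(mc, taylor, exact)
# the second-order Taylor value is much closer to the truth than 1/E(X) alone
assert abs(taylor - exact) < abs(1 / x.mean() - exact)
```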

EDIT: the "maybe" above is quite critical; see the comment from BloXX below.

Matteo Fasiolo
source
oh yes yes... I am very sorry that I could not apprehend that fact... I have one more question: is this applicable to any kind of function? Actually I am stuck with $|X|$... How can the expectation of $|X|$ be deduced in terms of $E(X)$ and $V(X)$?
Sandipan Karmakar
2
I don't think you can use it for $|X|$, as that function is not differentiable. I would rather divide the problem into cases and say $E(|X|) = E(X \mid X>0)\,p(X>0) - E(X \mid X<0)\,p(X<0)$, I guess.
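A quick numerical check of this decomposition (a sketch, assuming a standard normal $X$, for which $E|X| = \sqrt{2/\pi}$; note the conditional mean on $X<0$ enters with a minus sign):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1_000_000)

pos, neg = x[x > 0], x[x < 0]
# E(|X|) = E(X | X>0) P(X>0) - E(X | X<0) P(X<0)
decomposed = pos.mean() * (len(pos) / len(x)) - neg.mean() * (len(neg) / len(x))

print(decomposed, np.abs(x).mean(), np.sqrt(2 / np.pi))
assert abs(decomposed - np.abs(x).mean()) < 1e-6
```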
Matteo Fasiolo
1
@MatteoFasiolo Can you please explain why the symmetry of the distribution of $X$ (or lack thereof) has an effect on the accuracy of the Taylor approximation? Do you have a source that you could point me to that explains why this is?
Aaron Hendrickson
1
@AaronHendrickson my reasoning is simply that the next term in the expansion is proportional to $E\{(X - E(X))^3\}$, which is related to the skewness of the distribution of $X$. Skewness is an asymmetry measure. However, zero skewness does not guarantee symmetry, and I am not sure whether symmetry guarantees zero skewness. Hence, this is all heuristic and there might be plenty of counterexamples.
Matteo Fasiolo
4
I don't understand how this solution gets so many upvotes. For a single random variable $X$ there is no justification about the quality of this approximation. The third derivative of $f(x) = 1/x$ is not bounded. Moreover, the remainder of the approximation is $\frac{1}{6} f'''(\xi)(X-\mu)^3$, where $\xi$ is itself a random variable between $X$ and $\mu$. The remainder won't vanish in general and may be very large. A Taylor approximation may only be useful if one has a sequence of random variables $X_n - \mu = O_p(a_n)$ with $a_n \to 0$. Even then, uniform integrability is additionally needed if one is interested in the expectation.
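The point can be made concrete (a sketch, assuming a Gamma variable with a small shape parameter and rate 1, where the exact $E(1/X) = 1/(\text{shape}-1)$ is known and the distribution is strongly skewed):

```python
shape = 1.5                  # Gamma(shape, rate 1): E(1/X) = 1/(shape-1) for shape > 1
mean, var = shape, shape     # mean and variance of Gamma(shape, rate 1)

taylor = 1 / mean + var / mean ** 3   # second-order Taylor approximation
exact = 1 / (shape - 1)               # exact value: 2.0

print(taylor, exact)
# the approximation (about 1.11) misses the exact value (2.0) badly
assert abs(taylor - exact) > 0.5
```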
BloXX
8

Others have already explained that the answer to the question is NO, except in trivial cases. Below we give an approach to finding $E\frac{1}{X}$ when $X > 0$ with probability one and the moment generating function $M_X(t) = E e^{tX}$ exists. An application of this method (and a generalization) is given in "Expected value of $1/x$ when $x$ follows a Beta distribution"; here we also give a simpler example.

First, note that $\int_0^\infty e^{-tx}\, dt = \frac{1}{x}$ (a simple calculus exercise). Then write
$$E\left(\frac{1}{X}\right) = \int_0^\infty x^{-1} f(x)\, dx = \int_0^\infty \left(\int_0^\infty e^{-tx}\, dt\right) f(x)\, dx = \int_0^\infty \left(\int_0^\infty e^{-tx} f(x)\, dx\right) dt = \int_0^\infty M_X(-t)\, dt$$
A simple application: let $X$ have the exponential distribution with rate 1, that is, with density $e^{-x},\ x > 0$, and moment generating function $M_X(t) = \frac{1}{1-t},\ t < 1$. Then $\int_0^\infty M_X(-t)\, dt = \int_0^\infty \frac{1}{1+t}\, dt = \ln(1+t)\big|_0^\infty = \infty$, so the integral definitely does not converge: $E(1/X) = \infty$, which is very different from $\frac{1}{EX} = \frac{1}{1} = 1$.
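Both cases can be checked numerically (a sketch assuming SciPy; the convergent contrast uses a Gamma(2, rate 1) variable, for which $M_X(-t) = (1+t)^{-2}$ and $E(1/X) = 1/(2-1) = 1$):

```python
import numpy as np
from scipy.integrate import quad

# Convergent case: X ~ Gamma(2, rate 1), M_X(-t) = (1 + t)**-2, E(1/X) = 1
val, _ = quad(lambda t: (1 + t) ** -2, 0, np.inf)
assert abs(val - 1.0) < 1e-8

# Divergent case: X ~ Exponential(1), M_X(-t) = 1/(1 + t); the partial
# integrals grow like log(1 + T), so E(1/X) = infinity
for T in (1e2, 1e4, 1e6):
    partial, _ = quad(lambda t: 1 / (1 + t), 0, T)
    print(T, partial)  # grows without bound as T increases
    assert abs(partial - np.log(1 + T)) < 1e-3
```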
kjetil b halvorsen
source
7

An alternative approach to calculating $E(1/X)$, knowing that $X$ is a positive random variable, is through its moment generating function $E[e^{-\lambda X}]$. Since by elementary calculus
$$\int_0^\infty e^{-\lambda x}\, d\lambda = \frac{1}{x},$$
we have, by Fubini's theorem,
$$\int_0^\infty E[e^{-\lambda X}]\, d\lambda = E\left[\frac{1}{X}\right].$$
user172761
source
2
The idea here is right, but the details are wrong. Please check.
kjetil b halvorsen
1
@Kjetil I don't see what the problem is: apart from the inconsequential differences of using $-tX$ instead of $tX$ in the definition of the MGF and naming the variable $t$ instead of $\lambda$, the answer you just posted is identical to this one.
whuber
1
You are right, the problem was smaller than I thought. Still, this answer would be better with some more details. I will upvote it tomorrow (when I have new votes).
kjetil b halvorsen
1

To first give an intuition, what about using the discrete case in a finite sample to illustrate that $E(1/X) \neq 1/E(X)$ (putting aside cases such as $E(X) = 0$)?

In a finite sample, using the term average for the expectation is not that abusive; thus, if one has on the one hand

$$E(X) = \frac{1}{N}\sum_{i=1}^N X_i$$

and one has on the other hand

$$E(1/X) = \frac{1}{N}\sum_{i=1}^N \frac{1}{X_i}$$

it becomes obvious that, with $N > 1$,

$$E(1/X) = \frac{1}{N}\sum_{i=1}^N \frac{1}{X_i} \neq \frac{N}{\sum_{i=1}^N X_i} = 1/E(X)$$

which leads one to say that, basically, $E(1/X) \neq 1/E(X)$, since the inverse of the (discrete) sum is not the (discrete) sum of the inverses.

Analogously, in the asymptotic non-zero-centered continuous case, one has

$$E(1/X) = \int \frac{f(x)}{x}\, dx \neq \frac{1}{\int x f(x)\, dx} = 1/E(X).$$
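The finite-sample statement is easy to verify directly (a sketch with an arbitrary positive sample):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0])      # any positive sample with N > 1

mean_of_inverses = (1 / x).mean()  # E(1/X) in the finite-sample sense
inverse_of_mean = 1 / x.mean()     # 1/E(X)

print(mean_of_inverses, inverse_of_mean)
# 0.5833... vs 0.4285...: the inverse of the sum is not the sum of inverses
assert mean_of_inverses > inverse_of_mean
```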

keepAlive
source