Why is this theorem useful in statistics?
Assume for the MLE that the regularity conditions R0-R4 are satisfied, and that the Fisher information satisfies \(0 < I(\Theta) < \infty\). Then any consistent sequence of solutions of the MLE equations satisfies, in distribution: \[\sqrt{n}(\hat{\Theta} -\Theta)\rightarrow N\left(0,\frac{ 1 }{ I(\Theta) }\right)\]
What are R0-R4? I assume that \(\Theta\) is a parameter and \(\hat{\Theta}\) is a consistent MLE taken from a sample \(X_1, X_2,..., X_n\).
I mean R0-R5. These are the regularity conditions for the MLE, the Fisher information, and the Cramér-Rao bound.
regularity conditions
Could you list them here?
R0. If \(\theta \neq \theta'\), then \(f(x;\theta)\) and \(f(x;\theta')\) are different distributions (the parameter is identifiable).
R1. The support of \(f(x;\theta)\), i.e. \(\mathrm{supp}(f(\cdot\,;\theta)) := \{x : f(x;\theta) > 0\}\), is the same for all \(\theta\).
R2. The true parameter \(\theta^*\) is an interior point of \(\Omega\).
R3. \(f(x;\theta)\) is twice differentiable in \(\theta\) for each \(x\).
R4. \(\int f(x;\theta)\,dx\) (in the continuous case) can be differentiated twice under the integral sign with respect to \(\theta\).
R5. \(f(x;\theta)\) is three times differentiable in \(\theta\) for each \(x\); moreover, there exist a constant \(c > 0\) and a function \(M(x) \ge 0\) with \(E_{\theta^*}[M(X)] < \infty\) such that \(\left|\frac{\partial^3}{\partial\theta^3}\log f(x;\theta)\right| \le M(x)\) for all \(\theta^* - c < \theta < \theta^* + c\) and all \(x\) in the support.
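With those conditions in place, the theorem's limiting variance can be checked by simulation. Here is a minimal sketch using an example I'm choosing myself (not from the thread): for \(X \sim \mathrm{Exp}(\lambda)\), the MLE is \(\hat{\lambda} = 1/\bar{X}\) and \(I(\lambda) = 1/\lambda^2\), so the theorem predicts \(\sqrt{n}(\hat{\lambda} - \lambda) \rightarrow N(0, \lambda^2)\).

```python
import numpy as np

# Assumed example (Exp(lam) is my own choice to illustrate the theorem):
# MLE is lam_hat = 1/mean(X), Fisher information is I(lam) = 1/lam^2,
# so the limiting variance of sqrt(n)*(lam_hat - lam) should be lam^2.
rng = np.random.default_rng(2)
lam, n, reps = 2.0, 2000, 4000
errs = np.empty(reps)
for i in range(reps):
    x = rng.exponential(1 / lam, size=n)       # sample of size n
    errs[i] = np.sqrt(n) * (1 / x.mean() - lam)  # scaled estimation error
print(errs.var())  # should be close to lam**2 = 4.0 = 1/I(lam)
```

The printed variance lands near 4 for large \(n\), matching \(1/I(\lambda)\).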
IIRC, the Fisher information is the variance of the score. The theorem you stated looks like a form of the Central Limit Theorem; however, the CLT deals with the sample itself, while this theorem applies to an estimator computed from the sample.
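The "variance of the score" fact is easy to verify numerically. A minimal sketch (my own illustration, not from the thread): for \(X \sim N(\theta, 1)\) the score is \(\frac{\partial}{\partial\theta}\log f(x;\theta) = x - \theta\) and \(I(\theta) = 1\).

```python
import numpy as np

# Check that Var(score) matches the Fisher information for N(theta, 1),
# where the score at the true theta is simply x - theta and I(theta) = 1.
rng = np.random.default_rng(0)
theta = 2.0
x = rng.normal(theta, 1.0, size=200_000)
score = x - theta          # score evaluated at the true parameter
print(score.mean())        # ~ 0: the score has mean zero
print(score.var())         # ~ 1.0 = I(theta)
```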
How might it be important? For what?
For large enough \(n\), you can use the asymptotic distribution to build confidence intervals and hypothesis tests for the parameter.
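Here is a sketch of such a Wald-type confidence interval, again with an example I'm assuming rather than one from the thread: for \(X \sim \mathrm{Exp}(\lambda)\), \(\hat{\lambda} = 1/\bar{X}\) and \(I(\lambda) = 1/\lambda^2\), so an approximate 95% interval is \(\hat{\lambda} \pm 1.96\,\hat{\lambda}/\sqrt{n}\). The simulation checks its empirical coverage.

```python
import numpy as np

# Wald confidence interval from the MLE's asymptotic normality
# (Exp(lam) example is my own choice). Since the asymptotic variance
# of lam_hat is lam^2/n, we plug in lam_hat for lam to form the interval.
rng = np.random.default_rng(1)
lam, n, z = 3.0, 500, 1.96
covered = 0
for _ in range(2000):
    x = rng.exponential(1 / lam, size=n)
    lam_hat = 1 / x.mean()
    half = z * lam_hat / np.sqrt(n)            # half-width of the CI
    covered += (lam_hat - half <= lam <= lam_hat + half)
print(covered / 2000)  # empirical coverage, should be near 0.95
```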
Also, the exact sampling distribution of the estimator may be so complicated that deriving confidence intervals from it directly is hard. The theorem you stated lets you replace that distribution with its asymptotic normal approximation.