OpenStudy (anonymous):

Consider the probability density function \(f(x)=\frac{1}{2}(1+\phi x)\), \(-1\le x\le 1\). a) What is the moment estimator for \(\phi\)? b) Find the maximum likelihood estimator for \(\phi\).

OpenStudy (anonymous):

@radar

OpenStudy (anonymous):

@douglaswinslowcooper

OpenStudy (perl):

I'm not sure what the definition of a moment estimator is.

OpenStudy (anonymous):

Well, I'm struggling with the maximum likelihood part. I've tried many times and it keeps coming out to zero, and I'm sure it is not zero.

OpenStudy (perl):

What is the definition of maximum likelihood? What class is this? (I know basic probability.)

OpenStudy (anonymous):

It is applied statistics and probability. In statistics, maximum-likelihood estimation (MLE) is a method of estimating the parameters of a statistical model: when applied to a data set and given a statistical model, it provides estimates for the model's parameters.

OpenStudy (perl):

ok

OpenStudy (anonymous):

\[\prod_{i=1}^{n}\frac{1}{2}(1+\phi x_i) =?\]

OpenStudy (anonymous):

\[=\left(\frac{1}{2}\right)^{n} (1+\phi x _{1})(1+\phi x _{2})(1+\phi x _{3})\cdots(1+\phi x _{n})=\left(\frac{1}{2}\right)^{n}\prod_{i=1}^{n}(1+\phi x_i)=? \]

OpenStudy (perl):

i think that should be sigma (sum)

OpenStudy (kirbykirby):

I am assuming you are using a random sample of size \(n\)? a) This is using the Method of Moments: What you do is equate the 1st, 2nd, ..., \(k^{\text{th}}\) "theoretical" moments, \(E (X^k)\), with the 1st, 2nd, ..., \(k^{\text{th}}\) sample moments, \(m_k=1/n \sum_{i=1}^n X_i^k\) and you solve for the parameters of interest. Here, \(k\) is the number of parameters in your distribution. So, you end up solving \(k\) simultaneous equations and solve for your parameters. In your case, there is only 1 parameter, \(\phi\) in your distribution, so it is much easier to deal with. You find the 1st moment, \(E(X^1)=E(X)\), and equate it to the 1st sample moment, \(m_1=1/n \sum_{i=1}^n X_i = \bar{X}\), and then solve for \(\phi\). Theoretical Moment: \[\large \begin{align} E(X) &=\int_{\forall x} x\cdot f(x)\, dx \\ &= \int_{-1}^1 x\cdot \frac{1}{2}(1+\phi x)\, dx \\ &= \frac{1}{2} \int_{-1}^1 (x+\phi x^2)\, dx \\ &= \frac{1}{2} \left. \left( \frac{x^2}{2}+\phi\frac{x^3}{3}\right) \right|_{-1}^1 \\ &= \frac{\phi}{3} \end{align} \] Now equate \(E(X) = m_1\), and we put a hat on \(\phi\) to say we estimated it: \[\large \frac{\hat{\phi}}{3}=\bar{X}\implies \hat{\phi}=3\bar{X}\] --------------------------------------- b) Your likelihood function is almost correct. Once you find that, you find the log-likelihood, and then set the 1st partial derivative of the log-likelihood to 0 to find the max. 
(Make sure you find that the observed information, the negative 2nd partial derivative, is greater than 0 to make sure it's a max): \[\large \begin{align} L(\phi)&=\prod_{i=1}^n \frac{1}{2}(1+\phi x_i)\\ &=\left( \frac{1}{2}\right)^n\prod_{i=1}^n(1+\phi x_i)\\ l(\phi)=\log L(\phi) &= \log \left[\left( \frac{1}{2}\right)^n\prod_{i=1}^n(1+\phi x_i)\right]\\ &=n\log\left(\frac{1}{2}\right)+\sum_{i=1}^n \log(1+\phi x_i) \\ S(\phi)=\frac{\partial}{\partial \phi}l(\phi)&= \sum_{i=1}^n \frac{1}{1+\phi x_i}\cdot x_i\\ \end{align} \] Set \(S(\phi)=0\): \[\large \begin{align} \sum_{i=1}^n \frac{x_i}{1+\hat{\phi} x_i}&=0 \\ \sum_{i=1}^n (1+\hat{\phi} x_i)x_i &= 0 \\ \sum_{i=1}^n x_i + \sum_{i=1}^n \hat{\phi}x_i^2&=0\\ \implies \hat{\phi} &= \frac{-\sum_{i=1}^n X_i}{\sum_{i=1}^n X_i^2} \end{align} \] Verify that this is a max; that is, verify that \[\large I(\hat{\phi}) = -\left.\frac{\partial^2}{\partial \phi^2}l(\phi)\right|_{\phi=\hat{\phi}}>0 \] You should get that \[\large I(\phi)=\sum_{i=1}^n\frac{x_i^2}{(1+\phi x_i)^2}\] So clearly, evaluated at the MLE, \(\large I(\hat{\phi})\) will be \(> 0\) since both the numerator and denominator are squared.
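As a quick numerical illustration of part a) (not part of the original thread), the moment estimator \(\hat\phi = 3\bar{X}\) is a one-liner in Python; the sample data below are made up purely for the example:

```python
# Method-of-moments sketch for f(x) = (1/2)(1 + phi*x), -1 <= x <= 1.
# Since E(X) = phi/3, equating the theoretical and sample first moments
# gives phi_hat = 3 * x_bar. The data below are hypothetical.

def moment_estimate(xs):
    """Return phi_hat = 3 * (sample mean)."""
    return 3 * sum(xs) / len(xs)

xs = [0.9, -0.2, 0.5, 0.7, -0.4, 0.1, 0.8, -0.6, 0.3, 0.6]
print(moment_estimate(xs))  # sample mean is 0.27, so phi_hat is about 0.81
```

Note that nothing forces \(3\bar{X}\) to land in \([-1,1]\), where \(\phi\) must live for \(f\) to be a valid density; with extreme samples the moment estimate can fall outside the parameter space.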

OpenStudy (kirbykirby):

Hmm, actually I wrote this rather quickly... what I wrote for solving \(S(\phi)=0\) is not correct :S. I can't just multiply through the numerators like that, since the \(x\)'s are indexed and each term has a different denominator. I'm not sure if this is one of those results that requires something like Newton's method to solve for the parameter.
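That concern is easy to confirm numerically. With a small hypothetical sample, plugging the candidate \(\hat\phi=-\sum X_i/\sum X_i^2\) from the flawed algebra back into the score \(S(\phi)\) does not give zero:

```python
# Sanity check (hypothetical data): the candidate from the flawed algebra,
# phi_hat = -sum(x_i) / sum(x_i^2), does not zero the score S(phi).

xs = [0.2, -0.5, 0.1]
phi_cand = -sum(xs) / sum(x * x for x in xs)       # = 0.2/0.3, about 0.667
score = sum(x / (1 + phi_cand * x) for x in xs)    # S(phi) at the candidate
print(score)  # clearly nonzero, so phi_cand is not a stationary point
```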

OpenStudy (kirbykirby):

If you are actually dealing with only 1 observation rather than a sample of size \(n\), this is much easier to do...

OpenStudy (kirbykirby):

If you use Newton's Method, then using \(\phi^{(r)}\) as notation for the \(r\)-th iterate, you need the formula: \[\large \phi^{(r+1)}=\phi^{(r)}+\left[ I\left(\phi^{(r)}\right)\right]^{-1}S\left(\phi^{(r)}\right) \] 1) Set an initial value \(\large \phi^{(0)}\) on the RHS of the equation which is close to the MLE (usually you graph the function to estimate this). 2) Iterate until \(\large \left|\phi^{(r+1)} - \phi^{(r)}\right|<\varepsilon \) for some specified \(\varepsilon\) (which is just a small number, basically a tolerance level). 3) Then the MLE is \(\large \hat{\phi}\approx \phi^{(r+1)}\).
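Those three steps can be sketched in a few lines of Python, using the score \(S(\phi)\) and observed information \(I(\phi)\) derived earlier in the thread. The sample data and the starting value \(\phi^{(0)}=0\) here are made-up assumptions for illustration:

```python
# Newton's-method sketch for the MLE of phi in f(x) = (1/2)(1 + phi*x).
# score(phi) = sum x_i/(1 + phi*x_i); info(phi) = sum x_i^2/(1 + phi*x_i)^2.
# The sample xs and the starting value phi0 are hypothetical.

def score(phi, xs):
    return sum(x / (1 + phi * x) for x in xs)

def info(phi, xs):
    return sum(x * x / (1 + phi * x) ** 2 for x in xs)

def newton_mle(xs, phi0=0.0, eps=1e-10, max_iter=100):
    phi = phi0
    for _ in range(max_iter):
        step = score(phi, xs) / info(phi, xs)   # [I(phi)]^{-1} * S(phi)
        phi += step
        if abs(step) < eps:                     # tolerance check, step 2
            break
    return phi

xs = [0.9, -0.2, 0.5, 0.7, -0.4, 0.1, 0.8, -0.6, 0.3, 0.6]
phi_hat = newton_mle(xs)
print(phi_hat)  # at convergence, score(phi_hat, xs) is essentially 0
```

Since \(I(\phi)>0\) wherever the likelihood is defined, the log-likelihood is strictly concave and the iteration converges quickly from any reasonable starting point; in practice you would still want to check that \(1+\hat\phi x_i > 0\) for every observation.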
