OpenStudy (anonymous):

OLS

OpenStudy (anonymous):

\[OLS = \frac{ \sum_{i=1}^{n} Y_{i} X_{i} }{ \sum_{i=1}^{n} X_{i}^{2}}\]

OpenStudy (anonymous):

\[ROM = \frac{ \sum_{i=1}^{n} Y_{i}} { \sum_{i=1}^{n} X_{i}}\]

OpenStudy (anonymous):

\[MOR = \frac{ 1 }{ n } \sum_{i=1}^{n} \frac{ Y_{i} }{ X_{i} }\]
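
For concreteness, here is a minimal Python sketch of the three estimators exactly as defined above, evaluated on made-up illustrative data (the arrays below are not from the thread):

```python
import numpy as np

# Made-up illustrative data
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(X)

# OLS (least squares through the origin): sum(Y_i * X_i) / sum(X_i^2)
OLS = np.sum(Y * X) / np.sum(X ** 2)

# ROM (ratio of means): sum(Y_i) / sum(X_i)
ROM = np.sum(Y) / np.sum(X)

# MOR (mean of ratios): (1/n) * sum(Y_i / X_i)
MOR = np.sum(Y / X) / n

print(OLS, ROM, MOR)  # three (generally different) slope estimates
```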

OpenStudy (anonymous):

Since the regression is of a variable on a constant, I guess this is equivalent to \(y_{i}= 4 + \epsilon_{i}\), with 4 just an example of some constant. We could then assume the expectation of the residuals is zero, so we are left with \(y_{i}=4\). But where next?

OpenStudy (anonymous):

@phi @TuringTest ?

OpenStudy (anonymous):

@e.mccormick @Ashleyisakitty

OpenStudy (anonymous):

@amistre64

OpenStudy (anonymous):

@myininaya can you help?

OpenStudy (anonymous):

@karatechopper ?

OpenStudy (anonymous):

@hartnn @Abhisar ?

OpenStudy (anonymous):

Can you just eliminate the \(X_{i}\) in the equations altogether? Or does the constant become the \(X_{i}\), perhaps?

OpenStudy (anonymous):

If you remove the \(X_{i}\), though, you won't have the mean; certainly in the ROM case you'd just be left with the sum over a constant. Do I need to take expectations?

hartnn (hartnn):

For a constant variable, \(X_1 = X_2 = X_3 = X_4 = \dots\), so \(X_i\) will be a constant.

OpenStudy (anonymous):

Okay, so you could just call it any number? Say, 4?

OpenStudy (anonymous):

Or perhaps my interpretation of that is wrong?

hartnn (hartnn):

call it 'a' where a is a constant

OpenStudy (anonymous):

Okay great

OpenStudy (anonymous):

So for this question, would I then have to find the mean using the formulas for OLS, ROM and MOR above, using this constant \(a\) for all the \(X_1, X_2, X_3, \dots\) values?

hartnn (hartnn):

\(\sum X_i \) is the sum of the 'X' values; for example, for X: 3,3,3,3,3,3,3 (seven values), \(\sum X_i = 21\), i.e. the constant times n

OpenStudy (anonymous):

Okay, sure. So with \(a\), \(\sum X_{i} = an\)?

hartnn (hartnn):

I think I read it wrong... "it always takes a constant value" might mean something like this: X: 1 2 3 4 5 6 7, Y: 1 1 1 1 1 1 1, so in fact \(\sum Y_i = 7\). Take a = 1 for simplicity.

OpenStudy (anonymous):

Okay that was my initial thinking. But then I ran into trouble when I tried to plug such a setup into the three formulas

OpenStudy (anonymous):

I will give it another try now. Maybe breaking down the different summations and seeing what results I get will help

OpenStudy (anonymous):

I'm thinking \(\sum Y_{i} X_{i} = nx\). Does that look right, @phi @hartnn?

OpenStudy (anonymous):

Very confused. If that were right, OLS would go to 1/x, which can't be right...

OpenStudy (anonymous):

Any ideas @phi?

OpenStudy (anonymous):

Someone wrote this online re: the question: "Here is a hint, If you are fitting a regression with only a constant then you are fitting a horizontal line to the data. Think about the criteria that the regression models are using to find the "best" fit and then work out what value (height of the horizontal line) will fit that criteria."

OpenStudy (phi):

I was thinking, for Ordinary Least squares, we say \[ y_i= \alpha+ \beta x_i \\ \alpha = \bar{y} - \beta \bar{x} \\ y_i= \bar{y} - \beta \bar{x} + \beta x_i \]

OpenStudy (phi):

if the x's are constant then for all \(x_i, x_i= \bar{x} \) and so \[ y_i= \bar{y} - \beta \bar{x} + \beta x_i \\ = \bar{y} - \beta \bar{x} +\beta \bar{x} \\= \bar{y} \]
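
A quick numerical sanity check of phi's algebra (a sketch; the constant a and the Y values below are made up): when every \(x_i\) equals the same constant, the fitted value \(\alpha + \beta x_i\) collapses to \(\bar{y}\) no matter what \(\beta\) is.

```python
import numpy as np

a = 4.0                              # the constant value every X_i takes
Y = np.array([1.5, 2.0, 3.5, 2.5])   # arbitrary made-up Y values
X = np.full_like(Y, a)               # X_i = a for all i

for beta in [0.0, 1.0, -7.3]:        # any slope at all
    alpha = Y.mean() - beta * X.mean()
    fitted = alpha + beta * X
    print(beta, fitted)              # every fitted value equals Y.mean() = 2.375
```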

OpenStudy (anonymous):

Thank you @phi, that's got to be a big leap forward. I hadn't thought of breaking away from the formulas above entirely. That said, given the regression is just on a constant, wouldn't the initial setup have to be \(y_{i}=\alpha\)?

OpenStudy (phi):

regression "on a constant" (I think) means the input (independent variable) is the constant.

OpenStudy (phi):

y = fcn(x)

OpenStudy (phi):

**** "Here is a hint, If you are fitting a regression with only a constant then you are fitting a horizontal line to the data..." *** that is saying that the "slope" of the line is zero in the regression formula \[ y_i = \alpha + \beta x_i\] i.e. \( \beta=0\)

OpenStudy (phi):

at least for OLS \[ \alpha= \bar{y} - \beta \bar{x} \] and with \( \beta=0 \) we again get \[ y_i = \alpha = \bar{y} \]

OpenStudy (anonymous):

So surely \(y_{i}= \alpha\), and following your logic \(\alpha = \bar{y}\), which proves the regression is always equal to the mean of the variable?

OpenStudy (anonymous):

Yep, great!! Thank you!! Any ideas on an approach for ROM or MOR, though?

OpenStudy (phi):

I would hope it is something similar, but I don't know about ROM or MOR

OpenStudy (phi):

I would write down the expression for \( y_i\) using the "ROM" model. Any idea what it is?

OpenStudy (anonymous):

Unfortunately not. All I can find online is the same formula given above, which I was given in class. The intuition of ROM is that it finds the average value of X and the average value of Y and draws a line from the origin through that point to make the best fit. So you're finding the point \((\bar{X}, \bar{Y})\) and drawing a line to it from the origin.
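
That intuition can be checked directly from the formula: writing \(\bar{x}\) and \(\bar{y}\) for the sample means,

\[ ROM = \frac{ \sum_{i=1}^{n} Y_{i}} { \sum_{i=1}^{n} X_{i}} = \frac{n \bar{y}}{n \bar{x}} = \frac{\bar{y}}{\bar{x}} \]

so the line \(y = ROM \cdot x\) through the origin does pass through the point \((\bar{x}, \bar{y})\).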

OpenStudy (anonymous):

I guess to find a solution for the 3 estimators one needs to use those formulas above; I just can't see how to apply them.

OpenStudy (phi):

if it's of the form y = b + mx, then we need the definitions of b and m (though m = 0 if we assume the x's are constant)

OpenStudy (anonymous):

As the best alternative to a formal solution, I could perhaps describe the fact that in each of these cases they're linear estimators with no slope, since we're working with only y = b?

OpenStudy (phi):

yes. and then show that b is \( \bar{y} \)

OpenStudy (anonymous):

Would you be able to provide the intuition behind this step? I've always had a bit of trouble with it

OpenStudy (phi):

I don't see how to explicitly use your formulas for beta, because the equations are indeterminate. Not surprising, because we are finding the slope \( \frac{\Delta y}{\Delta x} \) and we are dividing by 0.

OpenStudy (anonymous):

Agreed. I'll probably go with that descriptive description of the scenario

OpenStudy (anonymous):

"descriptive description" whoops

OpenStudy (phi):

Regarding "Would you be able to provide the intuition behind this step?": I will have to think about it, and post something later. But your question is a bit broad. The derivation of least squares can be done using linear algebra and projection matrices, or using calculus to minimize the sum-of-squares error. Either way, you end up with those formulas.
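
For the constant-only (horizontal line) case specifically, the calculus route is short: to fit \(y_i = c\), minimize the sum of squared errors

\[ S(c) = \sum_{i=1}^{n} (y_i - c)^2, \qquad S'(c) = -2 \sum_{i=1}^{n} (y_i - c) = 0 \implies nc = \sum_{i=1}^{n} y_i \implies c = \bar{y} \]

which is exactly the "height of the horizontal line" the earlier hint was pointing at.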

OpenStudy (anonymous):

Okay thank you so much for your help!

OpenStudy (phi):

For what it is worth, here is Khan using linear algebra https://www.khanacademy.org/math/linear-algebra/alternate_bases/orthogonal_projections/v/linear-algebra-least-squares-examples

OpenStudy (anonymous):

@satellite73

OpenStudy (phi):

How about this: for \[ r = \frac{ \sum_{i=1}^{n} Y_{i}} { \sum_{i=1}^{n} X_{i}} \] the model is \[ y_i = r x_i \] If x is constant = a, then \[ \sum_{i=1}^{n} X_{i} = \sum_{i=1}^{n} a = na \] and \[ r = \frac{ \sum_{i=1}^{n} Y_{i}}{a n} = \frac{\bar{y}}{a} \] Using that r in the model we get \[ y_i = \frac{\bar{y}}{a} x_i \] but all x's = a, so \[ y_i = \frac{\bar{y}}{a} a = \bar{y} \]

OpenStudy (phi):

now we need the MOR model
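
Following the same pattern (a sketch, assuming the MOR model is likewise \(y_i = m x_i\) with \(m\) the MOR estimate): with all \(x_i = a\),

\[ m = \frac{1}{n} \sum_{i=1}^{n} \frac{Y_i}{X_i} = \frac{1}{n} \sum_{i=1}^{n} \frac{Y_i}{a} = \frac{\bar{y}}{a} \]

so \(y_i = m\,a = \bar{y}\), the same conclusion as for OLS and ROM.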

OpenStudy (anonymous):

@phi that is fantastic. Thank you so much!! The step of using the n in the denominator to produce the \(\bar{y}\) I really wouldn't have seen!
