Find an orthonormal basis for the solutions to the linear equation 2x_1 - x_2 + x_3 = 0.
Can you find a basis (doesn't need to be orthonormal yet)?
I guess I'm confused about the basis... because this looks like just one single vector.
Or is the basis for this two vectors?
I understand the concept... it is supposed to be perpendicular to the equation.
It looks like a vector, but it really isn't. It's a system of one equation in three unknowns. So it's underdetermined, which means two of your variables are free parameters. But yes, your basis should be two vectors. And not quite: every solution vector is perpendicular to the coefficient vector (2, -1, 1), but what matters here is that the vectors forming your orthonormal basis be perpendicular to each other.
Can you take a guess as to what a basis might be?
Well, if I understand the concept of a basis, it is a set of vectors such that everything in the space can be written as a linear combination of that set... but this is confusing me, because here we have just one equation,
and not a set of vectors like before.
Right. You're probably just overthinking it. Before, when you were finding a basis, you would row reduce to the pivot columns, then figure out whether you had any free variables, and then write your basis in terms of those free variables. Here, you don't have to row reduce. You have two free variables, so just solve for the remaining variable and write it in terms of the other two (I solved for x_2).
Ohh okay, so you would just put x_2 = 2x_1 + x_3 and then have (1, 2, 0) and (0, 1, 1)?
Yep. Very good. So, you now have a basis. All that's left is to make it an orthogonal basis, and then finally an orthonormal basis. To make it orthogonal, use the Gram-Schmidt process. Do you know what that is?
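By the way, if you ever want to double-check a basis like that numerically, here's a minimal sketch in Python with NumPy (my own illustration, not anything from your book). It just verifies that both vectors satisfy the equation, i.e., that each is perpendicular to the coefficient vector:

    import numpy as np

    # Candidate basis vectors for the solution set of 2*x1 - x2 + x3 = 0
    v1 = np.array([1.0, 2.0, 0.0])
    v2 = np.array([0.0, 1.0, 1.0])

    # Coefficients of the equation, read off as a vector
    normal = np.array([2.0, -1.0, 1.0])

    # Each basis vector should satisfy the equation (dot product is 0)
    print(normal @ v1, normal @ v2)  # -> 0.0 0.0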
I don't, and the weird thing is, we are in section 10.1 now, and that's what these questions are on. I looked ahead, and that process is done in section 10.2, so they are doing something different, yet it doesn't explain what exactly.
I don't have your book, so I'm not sure I can be much help. Does the section specify what it does instead?
There are no examples, just proofs. I have the answer, but that doesn't help me much... if I need to use that process instead of whatever they want, then that is what I will do. Let me look ahead and check quickly.
Is it just 1/length times the vector?
Uh, almost; that's the normalizing step. First, you set one of your basis vectors as your starting vector, call it u_1, to give the process an orientation. Then, to find the second one, you take your next vector v_2 and subtract the projection of v_2 onto u_1. (There's usually a formula for what the projection is.) If you were to continue, you'd keep subtracting the projections onto the orthogonal vectors you've already found. But since you're only doing two, it's not really the full process.
So I am supposed to use that formula v_2 = w_2 - <w_2, v_1> v_1?
Yes, almost. You need to divide the projection by the norm of v_1 squared, unless v_1 is normalized (which it isn't here). So: v_2 = w_2 - [<w_2, v_1> / ||v_1||^2] * v_1.
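If it helps to see the arithmetic, here's a minimal sketch of that single step in Python with NumPy (my own illustration, not from your book):

    import numpy as np

    w2 = np.array([1.0, 2.0, 0.0])  # the second basis vector
    v1 = np.array([0.0, 1.0, 1.0])  # the first vector, kept as-is

    # v2 = w2 - [<w2, v1> / ||v1||^2] * v1
    v2 = w2 - (w2 @ v1) / (v1 @ v1) * v1
    print(v2)  # -> [ 1.  1. -1.]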
So we apply these formulas to the two vectors that we found to be the basis for that single equation... and the result of using these formulas on those two vectors will be two new vectors that are orthogonal and have unit length one?
It won't give unit length one yet. These two new vectors will be orthogonal to each other. You can check by taking their inner product, which should give you 0. Then, to get orthonormal vectors, you just need to divide each vector by its norm (so u_1/||u_1||).
The formulas in the book list w_0, w_1, w_2, v_1, v_2... now, is w_0 the initial equation, are w_1 and v_1 the vectors from the basis, and are w_2 and v_2 the two new orthogonal vectors?
Like in your equation, what are w_2 and v_1... are those the two basis vectors?
Not sure; I'd have to see the book to be able to tell. If your book gave the equation you quoted before, it just means they normalize the basis before transforming it into an orthogonal basis. You can divide the original basis vectors by their norms, then apply the Gram-Schmidt process without the extra factor I had, and achieve the same thing.

So, you have v_1 = (0, 1, 1) and v_2 = (1, 2, 0). These are the basis vectors you found. Now, you can normalize them first, which gives you some ugly square roots (which is why I wait until the end), and then apply the equation you gave before. Or, you can find your orthogonal basis by setting u_1 = v_1 and then computing u_2 = v_2 - (<v_2, u_1> / ||u_1||^2) * u_1. Notice that if you normalize v_1, then ||u_1|| = 1, so that factor drops out of the formula.

Two methods, and both achieve the same thing. If the book specifies one way, I would go with that way for now.
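And if you want to convince yourself that the two methods agree, here's a minimal sketch in Python with NumPy (my own illustration):

    import numpy as np

    v1 = np.array([0.0, 1.0, 1.0])
    v2 = np.array([1.0, 2.0, 0.0])

    # Method 1: orthogonalize first, normalize at the very end
    u1 = v1
    u2 = v2 - (v2 @ u1) / (u1 @ u1) * u1
    e1a = u1 / np.linalg.norm(u1)
    e2a = u2 / np.linalg.norm(u2)

    # Method 2: normalize v1 first, so the ||u1||^2 factor drops out
    q1 = v1 / np.linalg.norm(v1)
    q2 = v2 - (v2 @ q1) * q1
    e1b = q1
    e2b = q2 / np.linalg.norm(q2)

    print(np.allclose(e1a, e1b), np.allclose(e2a, e2b))  # -> True True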
Okay, so I got (1, 1, -1), and to make it orthonormal, it would be times the square root of 3?
So that is one of the vectors; do I just swap all of the v_2 and u_1 to get the other?
You set u_1 = v_1, so you already have one of your orthogonal vectors. And you found your second one, (1, 1, -1). You can check by taking the inner product: <(0, 1, 1), (1, 1, -1)> = 0*1 + 1*1 + 1*(-1) = 0.

To normalize them, divide them by their norms. So, for the one you gave, multiply by 1/sqrt(3), not sqrt(3). But very close. Similarly for the other one (divide (0, 1, 1) by sqrt(2)).

Normalizing a vector doesn't change its direction, just its length, so you would expect orthogonality to be preserved too. You can check again if you want, but the inner product of those vectors, when normalized, should also equal 0. And once you have those two, you've found your orthonormal basis.
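As a final sanity check, here's a minimal sketch in Python with NumPy (again, my own illustration): it confirms the two normalized vectors have norm 1 and are still orthogonal.

    import numpy as np

    u1 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)   # first vector, normalized
    u2 = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)  # second vector, normalized

    print(np.linalg.norm(u1), np.linalg.norm(u2))  # -> 1.0 1.0 (up to rounding)
    print(u1 @ u2)                                 # -> 0.0 (still orthogonal)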
So would you have gotten two different ones if you had used (1, 2, 0) as u_1 instead of (0, 1, 1)?
Yes. Can you tell me what condition would cause you to get the same answer?
Ummm, when both sets would be the same basis?
Why would they be the same? Or would they be linearly dependent? But no... a basis always has to be linearly independent, right?
Or if they are the same, then they point in the same direction, right?
To elaborate further, consider the xy-plane. x = (1, 0) is one vector, y = (0, 1) is the other. These form a basis, which spans all of R^2. But take the vectors along the lines y = x and y = -x (in vector notation, (1, 1) and (-1, 1)). These vectors are orthogonal, but different from (1, 0) and (0, 1). But you were close: the only time you would get the same orthogonal basis by starting with a different vector is if the vectors were already orthogonal to begin with. (If you look at the formula, the inner product <v_2, u_1> = 0, so the entire projection term cancels, leaving you with u_2 = v_2.)
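You can see that concretely with a minimal sketch (Python with NumPy, my own illustration), starting Gram-Schmidt from (1, 2, 0) instead:

    import numpy as np

    v1 = np.array([0.0, 1.0, 1.0])
    v2 = np.array([1.0, 2.0, 0.0])

    # Start the process from v2 this time
    u1 = v2
    u2 = v1 - (v1 @ u1) / (u1 @ u1) * u1
    print(u2)  # -> [-0.4  0.2  1. ], different from the (1, 1, -1) we got before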
Is there a reason why the vectors have to have a length of one... why does it matter?
Hm. There are probably quite a few reasons, but I'm a bit tired and can't think of all that many, sorry. I can give you one that I make use of, though: in later math, specifically in computational math (running algorithms on computers to solve minimization problems, for example), working with unit vectors makes computation faster. Instead of recomputing the norm of a vector at each iteration, you can normalize the vector once and then forget about having to maintain its length. Also, especially in computation, vectors, matrices, everything is generally much more numerically stable (your rounding errors won't give you a terrible solution) when normalized. Again, this goes back to computing something once versus many times: the more times you have to compute something, the more error you accumulate.
I was just curious; I wasn't sure whether it would actually be used later in the linear algebra course. Thank you for all your help on this problem. I hope you don't mind if I fan you, so I can ask more questions.
Yeah, that's cool. I'm not sure what your future plans are, but almost everything you learn in linear algebra is incredibly useful in computational math. So engineering, data mining, even quantum computing all require extensive knowledge of linear algebra.
I definitely see how important linear algebra is, and I wish I would have taken it more seriously as an undergrad... now I'm trying to get a master's in math.
It's been almost 10 years since I graduated college, so I am rusty on just about everything, and it has been a tough process trying to get back into it.
Ah, yeah. Quite a few things change in only 10 years too. Anyway, I need to get some sleep. Good luck with your homework!
Thank you, I appreciate all your help.