Coordinates with respect to a basis? Let B_1 = {u_1, u_2, u_3} and B_2 = {v_1, v_2, v_3} be bases for R^3, where u_1 = (-3, 0, -3), u_2 = (-3, 2, -1), u_3 = (1, 6, -1), v_1 = (-6, -6, 0), v_2 = (-2, -6, 4), and v_3 = (-2, -3, 7). Find the transition matrix P_{B_1 -> B_2}.
Don't know how to approach this question
Hmm... I'm trying to see how I would do this in my mind, I don't want to look in my book...
Alright, let's say we have a vector x = (a, b, c) with respect to the first basis B_1. That means that the same vector, written in the standard basis (1,0,0), (0,1,0), (0,0,1), would be: \[\vec v = a\vec u_1+b\vec u_2+c\vec u_3\] If you want a matrix equation that does that, you would make a matrix A whose columns are the B_1 vectors. So far we have: \[A\vec x = \vec v\] Now that we have the coordinates with respect to the standard basis, we can convert them to the other basis.
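For the vectors in this particular problem, that matrix is just the u's from the question written as columns: \[A = \begin{bmatrix} -3 & -3 & 1 \\ 0 & 2 & 6 \\ -3 & -1 & -1 \end{bmatrix}\]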
Okay I sort of get what you're saying
To convert them to the other basis, we set up a matrix (call it B) whose columns are the second basis vectors (B_2), and look for a solution to: \[B\vec y = \vec v\] The answer we are looking for is the vector y, which tells us what linear combination of the B_2 vectors we need. Since the columns of B form a basis, B is invertible. So multiplying by its inverse we get: \[B\vec y = \vec v \iff \vec y = B^{-1}\vec v \iff \vec y = B^{-1}A\vec x \] Thus the change of basis matrix is: \[B^{-1}A\] where A is formed by making the first basis vectors into columns, and B^{-1} is the inverse of the matrix formed by making the second basis vectors into columns.
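Just to make that concrete, here is a small numpy sketch (assuming you have numpy handy; this is only for checking, not how you'd do it by hand). It builds A and B with the given vectors as columns and computes B^{-1}A:

    import numpy as np

    # Columns of A are the B_1 vectors, columns of B are the B_2 vectors
    A = np.array([[-3, -3,  1],
                  [ 0,  2,  6],
                  [-3, -1, -1]], dtype=float)
    B = np.array([[-6, -2, -2],
                  [-6, -6, -3],
                  [ 0,  4,  7]], dtype=float)

    # Transition matrix from B_1 to B_2: P = B^{-1} A
    # (solve() avoids forming the inverse explicitly)
    P = np.linalg.solve(B, A)
    print(P)

You can compare the printed matrix against whatever answer you have for the problem.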
Now, I'm going to test this out... I don't know if it's correct <.< it makes sense in my head lolol
Okay, just for the first part though, I don't know how you'd find a, b, and c if you don't know v.
If we were interested in actually converting a specific vector from one basis to the other, we would need to know a, b, and c. Since this question just asks for the matrix that will do the job, we don't need that info. I'll do the calculations on paper and post in a sec.
would you happen to have the answer to this problem for verification?
Man I'm so confused but maybe seeing it will make more sense. Yes I do: \[\begin{bmatrix} 0.75 & 0.75 & 1/12 \\ -0.75 & -17/12 & -17/12 \\ 0 & 2/3 & 2/3 \end{bmatrix}\]
Yeah, I think I'm on the right track. As soon as I'm done with the tedious calculations I'll post lol
Of course you'd be... I wish I were a math genius so I wouldn't have to study so much. Thanks again
OK, in the first picture, that's just putting the first basis vectors into a matrix, then taking the second basis vectors, putting them into a matrix, and finding the inverse of that matrix (which could be easy or not easy, unfortunately). The second picture is just multiplying the two matrices out, which gives the change of basis matrix. I'm going to spend a little time thinking of a more efficient method.
Okay well is that the standard way of finding a transition matrix from one basis to another?
I don't think it is. It's just something that came to mind lol >.< I just came up with a better way to do it. You want to create the augmented matrix [ B | A ] and row reduce it. You will end up with [ I | P ], where P is your change of basis matrix from B_1 (represented by A) to B_2 (represented by B).
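If you want to check your row reduction, here is a quick sympy sketch of that augmented-matrix trick (assuming sympy is available; it keeps everything as exact fractions):

    from sympy import Matrix

    # Columns of A are the B_1 vectors, columns of B are the B_2 vectors
    A = Matrix([[-3, -3,  1],
                [ 0,  2,  6],
                [-3, -1, -1]])
    B = Matrix([[-6, -2, -2],
                [-6, -6, -3],
                [ 0,  4,  7]])

    # Row reduce [ B | A ]; the right block of the result is P
    augmented = B.row_join(A)
    rref_form, _ = augmented.rref()
    P = rref_form[:, 3:]
    print(P)

Since B is invertible, the left block reduces to the identity and the right block is exactly B^{-1}A.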
I think that's the method they have in my book, but it's worded so weirdly I didn't understand it until you restated it.
Oh okay, I get it now. Thanks for your help.
But just wondering, do coordinates have any relation to orthogonal projections?
We were learning that then bam we went into this and I got so lost
If your basis happens to be an orthogonal basis (better yet, an orthonormal basis), then finding a vector's coordinates with respect to it is really, really easy.
Oh okay, by using that method where you dot the vector with all the basis vectors?
Let's say you have a vector x in the standard basis, and you want its coordinates in the new basis: \[B =\left\{ q_1, q_2,\ldots , q_n\right\}\] where B is an orthogonal basis. Because B is a basis, we know that the vector x is some linear combination of those q's: \[\vec x = c_1\vec q_1+c_2\vec q_2+c_3\vec q_3+\ldots +c_n\vec q_n\] Now, let's say I wanted the first coordinate, c_1. To get rid of all the other junk, I take a dot product (sometimes called an inner product) with q_1: \[\vec x\cdot \vec q_1 = c_1\,\vec q_1\cdot \vec q_1+c_2\,\vec q_2\cdot \vec q_1+\ldots +c_n\,\vec q_n\cdot \vec q_1\] Because it's an orthogonal basis, each q is perpendicular to every other q, so all of those cross terms are 0: \[\vec x\cdot \vec q_1 = c_1\,\vec q_1\cdot \vec q_1+0+0+\ldots+0\]
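Solving that last line for c_1 (and doing the same thing with each q_i) gives the coordinate formula: \[c_1 = \frac{\vec x\cdot \vec q_1}{\vec q_1\cdot \vec q_1}, \qquad c_i = \frac{\vec x\cdot \vec q_i}{\vec q_i\cdot \vec q_i}\] And if the basis is orthonormal, each \(\vec q_i\cdot \vec q_i = 1\), so the coordinate is just \(c_i = \vec x\cdot \vec q_i\).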
That's why orthogonal bases are nice: each coordinate comes from a couple of dot products, which are generally much easier to compute than solving a linear system.
How do you check to make sure B is an orthogonal basis? For that question I asked earlier, apparently it wasn't. Do you dot each pair of vectors together and check that you get 0?
Yes, that's exactly how you would check :)
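If you ever want to check it mechanically, here's a tiny numpy sketch (the example vectors below are made up just to show the idea; swap in whatever basis you're checking):

    import numpy as np

    # Hypothetical example basis -- replace with your own vectors
    basis = [np.array([1.0,  1.0, 0.0]),
             np.array([1.0, -1.0, 0.0]),
             np.array([0.0,  0.0, 2.0])]

    # An orthogonal basis has every distinct pair dotting to 0
    for i in range(len(basis)):
        for j in range(i + 1, len(basis)):
            print(i, j, np.dot(basis[i], basis[j]))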
So if it's not orthogonal, then you'd have to make a matrix and row reduce
and find the unique solution?
Yep, that's correct.
Oh snap, that makes sense, haha. Thanks a lot!!