Yet another Linear Algebra question.
Let \(\theta=\{\left[\begin{matrix}1 & 1 \\ 3 & 6\end{matrix}\right], \left[\begin{matrix}1 & 2 \\ 4 & 8\end{matrix}\right], \left[\begin{matrix}0 & 0 \\ 1 & 0\end{matrix}\right],\left[\begin{matrix}1 & 0 \\ 2 & 5\end{matrix}\right]\}\). Does \(\theta\) span \(M_{22}\)?
I'm getting a really big SLE. I think this shouldn't be happening.
One way is to write each of these matrices as a 4x1 column vector in some consistent way, stack the four columns into a 4x4 matrix, and take its determinant. If it's 0, you have a linearly dependent set of vectors, so they don't span the space; if it's nonzero, the four matrices are linearly independent, and four independent vectors in a four-dimensional space do span it. There are other ways you could show this too, I think; that's just the first thing to come to mind.
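A quick sketch of that determinant check in Python with NumPy (an illustration, not from the thread; the flattening order doesn't matter as long as it's the same for all four matrices):

```python
import numpy as np

# The four matrices from theta.
A = np.array([[1, 1], [3, 6]])
B = np.array([[1, 2], [4, 8]])
C = np.array([[0, 0], [1, 0]])
D = np.array([[1, 0], [2, 5]])

# Flatten each 2x2 matrix into a 4-vector (row-major) and stack them as columns.
M = np.column_stack([A.flatten(), B.flatten(), C.flatten(), D.flatten()])

# Nonzero determinant -> the columns are linearly independent -> theta spans M_22.
det = np.linalg.det(M)
print(det)
```

Here the determinant comes out nonzero, so the set is independent and spans the space.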
I did the usual way: setting \(aA+bB+cC+dD\) equal to an arbitrary \(2\times 2\) matrix. But this yields an augmented matrix of dimensions 5x4, and the solutions come out in terms of the entries of that arbitrary matrix.
But there should be a simpler way to do this, because elsewhere in the problem set there are sets of functions that are supposed to span \(P_{4}\), and I can't see a simpler way to do those either.
I mean of dimensions 4x5*
I'm unfamiliar with the usual way; I sorta have a different linear algebra in my mind than when I learned it several years ago :X One way you could do it is to change your basis step by step. For example, in the 2D plane with the basis vectors \(\hat i + \hat j\) and \(\hat i\), you can subtract the second basis vector from the first and the pair will still span the space: now it's \(\hat j\) and \(\hat i\), and it's clearer what's independent of what. So by this I mean you can take that 3rd matrix and subtract multiples of it from the others, and keep working your way along until each matrix has a single 1 in one of the four entries and 0s everywhere else. But idk how you're taught/expected/supposed to do this really, so I'm kinda out there probably.
I was taught (that was several years ago, and now I have to get my algebra knowledge back) to do the usual linear combination check, i.e., \(au + bv = w\). But in this case, and with polynomials, I don't think it's worth going through the whole drill... Any kind of different approach to this problem is fine.
Here's a thought: take your set and represent it in a new basis. \(\theta=\{\left[\begin{matrix}1 & 1 \\ 3 & 6\end{matrix}\right], \left[\begin{matrix}1 & 2 \\ 4 & 8\end{matrix}\right], \left[\begin{matrix}0 & 0 \\ 1 & 0\end{matrix}\right],\left[\begin{matrix}1 & 0 \\ 2 & 5\end{matrix}\right]\}\) \(\phi =\{ \left[\begin{matrix}1 & 0\\ 0 & 0\end{matrix}\right], \left[\begin{matrix}0 & 1\\ 0 & 0\end{matrix}\right], \left[\begin{matrix}0 & 0\\ 1 & 0\end{matrix}\right], \left[\begin{matrix}0 & 0\\ 0 & 1\end{matrix}\right]\}\) Then you can write the elements of \(\theta\) in terms of this basis, like this: \[\theta_1 = 1 \phi_1 +1 \phi_2 + 3 \phi_3 + 6 \phi_4\] Then you'll be able to write this as a change-of-basis formula. If the change-of-basis matrix is not invertible, then the set doesn't span the space. Ok, technically this is probably the same thing as the original thing I suggested, but it seems different at least haha.
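Concretely, the \(\phi\)-coordinates of each \(\theta_i\) become the columns of the change-of-basis matrix, and invertibility can be checked via its rank. A minimal NumPy sketch of that idea (illustration only):

```python
import numpy as np

# Columns are the phi-coordinates of theta_1..theta_4,
# e.g. theta_1 = 1*phi_1 + 1*phi_2 + 3*phi_3 + 6*phi_4 -> column (1, 1, 3, 6).
P = np.array([
    [1, 1, 0, 1],
    [1, 2, 0, 0],
    [3, 4, 1, 2],
    [6, 8, 0, 5],
])

# P is invertible exactly when it has full rank (4), i.e. theta spans M_22.
print(np.linalg.matrix_rank(P))  # 4
```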
Sorry for not checking. I will update tomorrow, since someone I know knows the "easy" way and is willing to teach me.
Yeah haha sounds good. I'd be curious to see
Seems that there's no "easy way out" for this question. In fact, Gauss-Jordan is the way to go, since you get a reduced row echelon form and a solution set \(\lambda\) which can be checked numerically. @Kainui
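The Gauss-Jordan route can be checked symbolically with SymPy's `rref()`, which does exact (no rounding) row reduction on the flattened matrices (a sketch, assuming SymPy is available):

```python
from sympy import Matrix

# Rows are the four matrices of theta, flattened row-major.
M = Matrix([
    [1, 1, 3, 6],
    [1, 2, 4, 8],
    [0, 0, 1, 0],
    [1, 0, 2, 5],
])

# rref() returns the reduced row echelon form and the pivot columns.
rref_form, pivots = M.rref()
print(pivots)  # four pivot columns -> full rank -> theta spans M_22
```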
@Kainui how did u get \(\phi\)?
@dinamix, he wrote an element (namely, the first matrix of \(\theta\)) in terms of a new basis called \(\phi\)
hmm ok ty mr @ChillOut
My solution is to compute this determinant; if it's not zero, then the set spans the space: \[\left| \begin{matrix} 1 & 1 & 3 & 6 \\ 1&2&4&8 \\ 0&0&1&0 \\ 1 &0&2&5 \end{matrix} \right|\] We can expand fairly easily since we have a column of mostly 0s: \[\left| \begin{matrix} 1 & 1 & 3 & 6 \\ 1&2&4&8 \\ 0&0&1&0 \\ 1 &0&2&5 \end{matrix} \right| = \left| \begin{matrix} 1 & 1 & 6 \\ 1&2&8 \\ 1 &0&5 \end{matrix} \right| \] Then I'd suggest expanding along the column or row containing the 0, so you only have two 2x2 determinants left to compute. (It comes out to 1, which is nonzero, so \(\theta\) does span \(M_{22}\).)
I said column of mostly zeroes when I meant row of mostly zeroes haha. Oh well \(\det(A) = \det(A^\top)\) anyways heh :P
I dunno if that's what you meant by "numerically", or if you meant like using MATLAB or Octave, so I just wanted to make sure you knew you could do this by hand.
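The Laplace (cofactor) expansion described above can also be double-checked with a tiny from-scratch sketch; `laplace_det` here is a hypothetical helper, not something from the thread:

```python
# Recursive Laplace (cofactor) expansion along the first row.
def laplace_det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * laplace_det(minor)
    return total

# The flattened theta matrices as rows.
M = [[1, 1, 3, 6],
     [1, 2, 4, 8],
     [0, 0, 1, 0],
     [1, 0, 2, 5]]

print(laplace_det(M))  # 1 -> nonzero, so the set spans M_22
```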
Mostly 0's means we will have to use graphs... right? When I said "numerically", I meant that I can do it by hand.
Well, I got it done. But no big deal, Gauss-Jordan worked fine. I got another Linear Algebra question, though. I'm THAT bad with Linear Algebra :(
Graphs? I'm not sure I know what you mean. I'm using Laplace's algorithm (cofactor expansion) for evaluating determinants; for a small matrix with some zeros like this one, it's pretty quick by hand, often faster than full Gauss-Jordan elimination. But hey, as long as you get the job done and understand what you're doing, that's good too.
Yeah, ask away if you have more questions. I'm using this as a good opportunity to practice memorizing how to type matrices in LaTeX lol
Then I'm closing this one! Thanks!