Prove: if A has independent columns, then Ax=0 has only the trivial solution x=0
It depends on the space. I will assume you mean \(n\) independent columns in some \(n\)-dimensional space. Suppose \(Ax=0\). Since \(A\) has full rank, it must have an inverse. Multiply both sides (on the left) by \(A^{-1}\).
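Written out, that square/invertible case is just the one-line chain:\[Ax=0 \implies A^{-1}(Ax)=A^{-1}0 \implies (A^{-1}A)x=0 \implies x=0.\]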
A could be rectangular. "Independent columns" only implies full column rank, right?
full rank \(\iff\) invertible
full column rank need not imply full rank?
what?
again, the question, as it is stated, cannot be answered, so I made some assumptions.
We can assume a general matrix: \(A\) is an \(m\times n\) matrix with \(n\) independent column vectors
if the only thing in the kernel is the 0 vector, then the matrix has full column rank and thus full row rank
check out the 'invertible matrix theorem'
That is true only if A is a square matrix
oh hahahah you are the person that posted the question...
We cannot define a regular inverse for rectangular matrices. Since the question doesn't specifically state the type of matrix, I think it is reasonable to assume that the matrix could be rectangular
ok, sorry, forget what I said. I was making assumptions because it was not clear from your question.
OK, suppose you have \(n\) linearly independent columns \(v_1,v_2,\dots,v_n\) and \(Ax=0\); then \(x_1v_1+x_2v_2+\cdots+x_nv_n=0\), correct?
Example matrix equation \[\begin{bmatrix}1&2\\2&3\\1&1\end{bmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix} \]
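For that example, multiplying out shows \(Ax\) is exactly a linear combination of the columns:\[\begin{bmatrix}1&2\\2&3\\1&1\end{bmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=x_1\begin{pmatrix}1\\2\\1\end{pmatrix}+x_2\begin{pmatrix}2\\3\\1\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}\]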
yes yes, I am with you now. I didn't know you were the person that asked the question.
are you with me above?
Yeah... I think we need to show \(Ax = x_1v_1+x_2v_2+...+x_nv_n=0 \implies x_1=x_2=\cdots = x_n=0 \)
How could you add things up to get 0 unless, at some point, you are subtracting some quantity from itself?
Hint: \(A+B+C+D=0\) implies \(A=-B-C-D\)
Since it is not the case that all of the \(v_i\) are 0....
\(Ax = x_1v_1+x_2v_2+\cdots+x_nv_n=0\), so \(x_1v_1=-x_2v_2-\cdots-x_nv_n\)
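Concretely, with the \(3\times 2\) example from before, that rearrangement reads\[x_1\begin{pmatrix}1\\2\\1\end{pmatrix}=-x_2\begin{pmatrix}2\\3\\1\end{pmatrix}\]and no nonzero multiple of \((1,2,1)^T\) can equal a multiple of \((2,3,1)^T\) (the component ratios \(2/1\), \(3/2\), \(1/1\) already disagree), so \(x_1=x_2=0\) is forced.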
and for sure both sides are not 0, right?
this shows they are not lin ind
contradiction...
None of the \(v_i\)'s are 0, but how do we know that the right hand side is not 0?
How do we know that some combination of linearly independent vectors doesn't evaluate to 0?
well, we know the \(v_i\) are nonzero and there is some \(x_i\) that is not zero. So \(x_iv_i=-x_1v_1-x_2v_2-\cdots-x_{i-1}v_{i-1}-x_{i+1}v_{i+1}-\cdots-x_nv_n\)
now, since the left is not zero, the right is not zero
The \(v_i\) are not zero because they are lin ind (a set containing the zero vector is automatically dependent), and there is some \(x_i\) that is not zero because \(x\ne 0\).
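To finish the step, divide the displayed equation by the nonzero scalar \(x_i\):\[v_i=-\frac{x_1}{x_i}v_1-\cdots-\frac{x_{i-1}}{x_i}v_{i-1}-\frac{x_{i+1}}{x_i}v_{i+1}-\cdots-\frac{x_n}{x_i}v_n,\]which writes \(v_i\) as a linear combination of the other columns.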
Oh oh I think I see what you're saying... one moment
If the right hand side evaluates to the 0 vector, then the coefficient \(x_i\) on the left hand side must be the number 0. Beautiful! Thank you xD
Something looks wrong: the right hand side can evaluate to some other nonzero vector, right?
\(A + B + C = 0\)
\(A = -B - C\)
\(A = 2,\ B = -1,\ C = -1\)
the left is nonzero, so the right is nonzero, so we have a nonzero multiple of \(v_i\) equal to a combination of scalar multiples of the other vectors
so the \(v_i\) are not lin ind, contradicting our assumption.
2,-1,-1 are not lin ind vectors
what is the problem?
Oh, OK, that proves the contrapositive: if a nontrivial solution exists for \(Ax=0\), then the columns of \(A\) are not independent. Nice nice :)
correct
It seems to me that more work is being done here than necessary (though admittedly, I've only invested a little over two minutes looking at the comments above). To be brief: if the columns of \(\mathbf{A}_{m\times n}\) are all mutually independent, then the columns span an \(n\)-dimensional subspace of \(\mathbb{R}^m\) and \(\mathrm{rank}(\mathbf{A})=n\) (i.e. full column rank if \(m\neq n\), or simply full rank if \(m=n\)). The rank-nullity theorem (number of columns = rank + nullity) then tells you the dimension of the nullspace of \(\mathbf{A}\) must be \(0\), so the nullspace contains only the zero vector.
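As a worked instance, apply it to the \(3\times 2\) example earlier in the thread, whose two columns are independent, so \(\mathrm{rank}(\mathbf{A})=2\):\[\underbrace{2}_{\text{columns}}=\underbrace{2}_{\mathrm{rank}(\mathbf{A})}+\text{nullity}\implies \text{nullity}=0,\]so the nullspace is \(\{\mathbf{0}\}\) and \(Ax=0\) forces \(x=0\).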
Alternatively, if you're not familiar with the result of that theorem, you can think of it this way: given \(\mathbf{A}_{m\times n}\) with \(n\) independent columns, you know that \(\mathbf{A}\) can be row reduced with exactly \(n\) pivots; since each pivot in reduced row echelon form is a \(1\) and is the only nonzero entry in its column, \[\mathrm{rref}(\mathbf{A})=\begin{bmatrix}1&0&\cdots&0\\ 0&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\\ 0&0&\cdots&0\end{bmatrix}=\begin{bmatrix}\mathbf{I}_n\\ \mathbf{0}\end{bmatrix}\]With \(n\) pivot variables, there is no room for any free variables, which means there are no "special" vectors in the nullspace.
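To make that concrete, here is the row reduction for the earlier \(3\times 2\) example (subtract \(2\times\) row 1 from row 2, subtract row 1 from row 3, then clean up):\[\begin{bmatrix}1&2\\2&3\\1&1\end{bmatrix}\to\begin{bmatrix}1&2\\0&-1\\0&-1\end{bmatrix}\to\begin{bmatrix}1&0\\0&1\\0&0\end{bmatrix}\]Two pivots, two columns, no free variables, so \(x_1=x_2=0\) is the only solution.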
Sure, that works, but you use way more "stuff" than we did here. The above got a little wacky, but the following is the proof using definitions only. Suppose \(A\) is made of \(n\) independent columns \(v_1,\dots,v_n\) and suppose b.w.o.c. that \(x\ne 0\) and \(Ax=0\). Then \(\exists\, x_i\) s.t. \(x_i\ne 0\), and \(x_1v_1+x_2v_2+\cdots+x_nv_n=0\), so \(x_iv_i=-x_1v_1-x_2v_2-\cdots-x_{i-1}v_{i-1}-x_{i+1}v_{i+1}-\cdots-x_nv_n\). Since \(v_i\ne0\ \forall i\in[n]\) and \(x_i\ne0\), the left side is nonzero and thus the right is also nonzero. Dividing by \(x_i\) writes \(v_i\) as a linear combination of the other vectors, a contradiction. This is basic work a 6th grader can understand, which I think is cleaner and easier than the proposed proof.
Fair enough, but the OP's use of "rank" is suggestive of knowing something about the R-N theorem. Ultimately up to him/her to decide which approach is easier.
obviously