\[\large \textbf{b}_1 = \left( \begin{array}{c} r\\0\end{array}\right) ; \quad \textbf{b}_2 = \left( \begin{array}{c} 0\\s\end{array}\right) \]
One way to figure this out is by noticing that the two vectors point in DIFFERENT directions - \(\textbf{b}_1\) has no component along the y direction but \(\textbf{b}_2\) has a nonzero y component. So, assuming r and s are nonzero, they're independent and thus span the entire plane
The standard way is to find the rank of the matrix formed by these vectors - the vectors are independent if the matrix has full column rank
@ganeshie8 it intuitively makes sense but not mathematically.... i'm stuck with solving it
\[\large \left[ \begin{array}{cc} r&0\\0&s\end{array}\right] \] what's the rank of this matrix ?
2?
and you have got 2 column vectors in the matrix - full column rank => column vectors are independent
independent vectors => they form a basis
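btw, if you ever want to sanity-check that numerically, here's a minimal sketch in Python (assuming numpy is installed; r = 3 and s = 5 are just sample nonzero values i picked, not from the problem) :
```python
import numpy as np

# sample nonzero values for r and s (any nonzero values work)
r, s = 3.0, 5.0

B = np.array([[r, 0.0],
              [0.0, s]])

# full column rank: rank == number of columns -> the columns are independent
print(np.linalg.matrix_rank(B))  # 2
```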
honestly, today is my first day doing linear independence so it does not make sense to me. Sorry about that
we say vectors are linearly independent when there is no nontrivial solution to : \[\large c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + c_3\mathbf{x}_3 + \cdots + c_n\mathbf{x}_n = 0 \]
i.e. the only solution is all c's = 0
it always has a trivial solution when all c's are 0, so the definition excludes this trivial solution
there should not be any OTHER solution to above ^^
only then are the vectors independent
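for contrast, here's a made-up pair that fails the test : take \(\mathbf{x}_1 = (1,2)^T\) and \(\mathbf{x}_2 = (2,4)^T\), then \[\large 2\mathbf{x}_1 - \mathbf{x}_2 = 0\] is a nontrivial solution (\(c_1 = 2,\ c_2 = -1\)), so those two vectors are dependent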
that's the definition of independence, does that make sense ?
yes it does, thank you but how do you calculate bases, span and stuff like that
those are just definitions : 1) a given set of vectors "spans" a space means : you can reach every point in the space using linear combinations of the given vectors 2) basis : if the set of vectors spans the space and if they are independent, they form a basis of that space
consider previous example to get a feeling of these terms
lemme get another example to make it clear, if you don't mind.
okie
b1 = (1,1,-2)^T, b2 = (1,-2,1)^T
and your first goal is to figure out whether they are independent or not, right ?
agree
Sticking to the definition, you need to find out whether there is a nontrivial solution to the below : \[\large c_1 \mathbf{b}_1 + c_2\mathbf{b}_2 = 0\]
Of course, as you said earlier, a trivial solution always exists when all c's are 0, but we are not interested in this, we want to see if there is any OTHER solution
non-zero solution
uhhh how do we check that
\[\large c_1 \mathbf{b}_1 + c_2\mathbf{b}_2 = 0\] can be expressed as \[\large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = 0\]
yes ?
yep
solve for \(c_1\) and \(c_2\)
below is a more accurate representation : \[\large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right) \]
c1=c2=0?
no c1=c2=1?
the column vectors will be `independent` if the `only solution` is c1=c2=0
how is c1=c2=1 ? add both the columns, do you get 0 ?
1+1 = ? 1-2 = ? -2+1 = ?
2, -1, -1
ya i solved it in a wrong way oops
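right - just to see that check concretely, here's a tiny Python sketch (assuming numpy) that plugs the guess c1 = c2 = 1 into the matrix form :
```python
import numpy as np

A = np.array([[ 1.0,  1.0],
              [ 1.0, -2.0],
              [-2.0,  1.0]])

c = np.array([1.0, 1.0])  # the guess c1 = c2 = 1
print(A @ c)              # [ 2. -1. -1.] : not the zero vector, so not a solution
```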
and why are you guessing ? you can row reduce the matrix, find out number of pivots and find out the solution right ?
not guessing, just confused
\[\large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right)\]
do you know how to solve above system ?
c1+c2=0, c1-2c2=0, -2c1+c2=0, right
yes that's the system, but i want you to solve it in matrix form so that we can **see** the relation between "rank" and "independence"
oh then I don't, i can solve it linearly, not the matrix-system way
Ohk, never did gaussian elimination before ?
don't think so
how about row operations ?
yah i have done that
echelon form / triangular form ? heard these before ?
no lol don't embarrass me
you need to know the stuff below before touching basis/span : 1) row operations/gaussian elimination 2) upper triangular form/echelon form
the whole of linear algebra is about solving linear equations using matrices, so you should know how to solve a system in matrix form first
okay, I guess my teacher will go over it soon
\[ \large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right) \]
it should be taught first, basis/independence/subspaces make no sense w/o knowing how to solve above system in matrix form
\[\large Ax = b\]
you must know how to find the solution \(\large x\) before starting subspaces
ya i know that Ax = b
\[ \large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right) \] is the same as \[\large Ax = b\]
\[ \large A = \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \\~\\\large x = \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) \\~\\\large b = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right) \]
ohhhh lol should have told me before, i know how to do it
i know you know :) ok so tell me, how do you go about solving \(Ax = b\) ?
using elimination of course
ya wait, it will take me a little while as i am a novice
c1 + c2 = 0
c1 - 2c2 = 0
-2c2 + c2 = 0
1 2
1 -2
-2 1
is it correct or did i mess up?
\[\large \left[\begin{array}{cc}1&1\\1&-2\\-2&1 \end{array}\right] \left(\begin{array}{c} c_1 \\c_2 \end{array}\right) = \left(\begin{array}{c} 0 \\0 \\0 \end{array}\right)\] the augmented matrix is \[\large \left[\begin{array}{cc|c}1&1&0\\1&-2&0\\-2&1&0 \end{array}\right] \]
again, we want to solve it in matrix form using elimination
oh crap i know i know but forgot, need a refresher
my teacher already went over this
row reducing the augmented matrix : \[\large {\begin{align} \\ &\left[\begin{array}{cc|c}1&1&0\\1&-2&0\\-2&1&0 \end{array}\right] \\ ~\\ R2-R1 ~&\left[\begin{array}{cc|c}1&1&0\\0&-3&0\\-2&1&0 \end{array}\right] \\ ~\\ R3+2R1 ~&\left[\begin{array}{cc|c}1&1&0\\0&-3&0\\0 &3&0 \end{array}\right] \\ ~\\ R3+R2 ~&\left[\begin{array}{cc|c}1&1&0\\0&-3&0\\0 &0&0 \end{array}\right] \end{align}} \]
so the system reduces to : -3c2 = 0 and c1 + c2 = 0 ; solving it gives you c1 = c2 = 0
that's the only solution ^^ which proves that the column vectors of \(\large A\) are linearly independent
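if you want a machine check of that elimination, here's a minimal sketch (assuming sympy is installed) :
```python
from sympy import Matrix

A = Matrix([[ 1,  1],
            [ 1, -2],
            [-2,  1]])

# rref() returns the reduced row echelon form and the pivot column indices
R, pivots = A.rref()
print(R)       # Matrix([[1, 0], [0, 1], [0, 0]])
print(pivots)  # (0, 1) : two pivots for two columns, only the trivial solution
```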
In general, for an \(\large m \times n\) matrix : rank = \(\large n\) means the column vectors are linearly independent
So you don't need to actually solve the system - you can simply find the rank of the matrix formed by the given vectors
rank tells you about independence
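in code that rule is a one-liner - a sketch in Python, assuming numpy ("columns_independent" is just a helper name i made up) :
```python
import numpy as np

def columns_independent(M):
    # for an m x n matrix : rank == n  <=>  the n column vectors are independent
    return np.linalg.matrix_rank(M) == M.shape[1]

A = np.array([[ 1.0,  1.0],
              [ 1.0, -2.0],
              [-2.0,  1.0]])
print(columns_independent(A))  # True
```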
whoooo too much work, i will still need to watch a video on how you eliminated stuff as my teacher didn't explain it well.... Thanks a lot Ganesh :)
row elimination works the same way as the elimination method for a system of equations : you can add linear combinations of the other rows to a row
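here are those same three row operations from above as a numpy sketch (again, just illustrative) :
```python
import numpy as np

# the augmented matrix [A | 0]
M = np.array([[ 1.0,  1.0, 0.0],
              [ 1.0, -2.0, 0.0],
              [-2.0,  1.0, 0.0]])

M[1] = M[1] - M[0]      # R2 - R1
M[2] = M[2] + 2 * M[0]  # R3 + 2*R1
M[2] = M[2] + M[1]      # R3 + R2
print(M)                # rows: [1, 1, 0], [0, -3, 0], [0, 0, 0]
```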
i think it's too much information for you in one shot
i know haha, but i am in college, i gotta take all the info in or i'll miss out
at least i hope you got what "independence" means and how it is related to the "rank" of a matrix
oh yes i did understand it, and the credit for that goes to you. Thank you
that's the only important thing, basis and span are just definitions based on independence
"k" independent vectors can span a "k" dimensional space
basis is just a "set of independent vectors" with which you can "span" a particular space.
\[\large \left[\begin{array}{cc} 1&0 \\ 0 & 1 \end{array}\right] \] the column vectors in this matrix form a "basis" for the "2" dimensional vector space because : 1) the column vectors are independent 2) the column vectors span the 2 dimensional space
as you can see they are just definitions based on independent vectors
can you cook up an example set of vectors that CANNOT be a basis ?
1 1
1 1
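yes! and the same rank check from earlier confirms it (a numpy sketch) :
```python
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# rank 1 < 2 columns : the columns are dependent, so they cannot form a basis
print(np.linalg.matrix_rank(M))  # 1
```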