what do they mean by this: 1. A set \(\{\alpha_i\}\) of non-zero vectors is linearly dependent if and only if some \(\alpha_k\) is a linear combination of the \(\alpha_j\) with \(j<k\)
a linear combination of vectors is a sum of the vectors, each multiplied by some scalar
say we have a set of n vectors\[\{\vec v_1,\vec v_2,...,\vec v_n\}\]the set is linearly dependent if some vector \(\vec v_k\) in the set can be written as a linear combination of the others,\[\vec v_k=c_1\vec v_1+...+c_{k-1}\vec v_{k-1}+c_{k+1}\vec v_{k+1}+...+c_n\vec v_n\]where \(c_1,c_2,...,c_n\) are scalars
for example \[\vec v_1=\langle1,2,3\rangle\]\[\vec v_2=\langle1,0,3\rangle\]\[\vec v_3=\langle3,2,9\rangle\]are not a set of linearly independent vectors, because one can be written as a linear combination of the other two\[\vec v_3=\vec v_1+2\vec v_2\]
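the example above is easy to check numerically too. here's a quick sketch using numpy (my own addition, not from the book): dependence of the three vectors shows up as the stacked matrix having rank less than 3

```python
import numpy as np

# The three vectors from the example above.
v1 = np.array([1, 2, 3])
v2 = np.array([1, 0, 3])
v3 = np.array([3, 2, 9])

# v3 is a linear combination of v1 and v2:
assert np.array_equal(v1 + 2 * v2, v3)

# Equivalently, the matrix with these vectors as rows has rank < 3,
# which is exactly what linear dependence of the set means.
rank = np.linalg.matrix_rank(np.array([v1, v2, v3]))
print(rank)  # 2
```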
the notation you have presented is strange; I'm just trying to illustrate the concept
on the first question, what do they mean by saying \(j<k\)?
that I'm not too sure about... I think I need to be more familiar with exactly what they mean by k, i, and j in your book
k, i, and j are the subscripts (the indices)
yes, but does i pertain to the number of vectors and j to the number of elements? is k a random vector in the set (I think that's what they mean by that part)
1. A set \(\{\alpha_i\}\) of non-zero vectors is linearly dependent if and only if some \(\alpha_k\) is a linear combination of the \(\alpha_j\) with \(j<k\). 2. A set \(\{\alpha_i\}\) of non-zero vectors is linearly independent iff no \(\alpha_k\) is an element of \(\langle\alpha_1,...,\alpha_{k-1}\rangle\). is this exactly how it is stated?
yes
I feel like I understand linear independence, but this phrasing is confusing me. I'm not sure why they require \(j<k\)
@phi
but he is not there. Thanks
sorry, he is online though... hopefully he'll help us out good luck!
they are indexing (labeling) each vector. If you start with vector 1 and keep adding vectors, and when you get to vector k it is linearly dependent on the previous vectors 1 to k-1, then the set is linearly dependent. It is (obviously?) true. If you get to the end of all the vectors and none are linearly dependent on the ones before them, then the set is independent.
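that "add vectors one at a time" check can be sketched directly. a hedged sketch in numpy (the function name `first_dependent_index` is my own, not from the book): vector k is dependent on vectors 0..k-1 exactly when appending it doesn't raise the rank

```python
import numpy as np

def first_dependent_index(vectors):
    """Return the index k of the first vector that is a linear
    combination of the earlier vectors, or None if the set is
    linearly independent."""
    rows = []
    for k, v in enumerate(vectors):
        rows.append(v)
        # v depends on the earlier vectors exactly when adding it
        # does not increase the rank of the stacked matrix.
        if np.linalg.matrix_rank(np.array(rows)) < len(rows):
            return k
    return None

vs = [np.array([1, 2, 3]), np.array([1, 0, 3]), np.array([3, 2, 9])]
print(first_dependent_index(vs))  # 2, since v3 = v1 + 2*v2
```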
ok, that makes sense. I just couldn't see the point of arranging the vectors in some order like that, such that the vector in question v_k is at the end of the list.
it makes sense
I'm guessing so you can go on to define a basis (a minimal set of independent vectors that spans the space)
"such that the vector in question v_k is at the end of the list": that is a bit misleading. say at most three of the vectors can be independent and you start with v1, v2, v3, v4. any of the 4 could be in the independent set, e.g. (v4, v3, v2), and the one left over will always be dependent on the other 3.
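to illustrate that the ordering doesn't matter, here's a small sketch (the four vectors are my own illustrative choice, not from the conversation): any three of these four vectors in R^3 are independent, but the full set of four must be dependent

```python
import numpy as np
from itertools import combinations

# Four vectors in R^3: every 3-element subset is independent,
# but four vectors in a 3-dimensional space are always dependent.
vs = [np.array([1, 0, 0]), np.array([0, 1, 0]),
      np.array([0, 0, 1]), np.array([1, 1, 1])]

for triple in combinations(range(4), 3):
    m = np.array([vs[i] for i in triple])
    print(triple, np.linalg.matrix_rank(m))  # every triple has rank 3

print(np.linalg.matrix_rank(np.array(vs)))  # 3 < 4: the full set is dependent
```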
does the definition of a basis apply only to finite sets?
I only learned what Strang taught in his course.
@jacobian check this