\(a_1x + b_1 y = c_1\)
\(a_2x + b_2 y = c_2\)
\(a_3x + b_3 y = c_3\)
Clearly, changing the order of these equations will not change the solution, so exchanging rows is perfectly fine.
Similar reasoning applies to the other two operations as well.
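As a quick sanity check (not a proof!), here is a small Python sketch with a made-up 2×2 system showing that a known solution still satisfies the system after each of the three row operations:

```python
# Made-up example system: x + y = 3, x - y = 1, with solution (2, 1).
# Each row is stored as a coefficient triple (a, b, c) for a*x + b*y = c.

def satisfies(rows, x, y, tol=1e-9):
    """True if (x, y) satisfies every equation a*x + b*y = c in rows."""
    return all(abs(a * x + b * y - c) < tol for (a, b, c) in rows)

rows = [(1, 1, 3), (1, -1, 1)]
assert satisfies(rows, 2, 1)

# Type I: exchange the two rows
assert satisfies([rows[1], rows[0]], 2, 1)

# Type II: multiply the first row by a non-zero constant k = 5
a, b, c = rows[0]
assert satisfies([(5 * a, 5 * b, 5 * c), rows[1]], 2, 1)

# Type III: add k = 2 times the second row to the first row
a1, b1, c1 = rows[0]
a2, b2, c2 = rows[1]
assert satisfies([(a1 + 2 * a2, b1 + 2 * b2, c1 + 2 * c2), rows[1]], 2, 1)
```

Of course this only checks one particular solution of one particular system; the proof below is what actually establishes the general claim.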
Hmm, I think "clearly, ..." will not be accepted in a proof...
It is accepted in many proofs; you prove what's not obvious to you, not what's self-evident to you.
But you are trying to prove that exchanging two rows will not affect the solution of the system of linear equations. If you then just say "clearly, it does not affect the solution set", that means you have not done any proof at all...
OK, I get your point, let me think a bit.
Meantime, let me call a few linear algebra experts: @amistre64 @Loser66
There are plenty of books dealing with that proof. If you're looking for something rigorous then maybe Fischer's Linear Algebra might help you, or Jänich's Linear Algebra, or even Gilbert Strang's Introduction to Linear Algebra. In most books, they introduce a notation for the rows, numbering them as \(I, II, III\). Exchanging two rows then simply means relabeling \(I \leftrightarrow II\) (a bit of an overkill here), and the relabeled script carries the same information as the original \(I, II, III\) script.
It might also help you if you do not simply read one row of a matrix like \[ \large a_1 x_1 + a_2 x_2 + \dots + a_n x_n=c_1 \tag{\(R_1=I\)}\] as an equation, but as its geometric representation, which is an affine hyperplane. The differential geometry of a hyperplane can get complicated, but for visualization/intuition purposes it is a good exercise to understand.
The row operations are nothing more than the consequences of working systems of equations without matrices. A matrix is not "new" math for this; it is rather a way to organize the information so as to focus on the real work that is going on.
\[a_1 x_1 + a_2 x_2 + \dots + a_n x_n=c_1\]This is simply the result of dotting two vectors: a row vector and a column vector. \[\vec a=(a_1,a_2,\dots,a_n)\\\vec x=(x_1,x_2,\dots,x_n)\]
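That dot-product reading of a row can be sketched in a couple of lines of Python (the numbers are a made-up example):

```python
# One row of a system, read as a dot product: a · x = c.

def dot(a, x):
    """Dot product of two equal-length sequences."""
    return sum(ai * xi for ai, xi in zip(a, x))

a = [2, -1, 3]   # coefficients of one row (made-up numbers)
x = [1, 4, 2]    # a candidate solution vector
c = dot(a, x)    # 2*1 - 1*4 + 3*2 = 4
assert c == 4    # so x satisfies the row 2x1 - x2 + 3x3 = 4
```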
If we take any 2 of these dot products, say rows \(i\) and \(j\), we have this to work with: \[\vec a_i\cdot \vec x+\vec a_j\cdot \vec x=b_i+b_j\]
Just some thoughts at the moment; I don't have time to play with it more :(
I am just not familiar with writing proofs, not even sure how to start for this one. I'm trying to look up some books, but I can't find Fischer's Linear Algebra. For Jänich's Linear Algebra, I've just skimmed through the chapter about systems of linear equations, but couldn't find a "detailed" proof there. (The part with a little explanation is the part about elementary column operations.) The book actually begins with vector spaces and linear maps, which I haven't learnt in my current course yet. I've also skimmed through Gilbert Strang's Linear Algebra (4th edition) and couldn't find a proof there either.
For Type II: Suppose we have a system of n equations in n unknowns \(x_1, x_2, x_3, ..., x_n\). Suppose we have a Type II elementary row operation corresponding to multiplying the i-th row of the augmented matrix by a non-zero constant k. <Part 1> Suppose \((w_1, w_2, w_3, ..., w_n)\) is a solution to the original system. Then, \((w_1, w_2, w_3, ..., w_n)\) satisfies all but the i-th equation of the new system, since only the i-th equation has been changed. Suppose the original i-th equation is \(a_1x_1 + a_2x_2+...+a_nx_n = b\); then the i-th equation in the new system is \(ka_1x_1 + ka_2x_2+...+ka_nx_n = kb\). Since \((w_1, w_2, w_3, ..., w_n)\) satisfies the original i-th equation, we have \(a_1w_1 + a_2w_2+...+a_nw_n = b\). Multiplying both sides by the non-zero constant k, we have \(ka_1w_1 + ka_2w_2+...+ka_nw_n = kb\), i.e. \((w_1, w_2, w_3, ..., w_n)\) also satisfies the i-th equation in the new system. Therefore \((w_1, w_2, w_3, ..., w_n)\) is also a solution to the new system.

Here is what we did in the lecture. My teacher said it is only the first part of the proof; we should go home and do the second part ourselves, and try to prove Type I and Type III as well. Though, I don't even know what to prove in the second part of Type II.
This proof seems a bit convoluted to me, to be honest. Because first your professor says that your solution vector \((w_1, w_2, w_3, \dots , w_n)\) satisfies all but the \(i\)-th row, and then he proves that \((w_1, w_2, w_3, \dots, w_n)\) nevertheless satisfies the \(i\)-th row as well.
<Part 1> Suppose (w1,w2,w3,...,wn) is a solution to the original system. Then, (w1,w2,w3,...,wn) satisfies all but the i-th equation of the new system since only the i-th equation has been changed. <-- This part is immediate; what remains to be proved is the i-th equation, which was then proved in the next paragraph.
Why should there be a second part of this II though?
I don't know...
The way I read II: (II) Multiply both sides of \(\underline{\text{a}}\) row by a non-zero constant, is that you only have to do it for one row, and show that multiplication of one row does not change the solution space. My next guess would have been to show that it has to be a non-zero constant, but that also seems a bit out of thin air to me, because the statement already says \(\underline{\text{by a nonzero constant}}\).
Hmm, alright. But what happens if we multiply it by the constant zero? [Intuitively, we will lose some information since we have one equation fewer.] Suppose we have \((w_1, w_2, w_3, \dots , w_n)\) as a solution to the system and \(a_1x_1 + a_2x_2+...+a_nx_n = b\); then we will have \(a_1w_1 + a_2w_2+...+a_nw_n = b\). Multiplying both sides by zero, we get \(0 \cdot a_1w_1 +0 \cdot a_2w_2+...+0 \cdot a_nw_n = 0 \cdot b\), i.e. \(0 = 0\). What's next?
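A concrete way to see what goes wrong (just an illustration with a made-up system, not the proof): zeroing a row can *enlarge* the solution set, because the zeroed equation \(0 = 0\) no longer constrains anything.

```python
# Made-up system: x + y = 3, x - y = 1, unique solution (2, 1).
# After multiplying the second row by 0, the point (0, 3) also
# satisfies the system, so the solution set has grown.

def satisfies(rows, x, y, tol=1e-9):
    """True if (x, y) satisfies every equation a*x + b*y = c in rows."""
    return all(abs(a * x + b * y - c) < tol for (a, b, c) in rows)

original = [(1, 1, 3), (1, -1, 1)]
zeroed   = [(1, 1, 3), (0, 0, 0)]   # second row multiplied by 0

assert satisfies(original, 2, 1) and not satisfies(original, 0, 3)
assert satisfies(zeroed, 2, 1) and satisfies(zeroed, 0, 3)
```

So multiplying by zero is not reversible: from the zeroed system you can no longer recover the original one, which is exactly why Type II demands a non-zero constant.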
I assume you didn't have the dimension formula for matrices yet? If you did, then you could argue with the general solution to a system of linear equations, which is of the form: \[\large x=x_p + \ker A \] where \(A\) is the coefficient matrix representing your system of equations and \(x_p\) is a particular solution to \(Ax=b\), your system of linear equations. Then you could show that \( \dim \ker A < \dim \ker A'\), where \(A'\) is the matrix with a row zeroed, so that the two systems cannot have the same solution set.
Your assumption is right. What is "ker A"?
\( \ker A = \lbrace x \in \mathbb{R}^n \mid Ax=0 \rbrace \) a subspace of \(\mathbb{R}^n\)
Called the Kernel or Nullspace.
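To illustrate both ideas at once (a made-up rank-1 example, just a sketch): a kernel vector is anything \(A\) sends to zero, and shifting a particular solution \(x_p\) by any multiple of a kernel vector stays a solution of \(Ax=b\).

```python
# Made-up rank-1 system: A x = b with infinitely many solutions.

def matvec(A, x):
    """Matrix-vector product for a list-of-rows matrix."""
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

A = [[1, 1],
     [2, 2]]          # rank 1, so ker A is one-dimensional
b = [3, 6]

x_p = [1, 2]          # a particular solution: 1 + 2 = 3, 2 + 4 = 6
v   = [1, -1]         # a kernel vector: A v = 0

assert matvec(A, v) == [0, 0]
assert matvec(A, x_p) == b

# every x_p + t*v is also a solution, matching x = x_p + ker A
for t in range(-3, 4):
    assert matvec(A, [x_p[0] + t * v[0], x_p[1] + t * v[1]]) == b
```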
But I do really doubt that your Professor wanted you to do this exercise, I do agree though that it sounds very confusing the way he has written it down. I couldn't figure out a second part myself to that problem.
Hmm, I'll ask again what to prove in the second part, hopefully in the next lesson. What about type I and III? Any hints for a start?
Would just do it like your professor did it with the other one. Suppose that \((w_1, w_2, \dots , w_n)\) is a particular solution to \(Ax=b\). Now in the new matrix we swap the \(i\)-th with the \(j\)-th row. Your solution \((w_1, w_2, \dots , w_n)\) did already solve each of those rows separately, so you can argue that swapping the two rows has no effect on your solution set. Write the swapped rows out to highlight the exchange. Then you have \[\large a_{1j}x_1 + \dots + a_{nj}x_n=b_j \text{ in the i-th row} \\ \large a_{1i}x_1 + \dots + a_{ni}x_n=b_i \text{ in the j-th row} \] Your solution vector still works for both equations, hence the solution set of \(Ax=b\) and \( A'x=b\) is the same.
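Here is a slightly stronger numerical sketch (made-up system, brute force over a small integer grid): it compares the *whole* solution sets of the original and the swapped system, not just one solution.

```python
# Brute-force comparison of solution sets before and after a row swap,
# restricted to a small integer grid.  Made-up 2x2 system.

def solutions(rows, grid=range(-5, 6)):
    """All grid points (x, y) satisfying every equation a*x + b*y = c."""
    return {(x, y) for x in grid for y in grid
            if all(a * x + b * y == c for (a, b, c) in rows)}

rows    = [(1, 1, 3), (1, -1, 1)]   # x + y = 3, x - y = 1
swapped = [rows[1], rows[0]]        # Type I: rows exchanged

assert solutions(rows) == solutions(swapped) == {(2, 1)}
```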
I'll try the third one and post my work here tomorrow. Thanks for your help so far!
You're very welcome, I am also curious about what your Professor will say for the Second Part of II, because I can't see it myself :-) Good luck!
Suppose we have a system of m equations in n unknowns \(x_1,x_2,x_3,...,x_n\), and \((w_1,w_2,w_3,...,w_n)\) is a solution to the original system.
<Type I> Suppose we have a Type I elementary row operation corresponding to exchanging the i-th row and the j-th row of the augmented matrix. \((w_1,w_2,w_3,...,w_n)\) satisfies all but the i-th and the j-th equations of the new system, since only the i-th and the j-th equations have been changed; it remains to show that it also satisfies those two. Suppose the original i-th equation is \(a_{1_{i}}x_{1}+a_{2_{i}}x_{2}+...+a_{n_{i}}x_{n} = b_{i}\), and the original j-th equation is \(a_{1_{j}}x_{1}+a_{2_{j}}x_{2}+...+a_{n_{j}}x_{n} = b_{j}\). Then the i-th equation in the new system is \(a_{1_{j}}x_{1}+a_{2_{j}}x_{2}+...+a_{n_{j}}x_{n} = b_{j}\). Since \((w_1,w_2,w_3,...,w_n)\) satisfies the original j-th equation, we have \(a_{1_{j}}w_{1}+a_{2_{j}}w_{2}+...+a_{n_{j}}w_{n} = b_{j}\), i.e. \((w_1,w_2,w_3,...,w_n)\) also satisfies the i-th equation in the new system. Similarly, the j-th equation in the new system is \(a_{1_{i}}x_{1}+a_{2_{i}}x_{2}+...+a_{n_{i}}x_{n} = b_{i}\). Since \((w_1,w_2,w_3,...,w_n)\) satisfies the original i-th equation, we have \(a_{1_{i}}w_{1}+a_{2_{i}}w_{2}+...+a_{n_{i}}w_{n} = b_{i}\), i.e. \((w_1,w_2,w_3,...,w_n)\) also satisfies the j-th equation in the new system. Since \((w_1,w_2,w_3,...,w_n)\) satisfies both the i-th and the j-th equations in the new system, it is a solution to the new system.
<Type II> Suppose we have a Type II elementary row operation corresponding to multiplying the i-th row of the augmented matrix by a non-zero constant k. \((w_1,w_2,w_3,...,w_n)\) satisfies all but the i-th equation of the new system, since only the i-th equation has been changed; it remains to show that it also satisfies the i-th equation. Suppose the original i-th equation is \(a_1x_1+a_2x_2+...+a_nx_n=b\); then the i-th equation in the new system is \(ka_1x_1+ka_2x_2+...+ka_nx_n=kb\). Since \((w_1,w_2,w_3,...,w_n)\) satisfies the original i-th equation, we have \(a_1w_1+a_2w_2+...+a_nw_n=b\). Multiplying both sides by the non-zero constant k, we have \(ka_1w_1+ka_2w_2+...+ka_nw_n=kb\), i.e. \((w_1,w_2,w_3,...,w_n)\) also satisfies the i-th equation in the new system. Therefore, \((w_1,w_2,w_3,...,w_n)\) is also a solution to the new system.
<Type III> Suppose we have a Type III elementary row operation corresponding to adding a multiple, k, of the j-th row to another row, the i-th row, of the augmented matrix. \((w_1,w_2,w_3,...,w_n)\) satisfies all but the i-th equation of the new system, since only the i-th equation has been changed; it remains to show that it also satisfies the i-th equation. Suppose the original i-th equation is \(a_{1_{i}}x_{1}+a_{2_{i}}x_{2}+...+a_{n_{i}}x_{n} = b_{i}\), and the original j-th equation is \(a_{1_{j}}x_{1}+a_{2_{j}}x_{2}+...+a_{n_{j}}x_{n} = b_{j}\). Then the i-th equation in the new system is \((a_{1_{i}}x_{1}+a_{2_{i}}x_{2}+...+a_{n_{i}}x_{n}) + (ka_{1_{j}}x_{1}+ka_{2_{j}}x_{2}+...+ka_{n_{j}}x_{n})= b_{i} + kb_{j}\). Since \((w_1,w_2,w_3,...,w_n)\) satisfies the original i-th equation and the original j-th equation, we have \(a_{1_{i}}w_{1}+a_{2_{i}}w_{2}+...+a_{n_{i}}w_{n} = b_{i}\) and \(a_{1_{j}}w_{1}+a_{2_{j}}w_{2}+...+a_{n_{j}}w_{n} = b_{j}\). Multiplying both sides of the original j-th equation by the constant k, we have \(ka_{1_{j}}w_{1}+ka_{2_{j}}w_{2}+...+ka_{n_{j}}w_{n} = kb_{j}\). Adding the original i-th equation and the original j-th equation multiplied by k, we have \((a_{1_{i}}w_{1}+a_{2_{i}}w_{2}+...+a_{n_{i}}w_{n}) + (ka_{1_{j}}w_{1}+ka_{2_{j}}w_{2}+...+ka_{n_{j}}w_{n}) = b_{i} + kb_{j}\), i.e. \((w_1,w_2,w_3,...,w_n)\) also satisfies the i-th equation in the new system. Therefore, \((w_1,w_2,w_3,...,w_n)\) is also a solution to the new system.
Does it look good? Did I make any mistakes?
@RolyPoly, I have just read through your proof and it looks good to me. I will get back to you if I find something you could improve.
Thanks, and thanks a lot! :D
What we did was to prove that every solution of the old system is also a solution of the new system. The second part is to prove the converse: every solution of the new system is also a solution of the old system. Together these show the two systems are equivalent. I hope I did not misunderstand my teacher's words.
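One way to see why the second part works (this is just my framing, not necessarily the teacher's intended argument): each row operation is undone by a row operation of the *same type*, so the old system is itself obtained from the new one by a row operation, and the first part applies again in reverse. A small Python sketch with a made-up system, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Rows stored as coefficient tuples (a, b, c) for a*x + b*y = c.

def scale_row(rows, i, k):
    """Type II: multiply row i by the constant k."""
    out = list(rows)
    out[i] = tuple(k * t for t in out[i])
    return out

def add_multiple(rows, i, j, k):
    """Type III: add k times row j to row i."""
    out = list(rows)
    out[i] = tuple(t + k * s for t, s in zip(out[i], out[j]))
    return out

rows = [(1, 1, 3), (1, -1, 1)]          # made-up example system

# Type II with k = 5 is undone by Type II with k = 1/5
k = Fraction(5)
assert scale_row(scale_row(rows, 0, k), 0, 1 / k) == rows

# Type III with k = 2 is undone by Type III with k = -2
assert add_multiple(add_multiple(rows, 0, 1, Fraction(2)), 0, 1, Fraction(-2)) == rows

# (Type I is its own inverse: swapping the same two rows twice
# restores the original order.)
```

Since the inverse operation recovers the old system exactly, any solution of the new system solves the old one by the same argument you already wrote, just applied in the other direction.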