linear algebra: A = LU
\[EA=U\\A=E^{-1}U\] is it guaranteed that \(E^{-1}\) is a lower triangular matrix (if no row exchanges are made)? i don't understand why we are factoring it this way. does it help solve AX=b faster?
under the section "how expensive is elimination": is it talking about the cost of finding \(A^{-1}\), of factoring A as LU, or of solving Ax=b?
@ganeshie8 @phi
Hi
hey :) finally changed ur pic i see i do like ice cream :P
Haha that is just to give myself some motivation to eat
is it guaranteed that \(E^{-1}\) is a lower triangular matrix? notice E is the identity matrix with one additional value at location row i, col j. when we do elimination, we always eliminate entries in A below its diagonal (assuming A is square and full rank)
that means all the E's are lower triangular. the inverse of a lower triangular matrix is also lower triangular, and we can show that multiplying lower triangular matrices results in a lower triangular matrix.
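a quick numerical check of that claim, a minimal sketch with numpy (the multiplier values -2 and -5 are made up for illustration):

```python
import numpy as np

# Two elimination matrices: identity plus one multiplier below the diagonal.
E21 = np.array([[1., 0., 0.],
                [-2., 1., 0.],
                [0., 0., 1.]])
E32 = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., -5., 1.]])

product = E32 @ E21
# Every entry strictly above the diagonal of the product is still zero,
# i.e. the product of lower triangular matrices stays lower triangular.
print(np.allclose(np.triu(product, k=1), 0))  # True
```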
hmm... i think i get that, i'll try with some examples later and see if i get it fully
so why are we doing this? how does it help with gauss elimination ?
Don't you appreciate that a lower triangular matrix represents a set of equations that can be solved trivially?
Look at the equations below \[x_1 = 2\\x_1 + x_2 = 3\] etc
yes, that was the previous lecture \[AX=b\\(EA)X=Eb\] EA is an upper triangular matrix from which we can pretty easily read off the values for the unknowns. i don't get why we are factoring A=LU
I thought Lecture 2 goes into the details of how to solve Ax= b using LU factorization
lecture 2 does not discuss A=LU, only EA=U and how this makes solving for X easy
lecture 3 says that we can factorize A as LU, but doesn't seem to explain why we want to. is it covered in subsequent lectures?
how is \[AX=b\\(LU)X=b\] easier than \[(EA)X=Eb\\UX=Eb\] ?
i haven't gone through the problems/ recitation video.. is it covered in those?
LU x = b: first you solve L y = b using forward substitution to find y, then you solve Ux = y using back substitution
what's 'y'? is this method covered later on? is it computationally less expensive than the earlier method (EA)X=Eb?
y is an intermediate result (a vector) https://en.wikipedia.org/wiki/Triangular_matrix#Forward_and_back_substitution
thanks, i've got to go now, please do post any thoughts you have. i'll read that when i can and get back to you :)
in other words, with LU x = b, Ux will be a vector. call it y. then L y = b: we know L and b, so we find y. knowing Ux = y, and knowing y and U, we solve for x
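those two steps can be sketched in numpy like this (the L, U, b values here are made up for illustration, assuming the LU factors are already known):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower triangular L, top row down."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper triangular U, bottom row up."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Example factors, invented for illustration: A = L U
L = np.array([[1., 0.], [3., 1.]])
U = np.array([[2., 1.], [0., 4.]])
b = np.array([4., 16.])

y = forward_sub(L, b)   # solve L y = b
x = back_sub(U, y)      # then solve U x = y
print(np.allclose(L @ U @ x, b))  # True
```

each substitution pass touches each matrix entry once, so once you have L and U, every new right-hand side b costs only these two cheap passes.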
"E is the identity matrix, with one additional value at location row i, col j" this is always true assuming a typical case?
here in the pdf we have \[E_{32}=\left[\begin{matrix}1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & -5 & 1 \end{matrix}\right]\] and \[E_{32}^{-1}=\left[\begin{matrix}1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 5 & 1 \end{matrix}\right]\] as in, the inverse of the elimination matrix is just swapping the sign of the entry below the diagonal. is that how Prof. Strang is finding inverses at about 22:00? (i just know the 18.02 method, adj(A)/det(A)) http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/ax-b-and-the-four-subspaces/factorization-into-a-lu/
is that the advantage of A=LU? if we know each of the elimination matrices E_32, E_31, E_21, then L is just those three sort of superimposed, with the signs below the diagonal swapped?
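a quick numpy check of that idea (the -5 matches the E_32 from the pdf; the -2 and -3 multipliers in E_21 and E_31 are invented for illustration):

```python
import numpy as np

# Elimination matrices: identity plus one multiplier below the diagonal.
E21 = np.array([[1., 0., 0.], [-2., 1., 0.], [0., 0., 1.]])
E31 = np.array([[1., 0., 0.], [0., 1., 0.], [-3., 0., 1.]])
E32 = np.array([[1., 0., 0.], [0., 1., 0.], [0., -5., 1.]])

E = E32 @ E31 @ E21   # combined elimination, E A = U
L = np.linalg.inv(E)  # L = E21^-1 @ E31^-1 @ E32^-1

# L is just the identity with each multiplier's sign flipped,
# sitting in its own slot with no interaction terms.
expected = np.array([[1., 0., 0.], [2., 1., 0.], [3., 5., 1.]])
print(np.allclose(L, expected))  # True
```

note it only works out this cleanly in the inverse order: E itself (E32 E31 E21) picks up an extra interaction term below the diagonal, but L = E21^-1 E31^-1 E32^-1 does not, which is one reason A=LU is the preferred bookkeeping.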
@phi @ganeshie8