#18 https://drive.google.com/file/d/0B7ss_Y8M_VEgWWJ2ZEJWOHZGZWc/view?usp=drivesdk
Clearly: 1) if \(p_1\) and \(p_2\) are in S, then \(p_1+p_2\) is also in S; 2) if p is in S, then cp is also in S. So S is a subspace.
How do we find a basis?
A definite integral is a linear operator; I'll put the two properties side by side just for clarity: \[\int_0^1 \big(cf(x)+kg(x)\big)\,dx= c\int_0^1 f(x)\,dx + k \int_0^1 g(x)\,dx\] \[A(cu+kv)=c(Au)+k(Av)\] So, although this may look awkward at first, we can represent the integral operator just like this: \[A=\left( \int_0^1 dx\right)\] Now we can look at how the components of a column vector in the basis \(\{1,x,x^2,x^3\}\) of \(P_3\) look after the transformation: \[v=\begin{pmatrix}a_0\\a_1\\a_2\\a_3\end{pmatrix}\] The whole point is that you are really looking for the null space of A, i.e. the solutions of \[Av = 0\]
One of the best things about a linear operator is that if you know how it transforms the basis vectors, you know how it transforms ANY vector, and that's really powerful. I'll show you how we can use that to find the first column of the matrix A. Let's transform the first basis vector, which represents just a constant: \[\left(\int_0^1 dx\right) a_0 = a_0\] Make sure that works. In matrices it looks like this: \[\begin{pmatrix} ? & ? & ? & ?\\ ? & ? & ? & ?\\ ? & ? & ? & ?\\ ? & ? & ? & ? \end{pmatrix}\begin{pmatrix} 1\\ 0\\ 0\\ 0 \end{pmatrix}=\begin{pmatrix} 1\\ 0\\ 0\\ 0 \end{pmatrix}\] But look: that matrix multiplication means "1 times the first column plus 0 times the other 3 columns," which means the right-hand side of the equation is EXACTLY the first column of the matrix. \[A=\begin{pmatrix} 1 & ? & ? & ?\\ 0 & ? & ? & ?\\ 0 & ? & ? & ?\\ 0 & ? & ? & ? \end{pmatrix}\] You have 3 more unit-vector multiplications to go. I'm not sure I made it quite clear what I'm doing, but at some point it should be a relief that you can figure out linear transformations this easily.
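If it helps, here's a quick way to check all four columns at once (my own sketch using sympy, not something from the thread): integrate each basis polynomial of \(P_3\) over [0, 1], since each result is the image of one basis vector.

```python
import sympy as sp

x = sp.symbols('x')
basis = [1, x, x**2, x**3]

# Integrate each basis polynomial over [0, 1]; each result is the
# (scalar) image of one basis vector, i.e. the top entry of one column of A.
images = [sp.integrate(p, (x, 0, 1)) for p in basis]
print(images)  # [1, 1/2, 1/3, 1/4]
```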
I feel like that example might have been a little too simple, so it ended up being slightly confusing, because the integral just mapped \(a_0\) to \(a_0\) lol. Oh well, I can easily write out another one for the second or third column if you're still feeling kinda confused.
I can see \(Ax\) as taking combinations of the column vectors of A, with weights set by the components of x.
After some guessing, I figured out that \[\left\{\,x-\tfrac{1}{2},\; x^2-\tfrac{1}{3},\; x^3-\tfrac{1}{4}\,\right\}\] is a basis. Wish there were another way to work this out that is a bit less ad hoc..
Do you know how to find the nullspace of a matrix?
Yes I know how to find all 4 subspaces
\[\begin{pmatrix} 1 & 1/2 & 1/3 & 1/4\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}\]
I think that matrix represents the given integral transformation
A nullspace basis can be \[\begin{pmatrix} -1/2\\ 1\\ 0\\ 0 \end{pmatrix},\quad \begin{pmatrix} -1/3\\ 0\\ 1\\ 0 \end{pmatrix},\quad \begin{pmatrix} -1/4\\ 0\\ 0\\ 1 \end{pmatrix}\]
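A quick machine check of that claim (my own sketch with sympy, not from the thread; the matrix is the one proposed above for the integral operator):

```python
import sympy as sp

# Matrix of the integral operator in the basis {1, x, x^2, x^3}.
A = sp.Matrix([[1, sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 4)],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])

null_basis = A.nullspace()           # three basis vectors for the nullspace
for v in null_basis:
    assert A * v == sp.zeros(4, 1)   # each one really is killed by A
print(len(null_basis))               # 3
```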
Wow! Really interesting how the column space is just a one-dimensional real number line, and how the nullspace is a three-dimensional hyperplane in 4D spanned by those basis vectors. Thanks @kainui :)
Haha, I think you did most of the work on your own there. I'll never forget the example my linear algebra professor gave that made null spaces make sense to me: the real world is 3D space, but a map is 2D, so that entire 1D z-direction ("up", like the heights of things) is the null space; it all gets sent to the same level. That really helps me understand null spaces intuitively. Generally speaking, this is sort of why projections are not invertible and why they are idempotent. Come to think of it, I think it was you who showed me (or I saw you show someone else) this trick for making a matrix that projects onto a vector: \[P = \frac{vv^\top}{v^\top v}\]
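That formula is easy to sanity-check numerically (my own sketch with an arbitrary vector v, not something from the thread):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])   # any nonzero column vector
P = (v @ v.T) / (v.T @ v)             # 3x3 projection onto the line through v

assert np.allclose(P @ P, P)          # idempotent: projecting twice changes nothing
assert np.allclose(P @ v, v)          # v itself is fixed by the projection
assert np.linalg.matrix_rank(P) == 1  # the column space is the 1D line through v
```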
That's a neat example. For me, all the subspaces made sense only after solving a network using matrices. Remember this from one of Strang's lectures? If x represents the potentials at the nodes, then Ax represents the potential differences across the edges; Ohm's law, y = cAx, gives the currents; and \(A^\top y = 0\) represents KCL, which is the same as finding the left nullspace.
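To make that concrete, here's a tiny toy network (my own example, not the one from the lecture): a 3-node loop, where A is the edge-node incidence matrix.

```python
import numpy as np

# Edge-node incidence matrix of a 3-node loop: edges 1->2, 2->3, 3->1.
A = np.array([[-1,  1,  0],
              [ 0, -1,  1],
              [ 1,  0, -1]], dtype=float)

x = np.array([3.0, 1.0, 2.0])  # node potentials
print(A @ x)                   # potential differences: [-2.  1.  1.]

# A current circulating around the loop satisfies KCL, A^T y = 0,
# i.e. y lies in the left nullspace of A.
y = np.array([1.0, 1.0, 1.0])
assert np.allclose(A.T @ y, 0)
```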