@Kainui , can you explain to me how Part I is related to II and III? I understand how to do Part I, I just can't figure out for the life of me how it is at all relevant to the Gram-Schmidt Process. http://i.imgur.com/eBcqfEy.png
Ahh, so are you familiar with GS from linear algebra, or is it a completely new concept to you?
It's a new concept to me, and I understand how to do it with respect to vectors, but I have yet to see a simple example (at least one I can understand) of making functions orthogonal using Gram-Schmidt.
But yeah, I just don't understand how part i.) relates to the rest, everything else seems related.
This is really one of my favorite things to learn since it really made me appreciate integrals that much more. So the idea is that functions are really infinite dimensional vectors and the integral is their dot product.
To see the analogy, here's a regular dot product of two 3D vectors: \[\Large \bar A \cdot \bar B = a_1b_1+a_2b_2+a_3b_3\] And here's the integral, chopped into steps of dx = 0.01, with each term weighted by dx: \[\Large \int\limits _0^1 f(x)g(x)dx \approx \left[f(0)g(0)+f(0.01)g(0.01)+...+f(1)g(1)\right]dx\]
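If it helps to see that analogy numerically, here's a rough Python sketch (f and g are arbitrary example functions chosen here, not from the problem): chopping [0, 1] into a million pieces turns the integral into a literal dot product of sampled values, weighted by dx.

```python
import numpy as np

# Sketch of the analogy: sample f and g on a fine grid over [0, 1],
# then the integral of f(x)*g(x) is just a very long dot product of
# the sample values, with dx weighting each term.
f = lambda x: x        # example function, f(x) = x
g = lambda x: x ** 2   # example function, g(x) = x^2

n = 1_000_000
dx = 1.0 / n
xs = np.arange(n) * dx            # sample points 0, dx, 2*dx, ...
dot = np.dot(f(xs), g(xs)) * dx   # the "infinite" dot product, scaled by dx

print(dot)  # ~0.25, matching the exact integral of x^3 from 0 to 1
```

The finer the grid, the closer this dot product gets to the exact integral, which is the whole point of the analogy.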
When you say "infinite dimensional vectors", you don't mean like, an infinitely long vector, but rather a vector in R^infinity?
You're right, in a sense. \[\Large \int\limits_a^b f(x)f(x)dx = 1 \\ \Large \text{ "normalized vector"}\] \[\Large \int\limits_a^b f(x)g(x)dx = 0 \\ \Large \text{ "orthogonal vectors"}\]
Really they are infinitely long, but the dx is the infinitesimally small part that weights each of the infinite number of "components"
then the integral adds up the infinite number of these infinitely small pairs of terms in the dot product. =P
Yeah, so now you should really be able to just directly apply linear algebra to them like you would with vectors, have fun hahaha. Another direct application of this seemingly esoteric thing (this is a Hilbert space, BTW) is probability. See, what's the probability of finding someone who's exactly 6 feet tall? Well, you might find someone who's 6.000000000000001 feet tall, right? So really there's only ever an infinitesimally small chance of finding someone of any one specific height; only a *range* of heights has a real probability, and that's an integral. That's why the area under the gaussian distribution (bell curve) is 1. The more you know, right? =P
This sounds cool, but I think it was too much for me, lmao. I don't think I'll be able to grasp all of that quite yet, but I could ask: Does the answer to i.) (or its solution) affect or inform the mechanics of solving the other parts of the problem?
(Sorry if all that well-typed and informative stuff seems like it went to waste, I'm still looking over it, heh)
Yeah, it would appear to be useful haha. No, that's fine, take your time; I mean, it's a weird but cool concept haha
So is the integrand of I_n supposed to be the "dot product" of two of the relevant functions? I'd expect it to be f_0 and f_1, or some combination of the available three, then.
And is it appropriate in this to call dx a unit vector, or does it behave like a unit vector in the context of these infinitely long vectors?
For some problems, like Sturm-Liouville, you need a special extra weight factor to make the functions orthogonal, and the range over which they're orthogonal differs. I'm not entirely sure since I haven't seen this problem before, but I'm thinking the "dot product" here would be: \[\Large "f_n \cdot f_m" = \int\limits_0^\infty (x^n x^m) e^{-x}dx\]
No, don't think of dx as being a unit vector. The dx is more like what allows us to turn this infinitely large dot product back down into a real number. Think of it as sort of cancelling the effect of being an infinite sum, which is the integral part. So all integrals can be thought of as just an infinite sum of infinitely small things. (Even integrals that don't have infinity as bounds; the infinity is in the number of "rectangles" lol)
A unit vector would be something like this: \[\Large f(x) = \cos(x) \\ \Large \int\limits_0^{2\pi} \cos^2(x) \frac{dx}{\pi}=1\] In fact, this is the key fact that Fourier series rely on, since it can be shown that ALL trig functions with different frequencies are orthogonal to each other: \[\Large m \ne n \\ \Large \int\limits_0^{2\pi} \cos(nx)\cos(mx) \frac{dx}{\pi}=0\] So this will allow you to solve for the coefficients in a Fourier series, since this sort of integral will destroy all the other terms except the single one you are looking for -- if that makes sense. It will eventually, I hope, since it is quite amazing.
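Both of those claims are easy to check numerically; here's a hedged Python sketch (the helper name `inner` is made up here) that approximates the integrals with a midpoint sum over [0, 2π].

```python
import numpy as np

# Check the two trig claims numerically: cos(nx) and cos(mx) are
# orthogonal over [0, 2*pi] when n != m, and cos^2(x)/pi integrates to 1.
n_pts = 200_000
dx = 2.0 * np.pi / n_pts
xs = (np.arange(n_pts) + 0.5) * dx  # midpoints of the subintervals

def inner(n, m):
    # the "dot product" of cos(nx) and cos(mx) with the 1/pi weight
    return np.sum(np.cos(n * xs) * np.cos(m * xs)) * dx / np.pi

print(inner(3, 5))  # ~0: different frequencies are orthogonal
print(inner(1, 1))  # ~1: cos(x) is a "unit vector" with this weight
```

Try swapping in other integer frequencies; any pair of different nonzero integers gives essentially 0.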
Wait, so the magnitude of an integral is roughly equivalent conceptually to the magnitude of a vector?
Yes, exactly.
Think of \[\Large a_n\] as the nth component of the vector a and think of \[\Large f(x)\] as the xth component of the vector f
Does that mean that implicitly the lower and upper bounds of an indefinite integral (sounds nonsensical saying it that way) are -infinity and +infinity, respectively?
No, see, there are bounds on all these integrals. For instance, 0 to infinity, 0 to 2pi, and 0 to 1 are all common bounds. You can think of this as the analogue of a 100-dimensional vector versus a 3- or 2-dimensional vector. In every case, though, there are an infinite number of points to pick between 0 and 1, or between 0 and 2pi, so even when the bounds themselves aren't infinite, you still get infinitely many "components".
Continuous as opposed to discrete. So what happens if there aren't bounds? Does that mean you can't treat it as a vector? Because I think I'm misunderstanding you, and I think what I just said is not true. And is virtually any definite integral that evaluates to 1 a unit vector?
(PS thanks so much for your time)
Here is a set of polynomials that form an orthogonal basis: their integrals from -1 to 1 are 0 if you take two different ones, and a simple constant (which you can divide out to normalize them) if they are the same http://en.wikipedia.org/wiki/Legendre_polynomials#mediaviewer/File:Legendrepolynomials6.svg
Different what, or same what? (lol sorry, throwing a lot of questions out here)
Yeah, you're right, we're dealing with a continuum of "dimensions". There have to be bounds, but you can have your bounds be -infinity to +infinity, which is the case for Hermite polynomials. (The picture I just showed is of Legendre polynomials.) I guess one way to think of not having bounds would be to completely remove the ability to compare the vectors. It would be roughly like taking the dot product of a 2D vector with a 3D vector, but slightly worse than that.
So look at the picture. When I say "different" I mean \[\Large \int\limits_{-1}^1 P_n(x)P_m(x) dx = 0 \text{ for } n \ne m\] and by "same" I mean \[\Large \int\limits_{-1}^1 P_n^2(x) dx = \frac{2}{2n+1}\] (the standard Legendre polynomials aren't quite unit length, so you'd divide each one by sqrt(2/(2n+1)) to normalize it). See how the first condition is just orthogonal vectors, and the second is the dot product of a vector with itself (its squared length)? That picture has some polynomials you can easily compute. Maybe try them out: take the "dot product" of P1 with P0, and of P1 with itself, and see what you get. =)
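Those two suggested "dot products" can be checked symbolically. Here's a hedged sympy sketch, with P0 = 1 and P1 = x written out by hand from the standard Legendre definitions: P1·P0 comes out 0 (orthogonal), and P1·P1 comes out 2/3 = 2/(2·1+1) (the squared length).

```python
import sympy as sp

x = sp.symbols('x')
P0 = sp.Integer(1)  # first two Legendre polynomials, from the picture
P1 = x

# "different" -> orthogonal: the dot product is 0
p1_dot_p0 = sp.integrate(P0 * P1, (x, -1, 1))

# "same" -> squared length: 2/(2n+1), which is 2/3 for P1
p1_dot_p1 = sp.integrate(P1 ** 2, (x, -1, 1))

print(p1_dot_p0, p1_dot_p1)  # 0 2/3
```

The same pattern works for the higher polynomials in the picture if you want more practice.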
(Reading this, one sec)
Yeah take your time I'm not trying to rush you, just trying to give you the most painless path to knowledge =)
Some examples: Orthogonal functions: http://www.wolframalpha.com/input/?i=integral+cos%283x%29cos%285x%29%2Fpi+dx+from+0+to+2pi Length of "unit function" http://www.wolframalpha.com/input/?i=integral+cos%5E2%28x%29%2Fpi+dx+from+0+to+2pi
In that first example on Wolfram Alpha, change the 3 and 5 to ANY nonzero integers and you will get 0 if they are different and 1 if they are the same. (The n = 0 case is the odd one out: the integral of 1/pi from 0 to 2pi is 2.) =)
Maybe some other day I can show you how to derive the coefficients of a Fourier series, since it's really one of the most enlightening reasons to have orthogonal functions in the first place.
Alright, yeah. I'm getting tested partly on Fourier series soon, so I'll have to learn eventually, heh. Right now I'm just trying to figure stuff out. So I'll come full circle and ask one more time, even if it sounds painfully simplistic or repetitive: How does Part i affect my knowledge or working of Part ii or iii in the original problem? Does it alter any information very specifically?
Yes, definitely. Notice in parts 3, 4, and 6 you have this little bit of notation: \[\Large \{[0,\infty);e^{-x}\}\] This is what tells you that the dot product has this sort of "template". I'll help you with part iv to start you off: \[\Large "f_0 \cdot f_0"= \int\limits_0^\infty f_0f_0 e^{-x}dx=\int\limits_0^\infty 1^2 e^{-x}dx=1!=1\] Now we can start doing the Gram-Schmidt process by finding the projection of f_1 onto f_0 and subtracting it out, coming up in a second.
I'm putting apostrophes to show the dot product sort of analogue \[\Large proj_{f_0}(f_1) = \frac{"f_0 \cdot f_1"}{"f_0 \cdot f_0"}f_0\]\[\Large =\frac{\int\limits_0^\infty x^0 x^1 e^{-x}dx}{\int\limits_0^\infty x^0 x^0 e^{-x}dx}\cdot 1\] Hopefully this is making it more obvious now?
Whoops for that post before the last one I put that it was 1! when I should have put 0!, which is also 1, so no big deal.
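As an aside, all these integrals are instances of one handy fact you can verify with sympy: with the e^{-x} weight on [0, ∞), integrating x^n gives n!, so the "dot product" of x^n with x^m is just (n+m)!.

```python
import sympy as sp

x = sp.symbols('x')

# With the e^{-x} weight on [0, oo), integrating x^n gives n factorial,
# so the weighted "dot product" of x^n and x^m is (n+m)!.
zero_fact = sp.integrate(x ** 0 * sp.exp(-x), (x, 0, sp.oo))   # 0! = 1
three_fact = sp.integrate(x ** 3 * sp.exp(-x), (x, 0, sp.oo))  # 3! = 6

print(zero_fact, three_fact)  # 1 6
```

That's why the "f_0 · f_0" integral above came out to 0! = 1.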
Continuing on, and I suppose I should follow their convention, forgive me for kind of skipping ahead: \[\Large \hat g_0(x) = f_0(x) \text{ (already normalized)}\\ \Large g_1(x) = f_1(x) - 1 \text{ (still needs a normalization check)}\] So maybe you see how the GS process is coming through here. I'm gonna let you figure out the rest. Remember, to normalize a function, all you have to do is take its dot product with itself and divide the function by the square root of that value, just like with vectors! =)
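The whole process above can be sketched in a few lines of sympy. This is a hedged sketch, not the worked solution: it assumes the monomials f_0 = 1, f_1 = x, f_2 = x² and the weighted dot product {[0, ∞); e^{-x}} from the problem, and the helper names `ip` and `gram_schmidt` are made up here.

```python
import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    # the weighted "dot product" for this problem: {[0, oo); e^{-x}}
    return sp.integrate(f * g * sp.exp(-x), (x, 0, sp.oo))

def gram_schmidt(fs):
    # classic Gram-Schmidt, with integrals standing in for dot products
    basis = []
    for f in fs:
        for b in basis:
            f = f - ip(f, b) * b  # subtract the projection onto b
        basis.append(sp.expand(f / sp.sqrt(ip(f, f))))  # then normalize
    return basis

g0, g1, g2 = gram_schmidt([sp.Integer(1), x, x ** 2])
print(g0, '|', g1, '|', g2)  # 1 | x - 1 | x**2/2 - 2*x + 1
```

Up to sign, these are the Laguerre polynomials, which is a nice sanity check on the whole "functions as vectors" picture.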
By all means keep asking questions if you have them, but I won't work out any more stuff like this haha. When is this due by the way?
Yeah, this is starting to make sense to me; thanks so much for the help, this has been a really interesting conversation! I think I'm going to go sleep for a little and then wake up early to look at this stuff again before work. Oh, this isn't due; it's just from an example set. My exam is on Monday, and he's been throwing a ton of stuff at us in a short amount of time. I'm almost certainly not going to do well (I have another exam on Monday, then another on Tuesday, several assignments due Monday and Tuesday, and I'm working tomorrow, like today, from 9 AM to 9 PM technically), but I'm going to try, lol.
Also, don't feel bad about "wasting my time" or anything like that. I come here just as much for the benefit of reviewing and learning as you do, and I think everyone has something to gain from having fun here haha. It's not every day I get to introduce someone to one of my favorite concepts in calculus; most people I talk to barely know what the quadratic equation is lol.
Yeah. Well, thanks so much! I'm going to probably just change what I'm studying for now to a different subject. Thank you, and goodnight!
Yeah later
@Kainui , could you help me out on this once again? I'm trying the Gram-Schmidt procedure for a certain set of functions, but am making a mistake somewhere.