Padé Approximants, continuation of earlier thread
@Kainui alrighty here we go:
Imagine that we have a power series that converges inside some radius of convergence. \[ f(x) = \sum_n a_n x^n \] To have a concrete example, let's take the natural log: \[ \ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + \cdots \]
With a power series, we approximate f(x) as a polynomial. For most reasonable functions that's possible, at least within a particular interval (its radius of convergence). But I bet we can do better -- instead, let's approximate it as the ratio of two polynomials: \[ f(x) = \frac{\sum_{n=0}^\infty a_n x^n}{1 + \sum_{m=1}^\infty b_mx^m} \]
The 1 is there instead of b_0 because a ratio is only defined up to a multiplicative constant on top and bottom -- so if b_0 wasn't 1 (and as long as it isn't zero), we could just divide top and bottom by b_0 to make it 1 and then start from there.
Sure I think I'm following so far, I'm curious how this is going to pan out.
This is called a Padé approximant, and is denoted \[ P^N_M (x) \] We can solve for the a's and b's by the following rule: truncate the original power series as follows: \[ f(x) = \sum_{r=0}^R c_r x^r \] Then we set this equal to a Padé approximant, truncated as appropriate, like this: \[ f(x) = \sum_{r=0}^R c_r x^r = \frac{ \sum_{n=0}^N a_nx^n }{1 + \sum_{m=1}^M b_m x^m} = P^N_M(x) \]
I suppose I can see how c_0=a_0 but from there I'm stuck in finding other coefficients.
Now we cross multiply: \[ \sum_{r=0}^R c_r x^r + \left(\sum_{r=0}^R c_r x^r\right)\left(\sum_{m=1}^M b_mx^m\right) = \sum_{n=0}^N a_n x^n \] The second term on the left is \[ \sum_{r=0}^R \sum_{m=1}^M c_r b_m x^{r+m} \] if we introduce the variable q = r+m, so r = q-m, that's really \[ \sum_{q=0}^R \sum_{m=1}^{M} c_{q-m} b_{m} x^q \] And just for future convenience, since the dummy indices don't matter, let's switch q<-->r... \[ = \sum_{r=0}^R \sum_{m=1}^{M} c_{r-m} b_{m} x^r \] Plugging this in, we can factor: \[ \sum_{r=0}^R \left(c_r + \sum_{m=1}^{M} c_{r-m}b_{m} \right)x^r\]
Sorry for the delay -- I haven't done this in awhile and relabeled my indices improperly :)
Hmm I am pretty sure I'm ok with the whole dummy variable and then factoring out. Seems a little weird but I can't find anything wrong with it. =)
\[a_n=c_n+\sum_{m=1}^{M}c_{n-m} b_m\] That's my next move. Bathroom brb.
Oops, my limit was not right: \[ \sum_{q=1}^{R+M} \sum_{m=1}^{M} c_{q-m} b_{m} x^q \] (with the convention that c_j = 0 for j < 0) which after our relabeling would yield: \[ c_0 + \sum_{r=1}^R \left(c_r + \sum_{m=1}^{M} c_{r-m}b_{m} \right)x^r + \sum_{q=R+1}^{R+M}\sum_{m=1}^M c_{q-m}b_m x^q\] Note that the subscript on the c's can't go any higher than R -> so q-m <= R -> so m >= q-R, and we can make one final revision to our work: \[ c_0 + \sum_{r=1}^R \left(c_r + \sum_{m=1}^{M} c_{r-m}b_{m} \right)x^r + \sum_{q=R+1}^{R+M}\sum_{m=q-R}^M c_{q-m}b_m x^q\]
What a mess... but I put it in three parts for a reason. Setting it equal to the right hand side of our original equation, \[ c_0 + \sum_{r=1}^R \left(c_r + \sum_{m=1}^{M} c_{r-m}b_{m} \right)x^r + \sum_{q=R+1}^{R+M}\sum_{m=q-R}^M c_{q-m}b_m x^q = a_0 + \sum_{n=1}^N a_n x^n \]
Clearly c_0 = a_0. For all 1 <= r <= N, \[ c_r + \sum_{m=1}^{M} c_{r-m}b_{m} = a_r \] which gives us N equations involving the *known* coefficients c, the M unknown coefficients b, and the N unknown coefficients a. For all r with N < r <= R, there's no term on the right hand side to match, so \[ c_r + \sum_{m=1}^{M} c_{r-m}b_{m} = 0 \] which provides us with a further R-N equations involving only the b's... and finally, the third term on the left would similarly demand \[\sum_{m=q-R}^M c_{q-m}b_m = 0 \] for all q with R+1 <= q <= R+M -- but those powers of x lie beyond the order to which our truncated series represents f(x) in the first place, so we shouldn't (and in general can't) enforce them.
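As a sanity check on these equations, here's the smallest nontrivial case for the log series (a worked example added for illustration): take N = M = 1, so R = 2, with c_0 = 0, c_1 = 1, c_2 = -1/2. The equations give \[ a_0 = c_0 = 0, \qquad a_1 = c_1 + c_0 b_1 = 1, \qquad c_2 + c_1 b_1 = 0 \;\Rightarrow\; b_1 = \tfrac{1}{2} \] so \[ P^1_1(x) = \frac{x}{1 + x/2} = \frac{2x}{2+x}, \qquad P^1_1(2) = 1 \quad \text{vs} \quad \ln 3 \approx 1.0986 \] Not bad for just two terms of the series.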
I reaaalllly hope I didn't make a big typo. Anyway, let's count our equations.... the first part gives us N equations involving the a's and b's. Notice that the second part involves only the unknown b's: if we take R = N + M, it gives exactly M equations for the M b's. So we have N + M equations and N + M unknowns (a_0 is already fixed by c_0). Set up a matrix to solve them and bam, it can be done, as long as N + M <= R (you could have R greater than that, but the extra c's simply wouldn't get used).
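To make "set up a matrix" concrete, here's a minimal numpy sketch of the whole procedure (my own code, added for illustration -- the function names are made up). It solves the M equations for the b's coming from the powers x^{N+1} through x^{N+M}, then reads off the a's:

```python
import numpy as np

def pade_coeffs(c, N, M):
    """[N/M] Pade coefficients from Taylor coefficients c[0..N+M].

    Solves  c_k + sum_{m=1}^M b_m c_{k-m} = 0  for k = N+1..N+M
    (taking c_j = 0 for j < 0), then a_n = c_n + sum_m b_m c_{n-m}.
    """
    c = np.asarray(c, dtype=float)
    # Row i corresponds to k = N+1+i; column j to b_{j+1}.
    A = np.array([[c[N + i - j] if N + i - j >= 0 else 0.0
                   for j in range(M)] for i in range(M)])
    b = np.linalg.solve(A, -c[N + 1 : N + M + 1])      # b_1..b_M
    a = np.array([c[n] + sum(b[m - 1] * c[n - m]
                             for m in range(1, min(n, M) + 1))
                  for n in range(N + 1)])              # a_0..a_N
    return a, b

def pade_eval(a, b, x):
    num = sum(an * x**n for n, an in enumerate(a))
    den = 1.0 + sum(bm * x**(m + 1) for m, bm in enumerate(b))
    return num / den

# [2/2] for ln(1+x), using c = [0, 1, -1/2, 1/3, -1/4]:
a, b = pade_coeffs([0, 1, -1/2, 1/3, -1/4], N=2, M=2)
print(a, b)                   # a = [0, 1, 0.5], b = [1, 1/6]
print(pade_eval(a, b, 2.0))   # 12/11 = 1.0909..., vs ln 3 = 1.0986...
```

Run on the log series with N = M = 2, this recovers the classic [2/2] approximant 3x(2+x)/(6+6x+x^2), which already lands within about 1% of ln 3 at x = 2.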
I didn't mean to go into so much detail about solving for the coefficients -- the point is, if N + M <= R, this problem can be solved every time (with the sole condition that f(x) is finite at the origin). Now for the part that's interesting and non-algebraic... also much shorter.
The point of this whole mess is that we can compute a Padé approximant for the natural log. Imagine we took the Taylor series (T) out to 10 terms and then solved for P^5_5: \[\ln(1+x) \approx T^{10}(x) \rightarrow \ln(1+x) \approx P^5_5(x) \] Now we plug in x = 0.2. The Taylor series does fairly well in approximating ln(1.2), while the Padé approximant actually does MUCH better. But did I type all that out to get a slightly better version of a Taylor series? Of course not. Let's plug in x = 2. This is WELL outside the radius of convergence for the Taylor series, and just as we'd expect, the value it gives is complete nonsense. By the tenth term, the numerators 2^n are vastly larger than the denominators n, and the partial sums keep alternating between + and - while getting larger and larger and larger.
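Here's that experiment as a quick numpy sketch (my own code -- the thread just describes the result). It builds P^5_5 from the ten-term log series and compares both approximations at x = 0.2 and x = 2:

```python
import numpy as np

# Taylor coefficients of ln(1+x): c_0 = 0, c_n = (-1)^(n+1)/n
c = np.array([0.0] + [(-1.0)**(n + 1) / n for n in range(1, 11)])

N = M = 5
# Denominator: solve  c_k + sum_m b_m c_{k-m} = 0  for k = 6..10
A = np.array([[c[N + i - j] for j in range(M)] for i in range(M)])
b = np.linalg.solve(A, -c[N + 1 : N + M + 1])
# Numerator: a_n = c_n + sum_m b_m c_{n-m}
a = np.array([c[n] + sum(b[m - 1] * c[n - m] for m in range(1, min(n, M) + 1))
              for n in range(N + 1)])

T = lambda x: sum(c[n] * x**n for n in range(11))   # 10-term Taylor sum
P = lambda x: (sum(a[n] * x**n for n in range(N + 1))
               / (1.0 + sum(b[m - 1] * x**m for m in range(1, M + 1))))

print(T(0.2), P(0.2), np.log(1.2))   # both land close to ln(1.2)
print(T(2.0), P(2.0), np.log(3.0))   # Taylor is wildly off; Pade is ~ln 3
```

At x = 2 the ten-term Taylor sum comes out large and negative (complete nonsense), while the Padé value agrees with ln 3 to many digits.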
But then here's the magic trick: plug x = 2 into your Padé approximant, and your answer will be accurate to about 0.0001%.
What how is that... lol Alright why do you suspect that's true?
You should stop and think for a second about why this is magic -- we used the coefficients of a series that *diverges* to compute, *beautifully*, the very quantity that series was failing to represent.
Because I've done it.
In terms of a formal proof? I don't have one. I am unaware that one exists.
But notice -- we didn't need the power series as a function of x -- we just needed access to its coefficients. That implies that if you had plugged a number in beforehand, and just given me \[ T^{10}(2) \] as a series of constant terms, I could simply use each term as the coefficient of a power series in x, compute my Padé approximant, set x = 1, and get the same answer as I would have if you'd given me the original Taylor series and then we'd set x = 2. You see?
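A sketch of exactly that trick (my own code, added for illustration): scale each coefficient c_n by 2^n, build the Padé approximant of the resulting "series of constants", and evaluate it at x = 1 -- you get the same number as building from the original coefficients and evaluating at x = 2.

```python
import numpy as np

def pade_at(c, N, M, x):
    """Build the [N/M] Pade approximant from coefficients c[0..N+M], evaluate at x."""
    c = np.asarray(c, dtype=float)
    A = np.array([[c[N + i - j] if N + i - j >= 0 else 0.0
                   for j in range(M)] for i in range(M)])
    b = np.linalg.solve(A, -c[N + 1 : N + M + 1])
    a = [c[n] + sum(b[m - 1] * c[n - m] for m in range(1, min(n, M) + 1))
         for n in range(N + 1)]
    return (sum(a[n] * x**n for n in range(N + 1))
            / (1.0 + sum(b[m - 1] * x**m for m in range(1, M + 1))))

c = [0.0] + [(-1.0)**(n + 1) / n for n in range(1, 11)]  # ln(1+x) coefficients
terms = [c[n] * 2.0**n for n in range(11)]               # the numbers making up T^10(2)

direct  = pade_at(c,     5, 5, 2.0)   # Pade of the series, evaluated at x = 2
rebuilt = pade_at(terms, 5, 5, 1.0)   # Pade of the constant terms, at x = 1
print(direct, rebuilt, np.log(3.0))   # direct == rebuilt (up to roundoff), ~ ln 3
```

This works because rescaling x -> λx just multiplies c_n by λ^n, a_n by λ^n, and b_m by λ^m, so the approximant ends up being evaluated at the rescaled point.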
Nah, I don't care for formal proofs. But the intuition interests me a lot.
Hmm I think I see what you mean. Are you basically saying that a Taylor series is a crude approximation of the Pade Approximation?
So if you computed the first ten terms in the power series of almost any function f(x) you like, plugged in x = 3, gave them to me as a list of numbers, and then I computed the Padé approximant, I would have a good guess for f(3) even if the list you gave me diverged *wildly*.
The reason this is deeply exciting in physics (as well as many other subjects) is that in advanced physics we can never solve for things exactly. We can only approximate them, and then find successively better approximations (each successive term is much harder to compute than the last).
So if I calculated a quantity in quantum field theory, I might find that the successive approximation terms actually got bigger and bigger -- which is really bad, because then I can't just cut the series off after three or four terms, right? But if I compute the Padé approximant that corresponds to those terms, I get a finite answer that could be remarkably close to the actual value of whatever thing I'm trying to calculate.
Sure, well once you get down to a certain point, the continuous character of an integral meets the discreteness of atoms... Well... Kind of I guess, since they're also waves I guess lol.
Well shows what little I know, I know pretty much nothing about quantum field theory and have to begin somewhere. Any good online resources for something like that? I think I'll have to come back here and reread some of the Pade approximation stuff later to fully absorb what you're saying I'm afraid.
In short, the Padé approximant method works because even though the technique you're TRYING to use to describe a quantity (i.e. a Taylor series) might fail, the terms of that series still contain information about whatever the fundamental quantity actually is -- and the Padé approximant can pierce through the failure of your series to get at that information and make a best guess as to the quantity's actual value.
QFT isn't something you can learn online. Not unless you've had vast amounts of training in Physics up to this point. Don't worry about it -- just amuse yourself with the idea that divergent series still contain information about the function they're failing to represent properly, and that there are still ways to get at it.
I've basically just done some simple stuff in quantum mechanics like solve the time-independent Schrödinger equation and play with harmonic oscillators and Raman spectroscopy kind of things. Just not really sure where to go from there. Isn't QFT like some sort of principle of least action Feynman thing with line integrals?
Not really, though they play a role. Pick up a book on quantum mechanics and work through time dependent problems, perturbation theory, etc. Classical field theory describes systems of many particles (fluids, etc) as fields (pressure fields, velocity fields). Quantizing this approach leads to quantum field theory, but it's rife with difficulties, it's very very VERY subtle, it's immensely complicated, and it is extremely abstract. It's upper level grad school work.
It requires a deep understanding of symmetry ( not just spatial symmetry ) so that's another area of QM you should familiarize yourself with.
Hmm well alright. I'll be graduating this semester with a degree in Chemistry and a minor in Math, if I hadn't originally started with Biochemistry I think I'd probably be getting a Physics degree by now, but it's a little late for that.