OpenStudy (anonymous):

Delta Epsilon Proof Tutorial (Advanced Level) This tutorial will require you to have a good understanding of algebra. I believe this tutorial will be helpful to tutors more so than students, since many people on OS are unfamiliar with or wary of Delta Epsilon.

OpenStudy (anonymous):

@hartnn There are some typos, but I think it should help quite a bit. Let me know if there are any confusing parts.

OpenStudy (anonymous):

I'm going to fix some of the typos now.

OpenStudy (anonymous):

\(\Large \text{The Delta Epsilon Definition}\) The following limit: \[ \lim_{x\to a}f(x)=L \]is defined as: for any \(\epsilon\gt0\), there must be a \(\delta\gt0\) such that: if \(x\neq a\) and \(a-\delta \lt x \lt a+\delta \) then \(L-\epsilon \lt f(x) \lt L+\epsilon\). We often write this as: \[ \forall \epsilon\gt0\,\exists\delta\gt0\,\forall x \quad0\lt|x-a|\lt\delta \implies |f(x)-L|\lt\epsilon \]Understand that \(\epsilon\) represents how far \(f(x)\) is allowed to stray from \(L\), while \(\delta\) represents how far \(x\) is allowed to stray from \(a\). Also notice that \(0\lt|x-a|\), which means that \(x\neq a\). This is actually the main reason we use limits: they allow us to operate under the assumption that \(x\neq a\). This means that \(f(a)\) does not need to be defined; it may be an indeterminate form.
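As a quick numerical sanity check of the definition (sampling finitely many \(x\) can suggest, but never prove, the quantified statement), here is a small Python sketch; the helper name `check_limit` and the example limit are my own, for illustration only.

```python
# Sanity-check the epsilon-delta definition by sampling (NOT a proof:
# only finitely many x are tried).  Helper names here are illustrative.

def check_limit(f, a, L, delta_of_eps, eps_values, n=1000):
    """For each eps, take delta = delta_of_eps(eps) and confirm that
    every sampled x with 0 < |x - a| < delta gives |f(x) - L| < eps."""
    for eps in eps_values:
        delta = delta_of_eps(eps)
        for i in range(1, n + 1):
            x = a + delta * i / (n + 1)    # points in (a, a + delta)
            for x in (x, 2 * a - x):       # and mirrors in (a - delta, a)
                if abs(f(x) - L) >= eps:
                    return False
    return True

# Example: lim_{x->3} 2x = 6, with the choice delta = eps / 2.
print(check_limit(lambda x: 2 * x, 3, 6, lambda e: e / 2, [1.0, 0.1, 0.01]))  # True
```

Note that the reply `delta_of_eps` is exactly the function \(\delta=g(\epsilon)\) the tutorial talks about below.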

OpenStudy (anonymous):

\(\Large \text{The Delta Epsilon Game}\) Think of it as a game we are playing against an opponent. Our opponent is simply not convinced and will do whatever is possible to find a counterexample. It isn't enough to put the opponent in check; we must put them in checkmate to win this game. Like any other proof, there can't be any counterexamples or holes. Notice the order of the quantifiers. \[ \forall \epsilon\exists\delta\forall x \quad0\lt|x-a|\lt\delta \implies |f(x)-L|\lt\epsilon \]Each quantifier is a step in our game. 1. The opponent chooses \(\epsilon \). 2. We choose \(\delta\). 3. The opponent chooses \(x\). The order matters because it determines what each player knows. When the opponent chooses \(\epsilon\), he doesn't know anything about \(\delta\) or \(x\). When we choose \(\delta\), we know \(\epsilon\) but not \(x\). When the opponent chooses \(x\), they know both \(\delta\) and \(\epsilon\). As a consequence, \(\delta\) cannot depend on \(x\), since \(x\) is not yet known when we are made to choose \(\delta\). When we choose \(\delta\), it must depend entirely on \(\epsilon\). So ultimately \(\delta\) must be a function of \(\epsilon\). \(\Large \text{The Optimal Strategy}\) The optimal strategy is to let \(\delta\) be as small as possible. If we are ever given a choice between \(h\) and \(k\), we want to choose \(\delta=\min(h,k)\). Why is this? The smaller we let \(\delta\) be, the fewer \(x\) values there are, which means there are fewer \(f(x)\) values, and fewer chances of \(|f(x)-L|\ge \epsilon\). Remember how conditional statements work: \(a\implies b\) is false when \(a\) is true and \(b\) is false. If \(0\lt|x-a|<\delta\) and \(|f(x)-L|\ge \epsilon\) for some \(\epsilon\), then we lose the game. Unfortunately, we are stuck with the restriction \(0\lt\delta\). So what is the smallest possible \(\delta\) we could choose? That is like asking for the largest number:
you can always make a number bigger by adding \(1\), and you can always find a smaller positive number by dividing by \(2\). In order to win this game, we need to find a function \(\delta=g(\epsilon)\) which will work regardless of the \(\epsilon\) value thrown at us. Fortunately, our function can use other functions such as \(\min\).
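The game above can be simulated directly. This toy round (helper name `play_round` and the example function are mine, not from the thread) lets the opponent hunt for a counterexample by random search; random search finding nothing is evidence, not proof.

```python
import random

# A toy round of the "delta-epsilon game" (illustrative only): the
# opponent picks eps, we answer with delta = g(eps), and the opponent
# then hunts for x with 0 < |x - a| < delta but |f(x) - L| >= eps.

def play_round(f, a, L, g, eps, tries=10_000, seed=0):
    rng = random.Random(seed)
    delta = g(eps)
    for _ in range(tries):
        x = a + rng.uniform(-delta, delta)
        if x != a and abs(f(x) - L) >= eps:
            return x          # opponent wins with this counterexample
    return None               # no counterexample found: we survive

# f(x) = x^2 near a = 2, L = 4.  The cautious choice delta = min(1, eps/5)
# works because |x - 2| < 1 forces |x + 2| < 5.
winner = play_round(lambda x: x * x, 2, 4, lambda e: min(1, e / 5), eps=0.1)
print(winner)   # None
```

Notice that `g` sees only `eps`, matching the quantifier order: \(\delta\) may depend on \(\epsilon\) but never on \(x\).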

OpenStudy (anonymous):

\(\Large \text{Delta Epsilon and Polynomials and Rational Functions}\) Usually Calculus students will be asked to do delta epsilon proofs for polynomials and rational functions. Here is an algorithm that will always work for polynomials and rational functions, and may sometimes work for other functions. Consider the rational function \(f(x)\) with the following limit:\[ \lim_{x\to a}f(x) = L \]For this limit to exist, it must be the case that \((x-a)\) is a factor of \(f(x)-L\). Let's look at what our definition will look like:\[ 0\lt|x-a|\lt \delta \implies |f(x)-L|\lt\epsilon \]Since it is a rational function, we will be able to factor \(x-a\) out of the numerator.\[ 0\lt|x-a|\lt \delta \implies \left|(x-a)\frac{f(x)-L}{x-a}\right|\lt\epsilon\\ 0\lt|x-a|\lt \delta \implies |x-a|\left|\frac{f(x)-L}{x-a}\right|\lt\epsilon \]Next, we will divide both sides of the \(\epsilon\) inequality by whatever remains after having factored out \(x-a\).\[ 0\lt |x-a|\lt\delta \implies |x-a|\lt\epsilon\left|\frac{x-a}{f(x)-L}\right| \]At this point, we would win the game if we could say: \[ \delta=g(\epsilon,x)=\epsilon\left|\frac{x-a}{f(x)-L}\right| \]However, we are not allowed to do this because \(\delta\) must be independent of \(x\). Remember that we are only given an \(\epsilon\) value, not an \(x\) value. The good news is that we control what \(x\) can be through \(\delta\). \(\Large \text{Fail-safe: Delta Subscript Zero}\) In this case, what we generally do is pick some \(\delta_0\) and let \(\delta =\min(\delta_0,g(\epsilon) )\). This means \(\delta\leq \delta_0\), which means: \[ |x-a|\lt\delta\implies |x-a|\lt\delta_0\implies a-\delta_0 \lt x\lt a+\delta_0 \]Next we let: \[ g(\epsilon) = \min_{a-\delta_0\lt x\lt a+\delta_0}\left(\epsilon\left|\frac{x-a}{f(x)-L}\right|\right) = \epsilon \left(\min_{a-\delta_0\lt x\lt a+\delta_0}\left(\left|\frac{x-a}{f(x)-L}\right|\right)\right) \]All we need to do now is prove that the minimum does not equal \(0\).
To do that, let's talk a bit about: \[ h(x)=\left|\frac{x-a}{f(x)-L}\right| \]First of all, \(h(x)\geq 0\), so at least we know it won't be negative. Second, \(h(x)\) is a rational function, which means it has a finite number of roots. Therefore, as long as we pick a \(\delta_0\) such that \(h(x)\neq 0\) for all \(x\in[a-\delta_0,a+\delta_0]\), we can be certain that: \[ g(\epsilon) = \epsilon \min(h(x))\gt 0 \] Please note that these proofs will expect you to explicitly pick a \(\delta_0\), and then you will have to actually find \(\min(h(x))\) to get rid of any \(x\) terms.
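The fail-safe recipe can be sketched numerically. This grid search (helper name `approx_g` is mine; a grid minimum only approximates the true minimum) computes \(g(\epsilon)=\epsilon\min h(x)\) over \([a-\delta_0,\,a+\delta_0]\).

```python
# Rough numerical sketch of the delta-zero fail-safe (assumed helper
# names; a grid minimum is an approximation, not the exact minimum).

def approx_g(eps, a, delta0, ratio, n=10_001):
    """Grid-approximate  g(eps) = eps * min_{|x-a|<=delta0} |(x-a)/(f(x)-L)|.
    `ratio` should compute |(x-a)/(f(x)-L)| with the common factor
    (x - a) already cancelled, so it is defined at x = a."""
    xs = (a - delta0 + 2 * delta0 * i / (n - 1) for i in range(n))
    return eps * min(ratio(x) for x in xs)

# For f(x) = x^2 - x - 2, a = 2, L = 0: (f(x)-L)/(x-a) = x + 1, so the
# ratio is 1/|x+1|.  With delta0 = 1 the minimum over [1, 3] is 1/4,
# giving g(eps) = eps/4.
print(approx_g(1.0, a=2, delta0=1, ratio=lambda x: 1 / abs(x + 1)))  # 0.25
```

In a written proof you would of course find \(\min h(x)\) exactly by hand; the grid is just a way to check your arithmetic.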

OpenStudy (anonymous):

\(\Large\text{Delta Epsilon Example}\) I've been very abstract, but it's time for an actual example to help make sense of it all. Let's prove that: \[ \lim_{x\to 2}\frac{x^2-x-2}{x-2} = 3 \]First, note that \(x^2-x-2 = (x-2)(x+1)\). We knew we had to have an indeterminate form, so we know that \(x-2\) has to be a factor of the numerator. Now on to our definition: \[ 0\lt|x-2|\lt \delta \implies |f(x)-3|\lt \epsilon\\ 0\lt|x-2|\lt \delta \implies \left|\frac{x^2-x-2}{x-2}-3\right|\lt \epsilon \\ 0\lt|x-2|\lt \delta \implies \left|\frac{(x-2)(x+1)}{x-2}-3\right|\lt \epsilon \\ 0\lt|x-2|\lt \delta \implies |(x+1)-3|\lt \epsilon \\ 0\lt|x-2|\lt \delta \implies |x-2|\lt \epsilon \]Well, this one turned out nice and easy. All we have to do is let \(\delta = g(\epsilon) =\epsilon\). Let's do a slightly harder one:\[ \lim_{x\to 2} x^2-x-2= 0 \] Now on to our definition: \[ 0\lt|x-2|\lt \delta \implies |f(x)-0|\lt \epsilon\\ 0\lt|x-2|\lt \delta \implies \left|(x^2-x-2)\right|\lt \epsilon \\ 0\lt|x-2|\lt \delta \implies |(x-2)(x+1)|\lt \epsilon \\ 0\lt|x-2|\lt \delta \implies |x-2||x+1|\lt \epsilon\\ 0\lt|x-2|\lt \delta \implies |x-2|\lt \frac{\epsilon}{|x+1|} \]We want to say: \[ \delta= \frac{\epsilon}{|x+1|} \]But we can't, so we need to pick a \(\delta_0\). Fortunately, \(1/|x+1|\) doesn't have any roots to worry about, so we can pick any \(\delta_0\) we want. In these cases, I like to pick \(\delta_0=1\) because it is an easy number to work with. \[ |x-2|\lt\delta\implies |x-2|\lt1\implies 2-1\lt x\lt 2+1 \implies 1\lt x\lt 3 \]Now we minimize \(1/|x+1|\) on the interval \([1,3]\): \[ \min\left(\frac{1}{|x+1|}\right) = \frac{1}{\max(|x+1|)} = \frac 14 \]Finally, to finish the proof: \[ \delta = g(\epsilon) = \min\left(\delta_0,\epsilon\min\frac{1}{|x+1|}\right) = \min\left(1,\frac{\epsilon}4\right) \]
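Here is a quick numerical spot-check of the second example (helper name `holds` is mine; sampling cannot replace the proof). One workable choice here is \(\delta=\min(1,\epsilon/4)\), since \(|x-2|<1\) forces \(|x+1|<4\).

```python
# Spot-check the second worked example by sampling (NOT a proof):
# with delta = min(1, eps/divisor), check that every sampled x near 2
# keeps |x^2 - x - 2| below eps.  Helper name is illustrative.

def holds(eps, divisor=4, n=2000):
    delta = min(1, eps / divisor)
    for i in range(1, n):
        for x in (2 - delta * i / n, 2 + delta * i / n):
            if abs(x * x - x - 2) >= eps:
                return False
    return True

print(all(holds(eps) for eps in (1.0, 0.25, 1e-3)))   # True
```

Trying `divisor=2` instead shows why the constant matters: with \(\delta=\min(1,\epsilon/2)\) and \(\epsilon=1\), points near \(x=2.5\) give \(|x-2||x+1|\approx 1.75\ge\epsilon\).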

OpenStudy (zzr0ck3r):

maybe append this with sequences?

OpenStudy (zzr0ck3r):

It is much nicer, imo, to use the sequential definition of a limit of a function.

OpenStudy (zzr0ck3r):

I just realized this;)

OpenStudy (anonymous):

What do you mean the sequential definition?

OpenStudy (anonymous):

I'm using the definition used by textbooks that actually expect you to do these types of proofs, rather than using limit rules.

OpenStudy (zzr0ck3r):

not limit rules

OpenStudy (zzr0ck3r):

there are many definitions that are equivalent; it's just easier to work with the sequential definition.

OpenStudy (jhannybean):

I got a little confused towards the end of your second example

OpenStudy (anonymous):

Which part was confusing?

OpenStudy (jhannybean):

\(\color{blue}{\text{Originally Posted by}}\) @wio We want to say: \[ \delta= \frac{\epsilon}{|x+1|} \]But we can't, so we need to pick a \(\delta_0\). Fortunately, \(1/|x+1|\) doesn't have any roots to worry about, so we can pick any \(\delta_0\) we want. In these cases, I like to pick \(\delta_0=1\) because it is an easy number to work with. \[ |x-2|\lt\delta\implies |x-2|\lt1\implies 2-1\lt x\lt 2+1 \implies 1\lt x\lt 3 \]Now we minimize \(1/|x+1|\) on the interval \([1,3]\): \[ \min\left(\frac{1}{|x+1|}\right) = \frac{1}{\max(|x+1|)} = \frac 14 \]Finally, to finish the proof: \[ \delta = g(\epsilon) = \min\left(\delta_0,\epsilon\min\frac{1}{|x+1|}\right) = \min\left(1,\frac{\epsilon}4\right) \] \(\color{blue}{\text{End of Quote}}\) That portion.

OpenStudy (zzr0ck3r):

lol

OpenStudy (zzr0ck3r):

I only laugh because this is the most failed class at my school, and you are trying to figure it out in 5 mins.

OpenStudy (zzr0ck3r):

I T.A. for this class and 35% passed

OpenStudy (jhannybean):

I get most of it; I'm just not understanding the notation he used to minimize the function, because I learned it a little differently.

OpenStudy (anonymous):

Okay, do you understand that we have to eliminate the \(x\)?

OpenStudy (anonymous):

Oh, you don't get how I minimized the function?

OpenStudy (jhannybean):

How you minimized it.

OpenStudy (zzr0ck3r):

For \(f:D\rightarrow \mathbb{R}\) we have \(\lim_{x\rightarrow x_0}f(x)=L\) provided that if \(x_n\) is a sequence in \(D\setminus\{x_0\}\) that converges to \(x_0\), then \(f(x_n)\rightarrow L\). This is another useful definition, and it's exactly equivalent.

OpenStudy (anonymous):

Well, do you understand how \(\max(|x+1|)\) would minimize the inverse? And do you understand that since \(|x+1|\) is just a V-shaped graph, it must be maximized at an endpoint of the interval? The endpoints were \(1\) and \(3\). Plugging them in gives \(2\) and \(4\) respectively, so \(\max(|x+1|)=4\).

OpenStudy (jhannybean):

That makes more sense.

OpenStudy (anonymous):

@zzr0ck3r It's subjective as to what definition is easier. You can't say that your definition is always easier. What really matters here is what was introduced to the student. In many cases it will be Delta Epsilon. You can make your own sequence tutorial if you want. I don't see why I have to throw away Delta Epsilon just because you don't like it.

OpenStudy (anonymous):

@Jhannybean I sort of hand waved the \(\min(\ldots)\) part because finding minimums is usually easier than actually doing these proofs.

OpenStudy (jhannybean):

I see.

OpenStudy (anonymous):

But it is also why I say "I believe this tutorial will be helpful to tutors more so than students"

OpenStudy (jhannybean):

:) Understandable.

OpenStudy (zzr0ck3r):

We should do a series on these, starting at sequences, then continuity of functions, then limits, then differentiation (using the definitions). I think it's a huge problem, in America at least, that these sorts of concepts are practically unheard of in lower-level mathematics (200 and down). I have met about 40 people who told me they had no idea what they were getting into when they chose to be a math major, because the stuff you do in calc, diff eq, and algebra is nothing like the methods we use in topology/analysis. So many people get scared when they see these for the first time... We could be HEROES!

OpenStudy (watchmath):

How do you prove, using epsilon-delta, that if \(\lim_{x\to c} f(x)=L\neq 0\) then \(\lim_{x\to c} \frac{1}{f(x)}=\frac{1}{L}\)?

ganeshie8 (ganeshie8):

I think the sequence characterization immediately follows the epsilon-delta material in any analysis book: \(\lim \limits_{x\to a} f(x) = l \iff \) \(\forall \) sequences \(\{a_n\}\), \(a_n \to a \implies f(a_n) \to l\). Compact sets give another neat characterization if you really want to get fancy :D

OpenStudy (zzr0ck3r):

much better. Same with continuity: preimage of an open set is open. BAM!

OpenStudy (anonymous):

I don't really think about Epsilon Delta personally as my understanding of limits. The way I understand them is that you are removing the boundary of the domain, and removing the boundary of the codomain as a result. You are showing that when all of the domain is finally cut away, the only thing left in the codomain is the limit being approached.

OpenStudy (zzr0ck3r):

there is no epsilon-delta in the book we teach this class out of (for limits); it's only the sequential version. @ganeshie8

OpenStudy (zzr0ck3r):

of course you have \(\epsilon-N\) now, but that is easier to work with.

ganeshie8 (ganeshie8):

I see that works for limits of sequences, but don't we need epsilon-delta for functions?

OpenStudy (zzr0ck3r):

nope

OpenStudy (zzr0ck3r):

the definitions are equivalent....

ganeshie8 (ganeshie8):

Right, im talking about \(\epsilon -N\) versus \(\epsilon - \delta \)

OpenStudy (zzr0ck3r):

there are no "deltas" in sequences, and you'll notice the sequential definition says nothing of epsilon or delta

OpenStudy (zzr0ck3r):

right, I think of N as a little easier d lol

ganeshie8 (ganeshie8):

if i remember correctly N is for limits at infinity

OpenStudy (zzr0ck3r):

limits of sequences are always taken as \(n\to\infty\)

OpenStudy (zzr0ck3r):

you literally don't need a delta in an analysis book

OpenStudy (zzr0ck3r):

So who wants to tackle continuity?

ganeshie8 (ganeshie8):

``` preimage of open set is open ``` Is this the definition of continuity?

OpenStudy (zzr0ck3r):

one of them

OpenStudy (zzr0ck3r):

it's the topology definition

OpenStudy (zzr0ck3r):

I'll do an example of the epsilon-delta proof so people can see how they are related. A function \(f:D\rightarrow \mathbb{R}\) is continuous at \(x_0\) provided \(\forall \epsilon > 0\ \exists \delta >0\ \forall x\in D:\ |x-x_0|<\delta \implies |f(x) - f(x_0)|<\epsilon\). Let us look at the example, \[f:(0,\infty) \underset{x\mapsto \frac{1}{x}}{\longrightarrow} \mathbb{R}\] The first thing I do is consider what the result would look like: \(|\frac{1}{x}-\frac{1}{c}|=\frac{|x-c|}{xc}\). The real problem here is the \(x\) on the bottom. Since \(c,\epsilon\) come before \(\delta\) in our definition, it is OK to define \(\delta\) depending on them, but since \(x\) comes after \(\delta\), we can't have \(\delta\) depending on \(x\) (\(\delta\) has been chosen before \(x\) even entered the picture). So it would be nice if we could find a number \(y\) s.t. \(\frac{1}{x}<y\), because then we could say \(\frac{|x-c|}{xc}<y\frac{|x-c|}{c}\). Then we could replace \(|x-c|\) with \(\delta\), shove it all under \(\epsilon\), and solve for \(\delta\) (remember, we are hunting for a delta). So we would have \(y\frac{|x-c|}{c}<\epsilon\), then \(y\frac{\delta}{c}<\epsilon\), and solving for \(\delta\) gives \(\delta < c\frac{\epsilon}{y}\); so we would have our delta, and we would be done. So now we find the \(y\). One thing I always do is start with \(|x-c|<\delta\) and see what we can do. Well, this implies \(c-\delta<x<c+\delta\), and if \(c-\delta>0\) we can do this: \(\frac{1}{x}<\frac{1}{c-\delta}\). So we found our \(y\). To make sure \(c-\delta > 0\), we require \(\delta < \frac{c}{2}\). So if \(\delta < \frac{c}{2}\), we have \(|x-c|<\delta \implies |x-c|<\frac{c}{2}\implies \frac{1}{x}< \frac{1}{c-\frac{c}{2}} = \frac{2}{c}\). So now we have \(\frac{|x-c|}{xc}<\frac{2}{c}\frac{|x-c|}{c}=\frac{2|x-c|}{c^2}\), and now I replace \(|x-c|\) with \(\delta\), shove it all under \(\epsilon\), and solve for \(\delta\):
\(\frac{2\delta}{c^2}<\epsilon \iff \delta < \frac{c^2\epsilon }{2}\). So now we tie it all together. Proof: Let \(c\in (0,\infty)\). Let \(\epsilon > 0\) be given. Set \(\delta < \min\{\frac{c}{2}, \frac{c^2\epsilon}{2}\}\). Then \(\forall x\in (0, \infty)\) we have, \[|x-c|<\delta \implies \left| \frac{1}{x}-\frac{1}{c} \right| = \frac{|x-c|}{xc} < \frac{2|x-c|}{c^2} < \frac{2\delta}{c^2}<\epsilon\] as required.
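A quick sampling check of this choice of delta (helper name `continuity_holds` is mine; sampling suggests but does not prove the claim). The factor `0.999` keeps \(\delta\) strictly below \(\min\{c/2,\,c^2\epsilon/2\}\), as the proof requires.

```python
# Spot-check the delta for f(x) = 1/x on (0, inf) by sampling (NOT a
# proof): with delta just under min(c/2, c^2*eps/2), sampled x with
# |x - c| < delta keep |1/x - 1/c| < eps.  Helper name is illustrative.

def continuity_holds(c, eps, n=5000):
    delta = 0.999 * min(c / 2, c * c * eps / 2)
    for i in range(1, n):
        for x in (c - delta * i / n, c + delta * i / n):
            if abs(1 / x - 1 / c) >= eps:
                return False
    return True

print(all(continuity_holds(c, eps)
          for c in (0.1, 1.0, 10.0)
          for eps in (1.0, 0.01)))   # True
```

Notice how the check passes even for small \(c\), where the graph of \(1/x\) is steep; that is exactly what the \(c/2\) cap buys.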

OpenStudy (zzr0ck3r):

I wrote this for my class, then I put this example on the final lol

OpenStudy (zzr0ck3r):

this shows it's continuous for all \(x_0\) in \(D=\mathbb{R}^+\)

OpenStudy (zzr0ck3r):

g.n. from canada

OpenStudy (anonymous):

@watchmath The limit you gave tells me that:\[ 0\lt |x-c|\lt \delta_1 \implies |f(x)-L|\lt \epsilon_1 \]That is, there is a function \(\delta_1= g_1(\epsilon_1)\).\[ 0\lt |x-c|\lt \delta \implies \left|\frac{1}{f(x)}-\frac 1L\right|\lt \epsilon\\ 0\lt |x-c|\lt \delta \implies \left|\frac{L-f(x)}{f(x)L}\right|\lt \epsilon\\ 0\lt |x-c|\lt \delta \implies \left|\frac{f(x)-L}{f(x)L}\right|\lt \epsilon\\ 0\lt |x-c|\lt \delta \implies \left|f(x)-L\right|\lt \epsilon|f(x)L| \]We want to say: \[ \delta = g_1(\epsilon|f(x)L|)=g_1(\epsilon_1) \] We know that \(L\neq 0 \implies 0\lt |L|\). We know there is some \(\delta_0=g_1(\epsilon_0 )= g_1(|L|/2)\) such that:\[ |f(x)-L|<\frac{|L|}{2} \implies L-\frac{|L|}{2}<f(x)< L+\frac{|L|}{2} \]If \(L\) is positive then:\[ L-\frac{|L|}{2} = \frac{L}{2} <f(x) \implies |L|/2 < |f(x)| \implies b= |L|^2/2 < |f(x)||L| \]If \(L\) is negative:\[ -\left(L+\frac{|L|}{2}\right) < -f(x) \implies -L/2 < -f(x) \implies b= |L|^2/2 < |f(x)||L| \]So we let \[ \delta = \min\left(g_1\left(\frac{|L|}{2}\right), g_1\left(\frac{\epsilon|L|^2}{2}\right)\right) \]I'm sorry it took so long. This thread got laggy due to other people complaining in it. Also, I'm tired.
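A concrete instance of this construction can be checked numerically (the example function and helper name `reciprocal_holds` are mine): take \(f(x)=x\) near \(c=2\), so \(L=2\) and \(g_1(\epsilon_1)=\epsilon_1\), giving \(\delta=\min(g_1(|L|/2),\,g_1(\epsilon L^2/2))=\min(1,\,2\epsilon)\).

```python
# Spot-check the reciprocal-limit construction for the concrete case
# f(x) = x, c = 2, L = 2, where the recipe gives delta = min(1, 2*eps).
# Sampling only, not a proof; helper name is illustrative.

def reciprocal_holds(eps, n=4000):
    delta = min(1.0, 2 * eps)
    c, L = 2.0, 2.0
    for i in range(1, n):
        for x in (c - delta * i / n, c + delta * i / n):
            if abs(1 / x - 1 / L) >= eps:
                return False
    return True

print(all(reciprocal_holds(eps) for eps in (1.0, 0.1, 1e-3)))   # True
```

The \(g_1(|L|/2)\) term is what keeps \(x\) (and hence \(f(x)\)) away from \(0\) so that \(1/f(x)\) stays under control.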

OpenStudy (anonymous):

@watchmath The last part is really tricky, but you know how you said \(L\neq 0\)? Well, we know there is a \(\delta\) range corresponding to an \(\epsilon\) range which ensures that \(f(x)\) is closer to \(L\) than it is to \(0\). This means we can ensure that \(|L/2| <|f(x)|\). Then all we have to do is substitute \(|L/2|\) for \(|f(x)|\), since we know it will be smaller.

OpenStudy (anonymous):

One way to get good at these proofs is to prove certain limit rules. \[ \lim_{x\to a}c = c \]So: \[ |x-a|<\delta \implies |c-c| < \epsilon \\ |x-a|<\delta \implies 0< \epsilon \]There is no way for the right side to be false, which means the conditional is always true.

OpenStudy (anonymous):

\[ \lim_{x\to a}x = a \]So \[ |x-a|<\delta \implies |x-a|<\epsilon \]Just let \(\delta = \epsilon\).
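These two limit rules can be sanity-checked the same way (helper names `identity_rule` and `constant_rule` are mine; sampling only).

```python
# Tiny sampled check of the two limit rules above: for lim_{x->a} x = a
# the choice delta = eps works, and for a constant function any delta
# works, since |c - c| = 0 < eps always.  Helper names are illustrative.

def identity_rule(a, eps, n=1000):
    delta = eps
    return all(abs(x - a) < eps
               for i in range(1, n)
               for x in (a - delta * i / n, a + delta * i / n))

def constant_rule(c, eps):
    return abs(c - c) < eps     # 0 < eps, so the implication never fails

print(identity_rule(3.0, 0.01) and constant_rule(7.0, 0.01))   # True
```

For the constant rule there is nothing to search for: the conclusion \(0<\epsilon\) holds no matter what \(x\) the opponent picks.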

OpenStudy (watchmath):

Thanks. What I do below is basically the same as what you did. But I think it is better to think about how we can bound \[\frac{1}{|f(x)|}\] For \(x\) close enough to \(c\), say \(0<|x-c|<\delta_1\), one can make \(|f(x)-L|<|L|/2\). But \(|L|-|f(x)|\leq |f(x)-L|<\frac{|L|}{2}\). Hence \(|f(x)|>|L|/2\) and \[\frac{1}{|f(x)|}<2/|L|\] Now there exists \(\delta_2>0\) so that for \(0<|x-c|<\delta_2\) one has \(|f(x)-L|<L^2\epsilon/2\). Now if we take \(\delta:=\min\{\delta_1,\delta_2\}\), then for all \(0<|x-c|<\delta\) we have \[\left |\frac{1}{f(x)}-\frac{1}{L}\right |=\frac{|f(x)-L|}{|L||f(x)|}<\frac{2|f(x)-L|}{L^2}<\epsilon\]

OpenStudy (anonymous):

That's extremely close to what I did, but I tried to be extremely rigorous because it is easy to make a wrong assumption for these proofs.

OpenStudy (anonymous):

One funny thing about these proofs is that we always do them backwards. They say "don't assume what you are trying to prove is true"; however, you can start with what you're trying to prove so long as all your steps are bi-conditional.

OpenStudy (anonymous):

If we were to write the proofs in order, they'd be even more confusing.

OpenStudy (kainui):

I feel like epsilon delta proofs just sort of tuck the infinities away and that it's really just throwing away calculus as it was discovered and creating a forgery that acts and appears like calculus, but really isn't. So what do you say to that? haha =P

OpenStudy (anonymous):

I don't really see how that is the case

OpenStudy (watchmath):

Epsilon-delta proofs are just a way to ensure the rigor of the intuition behind calculus. Surely some intuition is lost in this delta-epsilon yoga thingy. If you still want to do calculus while keeping it intuitive (say, the idea of infinitesimals), you may want to do calculus using "non-standard" analysis.

OpenStudy (anonymous):

I think limits are inherently more complicated than they would seem intuitively, and that the Epsilon-Delta gives a really good understanding of limits. I think most people who have an intuitive understanding of limits but struggle with Epsilon -delta don't really understand limits as well on a technical level.

OpenStudy (anonymous):

Also, the definition could be extended to any algebra with a metric operation between the inputs of a function and the outputs of a function.

OpenStudy (zzr0ck3r):

@Kainui I am confused by this statement. This is the rigor of calculus. It is also how we define it...

OpenStudy (zzr0ck3r):

I feel calc without delta epsilon is not calc at all, but just playing with what calc provides.

OpenStudy (kainui):

How come there isn't a single epsilon or delta in the work of Newton and Leibniz?

OpenStudy (zzr0ck3r):

The Greeks invented geometry and didn't know about irrational numbers. What is your point?

OpenStudy (zzr0ck3r):

What about the work by millions of people after Newton...

OpenStudy (zzr0ck3r):

What about Hilbert.

OpenStudy (zzr0ck3r):

lol

OpenStudy (kainui):

Look, I'm not saying it's not nice to have ambiguous things cleared up, but it just feels like epsilon-delta proofs are more smoke and mirrors trying to explain "what" calculus is, and I don't believe it. I believe calculus, like all math, is a discovered phenomenon and is just as scientific in its discovery as anything else. To say "we define it to work this way with set theory as its basis" just seems to be mathematicians fooling themselves into thinking math is invented and not discovered. Have you ever used the square root function and felt like the requirement that a function be 1-to-1 was an artificial construction?

OpenStudy (zzr0ck3r):

It's not smoke and mirrors; it's rigor. No offense, but I won't even read the rest if you say it's smoke and mirrors. If anything, Newton was the smoke and mirrors, because his version was not rigorous..

OpenStudy (zzr0ck3r):

Forget I said anything.

OpenStudy (kainui):

Well if you refuse to read my arguments, then I'm gone.

OpenStudy (kainui):

I'm actually just disappointed. The worthwhile conversations are ones in which people disagree. It's a shame you are closed-minded about it.

myininaya (myininaya):

hey @wio can you prove this for me using epsilon delta thingy \[\lim_{x \rightarrow 0}\frac{1}{x^2}=\infty\]

myininaya (myininaya):

\[|x-x_n|<\delta \implies |\frac{1}{x^2}-\frac{1}{x^2_n}|< \epsilon \\ \text{ } \\ \text{ } \text{ } \implies |\frac{x_n^2-x^2}{x^2x_n^2}|< \epsilon \implies |\frac{x^2-x^2_n}{x^2x_n^2}| < \epsilon \\ \text { } \text{ } \text{ } \implies |x-x_n| < \frac{\epsilon(x^2x_n^2)}{|x+x_n|}\] would it go something like this for choosing delta also x_n->

myininaya (myininaya):

0

OpenStudy (anonymous):

Sure. When it comes to limits that tend to infinity, you have to change the epsilon inequality (and when the input tends to infinity, you have to change the delta inequality). The definition for this case is: \[ \forall \epsilon \exists\delta\forall x\quad 0\lt|x-0|<\delta\implies \epsilon<\frac{1}{x^2} \]It simplifies to: \[ 0\lt|x|<\delta\implies \epsilon<\frac{1}{x^2} \\ 0\lt|x|<\delta\implies \epsilon<\frac{1}{|x|^2}\\ 0\lt|x|<\delta\implies |x|^2<\frac{1}{\epsilon}\\ 0\lt|x|<\delta\implies |x|<\sqrt{\frac{1}{\epsilon}} \]So we can choose: \[ \delta = g(\epsilon) = \sqrt{\frac 1{\epsilon}} \]
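And the same kind of sampled check works here (helper name `blows_up` is mine): with \(\delta=\sqrt{1/\epsilon}\), every sampled \(x\) with \(0<|x|<\delta\) should give \(1/x^2>\epsilon\).

```python
import math

# Spot-check delta = sqrt(1/eps) for lim_{x->0} 1/x^2 = infinity by
# sampling (NOT a proof): every sampled x with 0 < |x| < delta gives
# 1/x^2 > eps.  Helper name is illustrative.

def blows_up(eps, n=1000):
    delta = math.sqrt(1 / eps)
    return all(1 / (x * x) > eps
               for i in range(1, n)
               for x in (-delta * i / n, delta * i / n))

print(all(blows_up(eps) for eps in (1.0, 100.0, 1e6)))   # True
```

Note the role reversal: here the opponent hands us a large threshold \(\epsilon\) (often renamed \(N\) or \(M\)) and we must force \(f(x)\) above it, rather than trap \(f(x)\) near a finite \(L\).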

myininaya (myininaya):

\[|x-0|<\delta=\sqrt{\frac{1}{\epsilon }} \\ \lim_{L \rightarrow \infty}|f(x)-L|=\lim_{x_n \rightarrow 0}|\frac{1}{x^2}-\frac{1}{x_n^2}| \] \[\lim_{x_n \rightarrow 0}|\frac{x_n^2-x^2}{x^2x^2_n}| =\lim_{x_n \rightarrow 0}\frac{1}{x^2x_n^2} |x-x_n||x+x_n| <\lim_{x \rightarrow 0} \frac{1}{x^2x_n^2} \sqrt{\frac{1}{\epsilon } } |x+x_n|\] I don't know if this is the correct way to construct the proof after finding delta, and I also don't know how to finish it if it is

myininaya (myininaya):

that last thing is x_n->0 (not x->0)

OpenStudy (anonymous):

I'm not familiar with the exact definition you are using. If I were, I might be able to help more.

myininaya (myininaya):

I was trying to show |f(x)-L|<epsilon for when |x-a|<delta

myininaya (myininaya):

where L->infinity and a is 0

myininaya (myininaya):

wait.. \[< \frac{1}{x^2\frac{1}{\epsilon}} \sqrt{\frac{1}{\epsilon}} \sqrt{\frac{1}{\epsilon }}=\frac{1}{x^2} \]?

myininaya (myininaya):

so we aren't suppose to show <epsilon maybe

OpenStudy (anonymous):

For limits headed towards infinity, we're not showing that the gap between them and infinity is shrinking... infinity will always be infinitely far from any finite number. Instead, we show that we can always go beyond any number that is provided. That is why we used \(\epsilon < f(x)\) instead of \(|f(x)-L|<\epsilon\). In most cases they will even change \(\epsilon\) to \(N\), since \(\epsilon\) is associated with an extremely small positive value.

myininaya (myininaya):

stupid question would you consider what you to be the proof or just the construction of delta or both?

myininaya (myininaya):

what you did*

OpenStudy (anonymous):

What I did was essentially an attempt to find \(\delta = g(\epsilon)\). Once I found \(g\), my proof would basically be writing all of my work backwards. I don't go through that labor because it is somewhat pointless. The steps for these proofs tend to be bidirectional, so we don't have the issues that come with "assuming what is to be proven".

myininaya (myininaya):

Also thanks for this tutorial I'm really glad you brought this up so I could remember to bring up that question.

myininaya (myininaya):

and if I wanted to prove this \[\lim_{x \rightarrow 0}\frac{1}{x^3} \text{ does not exist} \] I would do a proof by contradiction: assume the limit does exist and it is \(L\), right?
