Hi everybody! If the decrementing function exists (abs(ans**2 - x)), why do we need another condition (ans <= x) to exit the loop?

Question ref: Lecture 3: Problem Solving, Prof. John Guttag. At 4:29 the professor says: "If such a function exists, then the loop is guaranteed to always terminate."

Ref: lecture code (lec03.py):
```
while abs(ans**2 - x) >= epsilon and ans <= x:
```
They are completely different tests: one is ans**2 - x and the other is, more or less, ans - x, plus the difference in the sign of ans. Check each case with some numbers.
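For instance, plugging in concrete numbers (x = 25 and ans = 4.0 are my own choices) shows the two tests measure different things:

```python
x = 25
ans = 4.0

# The epsilon-style test: how far ans**2 still is from x.
print(abs(ans**2 - x))   # 9.0

# The safeguard-style test: whether ans has overshot x itself.
print(ans <= x)          # True
```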
Yes, but why two tests when one is enough? I have found the answer myself:

1. The decrementing function is ans <= x, not abs(ans**2 - x).
2. abs(ans**2 - x) is a helper function that stops the computation before ans reaches x.

So the program would work without the test abs(ans**2 - x), but with additional computations.
If both are needed, it could be implemented as a compound condition. Sometimes we may want to exit the loop before it completes; that is when we place the other condition in the middle of the loop. In other words, it depends on the problem itself, about which we have insufficient information here.
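As a sketch (the constants are my own, not from the lecture), the safeguard can be moved into the body as a mid-loop exit instead of a compound condition:

```python
x, epsilon = 25, 0.01
ans, step = 0.0, 0.0001

while abs(ans**2 - x) >= epsilon:
    if ans > x:       # safeguard in the middle of the loop: give up
        break         # rather than loop forever
    ans += step

print(ans)            # close to 5.0
```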
If you read my question properly, I will give you a medal for the effort. Look at my question reference: Lecture 3: Problem Solving, Prof. John Guttag, at 4:29 the professor says: "If such a function exists, then the loop is guaranteed to always terminate." Here is the full lecture: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/unit-1/lecture-3-problem-solving/
Assuming you are referring to the following snippet:

```
x = 12345
epsilon = 0.01
numGuesses = 0
low = 0.0
high = x
ans = (high + low)/2.0
while abs(ans**2 - x) >= epsilon and ans <= x:
    #print low, high, ans
    numGuesses += 1
    if ans**2 < x:
        low = ans
    else:
        high = ans
    ans = (high + low)/2.0
#print 'numGuesses =', numGuesses
print ans, 'is close to square root of', x
```

The professor is finding the square root of a positive number (not necessarily a perfect square, nor an integer) by the method of bisection. It is a kind of trial-and-error method that guarantees an accuracy of \(x \cdot 2^{-n}\) after n iterations. Here the professor sets an error limit epsilon of 0.01 units, which is the upper bound on abs(ans**2 - x).

Given the initial values of

```
low = 0.0
high = x
ans = (high + low)/2.0
```

and the algorithm he uses to update ans, the additional condition ans <= x is not logically required. It is unfortunate that he did not explain why he put this condition in. I believe he put it in as "defensive programming", to stop the iterations from running wild. It is good practice to put in checks for things that "should not" happen. In C and C++ this is usually done with "assert", which does not leave readers wondering why the condition was included.

By the way, this code also works for all real numbers, as long as x >= 1. The algorithm does not work for values of x in the range 0 < x < 1.

Finally, it would have been helpful to post the complete code; context is very important in understanding code. If you expect readers to refer to the professor's book or code online instead, please provide the links.
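To check the 0 < x < 1 claim without risking an infinite loop, the bisection can be wrapped in a function with an iteration cap; the wrapper, the cap, and the return convention below are my own additions, not the professor's code (Python 3 syntax):

```python
def bisect_sqrt(x, epsilon=0.01, max_iter=1000):
    """Bisection search for sqrt(x); returns (ans, found)."""
    low, high = 0.0, x
    ans = (high + low) / 2.0
    for _ in range(max_iter):
        if abs(ans**2 - x) < epsilon or ans > x:
            break
        if ans**2 < x:
            low = ans
        else:
            high = ans
        ans = (high + low) / 2.0
    return ans, abs(ans**2 - x) < epsilon

print(bisect_sqrt(25))    # (about 5.0, True): root found
print(bisect_sqrt(0.25))  # (about 0.25, False): sqrt(0.25) = 0.5 lies outside [0, 0.25]
```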
Thank you for your effort. I know exactly how the program is executed. My question is about the decrementing function. What guarantees that any loop will terminate? It is the decrementing function, which:

1. maps a set of program variables to an integer,
2. starts with a non-negative value,
3. terminates the loop when its value is <= 0,
4. decreases on each iteration of the loop.

In the code below, the loop breaks when ans <= x fails, not when abs(ans**2 - x) >= epsilon fails:

```
x = 25
epsilon = 0.000000000001
numGuesses = 0
ans = 0.0
while abs(ans**2 - x) >= epsilon and ans <= x:
    ans += 0.00001
    numGuesses += 1
print 'numGuesses =', numGuesses
if abs(ans**2 - x) >= epsilon:
    print 'Failed on square root of', x
else:
    print ans, 'is close to square root of', x
```

So, if the decrementing function always breaks the loop, why do we need another condition to break it? I have found the answer: the second condition is a helper that breaks the loop earlier. If it does not break the loop, as in the example code above, then the decrementing function breaks it.
I don't know what you mean by "decrementing condition" and "second condition", since they are not defined in relation to the code. I assume "decrementing condition" means "abs(ans**2 - x) >= epsilon" and "second condition" means "ans <= x". If that's the case, my previous comment applies:

```
...the additional condition ans <= x is not logically required. It is unfortunate that he did not explain why he put this condition in. I believe he put it in as "defensive programming", to stop the iterations from running wild. It is good practice to put in checks for things that "should not" happen. In C and C++ this is usually done with "assert", which does not leave readers wondering why the condition was included.
```

Referring to the code you just posted, here are some additional comments. Your proposed search is a linear search with a constant step, which is very small compared to the initial guess of 0. As a result, it requires about 500,000 steps to reach the solution of 5.0. Along the way it accumulates rounding error of about 10^-16 per step, 500,000 times, for a total error of about 5*10^-11. If we take the proposed root as 5 - 5*10^-11, the value (5 - 5*10^-11)^2 - 25 exceeds the epsilon of 10^-12, so the exit test fails. It is the "catch-all" "second condition" that stops the iterations.

To see this, rerun with the initial value ans = 4.9999 (instead of 0): you will find that it converges at ans = 4.9999999999999964 (and not 5). This shows that it is the accumulation of round-off error that defeats the "decrementing condition".

With the bisection search (the professor's code), on the other hand, there is always round-off error, but it does not accumulate, since the approximation is recalculated from scratch each time, and is hence self-correcting.
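The accumulation is easy to reproduce; the snippet below is my own illustration (Python 3 syntax):

```python
# Add 0.00001 half a million times; the result is NOT exactly 5.0,
# because 0.00001 has no exact binary representation and each
# addition contributes its own rounding error.
ans = 0.0
for _ in range(500000):   # 5 / 0.00001 steps
    ans += 0.00001

print(repr(ans))          # slightly off from 5.0
print(abs(ans**2 - 25))   # residual typically well above epsilon = 1e-12
```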
The "second condition" is not really needed; it is just defensive programming.

A different way to keep the linear search, with its large number of iterations, but without accumulating round-off error, is to use an integer counter and convert the integer to a float at each iteration to act as the ans variable. Something like:

```
ans = 0.0
for i in range(5 * 1000000 + 1):
    ans = i / 1000000.0   # recomputed each time; Python's maxint of around 10^19 easily covers this
```

Finally, the simplest way to solve this problem is to increase epsilon to a reasonable value, on the order of the increment of ans. If epsilon is set to 10^-7 (instead of 10^-12), convergence is normal.

In summary, floating-point arithmetic is not exact on ANY binary computer, so watch out for traps like this.
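A complete runnable version of the integer-counter idea (the step size and bounds are my own choices for the x = 25 example, Python 3 syntax) might look like:

```python
x = 25
epsilon = 1e-7
ans = 0.0

for i in range(500001):      # counter covers 0.0 .. 5.0 in steps of 1e-5
    ans = i / 100000.0       # recomputed from an int each time: no accumulated error
    if abs(ans**2 - x) < epsilon:
        break

print(ans)                   # 5.0 exactly, since 500000 / 100000.0 == 5.0
```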
As you can see from your analysis, abs(ans**2 - x) >= epsilon is not a function that is guaranteed to always break the loop. Your code analysis is fine. But there is an abstract function that is always present, explicitly or implicitly, in any non-infinite loop: the decrementing function. The decrementing function here is ans <= x, i.e. f(ans) = x - ans:

1. maps a set of program variables to an integer [x = 25 = const, ans = 0],
2. starts with a non-negative value [f(ans) = x - ans = 25 - 0 = 25 > 0],
3. terminates the loop when its value is <= 0 [if f(ans) <= 0: break the loop],
4. decreases on each iteration of the loop [ans += 0.00001 => f(ans) decreases].

abs(ans**2 - x) >= epsilon is a helper condition that breaks the loop (if it breaks at all) so that ans does not have to go all the way to x, which would mean the program did not find the root. The decrementing function is what breaks the loop for sure; it is defensive programming. This is how I understand Prof. John Guttag's lecture: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/unit-1/lecture-3-problem-solving/#?w=535
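The four properties can be checked mechanically. In the sketch below (my own; I enlarged the step to 0.001 so it runs quickly, and property 1 is relaxed to a real-valued f, as in the discussion), f is the proposed f(ans) = x - ans:

```python
x = 25
ans = 0.0

def f(a):
    return x - a              # the proposed decrementing function

value = f(ans)
assert value > 0              # property 2: starts non-negative

while f(ans) > 0:             # property 3: loop exits once f(ans) <= 0
    ans += 0.001
    assert f(ans) < value     # property 4: strictly decreases each iteration
    value = f(ans)

print(ans >= x)               # True: the loop terminated
```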
```
abs(ans**2 - x) >= epsilon is a helper condition that breaks the loop (if it breaks at all) so that ans does not have to go all the way to x, which would mean the program did not find the root.
```

Actually, abs(ans**2 - x) >= epsilon is the condition that the root has not yet been found (which continues the while loop). Its negation, abs(ans**2 - x) < epsilon, tells us that the root has been located, and therefore we exit the loop. "ans > x" is a safeguard, and could just as well have been ans > (x/2) \(\forall\)x > 4, which would save some unnecessary cycles.

If you are looking for guarantees of convergence (or of exiting a loop), it is more fruitful to look at the algorithm than at the condition, since the latter depends solely on the former. You proposed a linear search for the square root, and the professor a binary search. But there are other algorithms, going back to Babylonian times, that are algorithmically more efficient and guarantee convergence. I can post one if you're interested.
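In the meantime, here is a quick sketch of such a Babylonian (Heron's) iteration; the code and starting guess are my own, not from the lecture:

```python
def babylonian_sqrt(x, epsilon=1e-12):
    """Repeatedly average the guess with x/guess; converges quadratically."""
    ans = max(x, 1.0)                    # any positive starting guess works
    while abs(ans * ans - x) >= epsilon:
        ans = (ans + x / ans) / 2.0
    return ans

print(babylonian_sqrt(25))     # close to 5.0
print(babylonian_sqrt(0.25))   # close to 0.5 -- works for 0 < x < 1 too
```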
Thank you very much. This conversation was very helpful for me. Best regards.
You're welcome! Nice talking to you! :)