MIT 6.00 Intro Computer Science (OCW) 20 Online
OpenStudy (anonymous):

In lecture 8, Guttag says that "not surprisingly" the recursive and the iterative algorithms have the same order of growth. Is that always the case? It's also on the practice quiz: "Are recursive solutions usually more computationally efficient than iterative ones?" I answered False (because of the example above) and sort of think I'm wrong :). Although intuitively, in terms of efficiency, it shouldn't matter which direction you go, so to speak. (This question may reveal the true level of my math skills...) I'm also not sure I understood the difference between order of complexity and order of growth. How are they related? Sorry, this question is getting kind of long, but this was so weird: http://dpaste.com/770935/ Thanks again!

OpenStudy (anonymous):

Hey Weltgeist, with regard to the code you posted: the problem is how floating-point numbers are represented in memory, so testing them for equality is not as trivial as it is for integers. When comparing floating-point numbers we don't just equate them, because there's a high probability that they aren't truly equal, owing to the limitations of representing fractional numbers (such as 0.1) in binary. So we define a small constant, and if the difference between the two values is less than that constant, we treat them as equal. Choosing a suitable constant is up to you, but in this case 0.1 should suffice:

    x = 10.0
    epsilon = 0.1
    for i in range(10):
        x += 0.1
    print(abs(x - 11.0) < epsilon)

Try converting 0.1 to binary to see that you can't represent it perfectly in memory; you end up with a value slightly less than 11, but when printing, the interpreter rounds it for you and shows 11.0. I'm a newbie, sorry if you found the answer too long.
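To make the difference-method idea above concrete, here's a minimal runnable sketch (Python 3) that shows the accumulated drift and the epsilon comparison; the variable names just follow the snippet in the post:

```python
x = 10.0
epsilon = 0.1
for i in range(10):
    x += 0.1  # each addition rounds, because 0.1 isn't exact in binary

# After ten additions x may not be exactly 11.0, so direct equality
# is unreliable:
print(x == 11.0)

# The difference method: treat the values as equal if they are
# within epsilon of each other.
print(abs(x - 11.0) < epsilon)  # True
```

Note the abs(): the accumulated value can land on either side of 11.0, so comparing the raw difference against epsilon would silently pass for any value below 11.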

OpenStudy (anonymous):

And with respect to the recursive and iterative approaches: they are inter-convertible, and hence they have the same order of growth, but recursive functions can at times be slower because of memory overhead, so they're a little less efficient. I did an amateur blog post on something similar, comparing iterative and recursive algorithms on time scales; check it out, maybe you'll find it useful: http://goo.gl/08qM8
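A sketch of the inter-convertibility point: the same computation written both ways, with the same O(n) order of growth (factorial is just a stand-in example, not from the lecture):

```python
def fact_recursive(n):
    # n nested calls: O(n) time, and O(n) stack space for the call frames
    if n <= 1:
        return 1
    return n * fact_recursive(n - 1)

def fact_iterative(n):
    # n loop iterations: O(n) time, but only O(1) extra space
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(fact_recursive(6), fact_iterative(6))  # both print 720
```

Both grow linearly in n; the difference is only the constant-factor and memory overhead of the call stack.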

OpenStudy (anonymous):

@kpelletij8 is right, they do have the same order of growth, but recursion is often less efficient because it often needs to remember a lot of different states until it reaches a base case, and then unwind the results until it gets the answer. For example, let's say you want a program that just adds up the numbers from 1 to n:

    def addup(n):
        if n == 1:
            return 1
        else:
            return n + addup(n - 1)

If you run addup(1000), you're going to have a long chain of pending additions once it reaches the base case, like "1000 + 999 + 998 + 997 + ... + 1." These numbers have to be stored before they're used. If it were an iterative program, you'd probably just update the sum as you went; you wouldn't wait until the final iteration to add up all the numbers. That said, there are ways to make recursion more efficient, AND there are languages that are built specifically to make recursion fast (I think Scheme and Lisp are...). Recursive programs are usually simpler to understand and more beautiful and elegant, though.
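The iterative version described above might look like this (a sketch; the name addup_iter is just for illustration):

```python
def addup_iter(n):
    # update the running sum as you go -- nothing is deferred,
    # so there is no chain of pending additions on the call stack
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(addup_iter(1000))  # 500500
```

Same O(n) order of growth as the recursive addup, but constant memory instead of a 1000-frame-deep call stack.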

OpenStudy (anonymous):

Hmm, looks like kpelletij's name is altered whenever you type it. Ha.

OpenStudy (anonymous):

@shandelman Yup, that's a real bummer :P And yes, some languages use tail-recursion optimization, which provides constant stack size for any number of calls and hence gives the same performance as iterative methods. Some solutions are more intuitive as recursion and some as iteration.
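A sketch of what "tail recursion" means, using the addup example from above (note: CPython does NOT actually perform tail-call optimization; languages like Scheme do, which is what makes recursion there as cheap as a loop):

```python
def addup_tail(n, acc=0):
    # the recursive call is the very last thing done, so nothing
    # needs to be "unwound" after it returns -- a tail call
    if n == 0:
        return acc
    return addup_tail(n - 1, acc + n)

def addup_loop(n):
    # the loop that a tail-call-optimizing compiler would
    # effectively turn the function above into
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(addup_tail(100), addup_loop(100))  # both print 5050
```

In a language with proper tail calls, addup_tail runs in constant stack space; in Python it still grows the stack, which is why large n can hit the recursion limit.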

OpenStudy (anonymous):

@kpelletij8 I get the idea of binary representation not being exact, but why then is the second case true? And how are you supposed to know? (I mean, it's on the quiz, so it can't be arbitrary.) And thanks a lot for the explanation of iteration/recursion! Although your blog is still a bit over my head :-) @shandelman @kpelletij8 Is order of complexity sort of the same as order of growth? I.e., the upper bound or worst case of how many times you need to go through the loop. Thanks again :-)

OpenStudy (anonymous):

@weltgeist Yeah, it can be a bit tricky to know when it will work or not, and hence it's best to use a comparison variable (epsilon) via the difference method; it's a sure thing if you select epsilon judiciously. Maybe he wanted you to learn about this; it will be part of a future lecture too, I think. In my opinion, the second case worked because you basically subtracted everything you added (in the inaccurate representation of 0.1). Yes, for most algorithms the order of growth is directly proportional to the complexity, so for most cases you can consider them one and the same thing, but since they are different terms they do have different meanings. And don't worry if you didn't get some things on that blog post; my basic idea was to show you some statistical data comparing recursive and iterative algorithms. Those are just rudimentary things you will get very soon. :)

Can't find your answer? Make a FREE account and ask your own questions, OR help others and earn volunteer hours!

Join our real-time social learning platform and learn together with your friends!