MIT 6.00 Intro Computer Science (OCW) 10 Online
OpenStudy (anonymous):

In general I understand what binary is and how it is different from decimal, but I am confused on page 30 in Guttag's book Introduction to Computation and Programming Using Python. In it he says "the number 8 would be represented as (1.0, 11)". Converting 8 to binary, I get 1000 (on, off, off, off). Thanks,

OpenStudy (anonymous):

Hm, I'm sorry, I'm not really sure; I'm struggling with that stuff as well.

OpenStudy (e.mccormick):

I don't have the book, but looked at the errata. He seems to be using some sort of shifted register notation.

OpenStudy (anonymous):

thanks for your reply. I also checked the errata and didn't see anything. Here is a picture of the text: http://imgur.com/IwvLzrO I can't find anything about it (or shifted register notation). So if it's not that important, I'll just move on.

OpenStudy (e.mccormick):

Ah, \(2^{11}\). It is probably a large register being read left to right for digits.

OpenStudy (e.mccormick):

In binary, \(1.0\times 2^{11} = 100000000000\). If you have a 16-bit word, that is 0000100000000000. The leading bit, if I recall, is special. That means the numerically used bits are 000100000000000. This, being read in reverse order, is 1000. So I think he is talking about the word in RAM and not just the number.

OpenStudy (anonymous):

Hmm, ok. Thank you for your help, mccormick.

OpenStudy (e.mccormick):

For why it might be reverse in RAM, big or little endian: http://en.wikipedia.org/wiki/Endianness
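The Wikipedia article covers it in depth, but the idea can be sketched in a couple of lines of Python with the standard `struct` module (the 16-bit width here is just an illustrative choice, not anything from Guttag's book):

```python
import struct

# Pack the value 8 as a 16-bit unsigned integer in both byte orders.
big = struct.pack(">H", 8)     # big-endian: most significant byte first
little = struct.pack("<H", 8)  # little-endian: least significant byte first
print(big.hex())     # 0008
print(little.hex())  # 0800
```

Same value, same bits, different byte order in memory.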

OpenStudy (e.mccormick):

I am not 100% sure that is what Guttag is thinking about, but it is the only way I think it might work. Though it could still be a mistake on his part and he might have meant (1.0, 3)!

OpenStudy (anonymous):

It is in his discussion on the accuracy of floating point numbers. Here is more of the text: http://imgur.com/oGLxxrt,CqH8bb8#0

OpenStudy (e.mccormick):

Yah, floating point makes for errors. This is part of why the epsilon-delta definition of the limit is used in computer science. You can't get there 100% with floating point, but you can get close enough to be useful.
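A quick Python sketch of the "close enough" idea (the tolerance value here is an arbitrary illustrative choice):

```python
# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly 0.3; compare against a small tolerance instead.
total = 0.1 + 0.2
print(total == 0.3)                # False
epsilon = 1e-9
print(abs(total - 0.3) < epsilon)  # True
```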

OpenStudy (anonymous):

I understand that. But how does 8 become (1.0, 11)?

OpenStudy (e.mccormick):

Well, that explanation I gave is my best guess for it. Past that, Guttag would have to define it better. Or I wonder if Google can help... let's see. http://www.c64-wiki.com/index.php/Floating_point_arithmetic The sign bit! Probably the special thing I was remembering. And another explanation: http://stackoverflow.com/questions/6910115/how-to-represent-float-number-in-memory-in-c

OpenStudy (anonymous):

ok. I can't say I fully understand it yet, but you've given me a direction to work towards. Thank you, mccormick. Best,

OpenStudy (theeric):

Neat stuff! So, `(1.0, 11)` is like `1.0 << 11` if that notation is like scientific notation! Since this method yields the described result, I'm going to post how I understand this. @e.mccormick actually sort of mentioned it, and I'll quote that at the bottom.

So, we'll start out with \(8_d\). I'll use the subscript "\(d\)" to indicate "decimal," and "\(b\)" for "binary." So we have \(8_d\). You've already converted it to binary, and did it well, and got \(1000_b\). Good job, by the way! I don't know where you are in studying this, but that stuff is tricky at first.

Now, let's take a step back to decimal numbers that use scientific notation. The number \(1,234_d\) can also be represented as \(1.234_d \times 1000_d\), or \(1.234_d \times 10_d^{3_d}\). The one on the right, \(1.234\times 10^3\), is scientific notation. Maybe Guttag would like to represent that as \((1.234, ~3)\). But I'll touch on that later.

If you're comfortable with "decimal is base 10" and "binary is base 2," then skip this next paragraph. Since we're talking about two number systems, it's important to give some significance to the obvious: pulling out the multiple of 10 allowed us to keep the same digits because 10 is the base of the decimal system. Each digit is worth 10 times more or less than its neighbor. Think of 1, 2, 3, 4, 5, 6, 7, 8, or 9 versus 10, 20, 30, 40, 50, 60, 70, 80, or 90. Binary is base 2, rather than 10. In binary, each digit is only 2 times more or less than its neighbor; think of \(10_b\) versus \(1_b\). That is \(2_d\) versus \(1_d\), if that helps!

So, let's see if I can explain this. First I'll make it very general. Guttag's notation looks like \((n,~x)=n\times10_b^x\). Here, \(n\) is for number and \(x\) is for the exponent to the base (the base is \(10_b\) in binary). So, \((1.0,~11)=1.0_b\times10_b^{11_b}\). That exponent is \(3_d\), since \(11_b=3_d\).

That might make it easier to understand that your number "\(1.0_b\)" will move \(3_d\) binary digits, since you are multiplying by the binary base (\(10_b\)) \(3_d\) times. I mean, multiplying by \(10_b\) a total of \(11_b\) times. :)

e.mccormick said: "I am not 100% sure that is what Guttag is thinking about, but it is the only way I think it might work. Though it could still be a mistake on his part and he might have meant (1.0, 3)!"

So it looks like that's what Guttag did, but with binary! Working in binary all the way!!
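That reading of the pair can be checked in a couple of lines of Python (`int("11", 2)` parses a string as a base-2 number):

```python
# Guttag's pair (significand, exponent), both written in binary:
# (1.0, 11) means 1.0 times 2 to the power of binary 11.
exponent = int("11", 2)     # binary 11 is decimal 3
print(exponent)             # 3
print(1.0 * 2 ** exponent)  # 8.0
```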

OpenStudy (theeric):

And this is relevant to floating point just because we're talking about the base raised to a power (like scientific notation). That's part of how floating point works under the IEEE 754 standard for floating point representation. It's more involved, but it really works like that. It has a number, it has info for an exponent, and it also has a sign bit :P

So, if you have like 1 decimal digit for the number, and 1 for the exponent... Well, number line time. Middle school math has never been so applicable... :P I was going to draw one, but I can get a better image with wolframalpha.com. Unfortunately, it's in decimal. See this: http://www.wolframalpha.com/input/?i=number+line+1%2C+2%2C+3%2C+10%2C+20%2C+30%2C+100%2C+200%2C+300%2C+1000%2C+2000%2C+3000

If you look at \(1\times10^1=10\), \(2\times 10^1 =20\), and \(3\times 10^1=30\), you see you have a gap of 10 full of numbers that you can't express. Now try the next exponent: \(1\times10^2=100\), \(2\times 10^2 =200\), and \(3\times 10^2=300\), and you see you have a gap of 100 between numbers that you can express. It gets worse with greater exponents. This is a loss of precision, since you can't get to so many numbers. Now, if we use more digits for the number, we have smaller gaps. Greater exponents give us bigger numbers. We have less precision farther from 0, pretty much, and more precision with more data about the number. So having a bunch of digits for only a few exponents is still pretty good.

\(1.001001100101011000101001000101_b\times10_b^{110_b}\) is \((1.001001100101011000101001000101,~110)\), which is \(73.584141075611072_d\). All those digits really helped the accuracy, as well as a comparatively small exponent.
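The toy "one digit for the number, one for the exponent" system can be enumerated in Python; the ranges here are just illustrative choices, but they show the gaps widening as the exponent grows:

```python
# Every value expressible as d * 10**e with a single significant
# digit d and a small exponent e.
values = sorted({d * 10 ** e for d in range(1, 10) for e in range(4)})
gaps = [b - a for a, b in zip(values, values[1:])]
print(values[:12])  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30]
print(gaps[:12])    # nine gaps of 1, then gaps of 10: spacing jumps each decade
```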

OpenStudy (theeric):

That's a lot, and I haven't seen Guttag's work before so I don't know if that's what he's going for. But feel free to ask questions!

OpenStudy (anonymous):

"Floating point works the same way, except we represent the significant digits and exponents in binary rather than decimal and raise 2 rather than 10 to the exponent. For example, the number 8 would be represented as the pair (1.0, 11)." Professor Guttag

OpenStudy (e.mccormick):

@bard What we are looking at is why it is to the 11th power.

OpenStudy (theeric):

My theory is that the "11" is in binary, and \(11_b=3_d\). So, \(\large1.0_{b\ or\ d} \times 2_d^{3_d}\equiv 1.0_{b\ or\ d}\times10_b^{11_b}\) And that's a scientific notation in binary. (It's "b or d" because \(1.0_b=1.0_d\).) And, I didn't think to use color before, but now I have! \(\color{blue}{1.0}_{b\ or\ d}\times10_b^{\color{blue}{11}_b}\color{blue}{\rightarrow}(\color{blue}{1.0},\ \color{blue}{11})\) So, I don't think it's \(\large 2_d^{11_d}\), I think it's \(\large2_d^{11_b}\). That is, \(\large2_d^{3_d}=8_d=1000_b\). Am I making any sense? I think his notation \((1.0,\ 11)\) is completely in binary.

OpenStudy (theeric):

You guys also mentioned the sign bit, so I'll be happy to refresh memories or make new ones if you want. Just let me know! I don't want to clutter this post up unnecessarily like I might have done with the floating point information.

OpenStudy (e.mccormick):

No, the 3 to 11 thing does not make sense. It is the number of places. As a raw binary number, 8 is 1000. However, that is not necessarily what it is in memory, because this is a floating point number. Memory addressing of floating point is different. That much I know for certain. That is why I think it is just that.

OpenStudy (e.mccormick):

Oh, and 1 as a floating point number would not be (1.0, 11), because it is not the same bit pattern as it would be for 8.0.

OpenStudy (e.mccormick):

Ah, I found another thing about it: http://kipirvine.com/asm/workbook/floating_tut.htm

OpenStudy (e.mccormick):

And this one: http://sandbox.mc.edu/~bennet/cs110/flt/dtof.html

OpenStudy (theeric):

Yeah! Floating point representation is a little complex. But, putting representation aside, we can still have binary decimal numbers in scientific notation. Floating point numbers still have a scientific-notation-like feel. The sign is for positive or negative. The exponent bits as listed in http://kipirvine.com/asm/workbook/floating_tut.htm are related to the exponent of scientific notation (not the same), and the mantissa is the fractional part of the scientific notation. Depending on the exponent bits, the number has either 0.(mantissa) or 1.(mantissa), I think.
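Those three fields can be pulled out of a 32-bit float in Python with the standard `struct` module. This is a sketch for single precision (1 sign bit, 8 exponent bits, 23 mantissa bits; the bias of 127 is part of the IEEE 754 binary32 format):

```python
import struct

def float_fields(x):
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    mantissa = bits & 0x7FFFFF       # fraction bits after the implicit 1.
    return sign, exponent, mantissa

s, e, m = float_fields(8.0)
print(s, e - 127, m)  # 0 3 0  ->  +1.0 x 2^3 = 8.0
```

So 8.0 really is stored as sign +, exponent 3, significand 1.0, matching the (1.0, 11) pair from the book.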

OpenStudy (e.mccormick):

Yah, if it is 0.(mantissa) that would add the extra bit requirement. That and the endian stuff.

OpenStudy (theeric):

Haha, I'm getting lost. I didn't see how the notation was described to lead into floating point. I think it could be used to discuss how it works, but you're definitely right that the floating point representation is different. Also, not every system uses the same floating point representation. The one I was talking about was the IEEE 754 standard, I think. That link, http://kipirvine.com/asm/workbook/floating_tut.htm, didn't seem to mention that the exponent represents numbers between 0 and 1 when it is all 0's, and represents infinities and NaNs (not a number) when it is all 1's, as I've learned. Take care!
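That rule about all-zero and all-one exponent fields can be checked with the same kind of bit-twiddling sketch (again using `struct` to read the 8 exponent bits of a 32-bit float; the tiny value 1e-45 is chosen because it rounds to a subnormal in single precision):

```python
import math
import struct

def exponent_bits(x):
    # The 8 exponent bits of a single-precision float.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return (bits >> 23) & 0xFF

print(exponent_bits(math.inf))      # 255: all ones, infinity
print(exponent_bits(float("nan")))  # 255: all ones, NaN
print(exponent_bits(1e-45))         # 0: all zeros, a subnormal near zero
print(exponent_bits(8.0))           # 130: an ordinary (normal) number
```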
