OpenStudy (anonymous):

I'm trying to mathematically formalize the "green eyed logic puzzle" problem described in this video: https://www.youtube.com/watch?v=98TQv5IAtY8

OpenStudy (anonymous):

Let me rephrase the problem in a way that is more or less equivalent:

1) There are 100 robots.
2) Every robot knows everything that can be logically deduced from what it currently knows.
3) Every robot knows the color of every other robot.
4) Every robot knows whether or not another robot has shut down.
5) If a robot knows its own color to be red, it will shut down at the first second of the next minute.
6) Every robot knows the facts listed above.
7) Every robot is red.

We then introduce the following fact:

8) Every robot receives a message, which it knows has been sent to all the other robots, stating that at least one robot is red.

The question of the problem is then: what happens to the robots? The answer presented in the video is that all robots shut down after 100 minutes. I've been thinking about how to formalize this.

OpenStudy (anonymous):

I'm going to start by considering only the colors of the robots, and only \(4\) robots. We'll give each robot a number from \(1\) to \(4\), and write \(r\) for "red" and \(n\) for "not red". We'll represent a robot's knowledge base as a tuple of sets: the \(i\)th element of the tuple is the set of states the robot thinks the \(i\)th robot can be in. So for the \(1\)st robot, we represent its knowledge base as follows\[ (\{n,r\}, \{r\}, \{r\}, \{r\}) \]Another way we could represent this knowledge base is as a set of possible states:\[ \{(n,r,r,r) ,(r,r,r,r)\} \]
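A small sketch of this representation in Python — the encoding of colors as the strings "n"/"r" and the function name `local_knowledge` are my own choices, not part of the problem:

```python
COLORS = ("n", "r")  # n = not red, r = red
truth = ("r",) * 4   # the true state: all four robots are red

def local_knowledge(i, state):
    """States robot i considers possible: it sees every other robot's
    color but is uncertain about its own (coordinate i)."""
    return {state[:i] + (c,) + state[i + 1:] for c in COLORS}

# robot 1's knowledge base, as a set of possible states
print(local_knowledge(0, truth))
```

This is exactly the second representation above: the two states \((n,r,r,r)\) and \((r,r,r,r)\).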

OpenStudy (anonymous):

Now I'm going to introduce a new concept: a means of combining two knowledge bases. We can think of it as what robot \(1\) can deduce robot \(2\)'s knowledge base to be. First, consider robot \(2\)'s knowledge base: \[ (\{r\}, \{n,r\}, \{r\},\{r\}) \]However, when robot \(1\) is deducing what robot \(2\) can know, it must add its own ignorance to robot \(2\)'s. Even though robot \(1\) knows that robot \(2\) knows robot \(1\)'s color, robot \(1\) itself must consider both possibilities for that color. The result is as follows: \[ (\{n,r\}, \{n,r\}, \{r\}, \{r\}) \]Another way to represent this is: \[ \{(n,n,r,r), (n,r,r,r), (r,n,r,r), (r,r,r,r)\} \]
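This combination step can be sketched as a function (`local` and `view` are hypothetical names of my own choosing): taking the union of robot \(2\)'s knowledge sets over every state robot \(1\) considers possible is what implements "adding its own ignorance".

```python
COLORS = ("n", "r")
truth = ("r", "r", "r", "r")

def local(i, state):
    # states robot i considers possible: certain of every color but its own
    return {state[:i] + (c,) + state[i + 1:] for c in COLORS}

def view(K, j):
    # union of robot j's knowledge sets over the states in K: the states
    # an observer with possibility set K can deduce robot j might consider
    return {t for s in K for t in local(j, s)}

# what robot 1 can deduce robot 2's knowledge to be
K12 = view(local(0, truth), 1)
print(sorted(K12))
```

The result is the four-state set above: both of the first two coordinates become uncertain.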

OpenStudy (anonymous):

Finally, we can consider what robot \(1\) deduces that \(2\) must deduce that \(3\) must deduce that \(4\) must deduce, and the result is: \[ (\{n,r\},\{n,r\},\{n,r\},\{n,r\}) \]Another way to represent this is: \[ \{n,r\}\times\{n,r\}\times\{n,r\}\times\{n,r\} \]where \(\times\) denotes the Cartesian product.
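Chaining that combination through robots \(2\), \(3\), and \(4\) does indeed produce the full Cartesian product — a sketch using the same encoding and the same hypothetical helper names as before:

```python
from itertools import product

COLORS = ("n", "r")
truth = ("r",) * 4

def local(i, state):
    # states robot i considers possible (uncertain only of its own color)
    return {state[:i] + (c,) + state[i + 1:] for c in COLORS}

def view(K, j):
    # states an observer with possibility set K can deduce robot j might consider
    return {t for s in K for t in local(j, s)}

K = local(0, truth)
for j in (1, 2, 3):
    K = view(K, j)       # widen one more coordinate at each step

assert K == set(product(COLORS, repeat=4))  # all 16 states
```

Each application of `view` widens one more coordinate, so after the full chain every coordinate is uncertain.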

OpenStudy (anonymous):

I will call this the "global knowledge". It represents the set of all states a robot can deduce the other robots could think possible. Remember that a large knowledge set means more uncertainty, so you are actually more ignorant. An omniscient robot would have a knowledge set containing one state (the truth), while a completely ignorant robot would have a knowledge set of all states (every possibility). Knowledge lets you eliminate states. Incidentally, \(\{n, r\}^4\) is also the set of all possible states, meaning the global knowledge here is completely ignorant.

OpenStudy (anonymous):

The local knowledge of any robot is its own knowledge set. A statement is said to introduce no information if it doesn't modify any robot's local knowledge. However, though the message sent out to all robots did not modify any local knowledge, it did modify the global knowledge, reducing it to: \[\{n,r\}^4 \setminus \{(n,n,n,n)\}\]
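This updated global knowledge is easy to compute explicitly — a sketch using Python sets, with variable names of my own choosing:

```python
from itertools import product

N = 4
all_states = set(product(("n", "r"), repeat=N))   # {n, r}^4, 16 states
global_knowledge = all_states - {("n",) * N}      # remove (n, n, n, n)

assert len(global_knowledge) == 2 ** N - 1        # 15 states remain
```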

OpenStudy (anonymous):

Since even \(2^4 = 16\) states is a lot to write out, let's start with the case of \(2\) robots... The knowledge set of robot \(1\): \[ \{(n, r),(r,r)\} \]The knowledge set of robot \(2\): \[ \{(r, n),(r,r)\} \]The knowledge set corresponding to the message sent out to the robots: \[ \{(n, r), (r, n), (r,r)\} = \{n,r\} \times \{n,r\} \setminus \{(n,n)\} \]That is, since at least one robot is red, the state \((n,n)\) can be ruled out. Each robot intersects this with its own knowledge set and sees no change, so technically none of them learned anything new from it. However, let's consider what robot \(1\) could consider robot \(2\) to know before the message: \[ \{(n,n), (n,r), (r,n), (r,r)\} = \{n,r\}\times \{n,r\} \]To be clear, robot \(1\) knows that robot \(2\) knows robot \(1\)'s color... but robot \(1\) doesn't know what robot \(2\) knows. Hence this is what robot \(1\) can deduce robot \(2\) might know. The message doesn't modify robot \(1\)'s own knowledge set, but it does modify this derived set, reducing it to \[ \{(n, r), (r, n), (r,r)\} \]since robot \(1\) knows the message was sent to robot \(2\) as well.
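The whole \(n=2\) argument can be checked mechanically — same encoding as above, with `local`, `view_1_of_2`, and `message` being names of my own choosing:

```python
from itertools import product

COLORS = ("n", "r")
truth = ("r", "r")   # both robots are red

def local(i, state):
    # states robot i considers possible (uncertain only of its own color)
    return {state[:i] + (c,) + state[i + 1:] for c in COLORS}

# the message: at least one robot is red, i.e. rule out (n, n)
message = set(product(COLORS, repeat=2)) - {("n", "n")}

K1 = local(0, truth)                 # {(n,r), (r,r)}
assert K1 & message == K1            # robot 1's local knowledge unchanged

# what robot 1 can deduce robot 2 might know, before the message:
view_1_of_2 = {t for s in K1 for t in local(1, s)}
assert view_1_of_2 == set(product(COLORS, repeat=2))  # full ignorance

# after the message (robot 1 knows robot 2 received it too):
assert view_1_of_2 & message == message               # (n,n) eliminated
```

So the message changes nothing locally, yet it does shrink robot \(1\)'s picture of what robot \(2\) might know.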

OpenStudy (anonymous):

i'm sorry to come to someone else's tag here and ask for help but this is an emergency! and I need help ASAP! i'd really appreciate it if one of u helped!

OpenStudy (anonymous):

Hello?? Anyone here?

OpenStudy (anonymous):

Now let's go to the \(n=3\) case. Here is the global knowledge set, partitioned based on whether robot \(1\) is red or not:\[ \{(n,n,r), (n,r,n), (n,r,r)\} \cup\{ (r,n,n),(r,n,r), (r,r,n), (r,r,r)\} = A\cup B \]The \(A\) case is very familiar: it is as if the \(n=2\) case is happening for robots \(2\) and \(3\).
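As a quick sanity check of that partition (a sketch; `A` and `B` as in the post, the all-not-red state already removed by the message):

```python
from itertools import product

states = set(product(("n", "r"), repeat=3)) - {("n", "n", "n")}
A = {s for s in states if s[0] == "n"}   # robot 1 not red: 3 states
B = {s for s in states if s[0] == "r"}   # robot 1 red: 4 states

assert len(A) == 3 and len(B) == 4
```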

OpenStudy (anonymous):

Guys, I'm open to suggestions, questions, etc.

OpenStudy (anonymous):

I'm not completely sure what I'm doing here.

OpenStudy (usukidoll):

I see set theory being used... like \(A\) or \(B\). So the set \(A\) is, umm, the chances of the robot being not red or red, and the set \(B\) is a bigger chance of the robot being red.

OpenStudy (anonymous):

Yes

OpenStudy (anonymous):

\((r,n,n)\) means robot 1 is red, robot 2 is not red, robot 3 is not red.

OpenStudy (anonymous):

Robot 1 can deduce \((r,r,r)\) and \((n,r,r)\) because it knows the other two are red.

OpenStudy (anonymous):

Robot 2 can deduce \((r,n,r)\) and \((r,r,r)\) because it knows the other two are red.

OpenStudy (anonymous):

Robot 1 asks the question: What can both robot 2 and I know? It realizes that both know the third robot is red. It deduces: \((n,r,r),(n,n,r),(r,n,r), (r,r,r) \). That is, it deduces every state where the final coordinate is \(r\).
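That shared deduction can be checked directly under the same set-of-tuples encoding (with `local` and `view` again being hypothetical helper names of my own choosing):

```python
from itertools import product

COLORS = ("n", "r")
truth = ("r", "r", "r")   # all three robots are red

def local(i, state):
    # states robot i considers possible (uncertain only of its own color)
    return {state[:i] + (c,) + state[i + 1:] for c in COLORS}

def view(K, j):
    # states an observer with possibility set K can deduce robot j might consider
    return {t for s in K for t in local(j, s)}

# what robot 1 can deduce robot 2 might know
shared = view(local(0, truth), 1)
assert shared == {s for s in product(COLORS, repeat=3) if s[2] == "r"}
```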

OpenStudy (usukidoll):

\((r,r,r)\) means robot 1 is red... and so are robots 2 and 3

OpenStudy (anonymous):

Then robot 1 asks the question: what do we all know together? Sadly, there is nothing that all three collectively know.

OpenStudy (dan815):

here's another way to go about it: as time passes they know it's not just at least one, but at least 2, and 3, and so on

OpenStudy (dan815):

see if you can model that

OpenStudy (dan815):

for example the 3 case: once a day passes and no one has left, all 3 now know that at least 2 of them must have green eyes

OpenStudy (dan815):

how to model that hmm
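One sketch of that inductive model, under the assumption that each silent minute raises the commonly known lower bound on the number of red robots by one (`shutdown_minute` is my own name, not anything from the video):

```python
def shutdown_minute(n_red):
    """Minute at which the red robots shut down, assuming each silent
    minute raises the common lower bound on the red count by one."""
    at_least = 1   # the message: at least one robot is red
    minute = 0
    while True:
        minute += 1
        # a red robot sees n_red - 1 other red robots; it can deduce its
        # own color exactly when that count falls below the common bound
        if n_red - 1 < at_least:
            return minute
        at_least += 1  # nobody shut down, so the bound grows

assert [shutdown_minute(k) for k in (1, 2, 3, 100)] == [1, 2, 3, 100]
```

With \(k\) red robots, they all shut down together at minute \(k\) — matching the video's answer of 100 minutes for 100 red robots.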

OpenStudy (usukidoll):

[drawing omitted]
