How to represent a quaternion as a complex number whose "real" and "imaginary" parts are themselves complex numbers. (lame short guide for myself and for funsies)
My internet died, but I'm back!
So we have the standard form of a quaternion, which looks like this \[\Large a+bi+cj+dk\] and quaternions have this special famous relationship discovered by Hamilton \[\Large -1=i^2=j^2=k^2=ijk\] Remember, quaternions aren't commutative, but they can anticommute, which is nice. Now let's take the identity -1 = ijk, left-multiply both sides by i, and use i^2 = -1: \[\Large -i = i(ijk) = i^2jk = -jk \\ \Large i=jk\] So all I did was left-multiply by i and we got this nice relationship. Playing around you get all the familiar cross-product identities, but just to complete this example, we also have \[\Large -i=kj\] So let's multiply i and -i together and we get: \[\Large i(-i)=jk(kj) \\ \Large -(-1)=j(-1)j \\ \Large 1 = 1\] And really there's no difference between i, j, and k; just close your eyes and rotate your coordinates around... But wait, I promised you a representation of quaternions as complex numbers! Next post!
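If you want to poke at these identities yourself, here's a small Python sketch (the `qmul` helper is my own, not something from the thread) that multiplies quaternions as (w, x, y, z) tuples and checks the relations above:

```python
def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q = (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)

# i^2 = j^2 = k^2 = ijk = -1
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
assert qmul(qmul(i, j), k) == (-1, 0, 0, 0)

# the identities derived above: i = jk and -i = kj
assert qmul(j, k) == i
assert qmul(k, j) == (0, -1, 0, 0)
```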
We have \[\Large a+bi+cj+dk\] We can use the identity that I showed above, \[\Large ij=k\] and plug it in to get: \[\Large a+bi + cj+dij\] This looks a lot like Clifford algebra now, but never mind that. Now we can factor the j out of the right two terms to get: \[\Large (a+bi)+(c+di)j\] So now we can represent a quaternion as a pair of complex numbers. Fancy, and that's the end of the guide.
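Here's a quick Python sketch of this pair-of-complex-numbers idea (my own helper names, not from the thread). It uses the fact that j·z = z̄·j for a complex z, which you can check the same way as the identities above: j(c+di) = cj + dji = cj - dk = (c-di)j. That gives a product rule for pairs, which we can verify against the direct a+bi+cj+dk multiplication:

```python
def pair_mul(p, q):
    """Product of quaternions written as pairs (z1, z2) meaning z1 + z2*j.
    Follows from (z1 + z2 j)(w1 + w2 j) and j*w = conj(w)*j, j*j = -1."""
    z1, z2 = p
    w1, w2 = q
    return (z1*w1 - z2*w2.conjugate(),
            z1*w2 + z2*w1.conjugate())

def to_pair(a, b, c, d):
    """a + bi + cj + dk  ->  (a+bi, c+di)"""
    return (complex(a, b), complex(c, d))

# (1 + 2i + 3j + 4k)(5 + 6i + 7j + 8k); the direct Hamilton product
# of these two quaternions is -60 + 12i + 30j + 24k
p = pair_mul(to_pair(1, 2, 3, 4), to_pair(5, 6, 7, 8))
assert p == (complex(-60, 12), complex(30, 24))
```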
so a is the real component, and b, c, d are the components, of the three distinct imaginary flavours?
That's one interpretation, but it might be more useful to think of a as the scalar component and b, c, and d as the components of a vector in 3D space. Another way to think of it is that a is a scalar, b and c are coefficients of vectors, and d is the coefficient of a bivector. Perhaps that is confusing to read; I should have called it w+xi+yj+zk.
Hmm I want to make an example but I am not sure what to show, what part seems most unclear perhaps?
Ok I've thought of something, one sec
|dw:1419261516135:dw| So to keep it simple, let's say we have two vectors, a and b with b having components perpendicular and parallel to a (as will be true for all vectors) So we can rewrite b in terms of its parallel and perpendicular components: \[\Large \bar b = b_1 i + b_2 j\] and for simplicity, a just has all of its direction in the j direction \[\Large \bar a = ai\]
*you mean\[\Large \bar a = aj\]right?
Whoops, my bad, I already typed all this out; I guess that's what I get for drawing the picture and then not writing it down. Pretend it's on the other axis please XD If a, b1, b2 make you uncomfortable, make up values and just plug them in. So if we multiply these vectors we have: \[\Large \bar a \bar b = (ai)(b_1i+b_2j)=-ab_1+ab_2ij\] or we could multiply them the other way around to get \[\Large \bar b \bar a = (b_1i+b_2j)(ai)=-ab_1+b_2aji = -ab_1-b_2aij\] The coefficients are just scalars, so their order doesn't matter.
So we have \[\Large \bar a \bar b + \bar b \bar a = -2ab_1\] and since b_1 = |\bar b| cos θ (θ being the angle between the two vectors), \[\Large \frac{\bar a \bar b + \bar b \bar a}{2}=-a |\bar b| \cos \theta\] and similarly, since b_2 = |\bar b| sin θ, \[\Large \frac{\bar a \bar b - \bar b \bar a}{2}=a |\bar b| \sin \theta \ ij \] So now we've essentially derived these rules just by using the multiplication.
In fact we can now use these special-case rules to see that they leapfrog into the general case: \[\Large \frac{\bar a \bar b + \bar b \bar a}{2}=-AB \cos \theta\] \[\Large \frac{\bar a \bar b - \bar b \bar a}{2}=AB \sin \theta \ \gamma\] Here I just used A and B to represent the lengths, and gamma is a unit-length vector perpendicular to the plane, just like the ij we had before. Ok, so maybe this isn't quite what you want, I'm not sure.
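These two rules are easy to confirm numerically. A sketch (again with my own `qmul` helper, treating vectors as pure quaternions): the symmetric half of the product should be minus the dot product, and the antisymmetric half should be the cross product as a pure quaternion:

```python
def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q likewise."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def vec(x, y, z):
    """Pure quaternion xi + yj + zk."""
    return (0.0, x, y, z)

a, b = vec(2.0, 0.0, 0.0), vec(1.0, 3.0, 0.0)

sym  = tuple((s + t) / 2 for s, t in zip(qmul(a, b), qmul(b, a)))
anti = tuple((s - t) / 2 for s, t in zip(qmul(a, b), qmul(b, a)))

# symmetric part: scalar -(a . b) = -AB cos(theta)
dot = a[1]*b[1] + a[2]*b[2] + a[3]*b[3]
assert sym == (-dot, 0.0, 0.0, 0.0)

# antisymmetric part: pure vector a x b = AB sin(theta) * (unit normal)
cross = (a[2]*b[3] - a[3]*b[2],
         a[3]*b[1] - a[1]*b[3],
         a[1]*b[2] - a[2]*b[1])
assert anti == (0.0,) + cross
```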
can you put the angles on the/a diagram
|dw:1419262843955:dw| Sure, I also redrew a so that it corresponds to what I wrote. I realize this is a less-than-rigorous treatment, more of an outline, but it should show enough that you can confirm a particular case actually works, and that this noncommutative multiplication gives you access to sine and cosine depending on whether you add or subtract the products, which is incredibly simple and handy.
If we want to step down to complex numbers, we have a simple analogue that might help: \[\Large z= a+bi\] so a is the projection onto the real axis and b is the projection onto the imaginary axis. We can pick out the real and imaginary parts like this: \[\Large \frac{z+z^*}{2}=a=r \cos \theta \\ \Large \frac{z-z^*}{2i}=b=r \sin \theta\] Although this is slightly misleading: since complex numbers are commutative, it's only an analogy, not a perfect parallel.
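A quick numeric check of those two projection formulas (the values r = 2, θ = π/3 are just my own example):

```python
import cmath
import math

r, theta = 2.0, math.pi / 3
z = cmath.rect(r, theta)          # r * (cos theta + i sin theta)

# (z + z*)/2 recovers the real part, (z - z*)/(2i) the imaginary part
re = (z + z.conjugate()) / 2
im = (z - z.conjugate()) / 2j

assert math.isclose(re.real, r * math.cos(theta))
assert math.isclose(im.real, r * math.sin(theta))
```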
One last thing to note on quaternions: this also gives us a nice relationship. \[\Large ab=ba\] If they commute, it means they are parallel (just like scalars commute on the real number line). \[\Large ab=-ba\] If they anticommute, it means they are perpendicular. And we can of course use these two facts not just to check but also to construct geometric objects. I hope that is as interesting and exciting to other people as it is to me haha. More generally, though, I think everyone should learn Clifford algebra, also known as geometric algebra. It encapsulates complex numbers, quaternions, vectors, matrices, tensors, and several other things I haven't even really learned yet, like spinors. And yet it is actually a pretty simple thing. Some resources for the interested: http://www.av8n.com/physics/clifford-intro.htm http://slehar.wordpress.com/2014/06/26/geometric-algebra-projective-geometry/ http://www.itpa.lt/~acus/Knygos/Clifford_algebra_books/p_Leo_Dorst,_Daniel_Fontijne,_Stephen_Mann%5D_Geometr%28BookFi.org%29.pdf In Clifford algebra you can represent Maxwell's 4 equations as a single equation. Exciting. =)
A simple example of the commutative rule with a more complicated vector: \[\Large a= 2i+3j \\ \Large b = 4i+6j\] We can factor out a 2 from b, \[\Large b=2(2i+3j)=2a\] So indeed it is obvious that they commute! \[\Large ab=ba \\ \Large a(2a)=(2a)a\] So what does anticommutative look like? Well, let's take the constructive approach and make a vector perpendicular to the vector a above by making c anticommute with it: \[\Large c=x i+yj\] Multiply through with a on the left: \[\Large a c = -(2x+3y)+(2y-3x)k\] If we were to multiply the other way around, instead of getting k we'd have -k, but the scalar part would remain unchanged: \[\Large ca = -(2x+3y)-(2y-3x)k\] Confirm this if you don't believe it! So in order to be perpendicular they have to anticommute, which means the scalar part has to vanish. We just set it equal to 0 and solve (or plug into ab=-ba, which is essentially what you're doing): \[\Large 2x+3y=0\] So now you can plug y = -(2/3)x into the original formula for c to get: \[\Large c=x i -\frac{2}{3}xj=x( i-\frac{2}{3}j)\] There are an infinite number of these perpendicular vectors, since x is a free variable (and this should make sense), so we can rescale to get: \[\Large c= x(3i-2j)\] Kind of fun; hope it helped you to play around, it's helpful to me to write this at least haha.
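And you can confirm the whole worked example numerically (my own `qmul` helper again, with x = 3, y = -2 picked so that 2x+3y = 0):

```python
def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z) and q likewise."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

a = (0, 2, 3, 0)     # a = 2i + 3j
b = (0, 4, 6, 0)     # b = 2a, parallel: should commute
c = (0, 3, -2, 0)    # c = 3i - 2j, built to be perpendicular to a

assert qmul(a, b) == qmul(b, a)               # parallel => ab = ba

ac, ca = qmul(a, c), qmul(c, a)
assert ac == (0, 0, 0, -13)                   # scalar part 2x+3y = 0, k part 2y-3x = -13
assert ca == tuple(-t for t in ac)            # perpendicular => ac = -ca
```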
Exercise: (I will check it if you want) Note that in the last example c turned out to be a vector perpendicular to a. But this is simply a line in the xy-plane, isn't there technically an entire plane that extends into the z axis as well (with a k unit vector) that's perpendicular to a? What do I have to change? Change it, and then derive it. =)