For Parth: \(T=\frac{1}{2}\sqrt{\sum_{(a\, b\, c)} \begin{vmatrix} x_{a,1}& x_{a,2}& x_{a,3}\\ x_{b,1}& x_{b,2}& x_{b,3}\\ 1& 1& 1 \end{vmatrix}^{\,2}}.\)
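If you want to sanity-check the formula numerically, here's a quick sketch (the function names and the sample triangle are mine, not from the thread): sum the squared determinants over the cyclic coordinate pairs and compare against the usual cross-product area.

```python
import numpy as np

# Vertices of a triangle in 3D: rows are vertices, columns are coordinates.
# (An arbitrary sample triangle, chosen so the area is easy to check by hand.)
P = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])

def triangle_area_det(P):
    """Area via the determinant sum over cyclic coordinate pairs (a, b)."""
    total = 0.0
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        # Rows: a-coordinates of all vertices, b-coordinates, then ones.
        M = np.vstack([P[:, a], P[:, b], np.ones(3)])
        total += np.linalg.det(M) ** 2
    return 0.5 * np.sqrt(total)

def triangle_area_cross(P):
    """Area via the cross product, for comparison."""
    return 0.5 * np.linalg.norm(np.cross(P[1] - P[0], P[2] - P[0]))

print(triangle_area_det(P))    # 1.0
print(triangle_area_cross(P))  # 1.0
```

Both give the same number, which is the point: the determinant sum is the cross-product magnitude written out coordinate by coordinate.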
And when you edit the \(\TeX\), refresh the page.
Forgot about refreshing.
This is beautiful; I don't get any of it!
It's a combinatorial sum. It's the real reasoning behind an extremely nasty formula.
By looking at the 3-tuple problem in this venue, you can generalize to n-tuples.
How did you even generalize it?
Oh, sorry for asking. I already know that Limitless the Great is obviously great.
I first found the ugly formula on Wiki. Then I stared at it for a while and it hit me: this is a combinatorial sum in hiding. That's when I rewrote it as you see above. You won't see this anywhere else on the web.
Also, when I say generalize, I mean that looking at this combinatorial interpretation, with its summations and determinants, gives a hint at what the n-tuple version will look like.
I get everything you are doing here! Now, I just need to learn about matrices.
Bets?
Matrices are simple, Parth.
They are arrays of numbers.
I haven't seen any good resource for learning about them. That much I know :_0
They only take on more advanced notions and properties in linear algebra when you analyze their importance in GL_2 and general matrix algebra.
@abb0t I have seen you doing differential equations. =\
John, do you have any more innovations? We could make a team in Google Science Fair.
Abstract/linear algebra are where things get complicated. Even then, it's not that bad. You don't have a lot to learn right now about matrices. Learn the basic operations, the importance of scalars, the eigenvector equation, the determinant rules for 2x2 and 3x3, the generalized summation (even if you can't use it) for n x n determinants, and the aspects of matrix exponentiation along with some other basics.
The eigenvector equation is extremely important. \(\vec{v}\) and \(\lambda\) are called eigenvectors and eigenvalues if and only if \[A\vec{v}=\lambda\vec{v}.\] Note that \(A\in \text{M}_n(\mathbb{R})\), \(\lambda \in \mathbb{R}\), and \(\vec{v}\in \mathbb{R}^n\).
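Here's a tiny numerical check of that defining equation (the matrix is just one I picked for illustration):

```python
import numpy as np

# A toy 2x2 symmetric matrix, chosen only to illustrate A v = lambda v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose COLUMNS
# are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]
    # Verify the defining equation A v = lambda v for each pair.
    print(np.allclose(A @ v, lam * v))  # True
```

For this particular matrix the eigenvalues come out to 3 and 1, and the check prints `True` for both pairs.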
\(\text{M}_n(\mathbb{R})\) is the ring of n by n square matrices with real entries. (It's a group under addition; under multiplication only the invertible ones form a group, namely \(\text{GL}_n\).)
Ah, ah.
So essentially, \(A\) is just a matrix? And \(\lambda, \vec{v}\) are scalars, yes?
Be careful. It is very important to pay close attention to the way I described them. \(A\) is a _square_ matrix. \(\lambda\) is a real scalar. \(\vec{v}\) is an _n_ dimensional vector. Notice that n appears in the description of the matrix's dimensions and the vectors: the dimensions must be equal.
The rest of the stuff I mentioned is really easy. You can learn all of that by tomorrow. Here is something I think is worth knowing, though: http://en.wikipedia.org/wiki/Matrix_exponential
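The matrix exponential is easy to play with from its series definition \(e^A=\sum_k A^k/k!\). Here's a minimal sketch (the function name is mine, and truncating the series like this is just for illustration, not a production algorithm):

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via the truncated power series sum_k A^k / k!.
    A sketch to illustrate the definition, not a robust implementation."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k       # term is now A^k / k!
        result = result + term
    return result

# Nilpotent example: N @ N = 0, so the series terminates and e^N = I + N exactly.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(expm_series(N))  # [[1. 1.], [0. 1.]]
```

The nilpotent case is a nice sanity check because the infinite series collapses to finitely many terms, so you can verify the answer by hand.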
Not Wiki. Not Wiki.
Wiki is where I learn.
Look where I am now. :-)
Not me.
Ok, fine. http://mathworld.wolfram.com/MatrixExponential.html
Yeah, yeah, you're genius :-)
NOOOOOOOOOOOOOOOOOOOOOO!!!!!!!!!!!!!!!!!!1
I'd rather take Wiki. -_-
INTERESTING. Here's something neat! . . .
I don't have the patience to rewrite all that, but it is pretty awesome.
I don't have the patience to read all that, but it is pretty awesome.
:-)
Phew, I can finally understand what this question means at least.