OpenStudy (irishboy123):

Can you do this without choosing a co-ordinate system?

OpenStudy (irishboy123):

\(\vec \nabla \bullet \left( \dfrac{\vec r}{|\vec r |}\right)\). It is usually given on the basis that \(\vec r = \left(\begin{matrix}x \\ y \\ z\end{matrix}\right)\), i.e. in Cartesian, but \(\vec r = r \ \hat r\) ... and maybe there are some identities, which i doubt.
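for instance, if the product rule \(\vec \nabla \bullet (f \vec A) = f \, \vec\nabla \bullet \vec A + \vec A \bullet \vec \nabla f\) together with \(\vec\nabla \bullet \vec r = 3\) (in three dimensions) and \(\vec\nabla |\vec r| = \hat r\) counts as "identities", a sketch of that route would be

$$\vec \nabla \bullet \left(\frac{\vec r}{|\vec r|}\right) = \frac{1}{|\vec r|}\,\vec\nabla \bullet \vec r + \vec r \bullet \vec\nabla\frac{1}{|\vec r|} = \frac{3}{|\vec r|} - \frac{\vec r \bullet \hat r}{|\vec r|^2} = \frac{2}{|\vec r|}, \qquad \vec r \neq \vec 0$$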

OpenStudy (irishboy123):

@ljetibo

OpenStudy (ljetibo):

Yep, I think I recognize this term from a Green's theorem proof or somewhere; it turns out it's $$\vec \nabla \bullet \left( \dfrac{\vec r}{|\vec r |}\right) = 4\pi\delta(r-r')$$ where r' is your point in space. I can't remember *exactly* what it looked like, but something like that. I'll look up the proof tomorrow evening because I've got stuff to do in the morning.

OpenStudy (ljetibo):

Oh, sry, I've just remembered where I saw it, and that was for $$\nabla^2 \frac{1}{|\vec r - \vec r'|} = -4\pi\delta(\vec r-\vec r')$$ which is not the same, at all
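For reference, a sketch of where the \(-4\pi\delta\) in that identity comes from (taking \(\vec r' = 0\) for brevity): away from the origin a direct computation gives zero, and the \(-4\pi\) is the flux of \(\vec\nabla (1/r) = -\hat r/r^2\) through any small sphere \(S_\varepsilon\) around the origin,

$$\nabla^2\frac{1}{r} = 0 \quad (r\neq 0), \qquad \oint_{S_\varepsilon} \vec\nabla\frac{1}{r}\cdot d\vec A = \oint_{S_\varepsilon}\left(-\frac{\hat r}{r^2}\right)\cdot \hat r\, dA = -\frac{4\pi\varepsilon^2}{\varepsilon^2} = -4\pi$$

so by the divergence theorem all of the singular behaviour is concentrated at the origin with total weight \(-4\pi\), which is what \(-4\pi\delta(\vec r)\) encodes.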

OpenStudy (irishboy123):

ty, the next one up is the curl of the same

OpenStudy (ljetibo):

I'll give it my best tomorrow evening, I have some unavoidable business in the morning

OpenStudy (ljetibo):

Hey, I've had some time and came back, but once I read the question again, one thing doesn't quite make sense to me: what exactly do you mean by doing this without choosing a coordinate system? You've got to pick one, otherwise how would you write the vector r without a linearly independent basis spanning the space? At best I can try to do the problem for some general curvilinear coordinate system, in some general case of an infinite linear combination of basis vectors. If you mean "how do I express this over the vector r again?" then just do the math, and then rotate things around until you get back the vector r. Keep in mind that you're going from a vector field to a scalar field, since we're talking about divergence here, so you shouldn't get a vector at the end, except maybe its magnitude or something.

OpenStudy (irishboy123):

thx @ljetibo, really appreciate your taking a look. the inspiration is possibly those few times when you can make a vector equation make sense in and of itself.

OpenStudy (irishboy123):

my follow up question is this: does a tensor choose a coordinate system??

OpenStudy (ljetibo):

I mean, that is quite a hard question to answer: yes and no. It's got a lot to do with linear algebra and representing linear maps as matrices. I can answer it, but fair warning, a wall of text is incoming. I'll do it if you commit to reading it. I'll have to dig up some linear algebra books too (if I can find them) to remind myself of some things.

OpenStudy (irishboy123):

@ljetibo as stated, appreciate your efforts here, but i won't put you to the chore of more just yet. this is a bugbear of mine so i'll be back on it eventually.

OpenStudy (ljetibo):

I don't mind it, I spread it out throughout the day when I have some time with no work to do. It's good repetition for me too, and a step away from boring everyday stuff. I just don't want to look like I'm desperately trying to be smart and avoiding simple answers for the sake of sounding smart. It's not that a tensor chooses the coordinate system; it's that the operators are defined over the representation of the vector/scalar field you're on (depending on the operator). There is a lot of linear algebra theory here, and I don't know how familiar you are with it. I'm not even sure that I know enough technical math English to explain it. I'd have to dust off a lot of old theoretical knowledge to do this properly, which isn't easy with google as your source to remember from, so take the following with a grain of salt.

If you've only ever thought of matrices as numbers neatly organized in a rectangle to shorten the writing, then you're missing something. The matrices physicists are most familiar with are the ones defined over vector spaces as linear mappings. Vector spaces are sets with addition and scalar multiplication defined on them, satisfying associativity, commutativity and distributivity, with a zero vector for addition and the field's 1 acting as the identity for scalar multiplication. You can say that the space is spanned by any set of vectors with which you can represent every other vector in the space. In general this set doesn't need to be finite and the spaces don't have to be finite-dimensional either, but I've only ever dealt with finite-dimensional spaces and there is a bit of a difference; what is said here is valid for finite-dimensional spaces with finite spanning sets. For example R^3 is spanned by {[1,2,3], [0,1,4], [4,3,1]}, because you can build any other vector in that space as some linear combination of those vectors. If the spanning set is also linearly independent, you call it a basis of the vector space; the usual basis of R^3 is {[1,0,0], [0,1,0], [0,0,1]}. These kinds of bases are called Hamel bases (there are other ones too, such as topological ones....). From this point on you have a lot of proofs and theorems describing various properties of bases. Important properties too; what follows really depends on them. The big ones are that every vector space has a basis and that any two bases of the same space are equipotent (meaning there is a bijection between them, i.e. they have the same cardinality).

Continuing on, we define a linear operator as a mapping f: U -> V between two spaces U, V over a field F that preserves additivity and homogeneity. E.g. the mirror map f(x,y) = (y,x) is a linear operator, and so is rotation about the origin (translation f(x,y) = (x+a, y+a), on the other hand, is only affine, not linear, since it doesn't send 0 to 0). If we're talking about polynomial spaces then differentiation and integration are linear operators too. From this point on you go and show a lot of properties of linear maps/operators: isomorphisms (a linear operator that is also a bijection), homomorphisms, the fact that a linear operator is uniquely given by its "action" on a basis of the space (!!!) and other really important stuff.

Ok, FINALLY, we get to matrices. For any bases A = (a_1, ..., a_n) and B = (b_1, ..., b_m) of spaces U and V respectively, a linear operator f: U -> V can be expanded as $$f(a_k) = \sum_{i=1}^{m}\alpha_{ik}b_{i}, \qquad k=1,\dots,n$$ and the scalars alpha_ik are what we call the matrix of the operator f in the pair of bases A and B (pay really close attention to the indices in there). Because we know that a linear operator is uniquely given by its action on a basis, we know that it is uniquely given by the scalars alpha_ik.
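For instance, a tiny concrete instance of that definition, using the mirror map from above: take U = V = R^2 with the standard basis A = B = (e_1, e_2) and f(x,y) = (y,x). Then

$$f(e_1) = e_2 = 0\cdot e_1 + 1\cdot e_2, \qquad f(e_2) = e_1 = 1\cdot e_1 + 0\cdot e_2 \;\Rightarrow\; [f]_{A,B} = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)$$

where the k-th column is just the coefficients \(\alpha_{ik}\) read off from the action on the k-th basis vector.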
We know this because we've defined the matrix to be the coefficients, in the linear combination over the basis vectors of B, of the result of "acting" on a basis vector of A. All we need to do to make this look like the matrix we're used to is to do it for the whole basis A and write those coefficients in a neat rectangular scheme. This leads us to a better definition of a matrix as a mapping f: D_mn -> F from the cartesian product $$D_{mn} = \{1, \dots, m\}\times\{1, \dots, n\}$$ into the field F. Then you go on and show all the basic properties matrices have: the transposed matrix, main diagonal, second diagonal, zero matrix, identity matrix... Dimension, rank and defect (nullity) are really properties of linear maps, but they carry over to matrices too. Then you define how to add matrices and multiply them by scalars from the field F, and you show that this is actually a vector space itself (see the top to remind yourself what a vector space is). Then we add in matrix multiplication and show that the matrix space with multiplication is an entire algebra over the field F.

I'm not sure if there's a single theorem dealing with this, but once you pick your bases you can write any operator in matrix form, and the correspondence between operators and matrices is itself a bijection (an isomorphism), because the "action" of a linear map/operator on the basis determines it uniquely. Then it likes to get weird with Jordan normal forms and matrices of matrices.... E.g. the standard basis of the 2x2 matrix space M22 is $$\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right), \left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right), \left(\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right), \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right)$$ because any matrix in that space is expressible as a linear combination of these matrices.

Differentiation over a polynomial space is another basic example. For polynomials up to order two: $$\text{basis} = \{1, x, x^2\} \\ \text{operator "action" on the basis:}\\ D(1) = 0\\ D(x) = 1\\ D(x^2) = 2x\\ D = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{array} \right) $$ So that, for example, a polynomial y = 1 + 2x + 3x^2, written as the column A = (1, 2, 3)^T, gives once you've multiplied them $$DA = \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{array} \right)\left(\begin{array}{c}1 \\ 2 \\ 3\end{array}\right) = \left(\begin{array}{c}2 \\ 6 \\ 0\end{array}\right) $$ or 2 + 6x, which is exactly dy/dx. As you can see, if you try to generalize this operator to *all* polynomials and express it in their basis, you would have to pick an infinite basis {1, x, x^2, ....., x^n, ....}; once you've picked it, the matrix representation of the operator in that basis would be an infinite upper triangular matrix whose only nonzero entries sit just above the diagonal (the entry in row n, column n+1 is n, since D(x^n) = n x^(n-1)). Now, I mention polynomial examples here often because that's precisely what the gradient, divergence, curl and Laplacian are - differentiation operators, just a bit different in definition.

Without the choice of a basis you can't express things as matrices. I'm really hoping that by this point you get the picture: there are no matrices without the choice of a basis. Tensors, being a "generalization" of matrices, can't be written down if you don't have a basis to represent them in. Do keep in mind that bases aren't exactly coordinate systems. However, the only difference I can think of between a basis and a coordinate system is that a coordinate system is an ordered set, while a basis is just a set of vectors.
Without an ordered set, [i,j,k] would be equivalent to [j,i,k], which it isn't. But then again these coordinate systems don't have to be like the coordinate systems you're probably imagining, in the sense you might imagine them in classical mechanics. I created a coordinate system "implicitly" when writing the polynomial above: I always referred to the basis in the particular order const-x-x^2, I wrote the coefficients in a column vector in the same order, and I read them off afterwards the exact same way. This ordering just defined a new coordinate system. So defining a coordinate system is just defining an ordered set of basis vectors.

Take another example to see how a basis and a coordinate system don't have to "jive" with what you might be thinking. In quantum mechanics you often express operators as matrices in the basis of their own eigenstates; the z-component of angular momentum, for example, is represented (for l = 1) as $$\langle lm'|L_z|lm\rangle = \hbar \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right)$$ This is the result of the fact that we've "ignored" the radial part of the wavefunction and expressed the wavefunction over spherical harmonics, and that is the basis in which we've written that particular matrix representation of the angular momentum operator. The bra and ket |lm> and <lm'| are shorthand notation for the basis vectors in $$\Psi = \sum_i\Psi_i Y_l^m = \Psi_1 Y_1^1 + \Psi_2 Y_1^0 + \Psi_3 Y_1^{-1}$$ and this is the "coordinate system" we used. I suppose you can say this is a spherical coordinate system, because those are the variables spherical harmonics are expressed in? But the truth is that "we've" constructed spherical harmonics as a complete orthogonal set of functions on the sphere precisely to have a basis adapted to the SO(3) symmetry group. Together with ordering them, which seems trivial given that they're labelled by l and m, this is a coordinate system; its "units" are spherical harmonics. That's why the multipole expansion works so neatly: you say this function on a sphere is 1 in the Y_1^1 direction, 2pi in the Y_2^{-1} direction and 5 in Y_0^0.

I don't know how much you picked up out of this, hopefully some. The point is that matrices are representations of linear operators, or of vectors of a vector space V over a field F, and without a basis to represent them in you're not going to be able to write the matrix. Tensors are generalized matrices, basically, therefore choosing a basis to represent them in is mandatory. I am unsure if that automatically implies the existence of a coordinate system, but as far as I can remember coordinatization is just defining an ordered basis set, which is not a big step once a basis has been selected, even if it doesn't quite jive with what you had in mind as a coordinate system (representing some actual things like meters etc...). The thing about tensors is that they're that much more than just matrices. Different kinds of tensors can correspond to the same matrix. Tensors have extra operations defined on them, such as tensor multiplication, contraction, raising or lowering an index and so on... A tensor of rank (1,1) is a matrix, a tensor of rank (0,0) is a scalar, a tensor of rank (1,0) is a vector. Without getting myself bogged down here, the short answer is that as far as the context of your question goes you can say tensors are like matrices and therefore this applies to them too. HOOOOOOLY, longer than I thought, but back to your original question I suppose?
What I'm asking is whether you're actually asking for a solution to that problem in a different coordinate system, spanned by some different basis vectors, in which case you can have a solution for general orthogonal curvilinear coordinates (q_1, q_2, q_3): $$\nabla\cdot \vec F= \frac{1}{h_1 h_2 h_3} \left[\frac{\partial}{\partial q_1}\left(h_2 h_3 F_1\right) + \frac{\partial}{\partial q_2}\left(h_3 h_1 F_2\right) + \frac{\partial}{\partial q_3}\left(h_1 h_2 F_3\right)\right]$$ where the h's are the Lamé coefficients. This is as general as it gets; I can show you how to get to this eq. if you want. However, you might want to rephrase the question to reflect what you're actually after. Even rephrased like this, you still have to deal with the pole at r=0 in there, and the more general you go the worse it is. Or are you asking for the result to be expressed just over r, while the actual calculations can be done in whatever coordinate system I want? In that case just pick the spherical coordinate system and do a similar thing as with the equation I posted, taking not lim r->r' but lim r->0. I wonder if they have a word limit.
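For instance, a sketch of that spherical route, with the usual Lamé coefficients \(h_r = 1\), \(h_\theta = r\), \(h_\varphi = r\sin\theta\) and \(\vec F = \hat r\) (so \(F_r = 1\), \(F_\theta = F_\varphi = 0\)):

$$\nabla\cdot\hat r = \frac{1}{r^2\sin\theta}\,\frac{\partial}{\partial r}\left(r^2\sin\theta\cdot 1\right) = \frac{2}{r}, \qquad r\neq 0$$

and, unlike \(\vec r/|\vec r|^3\), there is no hidden delta at the origin: the flux of \(\hat r\) through a small sphere of radius \(\varepsilon\) is \(4\pi\varepsilon^2\), which vanishes as \(\varepsilon \to 0\).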

OpenStudy (irishboy123):

holy pellet!!! i've been watching this, off and on: https://www.youtube.com/watch?v=uP8V-O8hncI the problem is that i can only follow it because he keeps switching back into a co-ordinate system, and i don't even know if that makes sense anymore. So go for Linear Algebra?

OpenStudy (irishboy123):

that was supposed to say Holy shιt!!!!!

OpenStudy (ljetibo):

In the video you posted he talks about minimizing a function that is the sum of two distances, mentioning that you don't have to set up a coordinate system to do that. And sure enough, you don't have to have a coordinate system in place to define a metric. But considering I've not watched any of his further videos and only scoped out the following one, I can see that he jumped right into defining a coordinate system, even if the title of the video was "covariant basis". In general you don't have to have a full coordinate system set up to represent things with; just having a linearly independent basis is enough. But like I said, I don't remember there being a large step from a basis to a coordinate system, it doesn't incur any drastic cost in my mind. No special new things are added, you don't lose anything afaik, so why not just go on and define one. But I think that in some grander scheme of things this is just an introductory video; maybe he'll circle back to the points he made in video 3 in his video 5, or 7. After all they are about 12 minutes long each, which isn't that much, you'd need 10 videos to fill a single college lecture. It's hard to say where exactly he's planning on going, but following his vids most certainly can't be harmful.

If you're thinking of linear algebra as "the better thing" you're probably wrong. I don't know how complex his lectures get or what preexisting knowledge he assumes his students have. Maybe they already had linear algebra and are supposed to be familiar with all the topics and objects, maybe not. If you see yourself lagging a lot - sure, why not go back to basics and construct rings, fields and vector spaces out of sets. But if he's not assuming any prior knowledge he'll have to do some lin. alg. at some point, and maybe you'll benefit from his approach more. I checked out the recommended book from beneath the video, and the comments and reviews sure didn't say you need some kind of amazing in-depth knowledge of lin. alg. or calculus to be able to understand it. Considering that the tensor is only defined in chapter 6 of a 300-something page book, I'd say there's quite a bit of "meat" still waiting for you in those videos. If you can keep up, why not keep going when you're already deep enough in. It's not like the knowledge will evaporate; at best it'll only get cemented in and cleared up if you ever do l.a. later on.

Just as a heads up though, I've only ever encountered lin. alg. as a heavy-duty, math-taxing, annoying line of lemmas, propositions, theorems, corollaries and proofs. Half the time you don't even know what you proved. It was also all very, very abstract to me at the time. If I hadn't been in the lectures I don't know how I'd fare, especially since there were never any examples, just proofs forever. I flunked it the first time and hated it, had to carry the subject into the following year. I spent a lot of time on it, and at some point something clicked and I aced it in the end. I don't know if there's a better book or a class or something than what I had. I think that if you can take it up as a class at a local uni/college, it would probably help you out a lot. Even if you're not enrolled, just go into a class and crash it. It's a really big subject after all.

OpenStudy (irishboy123):

cheers mate :-) i'm gonna keep going with that vid series, though i don't seem to have the time to spend on it. the basic idea is more than fascinating, but he keeps coming back inside a system. that is what is bugging me.
