Suppose \(X_1,\ldots,X_n\) is a random sample from a continuous distribution with pdf \(f_{\theta}(x)\). Why is the joint pdf of the order statistics \(X_{(1)},\ldots,X_{(n)}\) given by \[\Large f_{X_{(1)},\ldots,X_{(n)}}(x_1,\ldots,x_n)=n!\prod_{i=1}^nf_{\theta}(x_i)\] for \(x_1<\cdots<x_n\)? I'm not sure where the \(n!\) is coming from. Isn't the joint pdf of all \(n\) order statistics just a "re-ordering" of the random variables \(X_1,\ldots,X_n\)? And the joint distribution of \(X_1,\ldots,X_n\) is just \(\large\prod_{i=1}^n f_{\theta}(x_i)\).
This is news to me. There seems to be a proof of a similar theorem for pmfs on the second page of this pdf: http://www.math.ntu.edu.tw/~hchen/teaching/StatInference/notes/lecture37.pdf I'm trying to see how the arguments might be similar.
This would signal to me that all the permutations of the random variables are accounted for by this function. Each permutation has the same probability density, and there are \(n!\) possible permutations of \(X_1, \ldots, X_n\). Since the event \(\{X_{(1)}=x_1,\ldots,X_{(n)}=x_n\}\) (with \(x_1<\cdots<x_n\)) can arise from any of those \(n!\) equally likely orderings, you add up their densities, which amounts to multiplying \(\prod_{i=1}^n f_{\theta}(x_i)\) by \(n!\). I did some checking around because this is not quite my subject either, but correct me please if I have anything to improve on here. :) http://www.math.uah.edu/stat/sample/OrderStatistics2.html | #9 also gives a similar story, and might provide some additional information.
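One way to see the \(n!\) concretely: for an i.i.d. continuous sample, the region \(x_1<\cdots<x_n\) carries only \(1/n!\) of the total probability (each of the \(n!\) orderings is equally likely), so the density on that region must be scaled up by \(n!\) to integrate to 1. Here is a quick Monte Carlo sanity check of that symmetry claim (a Python sketch; the function name `sorted_fraction` is mine, and I use uniforms purely for convenience, since any continuous distribution works):

```python
import math
import random

def sorted_fraction(n, trials, seed=0):
    """Estimate P(X_1 < X_2 < ... < X_n) for an i.i.d. continuous sample.

    By symmetry among the n! equally likely orderings (ties have
    probability zero for a continuous distribution), this should be
    close to 1/n!.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        # Count the draws that happen to come out already sorted.
        if all(xs[i] < xs[i + 1] for i in range(n - 1)):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    n = 3
    print(sorted_fraction(n, 200_000), "vs", 1 / math.factorial(n))
```

For \(n=3\) the estimate should hover near \(1/6\approx 0.1667\), matching the idea that the ordered region is a \(1/n!\) slice of the sample space.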
Oh I see where they are going. Thanks =]