Find the distribution function of the random variable Y given by Y = X1 + X2, where X1 and X2 are independent and identically distributed with common density f(x) = e^{-x}, x > 0.
convolution
Have you done convolutions before?
I am not familiar with convolutions
The convolution is an operation that takes two functions and outputs a new function. Given two functions f(x) and g(x), their convolution is defined by
\[(f \ast g)(z) = \int_{-\infty}^\infty f(x)g(z-x)\,dx.\]
If two independent random variables have densities f(x) and g(x), the density of their sum is the convolution of those densities:
\[f_{X_1+X_2}(z) = (f\ast g)(z).\]
Using this, let's find the density of \[X_1+X_2\colon\]
\begin{eqnarray*}f_{X_1+X_2}(z) &=& (f_{X_1} \ast f_{X_2})(z)\\&=& \int_{-\infty}^{\infty} f_{X_1}(x)f_{X_2}(z-x)\,dx\\&=& \int_{-\infty}^\infty e^{-x}\,\mathbb{1}_{[0,\infty)}(x)\,e^{-(z-x)}\,\mathbb{1}_{[0,\infty)}(z-x)\,dx\\&=& \int_0^z e^{-x}e^{x-z}\,dx\\&=& \int_0^z e^{-z}\,dx = z e^{-z},\qquad z > 0.\end{eqnarray*}
(The indicator functions are nonzero only where x > 0 and z - x > 0, which is what turns the integral over the whole line into an integral from 0 to z.)
The distribution function is then (integrating by parts in the last step)
\begin{eqnarray*}F_{X_1+X_2}(z) &=& \int_{-\infty}^z f_{X_1+X_2}(x)\,dx\\&=& \int_0^z x e^{-x}\,dx = 1-e^{-z}(z+1),\qquad z > 0.\end{eqnarray*}
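If you want a quick numerical sanity check of both formulas, here is a minimal Python sketch (this is my addition, not part of the derivation; it assumes NumPy is available, and the sample size and test points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent Exponential(1) samples, i.e. density e^{-x} for x > 0.
x1 = rng.exponential(scale=1.0, size=n)
x2 = rng.exponential(scale=1.0, size=n)
y = x1 + x2

# Compare the empirical CDF of Y with the closed form F(z) = 1 - e^{-z}(z + 1).
for z in [0.5, 1.0, 2.0, 4.0]:
    empirical = np.mean(y <= z)
    exact = 1.0 - np.exp(-z) * (z + 1.0)
    print(f"z = {z:>4}: empirical {empirical:.4f}  exact {exact:.4f}")
```

The two columns should agree to a few decimal places, which is a good sign the convolution was carried out correctly.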
I think I understand what you're saying. I've not studied this before, so it may take some time to sink in. Thank you.
Out of curiosity, what if we were finding the product of the variables instead of the sum? I don't need to find a solution to that problem, but would the process be similar in any way?
First we need the joint density of the random vector \[(X, Y).\] If you have two independent random variables \[X, Y\] with densities \[f_X, f_Y\] respectively, the joint density of the random vector \[(X, Y)\] is simply the product of the densities:
\[f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y).\]
If the random variables are not independent, you probably know the joint density already (if you didn't, you wouldn't know anything about the dependency between the random variables and the problem would be unsolvable). Let
\[U = X, \qquad V = XY.\]
The first step is finding the density of the random vector
\[(U, V) = (X, XY).\]
We define the transformation
\[\Upsilon(x, y) = (u, v) = (x, xy).\]
The inverse of this transformation is
\[\Upsilon^{-1}(u, v) = (x, y) = \left(u, \frac{v}{u}\right),\]
the Jacobian of the inverse is
\[J_{\Upsilon^{-1}}(u, v) = \begin{pmatrix}1 & 0 \\ -\frac{v}{u^2} & \frac{1}{u} \end{pmatrix},\]
and the absolute value of the determinant of this Jacobian is
\[\left|\det J_{\Upsilon^{-1}}(u, v)\right| = \left|\frac{1}{u}\right|.\]
By the change-of-variables theorem (a standard result of multivariate calculus):
\begin{eqnarray*}\int_{-\infty}^\infty\int_{-\infty}^\infty f_{X,Y}(x, y)\,dx\,dy &=& \int_{-\infty}^\infty\int_{-\infty}^\infty f_{X,Y}\left(\Upsilon^{-1}(u,v)\right)\left|\det J_{\Upsilon^{-1}}(u,v)\right|\,du\,dv,\end{eqnarray*}
which means the second integrand must be the density of \[(U, V)\] (it is nonnegative and integrates to 1 over the plane, so it satisfies the definition), and this yields
\[f_{U, V}(u, v) = f_{X,Y}\left(u, \frac{v}{u}\right)\left|\frac{1}{u}\right|.\]
Finally, by marginalizing the joint density we can find the density of XY:
\[f_V(v) = f_{XY}(v) = \int_{-\infty}^\infty f_{X,Y}\left(u, \frac{v}{u}\right)\left|\frac{1}{u}\right|\,du.\]
Notice that this method works for other transformations too (such as X/Y): you just change the transformation, find its inverse, compute the Jacobian of that inverse, and marginalize the joint density.
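This goes beyond your original question, but the marginalization formula is easy to check numerically for the exponential case. Here is a minimal Python sketch (my addition, assuming NumPy and SciPy are available; with Exponential(1) marginals the formula specializes to \[f_V(v) = \int_0^\infty e^{-u} e^{-v/u} \frac{1}{u}\,du\] since both densities vanish for negative arguments):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 1_000_000

# Monte Carlo: V = X * Y with X, Y independent Exponential(1).
v_samples = rng.exponential(size=n) * rng.exponential(size=n)

# Density of V from the marginalization formula:
# f_V(v) = int_0^inf e^{-u} e^{-v/u} / u du
def f_v(v):
    integrand = lambda u: np.exp(-u) * np.exp(-v / u) / u
    value, _ = quad(integrand, 0, np.inf)
    return value

# Compare P(V <= v) from simulation against numerical integration of f_V.
for v in [0.25, 0.5, 1.0, 2.0]:
    empirical = np.mean(v_samples <= v)
    from_formula, _ = quad(f_v, 0, v)
    print(f"v = {v:>4}: empirical {empirical:.4f}  from formula {from_formula:.4f}")
```

Agreement between the two columns confirms that the Jacobian factor \[\left|\frac{1}{u}\right|\] is doing its job; dropping it from the integrand would make the "from formula" column visibly wrong.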