## Interview Question

Quantitative Research Interview

# Drawing a pair (x, y) from a joint Gaussian distribution with zero covariance, and knowing the standard deviations of x and y and that z = x + y, what is your best guess for x?

23

z * sigma_x^2 / (sigma_x^2 + sigma_y^2). This is because X and Z are jointly normal and Cov(X, Z) = Var(X). Therefore the correlation coefficient is rho = sigma_x / sigma_z, and since E(X|Z) = rho * (sigma_x / sigma_z) * Z, using the fact that the variance of a sum of independent R.V.s is the sum of the variances (sigma_z^2 = sigma_x^2 + sigma_y^2) gives the answer!
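A quick Monte Carlo sanity check of this formula (a minimal sketch; the values of sigma_x, sigma_y, the target z, and the acceptance band are arbitrary choices):

```python
# Monte Carlo check of E[X | Z=z] = z * sigma_x^2 / (sigma_x^2 + sigma_y^2).
# sigma_x = 1.0, sigma_y = 2.0, z_target and band are arbitrary example values.
import random

random.seed(0)
sigma_x, sigma_y = 1.0, 2.0
n = 200_000

# Draw independent (x, y) pairs and keep the x's whose sum z = x + y lands near z_target.
z_target, band = 1.5, 0.05
kept = []
for _ in range(n):
    x = random.gauss(0.0, sigma_x)
    y = random.gauss(0.0, sigma_y)
    if abs((x + y) - z_target) < band:
        kept.append(x)

estimate = sum(kept) / len(kept)          # empirical E[X | Z ~ z_target]
theory = z_target * sigma_x**2 / (sigma_x**2 + sigma_y**2)
print(estimate, theory)
```

With these values the theoretical answer is 1.5 * 1/5 = 0.3, and the conditional sample mean should land close to it.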

Anonymous on

2

[ z * sigma_y^(-2) ] / [ sigma_x^(-2) + sigma_y^(-2) ]. From a signal-processing point of view, x is the signal, y is the noise, and z is the observation. X has the prior distribution X ~ N(0, sigma_x^2), the noise Y has distribution Y ~ N(0, sigma_y^2), and we observe Z = z. The question asks for the MMSE estimate of X given Z, i.e., E(X|Z). Using Bayes' theorem, or the Gauss-Markov theorem, one can show that: E(X|Z) = [ z * sigma_y^(-2) + 0 * sigma_x^(-2) ] / [ sigma_x^(-2) + sigma_y^(-2) ]. Comments: 1. Problems of this kind are very common, so keep in mind that in the Gaussian case the best estimate of X is a weighted linear combination of the maximum-likelihood estimate (z in this problem) and the prior mean (0 in this problem), where the weights are the inverses of the variances. 2. In the multi-dimensional case, where x, y, z are vectors, a similar rule applies; see the Gauss-Markov theorem. 3. The intuition here is that the larger the variance of the noise y, the less trust we assign to the ML estimate, whose weight is sigma_y^(-2); correspondingly, the more trust we put on the prior of X.
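Note that this inverse-variance (precision-weighted) form is algebraically identical to the z * sigma_x^2 / (sigma_x^2 + sigma_y^2) answer above: multiply numerator and denominator by sigma_x^2 * sigma_y^2. A one-liner check (example values are arbitrary):

```python
# The precision-weighted form equals z * sigma_x^2 / (sigma_x^2 + sigma_y^2):
# multiplying numerator and denominator by sigma_x^2 * sigma_y^2 converts one into the other.
z, sigma_x, sigma_y = 2.0, 0.7, 1.3  # arbitrary example values

precision_form = (z * sigma_y**-2) / (sigma_x**-2 + sigma_y**-2)
variance_form = z * sigma_x**2 / (sigma_x**2 + sigma_y**2)
print(abs(precision_form - variance_form) < 1e-12)  # prints True
```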

Y.Wang on

0

Z sigma_x^2/ (sigma_x^2 + sigma_y^2), similar to CAPM

Anonymous on

0

If you don't know the above theorems, you can use good old Bayes: P(X|Z) = P(Z|X)P(X)/P(Z). Maximize over x by setting the derivative to zero, since you have the pdfs of X and Z. But it's really messy and I don't want to do it.
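The "messy" maximization can also be done numerically instead of by hand: maximize the log posterior log P(Z=z|X=x) + log P(X=x) over a grid of x values and compare with the closed form (a sketch with arbitrary example values):

```python
# Numerical MAP estimate of X given Z = z via grid search over the log posterior.
# z, sigma_x, sigma_y are arbitrary example values.
z, sigma_x, sigma_y = 1.5, 1.0, 2.0

def log_posterior(x):
    # log N(x; 0, sigma_x^2) + log N(z - x; 0, sigma_y^2), dropping additive constants
    return -x**2 / (2 * sigma_x**2) - (z - x)**2 / (2 * sigma_y**2)

grid = [i / 10000 for i in range(-40000, 40001)]  # x in [-4, 4], step 1e-4
x_map = max(grid, key=log_posterior)

closed_form = z * sigma_x**2 / (sigma_x**2 + sigma_y**2)
print(x_map, closed_form)  # the grid maximizer matches the closed-form answer
```

For a Gaussian posterior the MAP estimate coincides with the conditional mean, so this recovers E(X|Z=z) as well.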

Anonymous on

0

The answer is 0.5*EX - 0.5*EY + 0.5*z (when EX and EY are not equal to zero). Note this assumes sigma_x = sigma_y; in general E(X|Z=z) = EX + [sigma_x^2 / (sigma_x^2 + sigma_y^2)] * (z - EX - EY).

Anonymous on



3

z/2. Think about the two-dimensional graph of the joint density of (x, y). The condition x + y = z (here z is fixed) is a vertical plane, and its intersection with the density surface is proportional to the conditional density. For this intersection curve, the highest point has x-coordinate z/2. (This holds only when sigma_x = sigma_y; with unequal variances the maximum is at x = z * sigma_x^2 / (sigma_x^2 + sigma_y^2).)

zpeace on