AskSia

Oct 7, 2024
Q. 1. Consider the observations $x[n] = A + w[n]$, $n = 0, 1, \ldots, N-1$, where $A$ is an unknown constant and $w[n]$ is a sequence of independent, identically distributed (IID) Gaussian random variables having zero mean and known variance $\sigma^2$. (a) To estimate the unknown parameter $\theta_1 = A$, the estimator $\hat{\theta}_1 = \sum_{n=0}^{N-1} a_n x[n]$ is proposed. Find $a_0, \ldots, a_{N-1}$ so that the estimator is unbiased and its variance is minimized. Show all the detailed steps involved. (b) Using the optimal values of the $a_n$'s obtained in part (a), find the variance of $\hat{\theta}_1$, denoted $\operatorname{var}(\hat{\theta}_1)$. What happens to $\operatorname{var}(\hat{\theta}_1)$ as $N \to \infty$? (c) To estimate the unknown parameter $\theta_2 = A^2$, the estimator $\hat{\theta}_2 = \left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right)^2$ is proposed. Is $\hat{\theta}_2$ unbiased? Is $\hat{\theta}_2$ asymptotically unbiased?
Solution by Steps
step 1
To ensure that the estimator $\hat{\theta}_1 = \sum_{n=0}^{N-1} a_n x[n]$ is unbiased, we require $E[\hat{\theta}_1] = A$. Since $x[n] = A + w[n]$, we have
$$E[\hat{\theta}_1] = \sum_{n=0}^{N-1} a_n E[x[n]] = \sum_{n=0}^{N-1} a_n \left(A + E[w[n]]\right) = A \sum_{n=0}^{N-1} a_n.$$
Thus, we need $\sum_{n=0}^{N-1} a_n = 1$.
step 2
To minimize the variance of $\hat{\theta}_1$, we compute $\operatorname{var}(\hat{\theta}_1)$. Under the constraint $\sum_{n=0}^{N-1} a_n = 1$, $\hat{\theta}_1 - A = \sum_{n=0}^{N-1} a_n w[n]$, and since the $w[n]$ are independent,
$$\operatorname{var}(\hat{\theta}_1) = \operatorname{var}\left(\sum_{n=0}^{N-1} a_n w[n]\right) = \sum_{n=0}^{N-1} a_n^2 \operatorname{var}(w[n]) = \sigma^2 \sum_{n=0}^{N-1} a_n^2.$$
We therefore minimize $\sum_{n=0}^{N-1} a_n^2$ subject to $\sum_{n=0}^{N-1} a_n = 1$.
step 3
Using the method of Lagrange multipliers, we set up the function
$$L(a_0, a_1, \ldots, a_{N-1}, \lambda) = \sum_{n=0}^{N-1} a_n^2 - \lambda \left(\sum_{n=0}^{N-1} a_n - 1\right).$$
Taking partial derivatives and setting them to zero gives
$$\frac{\partial L}{\partial a_n} = 2a_n - \lambda = 0 \implies a_n = \frac{\lambda}{2}.$$
Substituting into the constraint $\sum_{n=0}^{N-1} a_n = 1$ leads to $N \cdot \frac{\lambda}{2} = 1 \implies \lambda = \frac{2}{N}$, and thus $a_n = \frac{1}{N}$.
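As a quick numerical sketch of this step (the dimension `N = 5` and the random weight vectors are illustrative, not part of the problem): among all weight vectors summing to 1, the equal weights $a_n = 1/N$ should give the smallest value of $\sum a_n^2$.

```python
import numpy as np

# Illustrative check: among weights summing to 1, the equal weights
# a_n = 1/N minimize sum(a_n^2), and hence the estimator variance.
N = 5
equal = np.full(N, 1.0 / N)

rng = np.random.default_rng(2)
for _ in range(1000):
    a = rng.standard_normal(N)
    a /= a.sum()                     # enforce the unbiasedness constraint sum(a_n) = 1
    assert equal @ equal <= a @ a + 1e-12

print(equal @ equal)                 # = 1/N = 0.2
```

The global optimality also follows directly from the Cauchy-Schwarz inequality, $1 = (\sum a_n)^2 \le N \sum a_n^2$, so the random search is merely a sanity check on the Lagrange-multiplier result.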
step 4
Substituting $a_n = \frac{1}{N}$ into the variance expression:
$$\operatorname{var}(\hat{\theta}_1) = \sigma^2 \sum_{n=0}^{N-1} \left(\frac{1}{N}\right)^2 = \sigma^2 \cdot \frac{N}{N^2} = \frac{\sigma^2}{N}.$$
As $N \to \infty$, $\operatorname{var}(\hat{\theta}_1) \to 0$.
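A short Monte Carlo sketch can verify parts (a) and (b); the values $A = 3$, $\sigma = 2$, $N = 50$ below are arbitrary illustrative choices, not given in the problem.

```python
import numpy as np

# Monte Carlo sketch (parameters are illustrative): verify that the sample-mean
# estimator theta1_hat = (1/N) * sum(x[n]) is unbiased with variance sigma^2 / N.
rng = np.random.default_rng(0)
A, sigma, N, trials = 3.0, 2.0, 50, 200_000

# Each row is one realization of x[n] = A + w[n], n = 0, ..., N-1.
x = A + sigma * rng.standard_normal((trials, N))
theta1_hat = x.mean(axis=1)          # a_n = 1/N for every n

print(theta1_hat.mean())             # close to A = 3 (unbiased)
print(theta1_hat.var())              # close to sigma^2 / N = 4/50 = 0.08
```

Both empirical values should match the derivation to within Monte Carlo noise, which shrinks as `trials` grows.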
step 5
For $\hat{\theta}_2 = \left(\frac{1}{N} \sum_{n=0}^{N-1} x[n]\right)^2$, we find its expected value:
$$E[\hat{\theta}_2] = E\left[\left(\frac{1}{N} \sum_{n=0}^{N-1} (A + w[n])\right)^2\right] = E\left[\left(A + \frac{1}{N} \sum_{n=0}^{N-1} w[n]\right)^2\right].$$
Writing $\bar{w} = \frac{1}{N} \sum_{n=0}^{N-1} w[n]$, which has zero mean and variance $\frac{\sigma^2}{N}$, expanding the square gives
$$E[\hat{\theta}_2] = A^2 + 2A\,E[\bar{w}] + E[\bar{w}^2] = A^2 + \frac{\sigma^2}{N}.$$
Since $E[\hat{\theta}_2] \neq A^2$ for any finite $N$, $\hat{\theta}_2$ is biased.
step 6
To check for asymptotic unbiasedness, we consider the limit as $N \to \infty$:
$$\lim_{N \to \infty} E[\hat{\theta}_2] = A^2 + \lim_{N \to \infty} \frac{\sigma^2}{N} = A^2.$$
Thus, $\hat{\theta}_2$ is asymptotically unbiased.
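The vanishing bias can also be seen numerically. In the sketch below (again with illustrative values $A = 3$, $\sigma = 2$), the sample mean of $N$ IID $\mathcal{N}(A, \sigma^2)$ observations is itself Gaussian with mean $A$ and variance $\sigma^2/N$, so we draw it directly rather than simulating full records.

```python
import numpy as np

# Sketch (parameters are illustrative): the bias of theta2_hat = (sample mean)^2
# equals sigma^2 / N, so it shrinks to zero as N grows (asymptotic unbiasedness).
rng = np.random.default_rng(1)
A, sigma, trials = 3.0, 2.0, 1_000_000

biases = []
for N in (10, 100, 1000):
    # Sample mean of N IID N(A, sigma^2) draws is N(A, sigma^2 / N).
    xbar = rng.normal(A, sigma / np.sqrt(N), trials)
    theta2_hat = xbar ** 2
    biases.append(theta2_hat.mean() - A**2)   # theoretical bias: sigma^2 / N

print(biases)   # roughly [0.4, 0.04, 0.004]
```

Each empirical bias tracks $\sigma^2/N = 4/N$, decaying by a factor of 10 as $N$ does.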
Answer
(a) $a_n = \frac{1}{N}$ for $n = 0, 1, \ldots, N-1$ (the sample mean); (b) $\operatorname{var}(\hat{\theta}_1) = \frac{\sigma^2}{N}$, which approaches 0 as $N \to \infty$; (c) $\hat{\theta}_2$ is biased but asymptotically unbiased.
Key Concept
An estimator is unbiased when its expected value equals the true parameter; among linear unbiased estimators of a constant in IID noise, the sample mean achieves the minimum variance.
Explanation
The solution demonstrates how to derive unbiased estimators and analyze their variances, highlighting the importance of asymptotic properties in estimation.