# Statistics Indian Statistical Service Part 3: Question 2 Solution

**2.** **(a) (i) Describe a ratio estimator. Obtain the bias of a ratio estimator.**

In sampling theory, a ratio estimator estimates a population parameter for a study variable y with the help of an auxiliary variable x that is correlated with y and whose population mean (or total) is known. From a sample of n paired observations, the sample ratio R = ybar / xbar is formed, and the population mean of y is estimated as Y_R = (ybar / xbar) * Xbar, where Xbar is the known population mean of x.
For example, suppose we want to estimate the average income of all households in a city, where household size x is known for every household from a census but income y is observed only in the sample. Because income and household size are correlated, multiplying the sample income-to-size ratio by the known mean household size typically gives a more precise estimate than the sample mean of income alone.
The bias of a ratio estimator measures how far the estimate is, on average over repeated samples, from the true value of the population parameter. Because the estimator is a ratio of two random variables, it is in general biased. Writing R = E(X) / E(Y) for the true ratio, a first-order Taylor (delta-method) expansion of X/Y about (E(X), E(Y)) gives the approximate bias:
Bias = E(X/Y) - R ≈ (R * Var(Y) - Cov(X, Y)) / (E(Y))^2
where X and Y are the numerator and denominator random variables, respectively, and E denotes the expected value operator. For the classical ratio estimator of a mean, Y_R = (ybar / xbar) * Xbar, the corresponding result under simple random sampling is
Bias(Y_R) ≈ ((1 - f) / (n * Xbar)) * (R * S_x^2 - ρ * S_x * S_y), with f = n/N.
The bias is of order 1/n, so it becomes negligible in large samples, but it should be assessed whenever the sample is small.
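The behaviour of the bias can be checked by simulation. The following sketch uses a made-up population in which y is roughly proportional to x (the setting where the ratio estimator works well) and averages the estimator over many repeated samples; all numbers here are illustrative assumptions.

```python
import random

random.seed(42)

def ratio_estimate(pairs, X_bar):
    """Classical ratio estimate of the population mean of y: (ybar/xbar)*Xbar."""
    n = len(pairs)
    y_bar = sum(y for _, y in pairs) / n
    x_bar = sum(x for x, _ in pairs) / n
    return (y_bar / x_bar) * X_bar

# Hypothetical population in which y is roughly proportional to x.
population = [(x, 2.0 * x + random.gauss(0, 3))
              for x in (random.uniform(10, 30) for _ in range(5000))]
X_bar = sum(x for x, _ in population) / len(population)
Y_bar = sum(y for _, y in population) / len(population)

# Average the estimator over many repeated samples: the bias is the gap
# between this average and the true mean, and it shrinks as n grows.
reps, n = 2000, 20
avg = sum(ratio_estimate(random.sample(population, n), X_bar)
          for _ in range(reps)) / reps
print(f"true mean {Y_bar:.3f}, avg ratio estimate {avg:.3f}, "
      f"empirical bias {avg - Y_bar:+.3f}")
```

Because y is nearly proportional to x here, the empirical bias comes out small, consistent with the O(1/n) result above.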
**(ii) Under standard notations it is given:**
**N = 10,000, n = 100, Xbar = 50, Σy = 4500, xbar = 45.5, s_y² = 25, s_x² = 16, and the sample correlation coefficient between x and y, r = 0.8.**
**Estimate the population mean using the ratio estimator and its variance.**
The ratio estimator of the population mean of y uses the known population mean Xbar of the auxiliary variable x:
Y_R = (ybar / xbar) * Xbar
where ybar = Σy / n = 4500 / 100 = 45 is the sample mean of y, xbar = 45.5 is the sample mean of x, and Xbar = 50.
The sample ratio is
R = ybar / xbar = 45 / 45.5 = 0.989
so the ratio estimate of the population mean is
Y_R = 0.989 * 50 = 49.45.
So the estimated population mean is 49.45.
To estimate the variance of the ratio estimator, we use the formula:
Var(Y_R) = ((1 - f) / n) * (s_y^2 + R^2 * s_x^2 - 2 * R * r * s_x * s_y), where f = n / N.
Substituting f = 100/10,000 = 0.01, s_y = 5, s_x = 4, and r = 0.8, we get:
Var(Y_R) = (0.99 / 100) * (25 + 0.978 * 16 - 2 * 0.989 * 0.8 * 4 * 5)
= 0.0099 * 9.00
≈ 0.089.
So the estimated variance of the ratio estimator is about 0.089.
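The arithmetic can be checked directly. The reading of the partly garbled givens (in particular Σy = 4500 and s_x² = 16) is an assumption stated in the comments:

```python
# Numeric check of the ratio estimate and its variance; the reading of the
# garbled givens (in particular Σy = 4500 and s_x^2 = 16) is an assumption.
N, n = 10_000, 100
X_bar = 50.0                    # known population mean of x
y_bar = 4500 / n                # sample mean of y
x_bar = 45.5                    # sample mean of x
s_y2, s_x2, r = 25.0, 16.0, 0.8
s_y, s_x = s_y2 ** 0.5, s_x2 ** 0.5

R_hat = y_bar / x_bar                              # sample ratio
Y_R = R_hat * X_bar                                # ratio estimate of the mean
f = n / N                                          # sampling fraction
var_Y_R = ((1 - f) / n) * (s_y2 + R_hat ** 2 * s_x2
                           - 2 * R_hat * r * s_x * s_y)
print(f"R_hat = {R_hat:.4f}, estimate = {Y_R:.2f}, variance = {var_Y_R:.4f}")
```

Under this reading the estimate is about 49.45 with estimated variance about 0.089.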

**2.(b) Consider the multiple linear regression model y = Xβ + u, where**

**y : n × 1, X : n × k, β : k × 1, u : n × 1.**

**(i) How are the properties of the ordinary least squares estimator affected when X and u are correlated and plim((1/n) X'u) is also not equal to zero?**

When X and u are correlated, the ordinary least squares estimator β_OLS = (X'X)^(-1) X'y is biased, because the exogeneity assumption E(u | X) = 0 is violated: substituting y = Xβ + u gives β_OLS = β + (X'X)^(-1) X'u, and the second term no longer has zero expectation.

Moreover, writing β_OLS = β + ((1/n) X'X)^(-1) ((1/n) X'u), if plim((1/n) X'u) ≠ 0 then plim β_OLS = β + Q^(-1) plim((1/n) X'u) ≠ β, where Q = plim((1/n) X'X). The discrepancy therefore does not vanish as the sample grows: the estimator is also inconsistent.

In summary, when X and u are correlated and plim((1/n) X'u) ≠ 0, the ordinary least squares estimator is biased and inconsistent. To address this issue, one can use instrumental variables (IV) regression or other methods that account for endogeneity.
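The inconsistency is easy to see in a small simulation. This is a sketch with made-up data: `w` is an unobserved confounder shared by x and u, and `z` is an instrument; the data-generating process and coefficient values are assumptions for illustration.

```python
import random

random.seed(0)

# Hypothetical simulation: x shares the unobserved confounder w with the
# error term, so Cov(x, u) != 0 and OLS overestimates the true slope beta = 1.
n, beta = 20_000, 1.0
w = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
z = [random.gauss(0, 1) for _ in range(n)]   # instrument: drives x, not u
x = [zi + wi + random.gauss(0, 1) for zi, wi in zip(z, w)]
u = [wi + random.gauss(0, 1) for wi in w]    # error correlated with x
y = [beta * xi + ui for xi, ui in zip(x, u)]

def slope(a, b):
    """No-intercept OLS slope of b on a (all variables are mean zero)."""
    return sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)

beta_ols = slope(x, y)                 # converges to beta + Cov(x,u)/Var(x)
beta_iv = slope(z, y) / slope(z, x)    # simple IV: Cov(z,y)/Cov(z,x)
print(f"true = {beta}, OLS = {beta_ols:.3f}, IV = {beta_iv:.3f}")
```

Here Cov(x, u) = Var(w) = 1 and Var(x) = 3, so OLS converges to 1 + 1/3 ≈ 1.33 no matter how large n is, while the IV estimate converges to the true value 1.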

**(ii) Derive the instrumental variables (IV) estimator of β when the number of instrumental variables is greater than the number of explanatory variables. Give its interpretation as a two-stage least squares estimator.**

The instrumental variables (IV) estimator is used when the ordinary least squares (OLS) estimator is inconsistent due to endogeneity, which occurs when the explanatory variables are correlated with the error term. It relies on an n × l matrix of instruments Z that is correlated with X but satisfies plim((1/n) Z'u) = 0.

When the number of instruments exceeds the number of explanatory variables (l > k), the moment conditions Z'(y - Xβ) = 0 give l equations in k unknowns, so β is overidentified and the conditions cannot all be solved exactly. Instead, X is projected onto the column space of Z using the projection matrix P_Z = Z(Z'Z)^(-1) Z', giving the IV estimator

β_IV = (X' P_Z X)^(-1) X' P_Z y.

This estimator has a natural interpretation as two-stage least squares (2SLS):

Stage 1: Regress each explanatory variable (each column of X) on all the instruments by OLS, and form the fitted values Xhat = P_Z X = Z(Z'Z)^(-1) Z'X.

Stage 2: Regress y on the fitted values Xhat by OLS:

β_2SLS = (Xhat'Xhat)^(-1) Xhat'y = (X' P_Z X)^(-1) X' P_Z y = β_IV,

where the second equality uses the fact that P_Z is symmetric and idempotent (P_Z' P_Z = P_Z).

The interpretation is that Stage 1 keeps only the variation in X that is explained by the instruments (variation that is, by assumption, uncorrelated with u), and Stage 2 estimates β using only that exogenous variation. The resulting estimator is therefore consistent in the presence of endogeneity, whereas OLS is not.
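The two stages can be sketched in code for the overidentified case. Everything here is a made-up example: one endogenous regressor x, two instruments z1 and z2 (so l = 2 > k = 1), and mean-zero data so no intercept is needed.

```python
import random

random.seed(1)

# Hypothetical data: one endogenous regressor x and two instruments z1, z2
# (l = 2 instruments > k = 1 regressor, the overidentified case).
n, beta = 20_000, 2.0
w  = [random.gauss(0, 1) for _ in range(n)]     # unobserved confounder
z1 = [random.gauss(0, 1) for _ in range(n)]
z2 = [random.gauss(0, 1) for _ in range(n)]
x  = [a + 0.5 * b + c + random.gauss(0, 1) for a, b, c in zip(z1, z2, w)]
u  = [c + random.gauss(0, 1) for c in w]        # error correlated with x
y  = [beta * xi + ui for xi, ui in zip(x, u)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Stage 1: regress x on (z1, z2) and form fitted values x_hat = P_Z x,
# solving the 2x2 normal equations (Z'Z) g = Z'x by hand.
a11, a12, a22 = dot(z1, z1), dot(z1, z2), dot(z2, z2)
b1, b2 = dot(z1, x), dot(z2, x)
det = a11 * a22 - a12 * a12
g1 = (a22 * b1 - a12 * b2) / det
g2 = (a11 * b2 - a12 * b1) / det
x_hat = [g1 * a + g2 * b for a, b in zip(z1, z2)]

# Stage 2: regress y on x_hat; this equals (X'P_Z X)^(-1) X'P_Z y.
beta_2sls = dot(x_hat, y) / dot(x_hat, x_hat)
beta_ols = dot(x, y) / dot(x, x)
print(f"true = {beta}, OLS = {beta_ols:.3f}, 2SLS = {beta_2sls:.3f}")
```

OLS is pulled away from the true coefficient by Cov(x, u)/Var(x), while 2SLS, using only the instrument-driven variation in x, recovers it.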

**2.(c) (i) Define exponentially weighted moving average (EWMA) for smoothing a time series and how can we use it for adaptive forecasting.**

Exponentially Weighted Moving Average (EWMA) is a popular time series forecasting technique that uses a weighted average of past observations to predict future values. The EWMA assigns exponentially decreasing weights to past observations, with more recent observations given more weight than older ones. This means that the EWMA places greater emphasis on more recent observations, making it more responsive to changes in the time series.

The formula for the EWMA is as follows:

F(t+1) = α * Y(t) + (1-α) * F(t)

where F(t+1) is the forecast for the next time period, Y(t) is the observation for the current time period, F(t) is the forecast for the current time period, and α is the smoothing parameter, which determines the weight given to the current observation. The value of α lies between 0 and 1, with larger values placing more weight on recent observations.

To use the EWMA for adaptive forecasting, we can update the value of the smoothing parameter based on the accuracy of the forecast. One way to do this is to use a measure of forecast error, such as the mean squared error (MSE), to adjust the value of α. If the forecast error is high, we can increase the value of α to place more weight on recent observations, while if the forecast error is low, we can decrease the value of α to place less weight on recent observations.

This adaptive approach to forecasting is known as adaptive exponential smoothing; a classical version, the Trigg-Leach method, adjusts α using a tracking signal computed from recent forecast errors. By adjusting the smoothing parameter based on the accuracy of the forecast, adaptive EWMA can provide more responsive forecasts, particularly in dynamic and volatile environments.
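The EWMA recursion can be sketched directly. The short series and the two α values below are made-up for illustration; the point is that a larger α reacts faster after a level shift, which is exactly the behaviour adaptive schemes exploit by raising α when forecast errors grow.

```python
# Minimal EWMA one-step-ahead forecasting sketch; series and alphas are
# illustrative assumptions, not data from the text.

def ewma_forecasts(series, alpha):
    """One-step-ahead EWMA forecasts: F(t+1) = alpha*Y(t) + (1-alpha)*F(t)."""
    f = series[0]               # initialise the forecast at the first value
    forecasts = [f]
    for y in series[:-1]:
        f = alpha * y + (1 - alpha) * f
        forecasts.append(f)
    return forecasts

series = [10, 12, 11, 13, 30, 32, 31, 33]   # level shift halfway through
for alpha in (0.2, 0.8):
    f = ewma_forecasts(series, alpha)
    mse = sum((y - fi) ** 2 for y, fi in zip(series, f)) / len(series)
    print(f"alpha={alpha}: last forecast {f[-1]:.2f}, MSE {mse:.2f}")
```

After the jump from the low to the high regime, the α = 0.8 forecasts catch up within a step or two while the α = 0.2 forecasts lag for several periods, so the larger α yields the lower MSE on this series.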

**(ii) When do we use the triple exponential smoothing of Holt and Winters? Describe it for multiplicative seasonality. **

Triple Exponential Smoothing, also known as the Holt-Winters method, is a popular forecasting technique used to predict time series data with trends and seasonality. It is an extension of Double Exponential Smoothing, which includes an additional smoothing parameter to capture seasonality in the data.

Triple Exponential Smoothing is used when the time series data exhibits trends and seasonality that vary over time. It is particularly useful when the seasonal component of the time series is not constant, but changes over time.

Triple Exponential Smoothing involves three smoothing parameters: α, β, and γ, which are used to smooth the level, trend, and seasonality components of the time series, respectively. With seasonal period s (for example, s = 12 for monthly data), the forecast for m periods ahead under multiplicative seasonality is:

F(t+m) = (L(t) + m * T(t)) * S(t - s + m)

where F(t+m) is the forecast for m periods ahead (1 ≤ m ≤ s), L(t) is the smoothed level component at time t, T(t) is the smoothed trend component at time t, and S(t - s + m) is the most recent smoothed seasonal factor for the season being forecast.

The smoothing equations for the level, trend, and seasonal components are as follows:

L(t) = α * Y(t) / S(t-s) + (1-α) * (L(t-1) + T(t-1))

T(t) = β * (L(t) - L(t-1)) + (1-β) * T(t-1)

S(t) = γ * Y(t) / L(t) + (1-γ) * S(t-s)

where Y(t) is the actual observation at time t.

The values of the smoothing parameters α, β, and γ can be estimated using a grid search or optimization algorithm to minimize the sum of squared errors (SSE) between the actual and predicted values of the time series.

Overall, Triple Exponential Smoothing is a powerful technique for forecasting time series data with trend and seasonality that varies over time. It provides accurate forecasts by capturing the underlying patterns in the data and adjusting the smoothing parameters based on the changing nature of the time series.
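The multiplicative update and forecast equations can be sketched as follows. The initialisation scheme (first-season average for the level, a crude initial trend, and first-season ratios as seasonal factors) and the toy quarterly series are illustrative assumptions; production code would initialise more carefully and optimise the parameters.

```python
# Minimal multiplicative Holt-Winters sketch; initialisation and the toy
# data below are illustrative assumptions.

def holt_winters_multiplicative(y, s, alpha, beta, gamma, horizon):
    """Run the level/trend/seasonal updates, then forecast `horizon` steps."""
    level = sum(y[:s]) / s
    trend = (sum(y[s:2 * s]) - sum(y[:s])) / (s * s)  # avg per-period change
    season = [y[i] / level for i in range(s)]         # initial seasonal factors
    for t in range(s, len(y)):
        last_level = level
        level = alpha * y[t] / season[t % s] + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % s] = gamma * y[t] / level + (1 - gamma) * season[t % s]
    n = len(y)
    return [(level + m * trend) * season[(n + m - 1) % s]
            for m in range(1, horizon + 1)]

# Toy quarterly series with an upward trend and multiplicative seasonality.
data = [10, 20, 15, 5, 12, 24, 18, 6, 14, 28, 21, 7, 16, 32, 24, 8]
print(holt_winters_multiplicative(data, s=4, alpha=0.3, beta=0.1, gamma=0.2,
                                  horizon=4))
```

On this series the forecasts reproduce the seasonal shape (high second quarter, low fourth quarter) on top of the rising level, which is the signature of the multiplicative model.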
