ISS (Statistical Services) Statistics Paper II (Old Subjective Pattern): Questions 1 - 7 of 39

Question 1

Appeared in Year: 2015

Describe in Detail

Essay

Distinguish between a single sampling plan and a double sampling plan. Discuss how O.C. curves can be used for comparing two sampling plans.

Explanation

A single sampling plan is one in which the decision to accept or reject a lot is based on a single sample inspected from that lot. In a double sampling plan, when a decision about acceptance or rejection cannot be reached from the first sample inspected from the submitted lot, a second sample is drawn and a decision is always reached after the second sample …

… (259 more words) …
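To make the comparison concrete, here is a brief sketch of the O.C. (operating characteristic) function; the notation (n, c, p, etc.) is assumed here and is not taken from the truncated explanation. For a single sampling plan with sample size n and acceptance number c, the probability of accepting a lot of fraction defective p under the binomial model is

$$P_a(p) = \sum_{d=0}^{c} \binom{n}{d} p^{d} (1-p)^{n-d}.$$

For a double sampling plan (n1, c1, r1, n2, c2), with d1 and d2 the numbers of defectives found in the first and second samples, one common convention gives

$$P_a(p) = P(d_1 \le c_1) + \sum_{d_1 = c_1 + 1}^{r_1 - 1} P(d_1)\, P(d_2 \le c_2 - d_1).$$

Plotting P_a (p) against p for each plan gives its O.C. curve; the plan whose curve stays high at acceptable quality levels and falls steeply at rejectable quality levels discriminates better between good and bad lots, which is how two plans are compared.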

Question 2

Appeared in Year: 2015

Describe in Detail

Essay

Derive the likelihood ratio test for comparing the means of k independent homoscedastic normal populations.

Explanation

Given that there are k independent homoscedastic normal populations, that is, the variance is the same for all of them: σi² = σ², i = 1, 2, …, k. We have to test H0: μ1 = μ2 = … = μk against the alternative that not all the means are equal.

From the i-th population the observed sample is (xi1, xi2, … , xini), of size ni.

The parameter space is

The parameter space under H0 is

The likelihood function is

The unrestricted MLE is

The restricted MLE is

The likelihood ratio test is …

… (38 more words) …
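Because the displayed equations above did not carry over, here is a sketch of the standard derivation; the symbols (x̄i, x̄, N) are the usual ones and are assumed rather than taken from the truncated text. With N = n1 + … + nk, the likelihood over the full parameter space Ω = {(μ1, …, μk, σ²) : μi ∈ ℝ, σ² > 0} is

$$L(\mu_1,\dots,\mu_k,\sigma^2) = (2\pi\sigma^2)^{-N/2} \exp\Big(-\frac{1}{2\sigma^2}\sum_{i=1}^{k}\sum_{j=1}^{n_i}(x_{ij}-\mu_i)^2\Big).$$

The unrestricted MLEs are $\hat\mu_i = \bar{x}_i$ and $\hat\sigma^2 = \frac{1}{N}\sum_i\sum_j (x_{ij}-\bar{x}_i)^2$; under H0: μ1 = … = μk the restricted MLEs are the grand mean $\bar{x}$ and $\hat\sigma_0^2 = \frac{1}{N}\sum_i\sum_j (x_{ij}-\bar{x})^2$. The likelihood ratio is

$$\lambda = \Big(\frac{\hat\sigma^2}{\hat\sigma_0^2}\Big)^{N/2},$$

and rejecting H0 for small λ is equivalent to rejecting for large values of the one-way ANOVA statistic

$$F = \frac{\sum_{i} n_i(\bar{x}_i-\bar{x})^2/(k-1)}{\sum_{i}\sum_{j}(x_{ij}-\bar{x}_i)^2/(N-k)},$$

which under H0 follows the F distribution with (k − 1, N − k) degrees of freedom.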

Question 3

Complete Statistics

Appeared in Year: 2014

Describe in Detail

Essay

Define completeness. Verify whether Bin (1, p) is complete.

Explanation

Completeness: It is a property of a statistic in relation to a model for a set of observed data. In essence, it is a condition which ensures that the parameters of the probability distribution representing the model can all be estimated on the basis of the statistic: it ensures that the distributions corresponding to different values of the paramet…

… (264 more words) …
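For reference, a compact version of the verification (the worked steps are truncated above; g denotes an arbitrary real-valued function, as in the usual definition): a statistic T is complete for a family {Pθ} if Eθ [g (T)] = 0 for all θ implies g (T) = 0 almost surely. For X ~ Bin (1, p) with 0 < p < 1,

$$E_p[g(X)] = (1-p)\,g(0) + p\,g(1) = g(0) + p\,[g(1)-g(0)].$$

If this vanishes for every p in (0, 1), then both the constant term and the coefficient of p must be zero, so g (0) = g (1) = 0. Hence g ≡ 0 and the family Bin (1, p) is complete.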

Question 4

Appeared in Year: 2014

Describe in Detail

Essay

For the Pareto distribution with pdf

Show that the method of moments fails if 0 < λ < 1. State the method of moments estimator when λ > 1. Is it consistent? Justify your answer.

Explanation

Let X1 , X2 , … , Xn be a simple random sample of Pareto random variables with density

The mean and variance are respectively

Here we have only one parameter, λ. Thus, we only need to determine the first moment.

To find the method of moments estimator of λ, we solve for λ as a function of the mean µ.

Consequently, a method of moments estimate for λ …

… (83 more words) …
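Since the density and moments above did not carry over, the following sketch assumes the unit-scale Pareto density f (x; λ) = λ x^(−(λ+1)) for x ≥ 1; the exam's exact pdf may differ by a scale parameter, but the argument is the same.

$$E[X] = \int_{1}^{\infty} \lambda x^{-\lambda}\,dx = \frac{\lambda}{\lambda-1}\quad (\lambda > 1),$$

while the integral diverges for 0 < λ ≤ 1, so the first moment does not exist and the method of moments fails when 0 < λ < 1. For λ > 1, equating the population and sample means, $\bar{X} = \lambda/(\lambda-1)$, gives

$$\hat\lambda = \frac{\bar{X}}{\bar{X}-1}.$$

By the weak law of large numbers $\bar{X} \to \lambda/(\lambda-1)$ in probability, and since m ↦ m/(m − 1) is continuous at that limit, $\hat\lambda \to \lambda$ in probability; the estimator is therefore consistent for λ > 1.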

Question 5

Appeared in Year: 2014

Describe in Detail

Essay

Let X1, X2, …, Xn be a random sample from U (0, θ). Obtain the moment estimator of θ. Also find its variance.

Explanation

Let X1, X2, …, Xn be a random sample from U (0, θ). We know that

The estimating equation is

Solving the above equation for the parameter, we get the method of moments estimator.

The variance of this estimator is

Here the sample observations are independent and identically distributed, and

So, using this

… (2 more words) …
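For reference, a brief reconstruction of the computation the explanation outlines (the displayed equations did not carry over): since E (X) = θ/2, the estimating equation is X̄ = θ/2, giving the moment estimator θ̂ = 2X̄. Because the observations are i.i.d. with Var (X) = θ²/12,

$$\mathrm{Var}(\hat\theta) = 4\,\mathrm{Var}(\bar{X}) = \frac{4}{n}\cdot\frac{\theta^2}{12} = \frac{\theta^2}{3n}.$$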

Question 6

Appeared in Year: 2014

Describe in Detail

Essay

Define estimability of a linear parametric function in a Gauss Markov model. State and prove a necessary and sufficient condition for estimability.

Explanation

Estimability: In the Gauss Markov model Y = Xβ + ε, the linear parametric function c′β is an estimable function if there exists a vector a such that E (a′Y) = c′β for all β.

If X is of full column rank, then all linear combinations of β are estimable, since the least squares solution (X′X)⁻¹X′Y is unique, that is

Suppose we are dealing with the above model. Then a necessary and sufficient condition for a linear parametric function to be li…

… (122 more words) …
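A sketch of the usual necessary and sufficient condition, written in the standard notation Y = Xβ + ε (assumed here because the displayed equations are missing): c′β is estimable if and only if c lies in the row space of X, i.e.

$$c' = a'X \ \text{for some vector } a,\qquad\text{equivalently}\qquad \mathrm{rank}(X) = \mathrm{rank}\begin{pmatrix} X \\ c' \end{pmatrix}.$$

Necessity: if a′Y is unbiased for c′β, then E (a′Y) = a′Xβ = c′β for every β, which forces c′ = a′X. Sufficiency: if c′ = a′X for some a, then E (a′Y) = a′Xβ = c′β, so a′Y is a linear unbiased estimator and c′β is estimable.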

Question 7

Appeared in Year: 2014

Describe in Detail

Essay

X1, X2, …, Xn are i.i.d. random variables from N (θ, 1), where θ is an integer. Obtain the MLE of θ.

Explanation

X1 , X2 , … , Xn are i. i. d. random variables from N (θ, 1) . The density function of X is

The likelihood function is

The log likelihood function is

Differentiating this with respect to θ and equating it to zero,

Thus, over all real values of θ, the likelihood is maximized at the sample mean, which is unbiased, consistent and BAN.

… (4 more words) …
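Because the closing words of the explanation are truncated, the following completes the argument in the usual way (a sketch, not the author's exact wording). The log-likelihood is

$$\ell(\theta) = \text{const} - \frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2 = \text{const}' - \frac{n}{2}(\bar{x}-\theta)^2,$$

which over all real θ is maximized at θ = x̄. Since ℓ (θ) decreases as |θ − x̄| grows, maximizing over the integers (the stated restriction on θ) gives the MLE as the integer nearest to x̄, with either neighbouring integer admissible when x̄ falls exactly halfway between two integers.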