CHAPTER 9
Inference: Estimation
The essential difference between inferential and descriptive statistics is one of knowledge. In descriptive statistics, the analyst has knowledge of the population data. The use of descriptive statistics such as the mean, mode, and standard deviation is typically intended for "collapsing" the population data for convenience of reporting or interpretation. In inferential statistics, knowledge about the population is limited to what can be derived from samples. For economic or practical reasons it is not possible to view all of the population data, so we must examine our sample data and make inferences about the population. We can view this process as illustrated in the following figure:
Properties of Point Estimators
Consider three marksmen shooting at targets. Here we have three different situations.
Target 1 has all its shots clustered tightly together, but none of them hit the bullseye.
Target 2 has a large spread, but on average the bullseye is hit.
Target 3 has a tight cluster around the bullseye.
In statistical terminology, we say that
Target 1 is biased, with a small variance.
Target 2 is unbiased, with a large variance.
Target 3 is unbiased, with a small variance.
If you were hiring for the police department, which shooter would you want? In statistics, we generally want an estimator that is both unbiased and has a small variance: one that is almost always "on target."
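The bias/variance distinction above can be seen in a small simulation. This sketch is not from the slides; it compares two estimators of the population variance: dividing the sum of squared deviations by n (a biased estimator, like Target 1 it systematically misses) versus dividing by n - 1 (Bessel's correction, unbiased on average).

```python
import random

# Illustrative example (not from the slides): biased vs. unbiased
# estimators of the variance of a Normal(0, sd=2) population.
random.seed(1)

TRUE_VAR = 4.0          # population variance (sd = 2)
N, TRIALS = 5, 20000    # small samples make the bias visible

biased_sum = unbiased_sum = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(0, 2) for _ in range(N)]
    m = sum(sample) / N
    ss = sum((x - m) ** 2 for x in sample)
    biased_sum += ss / N          # divides by n: systematically too low
    unbiased_sum += ss / (N - 1)  # divides by n - 1: correct on average

print(f"true variance:      {TRUE_VAR}")
print(f"biased estimator:   {biased_sum / TRIALS:.3f}")    # averages near 3.2
print(f"unbiased estimator: {unbiased_sum / TRIALS:.3f}")  # averages near 4.0
```

The biased estimator averages about TRUE_VAR * (N - 1) / N = 3.2, while the unbiased one averages close to 4.0, mirroring the difference between Targets 1 and 3.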
Interval Estimation
A confidence-interval estimate for a population parameter is a set of numbers obtained from the point estimate of the parameter, coupled with a "percentage" (probability) that characterizes how confident we are that the parameter lies within the interval.
The confidence level is the value of that "percentage" of confidence.
If we wish to be 95% confident that a randomly selected value drawn from a normal distribution with a mean of 0 and a standard deviation of 1 will fall within a constructed interval, the interval must be constructed with end-points -z and z such that:
Pr(-z < Z < z) = 0.95
To accomplish the desired result, we must find the appropriate z values for the interval; for the standard normal, the interval looks like the following:
Pr(-1.96 < Z < 1.96) = 0.95
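The value 1.96 can be checked with the Python standard library (an aside, not part of the slides): for a 95% confidence level, 2.5% of the probability lies in each tail, so z is the 97.5th percentile of the standard normal.

```python
from statistics import NormalDist

# The upper endpoint z leaves 2.5% in the upper tail, so it is the
# 97.5th percentile of the standard normal distribution.
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # 1.96

# Verify Pr(-z < Z < z) = 0.95
prob = NormalDist().cdf(z) - NormalDist().cdf(-z)
print(round(prob, 2))  # 0.95
```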
Example 9.2
The average zinc concentration recovered from a sample of zinc measurements taken at 36 different locations in a river is found to be 2.6 grams per millilitre. Find the 95% and 99% confidence intervals for the mean zinc concentration in the river. Assume that the population standard deviation is 0.3.
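Since the population standard deviation is known, the interval is x̄ ± z·σ/√n. A standard-library sketch of the computation:

```python
from math import sqrt
from statistics import NormalDist

# Example 9.2: confidence intervals for the mean with known sigma.
xbar, sigma, n = 2.6, 0.3, 36

intervals = {}
for level in (0.95, 0.99):
    z = NormalDist().inv_cdf((1 + level) / 2)   # 1.96 and 2.576
    margin = z * sigma / sqrt(n)
    intervals[level] = (xbar - margin, xbar + margin)
    print(f"{level:.0%} CI: ({xbar - margin:.2f}, {xbar + margin:.2f})")
# 95% CI: (2.50, 2.70)
# 99% CI: (2.47, 2.73)
```

Note that the 99% interval is wider than the 95% interval: higher confidence costs precision.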
Example 9.3
How large a sample is required in Example 9.2 if we want to be 95% confident that our estimate of the mean is off by less than 0.05?
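Solving the margin-of-error condition z·σ/√n ≤ e for n gives n ≥ (z·σ/e)², rounded up to the next integer. A short sketch:

```python
from math import ceil
from statistics import NormalDist

# Example 9.3: sample size for a margin of error of at most e = 0.05
# at 95% confidence, with sigma = 0.3 known.
sigma, e = 0.3, 0.05
z = NormalDist().inv_cdf(0.975)   # 1.96

n = ceil((z * sigma / e) ** 2)
print(n)  # 139, since (1.96 * 0.3 / 0.05)^2 = 138.3 rounds up
```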
9.9 Inferences on proportions
9.14 Maximum likelihood estimation
Let X1, X2, ..., Xn be independent random variables taken from a probability distribution with probability density function f(x; θ), where θ is a single parameter of the distribution. Then

L(X1, X2, ..., Xn; θ) = f(X1, X2, ..., Xn; θ) = f(X1; θ) f(X2; θ) ⋯ f(Xn; θ)

is the joint distribution of the random variables, called the likelihood function.
Let x1, x2, ..., xn denote the observed values in a sample. The values are known (we observed them); we want to estimate the true population parameter θ. In the discrete case, the likelihood of the sample is the joint probability

P(X1 = x1, X2 = x2, …, Xn = xn)
The maximum likelihood estimator is the value of θ that results in a maximum value for this joint probability.
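As a concrete illustration (a hypothetical example, not one worked in the slides), consider a Poisson sample. The log-likelihood is Σ(xi·log λ − λ − log xi!), and its maximizer is known to be the sample mean; a crude grid search over λ confirms this.

```python
import math

# Hypothetical MLE illustration: Poisson data with unknown rate lambda.
data = [2, 3, 1, 4, 2, 5, 3, 2]

def log_likelihood(lam):
    # log of the product f(x1; lam) * ... * f(xn; lam) for the Poisson pmf;
    # math.lgamma(x + 1) is log(x!)
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in data)

grid = [0.01 * k for k in range(1, 1001)]   # candidate lambda values
lam_hat = max(grid, key=log_likelihood)

print(f"grid-search MLE: {lam_hat:.2f}")
print(f"sample mean:     {sum(data) / len(data):.2f}")
```

Both lines print 2.75: the numerical maximizer of the likelihood agrees with the analytic MLE, the sample mean. Maximizing the log-likelihood rather than the likelihood itself is standard practice, since the product of many small probabilities underflows while the sum of their logs does not.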