Introduction to Point Estimation

Point estimation refers to the use of sample data to provide a single best guess (known as a point estimate) of an unknown population parameter. The unknown parameter could be a mean, a CDF/PDF, a regression function, or a prediction of a target Y.

We denote the point estimate of θ by θ̂ (or θ̂n). Note that θ is a fixed unknown quantity, but the estimate depends on the data; therefore θ̂ is a random variable.

Let X1, . . . , Xn be n IID data points from some distribution F. A point estimator θ̂n of a parameter θ is some function of X1, . . . , Xn:
θ̂n = g(X1, . . . , Xn)

Definition

  • bias(θ̂n) = Eθ(θ̂n) − θ.
    We say that the point estimator is unbiased if bias(θ̂n) = 0, i.e. Eθ(θ̂n) = θ.
  • A point estimator θ̂n is consistent if it converges in probability to θ, i.e. θ̂n →P θ.
  • The distribution of a point estimator is called its sampling distribution.
  • The standard deviation of a point estimator is called the standard error, denoted by se:
    se(θ̂n) = √V(θ̂n)
  • The quality of a point estimate is sometimes measured by the mean squared error, or MSE, denoted by
    MSE = Eθ(θ̂n − θ)²
  • The MSE can also be written as
    MSE = bias²(θ̂n) + Vθ(θ̂n)
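The quantities above can be checked by simulation. The following is a minimal sketch in Python (using NumPy) for a hypothetical setup: the point estimator is the sample mean of Normal(μ = 2, σ = 3) data with n = 50, and we approximate its bias, standard error, and MSE by repeated sampling. All the specific numbers are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical setup: X_i ~ Normal(mu=2, sigma=3), estimator = sample mean.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 3.0, 50, 20000

# Draw `reps` independent samples of size n; compute the estimate for each.
estimates = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

bias = estimates.mean() - mu          # should be near 0: sample mean is unbiased
se = estimates.std(ddof=1)            # should be near sigma / sqrt(n)
mse = np.mean((estimates - mu) ** 2)  # equals bias^2 + variance of the estimator

print(bias, se, mse)
```

Note that the simulated MSE matches the decomposition bias² + variance exactly, since that identity holds for any set of estimates around a fixed θ.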

An estimator is asymptotically normal if
(θ̂n − θ) / se(θ̂n) ⇝ N(0, 1)
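Asymptotic normality can also be seen empirically: standardizing the sample mean by its estimated standard error should give values that behave like N(0, 1) draws. The sketch below uses a hypothetical skewed population, Exponential(1) with n = 200, and checks that about 95% of the standardized values land inside ±1.96; the distribution and sample size are illustrative assumptions.

```python
import numpy as np

# Hypothetical check: sample mean of Exponential(1) data, n = 200 per sample.
rng = np.random.default_rng(1)
n, reps = 200, 10000
samples = rng.exponential(1.0, size=(reps, n))
est = samples.mean(axis=1)

# Standardize with the estimated standard error se = s / sqrt(n).
se = samples.std(axis=1, ddof=1) / np.sqrt(n)
z = (est - 1.0) / se                  # approximately N(0, 1) for large n

coverage = np.mean(np.abs(z) < 1.96)  # should be close to 0.95
print(round(coverage, 3))
```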

Example:
Let X1, . . . , Xn ~ Bernoulli(p) and let p̂n = n⁻¹∑Xi. Then

E(p̂n) = n⁻¹∑E(Xi) = p (because E(Xi) = p),
so p̂n is unbiased. The standard error is se = √V(p̂n) = √(p(1 − p)/n).
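The Bernoulli example translates directly into code: the point estimate is the mean of 0/1 data, and since p is unknown, the standard error is typically estimated by plugging p̂n into the formula. The true p = 0.3 and n = 1000 below are illustrative assumptions.

```python
import numpy as np

# Bernoulli example: p-hat = sample mean of 0/1 data,
# estimated standard error = sqrt(p_hat * (1 - p_hat) / n).
rng = np.random.default_rng(42)
p_true, n = 0.3, 1000            # hypothetical true parameter and sample size

x = rng.binomial(1, p_true, size=n)      # n IID Bernoulli(p) draws
p_hat = x.mean()                         # point estimate of p
se_hat = np.sqrt(p_hat * (1 - p_hat) / n)

print(p_hat, se_hat)
```

With n = 1000 the estimated standard error is about 0.014, so p̂n should typically land within a few hundredths of the true p.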
