# Introduction to Point Estimation

Point estimation refers to the use of sample data to provide a single best guess (known as a point estimate) of an unknown population parameter. The unknown population parameter can be a mean, a CDF/PDF, a regression function, or a prediction target Y.

We denote the point estimate of θ by *θ̂* (or *θ̂*_{n}). Note that θ is a fixed unknown quantity, but the estimate depends on the data; therefore **θ̂ is a random variable.**

Let X_{1}, . . . , X_{n} be n IID data points from some distribution F. A point estimator *θ̂*_{n} of a parameter θ is some function of X_{1}, . . . , X_{n}:

*θ̂*_{n} = g(X_{1}, . . . , X_{n})
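As a minimal sketch of this idea, the code below treats the sample mean as the function g applied to IID data (the distribution F, its parameters, and the sample size are illustrative assumptions, not from the text):

```python
import numpy as np

# A point estimator is just a function g of the data X_1, ..., X_n.
# Here g is the sample mean, a common estimator of the population mean.
def g(x):
    return np.mean(x)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # n = 1000 IID draws from F = N(5, 4)

# The point estimate depends on the random sample, so theta_hat is itself random:
# a different draw of x would give a different value of theta_hat.
theta_hat = g(x)
```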

### Definitions

- bias(*θ̂*_{n}) = E_{θ}(*θ̂*_{n}) − θ. We say that the point estimator is unbiased if bias = 0, i.e. E_{θ}(*θ̂*_{n}) = θ.
- A point estimator is consistent if *θ̂*_{n} converges to θ in probability as n → ∞.
- The distribution of a point estimator is called the
**sampling distribution**. - The standard deviation of a point estimator is called the
**standard error**, denoted *se* = √V(*θ̂*_{n}). - The quality of a point estimate is sometimes measured by the mean squared error or MSE, denoted MSE = E_{θ}(*θ̂*_{n} − θ)². - The MSE can also be written as MSE = bias²(*θ̂*_{n}) + V_{θ}(*θ̂*_{n}).

An estimator is asymptotically normal if (*θ̂*_{n} − θ)/*se* ⇝ N(0, 1).
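These quantities can be approximated by simulation. The sketch below (an assumed setup: the sample mean of N(θ, 1) data, with illustrative values of θ, n, and the number of replications) estimates the bias, standard error, and MSE, and illustrates the decomposition MSE = bias² + variance:

```python
import numpy as np

# Assumed setup: theta_hat is the sample mean of n IID N(theta, 1) draws.
rng = np.random.default_rng(1)
theta, n, reps = 2.0, 50, 20000

# One estimate per replication: reps independent samples of size n.
estimates = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)

bias = estimates.mean() - theta          # approximates E(theta_hat) - theta
se = estimates.std()                     # approximates the standard error
mse = np.mean((estimates - theta) ** 2)  # approximates E[(theta_hat - theta)^2]

# The decomposition MSE = bias^2 + variance holds for these sample versions:
# mse matches bias**2 + se**2 up to floating-point error.
```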

Example:

Let X_{1}, . . . , X_{n} ~ Bernoulli(p) and let *p̂*_{n} = n^{-1}∑X_{i}. Then

E(*p̂*_{n}) = n^{-1}∑E(X_{i}) = p (because, E(X_{i}) = p),

so *p̂*_{n} is unbiased. The standard error is *se* = √V(*p̂*_{n}) = √(p(1−p)/n).
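The Bernoulli example can be checked by simulation; the values of p, n, and the number of replications below are illustrative assumptions:

```python
import numpy as np

# Simulate the Bernoulli example: p_hat is the sample mean of n Bernoulli(p)
# draws, which is unbiased with standard error sqrt(p(1-p)/n).
rng = np.random.default_rng(2)
p, n, reps = 0.3, 100, 50000

# One p_hat per replication.
p_hats = rng.binomial(1, p, size=(reps, n)).mean(axis=1)

mean_of_p_hats = p_hats.mean()   # close to p (unbiasedness)
se_of_p_hats = p_hats.std()      # close to sqrt(p * (1 - p) / n)
```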