# AkashNotes

## Maximum Likelihood Estimation

Let X1, . . . , Xn be IID with PDF f(x; θ). The likelihood function is defined by Ln(θ) = ∏i f(Xi; θ), and the log-likelihood function is ℓn(θ) = log Ln(θ). The maximum likelihood estimator (MLE) is the value of θ that maximizes Ln(θ).
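
As a concrete sketch (the coin-flip data and the search grid below are made up for illustration), the Bernoulli log-likelihood can be maximized numerically; the maximizer lands on the sample proportion, which is the closed-form MLE for this model:

```python
import math

# Hypothetical coin data: 7 heads out of 10 tosses.
data = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

def log_likelihood(p, xs):
    # l(p) = sum_i [x_i * log(p) + (1 - x_i) * log(1 - p)]
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

# Grid search over candidate values of p in (0, 1).
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=lambda p: log_likelihood(p, data))
print(p_hat)  # 0.7, the sample proportion sum(data)/len(data)
```

The log-likelihood is strictly concave here, so the grid maximizer coincides with the analytic MLE.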

## Parametric Inference and Method of Moments

So far, we have covered the estimation of statistical functionals, i.e. functions of the CDF FX; that is non-parametric inference. We now turn to parametric inference, where the distribution is assumed to belong to a family indexed by a finite-dimensional parameter θ, and to the method of moments, which estimates θ by matching sample moments to theoretical moments.

## Bootstrap Confidence Interval

There are three main methods of constructing a bootstrap confidence interval:

- Normal interval
- Pivotal interval
- Percentile interval

The normal interval is Tn ± zα/2 ŝe_boot, where ŝe_boot is the bootstrap estimate of the standard error; it is appropriate only when the distribution of Tn is approximately normal.
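
A minimal sketch of the percentile interval, using a hypothetical sample and the sample mean as the statistic (the data values, B, and the seed are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)
# Hypothetical sample; the parameter of interest is the mean.
data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]

B = 2000
boot_means = []
for _ in range(B):
    # Resample the data with replacement and recompute the statistic.
    resample = random.choices(data, k=len(data))
    boot_means.append(statistics.mean(resample))

boot_means.sort()
alpha = 0.05
# Percentile interval: the alpha/2 and 1 - alpha/2 quantiles of the
# bootstrap replications.
lo = boot_means[int((alpha / 2) * B)]
hi = boot_means[int((1 - alpha / 2) * B) - 1]
print((lo, hi))  # an approximate 95% interval around the sample mean
```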

## Normal (Gaussian) Distribution: Everything you need to know

The normal, or Gaussian, distribution is one of the most widely used distributions in statistics. This post covers all the essential properties of the distribution.

## Bootstrap

Bootstrap is a non-parametric method for estimating accuracy measures such as standard error, bias, variance, and confidence intervals. Suppose Tn = g(X1, . . . , Xn) is a statistic; the bootstrap approximates the distribution of Tn by repeatedly resampling with replacement from the observed data and recomputing Tn.

## Empirical Distribution Function and Estimation of Statistical Functionals

When starting with the inference problem, the most basic task is the non-parametric estimation of the CDF and of functions of the CDF. Let X1, . . . , Xn ~ F be IID; the empirical distribution function F̂n assigns mass 1/n to each observed data point.

## Confidence set

For a parameter θ, a 1 − α confidence interval is Cn = (a, b), where a = A(X1, . . . , Xn) and b = B(X1, . . . , Xn) are functions of the data such that P(θ ∈ Cn) ≥ 1 − α for every θ.

## Introduction to Point Estimation

Point estimation refers to the use of sample data to provide a single best guess (known as a point estimate) of some quantity of interest, such as a parameter, a CDF, or a regression function.

## Parametric and Non-Parametric models

A statistical model is a set of distributions (or a set of densities). A parametric model is a statistical model that can be described by a finite number of parameters; a model that cannot is non-parametric.

## Law of Large Numbers and CLT

The Weak Law of Large Numbers (WLLN): if X1, X2, . . . , Xn are IID with mean μ, then the sample mean X̄n converges in probability to μ. This theorem says that the distribution of X̄n becomes concentrated around μ as n grows.
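
The WLLN can be illustrated by simulation, for instance with a fair coin where μ = 0.5 (the seed and sample size below are arbitrary choices, not from the post):

```python
import random

random.seed(1)
# Fair coin: each flip is 1 (heads) with probability 0.5, so mu = 0.5.
n = 100_000
flips = [random.random() < 0.5 for _ in range(n)]

# By the WLLN, the sample mean should be close to mu for large n.
sample_mean = sum(flips) / n
print(sample_mean)
```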

## Convergence of Random Variables

Pointwise or sure convergence: a sequence of random variables {Xn}n∈N is said to converge pointwise, or surely, to X if Xn(ω) → X(ω) for all ω ∈ Ω.

## Inequalities

Markov’s inequality: let X be a non-negative random variable and suppose that E(X) exists. For any t > 0, P(X ≥ t) ≤ E(X)/t. Chebyshev’s inequality follows from it and bounds deviations from the mean: P(|X − μ| ≥ t) ≤ σ²/t².
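
Both bounds can be checked exactly on a small example, say a fair six-sided die (the die is an illustration of my choosing, not from the post), using exact rational arithmetic:

```python
from fractions import Fraction

# Fair six-sided die: X in {1, ..., 6}, each outcome with probability 1/6.
outcomes = range(1, 7)
p = Fraction(1, 6)

# Markov: P(X >= t) <= E(X)/t for t > 0.
EX = sum(x * p for x in outcomes)            # E(X) = 7/2
t = 5
actual = sum(p for x in outcomes if x >= t)  # P(X >= 5) = 2/6 = 1/3
markov_bound = EX / t                        # (7/2)/5 = 7/10

# Chebyshev: P(|X - mu| >= s) <= Var(X)/s^2.
mu = EX
var = sum((x - mu) ** 2 * p for x in outcomes)             # Var(X) = 35/12
s = 2
actual_dev = sum(p for x in outcomes if abs(x - mu) >= s)  # = 2/6 = 1/3
chebyshev_bound = var / s ** 2                             # 35/48

print(actual, "<=", markov_bound)        # 1/3 <= 7/10
print(actual_dev, "<=", chebyshev_bound) # 1/3 <= 35/48
```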

## Variance and Covariance

Variance measures the spread of a distribution. Let X be a random variable with mean μ. The variance of X is V(X) = E(X − μ)², and the covariance of two random variables measures how they vary together.

## Expectation

The expected value, or mean, or first moment, of X is defined to be E(X) = Σx x f(x) in the discrete case and E(X) = ∫ x f(x) dx in the continuous case, assuming that the sum (or integral) is well defined.

## Transformation of Random Variables

Suppose X is a random variable with PDF fX and CDF FX. Let Y = r(X) be a function of X; the problem is to determine the PDF and CDF of Y.

## Independent and Identically Distributed Samples

Let X = (X1, . . . , Xn), where X1, . . . , Xn are random variables. We say that X1, . . . , Xn are independent and identically distributed (IID) if they are independent and each has the same marginal distribution F; we write X1, . . . , Xn ~ F.

## Independent Random Variables and Conditional Distribution

Independent random variables: two random variables X and Y are independent if, for every A and B, P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B).

## Bivariate and Marginal Distribution

Joint mass function: recall the definition of the probability mass function, which describes a single random variable. Given two discrete random variables X and Y, the joint mass function is f(x, y) = P(X = x, Y = y).

## Continuous Random Variables

Uniform probability distribution: X has a Uniform(a, b) distribution, written X ~ Uniform(a, b), if f(x) = 1/(b − a) for x ∈ [a, b] and 0 otherwise, where a < b. The distribution function is F(x) = (x − a)/(b − a) for a ≤ x ≤ b, with F(x) = 0 for x < a and F(x) = 1 for x > b.

## Discrete Random Variables

In this section, we are going to cover some important discrete random variables. Note that we will write X ~ F to mean that X has distribution F.

## Introduction to Random Variables

A random variable is a mapping X : Ω → R that assigns a real number X(ω) to each outcome ω ∈ Ω.

## Bayes’ Theorem

Partition: a partition of Ω is a sequence of disjoint sets A1, A2, … such that ∪i Ai = Ω. The Law of Total Probability then expresses P(B) as Σi P(B | Ai) P(Ai).
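
A small worked example under a hypothetical two-urn setup (the urns and all probabilities below are invented for illustration), combining the law of total probability with Bayes’ theorem:

```python
from fractions import Fraction

# Hypothetical setup: urn A1 is chosen with probability 1/3, urn A2 with
# probability 2/3; the two events partition the sample space.
P_A = {"A1": Fraction(1, 3), "A2": Fraction(2, 3)}
# B = "draw a red ball"; conditional probabilities given each urn.
P_B_given = {"A1": Fraction(1, 2), "A2": Fraction(1, 4)}

# Law of total probability: P(B) = sum_i P(B | Ai) P(Ai).
P_B = sum(P_B_given[a] * P_A[a] for a in P_A)
print(P_B)  # 1/3 * 1/2 + 2/3 * 1/4 = 1/3

# Bayes' theorem: P(A1 | B) = P(B | A1) P(A1) / P(B).
P_A1_given_B = P_B_given["A1"] * P_A["A1"] / P_B
print(P_A1_given_B)  # (1/2 * 1/3) / (1/3) = 1/2
```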

## Introduction to Probability – 2

Uniform probability distribution: if Ω is finite and each outcome is equally likely, then P(A) = |A| / |Ω|, where |A| denotes the number of elements in A.
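
For example, with two fair dice (a standard illustration, not taken from the post), counting outcomes gives the probability directly:

```python
from itertools import product

# Omega: all ordered outcomes of rolling two fair dice, each equally likely.
omega = list(product(range(1, 7), repeat=2))  # |Omega| = 36

# Event A: the two dice sum to 7.
A = [w for w in omega if w[0] + w[1] == 7]    # |A| = 6

# Uniform probability: P(A) = |A| / |Omega|.
P_A = len(A) / len(omega)
print(P_A)  # 6/36 = 1/6
```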

## Introduction to Probability

Probability quantifies uncertainty: it is a measure of how likely an event is to occur. Probability is measured on a scale from 0 to 1, where 0 means the event cannot occur and 1 means it is certain to occur.