Thursday, October 9, 2014

Estimation Theory

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured/empirical data that has a random component. An estimator attempts to approximate the unknown parameters using the measurements. We can see estimation theory used in the real world in radar, for estimating the transit time of a reflected pulse, and in prediction tools such as election polling.

There are two approaches in estimation theory:

  • Probabilistic approach
  • Set-membership approach


Firstly, a set of statistical samples is taken from a random vector (RV) of size N and put into a vector x:

$$\mathbf{x} = \begin{bmatrix} x[0] \\ x[1] \\ \vdots \\ x[N-1] \end{bmatrix}$$
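To make this concrete, here is a minimal Python sketch of collecting N noisy measurements into such a vector. The model x[n] = A + w[n] (an unknown constant A observed in Gaussian noise) and all the numbers are illustrative assumptions of mine, not part of the theory above:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of samples
A_true = 3.0   # hypothetical unknown parameter: a constant level
sigma = 1.0    # noise standard deviation

# Observation vector x = [x[0], ..., x[N-1]]^T:
# each sample is the constant plus zero-mean Gaussian noise w[n].
x = A_true + sigma * rng.standard_normal(N)
print(x[:5])
```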

Secondly, there are the corresponding M parameters, also put into a vector:

$$\boldsymbol{\theta} = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_M \end{bmatrix}$$

 

Now the continuous probability density function (pdf), or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data is established, stated conditional on the values of the parameters: $p(\mathbf{x} \mid \boldsymbol{\theta})$. A pdf is a function that describes the relative likelihood for a random variable to take on a given value.

[Figure: boxplot and probability density function of a normal distribution N(0, σ²)]
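As a quick illustration, this short sketch evaluates the N(0, σ²) density at a few points; using scipy's norm here is just one convenient choice:

```python
import numpy as np
from scipy.stats import norm

sigma = 1.0
points = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

# Relative likelihood of an N(0, sigma^2) random variable
# taking values near each of these points.
density = norm.pdf(points, loc=0.0, scale=sigma)
print(density)  # largest at 0, symmetric, decaying in the tails
```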

 

It is possible for the parameters themselves to have a probability distribution (e.g., Bayesian statistics/Bayesian probability). It is then necessary to define the prior probability:

$$\pi(\boldsymbol{\theta})$$
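For instance, here is a minimal sketch of placing a prior on a parameter, assuming a conjugate normal prior on an unknown mean (the prior and all numbers are my illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior on the unknown mean theta: N(mu0, tau0^2)  (assumed for illustration)
mu0, tau0 = 0.0, 2.0
sigma = 1.0                    # known noise standard deviation
theta_true = 3.0
x = theta_true + sigma * rng.standard_normal(50)

# Conjugate normal-normal update: the posterior over theta is also normal.
precision = 1.0 / tau0**2 + len(x) / sigma**2
posterior_mean = (mu0 / tau0**2 + x.sum() / sigma**2) / precision
print(posterior_mean)  # pulled between the prior mean and the sample mean
```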

 

After the model is formed, the goal is to estimate the parameters, commonly denoted

$$\hat{\boldsymbol{\theta}}$$

The "hat" indicates the estimate.

 

One common estimator is the minimum mean squared error (MMSE) estimator, which uses the error between the estimated parameters and the actual values of the parameters as the basis for optimality. This error term is squared, and its expected value is minimized by the MMSE estimator:

$$\mathrm{MSE} = \mathrm{E}\left[ (\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^2 \right]$$
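To see how the squared-error criterion behaves, here is a small Monte Carlo sketch that approximates the mean squared error of the sample-mean estimator from the toy model above:

```python
import numpy as np

rng = np.random.default_rng(2)
N, A_true, sigma = 100, 3.0, 1.0
trials = 10_000

# For each trial, draw a fresh data set and form the sample-mean estimate.
estimates = (A_true + sigma * rng.standard_normal((trials, N))).mean(axis=1)

# Empirical mean squared error E[(A_hat - A)^2].
mse = np.mean((estimates - A_true) ** 2)
print(mse)  # close to the theoretical sigma^2 / N = 0.01
```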

 

Simply, I would say that:

"Estimation theory: based on the observations $X_1, \dots, X_n$, we would like to estimate an unknown parameter $\theta_0$, i.e., find $\hat{\theta} = \hat{\theta}(X_1, \dots, X_n)$ such that $\hat{\theta}$ approximates $\theta_0$. In this case we also want to understand how well $\hat{\theta}$ approximates $\theta_0$."
