# Covariance function

In probability theory and statistics, covariance is a measure of how much two variables change together, and the **covariance function**, or **kernel**, describes the spatial or temporal covariance of a random process or field. For a random field or stochastic process *Z*(*x*) on a domain *D*, a covariance function *C*(*x*, *y*) gives the covariance of the values of the random field at the two locations *x* and *y*:

- \({\displaystyle C(x,y):=\operatorname {cov} (Z(x),Z(y))=\mathbb {E} \left[(Z(x)-\mathbb {E} (Z(x)))\cdot (Z(y)-\mathbb {E} (Z(y)))\right].\,}\)

The same *C*(*x*, *y*) is called the autocovariance function in two contexts: in time series, to denote exactly the same concept except that *x* and *y* refer to points in time rather than in space; and in multivariate random fields, to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(*Z*(*x*_{1}), *Y*(*x*_{2})).^{[1]}
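The defining expectation above can be estimated by Monte Carlo over independent realizations of the process. A minimal sketch, using the hypothetical process *Z*(*x*) = *A* cos(*x*) + *B* sin(*x*) with *A*, *B* ~ N(0, 1), a standard exercise whose covariance function works out to *C*(*x*, *y*) = cos(*y* − *x*):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example process: Z(x) = A cos(x) + B sin(x), A, B ~ N(0, 1).
# Its covariance function is C(x, y) = cos(y - x).
n_samples = 200_000
A = rng.standard_normal(n_samples)
B = rng.standard_normal(n_samples)

def Z(x):
    # One value per realization of (A, B)
    return A * np.cos(x) + B * np.sin(x)

x, y = 0.3, 1.1
zx, zy = Z(x), Z(y)

# Monte Carlo estimate of cov(Z(x), Z(y)) across realizations
est = np.mean((zx - zx.mean()) * (zy - zy.mean()))
print(est, np.cos(y - x))  # the estimate should be close to cos(y - x)
```

The estimate converges to cos(*y* − *x*) as the number of realizations grows, illustrating that the covariance function is a deterministic summary of the random process.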

## Admissibility

For locations *x*_{1}, *x*_{2}, …, *x*_{N} ∈ *D* the variance of every linear combination

- \({\displaystyle X=\sum _{i=1}^{N}w_{i}Z(x_{i})}\)

can be computed as

- \({\displaystyle \operatorname {var} (X)=\sum _{i=1}^{N}\sum _{j=1}^{N}w_{i}C(x_{i},x_{j})w_{j}.}\)

A function is a valid covariance function if and only if^{[2]} this variance is non-negative for all possible choices of *N* and weights *w*_{1}, …, *w*_{N}. A function with this property is called positive semidefinite (in this context often simply positive definite).
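The admissibility condition can be checked numerically for any particular kernel by evaluating the quadratic form above. A sketch using the squared exponential kernel (a known-valid choice; the locations, weights, and parameter *V* = 1 here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Squared exponential kernel, used here only as a known-valid example
def C(x, y, V=1.0):
    return np.exp(-((x - y) / V) ** 2)

# Arbitrary locations x_1, ..., x_N and weights w_1, ..., w_N
xs = rng.uniform(-5.0, 5.0, size=8)
w = rng.standard_normal(8)

# Covariance matrix K_ij = C(x_i, x_j) via broadcasting
K = C(xs[:, None], xs[None, :])

# var(X) = sum_ij w_i C(x_i, x_j) w_j = w^T K w must be non-negative
var_X = w @ K @ w
print(var_X)  # non-negative (up to floating-point round-off)
```

Checking a single choice of weights of course only probes the condition; a proof of validity must cover all *N* and all weights, which is what Bochner's theorem (below, for the stationary case) provides.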

## Simplifications with stationarity

In the case of a weakly stationary random field, where

- \({\displaystyle C(x_{i},x_{j})=C(x_{i}+h,x_{j}+h)\,}\)

for any lag *h*, the covariance function can be represented by a one-parameter function

- \({\displaystyle C_{s}(h)=C(0,h)=C(x,x+h)\,}\)

which is called a *covariogram* and also a *covariance function*. The two-argument *C*(*x*_{i}, *x*_{j}) can then be recovered from *C*_{s}(*h*) by:

- \({\displaystyle C(x,y)=C_{s}(y-x).\,}\)

The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.^{[2]}
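Bochner's theorem states that a continuous stationary function is a valid covariance function exactly when its Fourier transform (the spectral density) is non-negative. A numerical sketch of this check, using the exponential covariance *C*_{s}(*h*) = exp(−|*h*|/*V*) as the test case:

```python
import numpy as np

# Numerical illustration of Bochner's theorem: sample the stationary
# covariance C_s(h) = exp(-|h| / V) on a symmetric grid and check that its
# discrete Fourier transform (spectral density) is non-negative.
V = 1.0
h = np.linspace(-50.0, 50.0, 4097)  # symmetric grid containing h = 0
C_s = np.exp(-np.abs(h) / V)

# ifftshift centers h = 0 at index 0, so the FFT of the (circularly even)
# sequence is real; fftshift reorders the frequencies for inspection
spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(C_s))).real

print(spectrum.min())  # strictly positive for this kernel
```

Here the spectrum is positive everywhere, consistent with the exponential kernel being admissible; a kernel failing this test (for example, a hard-truncated indicator of small |*h*| in two or more dimensions) would show negative spectral values.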

## Parametric families of covariance functions

A simple stationary parametric covariance function is the "exponential covariance function"

- \({\displaystyle C(d)=\exp(-d/V)}\)

where *V* is a scaling parameter, and *d* = *d*(*x*, *y*) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are continuous but not smooth. The "squared exponential covariance function"

- \({\displaystyle C(d)=\exp(-(d/V)^{2})}\)

is a stationary covariance function with smooth sample paths.
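The contrast in sample-path roughness between the two kernels can be seen by drawing Gaussian-process paths from each and comparing their small-scale increments. A sketch (grid, path counts, and the jitter term are implementation choices, not part of the definitions above):

```python
import numpy as np

rng = np.random.default_rng(2)

# Evaluation grid and pairwise distances d(x_i, x_j)
x = np.linspace(0.0, 1.0, 200)
d = np.abs(x[:, None] - x[None, :])

def sample_paths(K, n=50):
    # Draw n zero-mean Gaussian paths with covariance matrix K;
    # a small jitter on the diagonal keeps the Cholesky factor stable
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))
    return L @ rng.standard_normal((len(K), n))

V = 1.0
paths_exp = sample_paths(np.exp(-d / V))        # exponential kernel
paths_se = sample_paths(np.exp(-(d / V) ** 2))  # squared exponential kernel

# Mean squared increment between neighboring grid points: the rough
# (exponential) paths fluctuate far more at small scales
msi_exp = np.mean(np.diff(paths_exp, axis=0) ** 2)
msi_se = np.mean(np.diff(paths_se, axis=0) ** 2)
print(msi_exp > msi_se)  # True
```

The expected squared increment at lag δ is 2(1 − *C*(δ)), which shrinks linearly in δ for the exponential kernel but quadratically for the squared exponential, matching the rough-versus-smooth behavior of the paths.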

The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential covariance function as a special case (smoothness parameter ν = 1/2) and the squared exponential covariance function as a limiting case (ν → ∞).
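For half-integer ν the Matérn covariance has simple closed forms. A sketch of those cases (the length-scale parameterization with ρ here follows one common convention; others rescale distances differently), verifying that ν = 1/2 reproduces the exponential kernel:

```python
import numpy as np

def matern(d, rho=1.0, nu=0.5):
    # Matérn covariance at distance d for half-integer smoothness nu,
    # using closed forms; nu = 1/2 is the exponential kernel, and the
    # nu -> infinity limit is the squared exponential kernel.
    if nu == 0.5:
        return np.exp(-d / rho)
    if nu == 1.5:
        s = np.sqrt(3.0) * d / rho
        return (1.0 + s) * np.exp(-s)
    if nu == 2.5:
        s = np.sqrt(5.0) * d / rho
        return (1.0 + s + s**2 / 3.0) * np.exp(-s)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented in this sketch")

d = np.linspace(0.0, 3.0, 7)
print(np.allclose(matern(d, nu=0.5), np.exp(-d)))  # True: exponential kernel
```

Larger ν gives smoother sample paths (paths are ⌈ν⌉ − 1 times differentiable), which is why ν is called the smoothness parameter.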

## See also

- Variogram
- Random field
- Stochastic process
- Kriging
- Autocorrelation function
- Correlation function
- Positive-definite kernel