gpytorch.means

Mean

class gpytorch.means.Mean[source]

Base class for all GPyTorch mean functions. Subclasses implement forward(input), which maps an (..., n, d) batch of inputs to the prior mean at those points.

Standard Means

ZeroMean

class gpytorch.means.ZeroMean(batch_shape=torch.Size([]), **kwargs)[source]

A prior mean function fixed at zero, i.e.:

\[\mu(\mathbf x) = 0\]

ConstantMean

class gpytorch.means.ConstantMean(constant_prior=None, constant_constraint=None, batch_shape=torch.Size([]), **kwargs)[source]

A (non-zero) constant prior mean function, i.e.:

\[\mu(\mathbf x) = C\]

where \(C\) is a learned constant.

Parameters:
  • constant_prior (Prior, optional) – Prior for constant parameter \(C\).

  • constant_constraint (Interval, optional) – Constraint for constant parameter \(C\).

  • batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).

  • kwargs (Any) –

Variables:

constant (torch.Tensor) – \(C\) parameter

LinearMean

class gpytorch.means.LinearMean(input_size, batch_shape=torch.Size([]), bias=True)[source]

A linear prior mean function, i.e.:

\[\mu(\mathbf x) = \mathbf W \cdot \mathbf x + B\]

where \(\mathbf W\) and \(B\) are learned constants.

Specialty Means

MultitaskMean

class gpytorch.means.MultitaskMean(base_means, num_tasks)[source]

A convenience gpytorch.means.Mean implementation for defining a different mean for each task in a multitask model. Expects a list of num_tasks mean functions; each is applied to the input data in forward(), and the results are returned as an n x t matrix of means, one column per task.

forward(input)[source]

Evaluate each mean in self.base_means on the input data, and return the results as an n x t matrix of means.

ConstantMeanGrad

class gpytorch.means.ConstantMeanGrad(prior=None, batch_shape=torch.Size([]), **kwargs)[source]

A (non-zero) constant prior mean function and its first derivative, i.e.:

\[\begin{split}\mu(\mathbf x) &= C \\ \nabla \mu(\mathbf x) &= \mathbf 0\end{split}\]

where \(C\) is a learned constant.

ConstantMeanGradGrad

class gpytorch.means.ConstantMeanGradGrad(prior=None, batch_shape=torch.Size([]), **kwargs)[source]

A (non-zero) constant prior mean function and its first and second derivatives, i.e.:

\[\begin{split}\mu(\mathbf x) &= C \\ \nabla \mu(\mathbf x) &= \mathbf 0 \\ \nabla^2 \mu(\mathbf x) &= \mathbf 0\end{split}\]

where \(C\) is a learned constant.

Parameters:
  • prior (Prior, optional) – Prior for constant parameter \(C\).

  • batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).

  • kwargs (Any) –

Variables:

constant (torch.Tensor) – \(C\) parameter

LinearMeanGrad

class gpytorch.means.LinearMeanGrad(input_size, batch_shape=torch.Size([]), bias=True)[source]

A linear prior mean function and its first derivative, i.e.:

\[\begin{split}\mu(\mathbf x) &= \mathbf W \cdot \mathbf x + B \\ \nabla \mu(\mathbf x) &= \mathbf W\end{split}\]

where \(\mathbf W\) and \(B\) are learned constants.

Parameters:
  • input_size (int) – dimension of input \(\mathbf x\).

  • batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).

  • bias (bool, optional) – Whether the bias term \(B\) should be included in the mean (default: True).

LinearMeanGradGrad

class gpytorch.means.LinearMeanGradGrad(input_size, batch_shape=torch.Size([]), bias=True)[source]

A linear prior mean function and its first and second derivatives, i.e.:

\[\begin{split}\mu(\mathbf x) &= \mathbf W \cdot \mathbf x + B \\ \nabla \mu(\mathbf x) &= \mathbf W \\ \nabla^2 \mu(\mathbf x) &= \mathbf 0\end{split}\]

where \(\mathbf W\) and \(B\) are learned constants.

Parameters:
  • input_size (int) – dimension of input \(\mathbf x\).

  • batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).

  • bias (bool, optional) – Whether the bias term \(B\) should be included in the mean (default: True).
