gpytorch.means¶
Mean¶
Standard Means¶
ZeroMean¶
ConstantMean¶
- class gpytorch.means.ConstantMean(constant_prior=None, constant_constraint=None, batch_shape=torch.Size([]), **kwargs)[source]¶
A (non-zero) constant prior mean function, i.e.:
\[\mu(\mathbf x) = C\]where \(C\) is a learned constant.
- Parameters:
constant_prior (Prior, optional) – Prior for constant parameter \(C\).
constant_constraint (Interval, optional) – Constraint for constant parameter \(C\).
batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).
kwargs (Any) –
- Variables:
constant (torch.Tensor) – \(C\) parameter
LinearMean¶
Specialty Means¶
MultitaskMean¶
- class gpytorch.means.MultitaskMean(base_means, num_tasks)[source]¶
Convenience gpytorch.means.Mean implementation for defining a different mean for each task in a multitask model. Expects a list of num_tasks different mean functions, each of which is applied to the given data in forward() and returned as an n x t matrix of means, one for each task.
ConstantMeanGrad¶
ConstantMeanGradGrad¶
- class gpytorch.means.ConstantMeanGradGrad(prior=None, batch_shape=torch.Size([]), **kwargs)[source]¶
A (non-zero) constant prior mean function and its first and second derivatives, i.e.:
\[\begin{split}\mu(\mathbf x) &= C \\ \nabla \mu(\mathbf x) &= \mathbf 0 \\ \nabla^2 \mu(\mathbf x) &= \mathbf 0\end{split}\]where \(C\) is a learned constant.
- Parameters:
prior (Prior, optional) – Prior for constant parameter \(C\).
batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).
kwargs (Any) –
- Variables:
constant (torch.Tensor) – \(C\) parameter
LinearMeanGrad¶
- class gpytorch.means.LinearMeanGrad(input_size, batch_shape=torch.Size([]), bias=True)[source]¶
A linear prior mean function and its first derivative, i.e.:
\[\begin{split}\mu(\mathbf x) &= \mathbf W \cdot \mathbf x + B \\ \nabla \mu(\mathbf x) &= \mathbf W\end{split}\]where \(\mathbf W\) and \(B\) are learned constants.
- Parameters:
input_size (int) – dimension of input \(\mathbf x\).
batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).
bias (bool, optional) – Flag for whether the bias \(B\) should be used in the mean (default: True).
- Variables:
weights (torch.Tensor) – \(\mathbf W\) parameter
bias (torch.Tensor) – \(B\) parameter
LinearMeanGradGrad¶
- class gpytorch.means.LinearMeanGradGrad(input_size, batch_shape=torch.Size([]), bias=True)[source]¶
A linear prior mean function and its first and second derivatives, i.e.:
\[\begin{split}\mu(\mathbf x) &= \mathbf W \cdot \mathbf x + B \\ \nabla \mu(\mathbf x) &= \mathbf W \\ \nabla^2 \mu(\mathbf x) &= \mathbf 0\end{split}\]where \(\mathbf W\) and \(B\) are learned constants.
- Parameters:
input_size (int) – dimension of input \(\mathbf x\).
batch_shape (torch.Size, optional) – The batch shape of the learned constant(s) (default: []).
bias (bool, optional) – Flag for whether the bias \(B\) should be used in the mean (default: True).
- Variables:
weights (torch.Tensor) – \(\mathbf W\) parameter
bias (torch.Tensor) – \(B\) parameter