gpytorch.functions

Functions

gpytorch.add_diag(input, diag)[source]

Adds a diagonal component diag to the input matrix input (for a scalar diag, this computes input + diag * I).

Args:
  • input (Tensor (nxn) or (bxnxn)) - Tensor or LazyTensor wrapping the matrix to add a diagonal component to
  • diag (scalar, Tensor (n), Tensor (bxn), or Tensor (bx1)) - Diagonal component to add to the tensor
Returns:
  • Tensor (nxn or bxnxn)
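
For instance, a minimal sketch (the random matrix and values below are illustrative only):

    import torch
    import gpytorch

    base = torch.randn(5, 5)
    covar = base @ base.t()                        # symmetric 5x5 matrix
    noise = 0.1 * torch.ones(5)                    # per-element diagonal component
    noisy_covar = gpytorch.add_diag(covar, noise)  # covar + diag(noise)
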
gpytorch.add_jitter(mat, jitter_val=0.001)[source]

Adds “jitter” to the diagonal of a matrix, ensuring that a matrix that should be positive definite remains numerically positive definite.

Args:
  • mat (matrix nxn) - Positive definite matrix
  • jitter_val (float) - Amount to add to the diagonal (default: 0.001)

Returns: (matrix nxn)
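
A minimal sketch of the intended use (the test matrix is constructed for illustration):

    import torch
    import gpytorch

    base = torch.randn(4, 4)
    mat = base @ base.t()  # PSD in exact arithmetic, possibly singular numerically
    stable = gpytorch.add_jitter(mat, jitter_val=0.001)  # mat + 0.001 * I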

gpytorch.dsmm(sparse_mat, dense_mat)[source]

Performs the (batch) matrix multiplication \(S D\), where \(S\) is a sparse matrix and \(D\) is a dense matrix.

Args:
  • sparse_mat (matrix (b x)mxn) - Tensor wrapping sparse matrix
  • dense_mat (matrix (b x)nxo) - Tensor wrapping dense matrix
Returns:
  • matrix (b x)mxo - Result
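
A minimal sketch (torch.sparse_coo_tensor is standard PyTorch; shapes are illustrative):

    import torch
    import gpytorch

    # 3x4 sparse COO matrix with nonzeros at positions (0, 1) and (2, 3)
    indices = torch.tensor([[0, 2], [1, 3]])
    values = torch.tensor([1.0, 2.0])
    sparse_mat = torch.sparse_coo_tensor(indices, values, (3, 4))

    dense_mat = torch.randn(4, 5)
    result = gpytorch.dsmm(sparse_mat, dense_mat)  # dense 3x5 result
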
gpytorch.inv_matmul(mat, right_tensor, left_tensor=None)[source]

Computes a linear solve (w.r.t. mat = \(A\)) with several right hand sides \(R\), i.e. computes

\begin{equation}
    A^{-1} R,
\end{equation}

where \(R\) is right_tensor and \(A\) is mat.

If left_tensor is supplied, computes

\begin{equation}
    L A^{-1} R,
\end{equation}

where \(L\) is left_tensor. Supplying this can reduce the number of CG calls required.

Args:
  • right_tensor (torch.tensor (n x k)) - Matrix \(R\) of right hand sides
  • left_tensor (torch.tensor (m x n)) - Optional matrix \(L\) to perform left multiplication with
Returns:
  • torch.tensor - \(A^{-1}R\) or \(LA^{-1}R\).
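
A minimal sketch (the positive definite matrix \(A\) is constructed for illustration):

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)  # positive definite system matrix
    R = torch.randn(6, 3)               # three right hand sides
    solves = gpytorch.inv_matmul(A, R)  # A^{-1} R, shape 6x3

    L = torch.randn(2, 6)
    projected = gpytorch.inv_matmul(A, R, left_tensor=L)  # L A^{-1} R, shape 2x3
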
gpytorch.inv_quad(mat, tensor)[source]

Computes an inverse quadratic form (w.r.t. mat) with several right hand sides, i.e. computes \(\mathrm{tr}( \text{tensor}^\top \text{mat}^{-1} \text{tensor} )\).

Args:
  • tensor (tensor nxk) - Vector (or matrix) for the inverse quadratic term
Returns:
  • tensor - \(\mathrm{tr}( \text{tensor}^\top \text{mat}^{-1} \text{tensor} )\)
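
A minimal sketch along the same lines:

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)  # positive definite
    T = torch.randn(6, 4)
    quad = gpytorch.inv_quad(A, T)      # tr(T^T A^{-1} T), a scalar tensor
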
gpytorch.inv_quad_logdet(mat, inv_quad_rhs=None, logdet=False, reduce_inv_quad=True)[source]

Computes an inverse quadratic form (w.r.t. mat) with several right hand sides, i.e. computes \(\mathrm{tr}( \text{tensor}^\top \text{mat}^{-1} \text{tensor} )\). In addition, computes an (approximate) log determinant of the matrix.

Args:
  • inv_quad_rhs (tensor nxk) - Vector (or matrix) for the inverse quadratic term
  • logdet (bool) - Whether to also compute the log determinant (default: False)
Returns:
  • scalar - \(\mathrm{tr}( \text{tensor}^\top \text{mat}^{-1} \text{tensor} )\)
  • scalar - log determinant
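
A minimal sketch requesting both quantities at once (which lets the two terms share the underlying solves):

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)  # positive definite
    T = torch.randn(6, 4)
    inv_quad_term, logdet_term = gpytorch.inv_quad_logdet(
        A, inv_quad_rhs=T, logdet=True
    )
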
gpytorch.matmul(mat, rhs)[source]

Computes a matrix multiplication between a matrix (mat) and a right hand side (rhs). If mat is a tensor, this is equivalent to torch.matmul; unlike torch.matmul, however, this function also works on lazy tensors.

Args:
  • mat (matrix nxn) - left hand side matrix
  • rhs (matrix nxk) - rhs matrix or vector
Returns:
  • matrix nxk
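
A minimal sketch with a plain tensor (in which case the result matches torch.matmul):

    import torch
    import gpytorch

    mat = torch.randn(5, 5)
    rhs = torch.randn(5, 2)
    out = gpytorch.matmul(mat, rhs)  # 5x2 result, same as torch.matmul(mat, rhs)
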
gpytorch.logdet(mat)[source]

Computes an (approximate) log determinant of the matrix

Returns:
  • scalar - log determinant
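
A minimal sketch:

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)  # positive definite
    ld = gpytorch.logdet(A)             # scalar (approximate) log determinant
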
gpytorch.log_normal_cdf(x)[source]

Computes the element-wise log standard normal CDF of an input tensor x.

This function should always be preferred over calling normal_cdf and taking the log manually, as it is more numerically stable.
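
A minimal sketch:

    import torch
    import gpytorch

    x = torch.linspace(-3.0, 3.0, 7)
    log_probs = gpytorch.log_normal_cdf(x)  # elementwise log CDF, same shape as x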

gpytorch.root_decomposition(mat)[source]

Returns a (usually low-rank) root decomposition lazy tensor of a PSD matrix. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.
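
A minimal sketch (the call below only constructs the lazy tensor; what it exposes follows the LazyTensor API):

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)     # PSD matrix
    root = gpytorch.root_decomposition(A)  # lazy tensor for a root decomposition of A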

gpytorch.root_inv_decomposition(mat, initial_vectors=None, test_vectors=None)[source]

Returns a (usually low-rank) root decomposition lazy tensor of the inverse of a PSD matrix. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of the matrix inverse.
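
A minimal sketch, analogous to root_decomposition above:

    import torch
    import gpytorch

    base = torch.randn(6, 6)
    A = base @ base.t() + torch.eye(6)             # PSD matrix
    inv_root = gpytorch.root_inv_decomposition(A)  # lazy tensor for a root decomposition of A^{-1}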