gpytorch.functions¶
Functions¶
- gpytorch.add_diagonal(input, diag)[source]¶
Adds a diagonal component \(\mathbf d\) to the matrix \(\mathbf A\).
- Parameters:
input (Union) – The matrix \(\mathbf A\) (… x N x N).
diag (Tensor) – The diagonal component \(\mathbf d\) to add.
- Returns:
\(\mathbf A + \text{diag}(\mathbf d)\), where \(\mathbf A\) is the linear operator and \(\mathbf d\) is the diagonal component
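A minimal usage sketch (the matrices are illustrative; calling to_dense() to materialize the returned LinearOperator is an assumption, not confirmed by this page):

```python
import torch
import gpytorch

# Illustrative values: add a per-element diagonal d to a 3 x 3 identity.
A = torch.eye(3)
d = torch.tensor([0.1, 0.2, 0.3])
A_plus_diag = gpytorch.add_diagonal(A, d)   # represents A + diag(d)
print(A_plus_diag.to_dense())               # assumed materialization call
```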
- gpytorch.add_jitter(input, jitter_val=0.001)[source]¶
Adds jitter (i.e., a small diagonal component) to the matrix \(\mathbf A\). This is equivalent to calling add_diagonal() with a scalar tensor.
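For example, a sketch of stabilizing a singular matrix before downstream solves (the matrix is illustrative):

```python
import torch
import gpytorch

# K is rank-1 and singular; jitter makes downstream solves well posed.
K = torch.ones(4, 4)
K_jittered = gpytorch.add_jitter(K, jitter_val=1e-3)   # K + 1e-3 * I
```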
- gpytorch.dsmm(sparse_mat, dense_mat)[source]¶
Performs the (batch) matrix multiplication \(\mathbf{SD}\) where \(\mathbf S\) is a sparse matrix and \(\mathbf D\) is a dense matrix.
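A brief sketch, using an illustrative sparse COO tensor:

```python
import torch
import gpytorch

# S is a 3 x 3 sparse COO matrix; D is dense.
indices = torch.tensor([[0, 1, 2], [2, 0, 1]])
values = torch.tensor([1.0, 2.0, 3.0])
S = torch.sparse_coo_tensor(indices, values, (3, 3))
D = torch.randn(3, 2)
SD = gpytorch.dsmm(S, D)   # same result as S.to_dense() @ D
```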
- gpytorch.diagonalization(input, method=None)[source]¶
Returns a (usually partial) diagonalization of a symmetric positive definite matrix (or batch of matrices) \(\mathbf A\). Options for method are either “lanczos” or “symeig”: “lanczos” runs Lanczos, while “symeig” runs LinearOperator.symeig.
- Parameters:
input (Union) – The matrix (or batch of matrices) \(\mathbf A\) (… x N x N).
method (Optional) – Either “lanczos” or “symeig”.
- Returns:
Eigenvalues and eigenvectors representing the diagonalization.
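A minimal sketch (the random SPD matrix is illustrative):

```python
import torch
import gpytorch

# Build an illustrative symmetric positive definite matrix.
M = torch.randn(6, 6)
A = M @ M.T + 6 * torch.eye(6)
evals, evecs = gpytorch.diagonalization(A, method="symeig")
# evecs @ torch.diag(evals) @ evecs.T reconstructs A up to numerical error.
```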
- gpytorch.inv_quad(input, inv_quad_rhs, reduce_inv_quad=True)[source]¶
Computes an inverse quadratic form (w.r.t. \(\mathbf A\)) with several right hand sides, i.e.:
\[\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right),\]
where \(\mathbf A\) is a positive definite matrix (or batch of matrices) and \(\mathbf R\) represents the right hand sides (inv_quad_rhs).
If reduce_inv_quad is set to false, the function instead computes
\[\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right).\]
- Parameters:
input (Union) – \(\mathbf A\) - the positive definite matrix (… x N x N)
inv_quad_rhs (Tensor) – \(\mathbf R\) - the right hand sides of the inverse quadratic term (… x N x M)
reduce_inv_quad (bool) – Whether to compute \(\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)\) or \(\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)\).
- Returns:
The inverse quadratic term. If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
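A minimal sketch showing both reduction modes (the matrices are illustrative):

```python
import torch
import gpytorch

M = torch.randn(5, 5)
A = M @ M.T + 5 * torch.eye(5)       # illustrative SPD matrix
R = torch.randn(5, 3)
trace_term = gpytorch.inv_quad(A, R)                      # tr(R^T A^-1 R)
per_rhs = gpytorch.inv_quad(A, R, reduce_inv_quad=False)  # diag(...), shape (3,)
```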
- gpytorch.inv_quad_logdet(input, inv_quad_rhs=None, logdet=False, reduce_inv_quad=True)[source]¶
Calls both inv_quad() and logdet() on a positive definite matrix (or batch) \(\mathbf A\). However, calling this method is far more efficient and stable than calling each method independently.
- Parameters:
input (Union) – \(\mathbf A\) - the positive definite matrix (… x N x N)
inv_quad_rhs (Optional) – \(\mathbf R\) - the right hand sides of the inverse quadratic term (… x N x M)
logdet (bool) – Whether or not to compute the logdet term \(\log \vert \mathbf A \vert\).
reduce_inv_quad (bool) – Whether to compute \(\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)\) or \(\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)\).
- Returns:
The inverse quadratic term (or None), and the logdet term (or None). If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
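For instance, the two terms of a zero-mean Gaussian log-density can be obtained in one call (a sketch with illustrative values):

```python
import math

import torch
import gpytorch

M = torch.randn(5, 5)
A = M @ M.T + 5 * torch.eye(5)       # illustrative covariance
r = torch.randn(5, 1)                # illustrative residual
inv_quad_term, logdet_term = gpytorch.inv_quad_logdet(A, r, logdet=True)
# Standard multivariate normal log-density (N = 5, zero mean).
log_prob = -0.5 * (inv_quad_term + logdet_term + 5 * math.log(2 * math.pi))
```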
- gpytorch.pivoted_cholesky(input, rank, error_tol=None, return_pivots=False)[source]¶
Performs a partial pivoted Cholesky factorization \(\mathbf L \mathbf L^\top \approx \mathbf A\) of a positive definite matrix (or batch of matrices). The partial pivoted Cholesky factor \(\mathbf L \in \mathbb R^{N \times \text{rank}}\) forms a low rank approximation to the LinearOperator.
The pivots are selected greedily, corresponding to the maximum diagonal element in the residual after each Cholesky iteration. See Harbrecht et al., 2012.
- Parameters:
input (Union) – The matrix (or batch of matrices) \(\mathbf A\) (… x N x N).
rank (int) – The size of the partial pivoted Cholesky factor.
error_tol (Optional) – Defines an optional stopping criterion. If the residual of the factorization is less than error_tol, then the factorization will exit early. This will result in a \(\leq \text{rank}\) factor.
return_pivots (bool) – Whether or not to return the pivots alongside the partial pivoted Cholesky factor.
- Returns:
The … x N x rank factor (and optionally the … x N pivots if return_pivots is True).
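A minimal sketch of both call forms (the SPD matrix is illustrative):

```python
import torch
import gpytorch

M = torch.randn(6, 6)
A = M @ M.T + 6 * torch.eye(6)       # illustrative SPD matrix
L3 = gpytorch.pivoted_cholesky(A, rank=3)                 # 6 x 3 factor
_, pivots = gpytorch.pivoted_cholesky(A, rank=3, return_pivots=True)
# L3 @ L3.T is a rank-3 approximation of A.
```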
- gpytorch.root_decomposition(input, method=None)[source]¶
Returns a (usually low-rank) root decomposition linear operator of the positive definite matrix (or batch of matrices) \(\mathbf A\). This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.
- Parameters:
input (Union) – The matrix (or batch of matrices) \(\mathbf A\) (… x N x N).
method (Optional) – Root decomposition method to use.
- Returns:
A tensor \(\mathbf R\) such that \(\mathbf R \mathbf R^\top \approx \mathbf A\).
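A sketch of the sampling use case mentioned above; the .root accessor used to pull out the factor is an assumption, not confirmed by this page:

```python
import torch
import gpytorch

M = torch.randn(6, 6)
A = M @ M.T + 6 * torch.eye(6)        # illustrative SPD matrix
root_op = gpytorch.root_decomposition(A)
R = root_op.root                      # assumed accessor for the root factor
sample = R @ torch.randn(R.size(-1))  # zero-mean sample with covariance ~ A
```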
- gpytorch.root_inv_decomposition(input, initial_vectors=None, test_vectors=None, method=None)[source]¶
Returns a (usually low-rank) inverse root decomposition linear operator of the PSD LinearOperator \(\mathbf A\). This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.
The root_inv_decomposition is performed using a partial Lanczos tridiagonalization.
- Parameters:
input (Union) – The matrix (or batch of matrices) \(\mathbf A\) (… x N x N).
initial_vectors (Optional) – Vectors used to initialize the Lanczos decomposition. The best initialization vector (determined by test_vectors) will be chosen.
test_vectors (Optional) – Vectors used to test the accuracy of the decomposition.
method (Optional) – Root decomposition method to use (symeig, diagonalization, lanczos, or cholesky).
- Returns:
A tensor \(\mathbf R\) such that \(\mathbf R \mathbf R^\top \approx \mathbf A^{-1}\).
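An analogous sketch for the inverse root; as above, the .root accessor is an assumption:

```python
import torch
import gpytorch

M = torch.randn(6, 6)
A = M @ M.T + 6 * torch.eye(6)        # illustrative SPD matrix
inv_root_op = gpytorch.root_inv_decomposition(A, method="lanczos")
R = inv_root_op.root                  # assumed accessor for the root factor
sample = R @ torch.randn(R.size(-1))  # covariance ~ A^{-1}
```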
- gpytorch.solve(input, rhs, lhs=None)[source]¶
Given a positive definite matrix (or batch of matrices) \(\mathbf A\), computes a linear solve with right hand side \(\mathbf R\):
\[\begin{equation} \mathbf A^{-1} \mathbf R, \end{equation}\]
where \(\mathbf R\) is rhs and \(\mathbf A\) is the LinearOperator.
Note
Unlike torch.linalg.solve(), this function can take an optional lhs argument. If this is supplied, gpytorch.solve() computes
\[\begin{equation} \mathbf L \mathbf A^{-1} \mathbf R, \end{equation}\]
where \(\mathbf L\) is lhs. Supplying this can reduce the number of solver calls required in the backward pass.
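A minimal sketch of both the plain and left-multiplied forms (the matrices are illustrative):

```python
import torch
import gpytorch

M = torch.randn(5, 5)
A = M @ M.T + 5 * torch.eye(5)   # illustrative SPD matrix
R = torch.randn(5, 2)
X = gpytorch.solve(A, R)         # A^{-1} R
L = torch.randn(3, 5)
Y = gpytorch.solve(A, R, lhs=L)  # L A^{-1} R in one fused call
```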
- gpytorch.sqrt_inv_matmul(input, rhs, lhs=None)[source]¶
Given a positive definite matrix (or batch of matrices) \(\mathbf A\) and a right hand side \(\mathbf R\), computes
\[\begin{equation} \mathbf A^{-1/2} \mathbf R. \end{equation}\]
If lhs is supplied, computes
\[\begin{equation} \mathbf L \mathbf A^{-1/2} \mathbf R, \end{equation}\]
where \(\mathbf L\) is lhs. (Supplying lhs can reduce the number of solver calls required in the backward pass.)
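A minimal sketch of the basic call (the SPD matrix is illustrative):

```python
import torch
import gpytorch

M = torch.randn(5, 5)
A = M @ M.T + 5 * torch.eye(5)      # illustrative SPD matrix
R = torch.randn(5, 2)
X = gpytorch.sqrt_inv_matmul(A, R)  # A^{-1/2} R
```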