# gpytorch.functions

## Functions

gpytorch.add_diagonal(input, diag)[source]

Adds an element to the diagonal of the matrix $$\mathbf A$$.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• diag (torch.Tensor) – Diagonal to add

Return type:

LinearOperator

Returns:

$$\mathbf A + \text{diag}(\mathbf d)$$, where $$\mathbf A$$ is the linear operator and $$\mathbf d$$ is the diagonal component
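A dense-tensor sketch of the quantity this returns (gpytorch instead builds a lazy LinearOperator rather than materializing the sum):

```python
import torch

# Dense analogue of add_diagonal: A + diag(d).
A = torch.eye(3)
d = torch.tensor([0.1, 0.2, 0.3])
result = A + torch.diag_embed(d)  # diag_embed also handles batched d
# the diagonal is now [1.1, 1.2, 1.3]
```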

gpytorch.add_jitter(input, jitter_val=0.001)[source]

Adds jitter (i.e., a small diagonal component) to the matrix input. This is equivalent to calling add_diagonal() with a scalar tensor.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• jitter_val (float) – The diagonal component to add

Return type:

LinearOperator or torch.Tensor

Returns:

$$\mathbf A + \alpha \mathbf I$$, where $$\mathbf A$$ is the linear operator and $$\alpha$$ is jitter_val.

gpytorch.dsmm(sparse_mat, dense_mat)[source]

Performs the (batch) matrix multiplication $$\mathbf{SD}$$ where $$\mathbf S$$ is a sparse matrix and $$\mathbf D$$ is a dense matrix.

Parameters:
• sparse_mat (torch.sparse.HalfTensor or torch.sparse.FloatTensor or torch.sparse.DoubleTensor) – Sparse matrix $$\mathbf S$$ (… x M x N)

• dense_mat (torch.Tensor) – Dense matrix $$\mathbf D$$ (… x N x O)

Return type:

torch.Tensor

Returns:

$$\mathbf S \mathbf D$$ (… x M x O)
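For the non-batch case, the same product can be formed with torch's built-in sparse-dense matmul (dsmm additionally defines an efficient backward pass):

```python
import torch

# Build a sparse S (M x N) in COO format and multiply by a dense D (N x O).
indices = torch.tensor([[0, 1, 2], [2, 0, 1]])  # rows, cols of the nonzeros
values = torch.tensor([1.0, 2.0, 3.0])
S = torch.sparse_coo_tensor(indices, values, (3, 3))
D = torch.ones(3, 2)
out = torch.sparse.mm(S, D)  # (M x O) dense result
```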

gpytorch.diagonalization(input, method=None)[source]

Returns a (usually partial) diagonalization of a symmetric positive definite matrix (or batch of matrices) $$\mathbf A$$. Options for method are either “lanczos” or “symeig”: “lanczos” runs Lanczos, while “symeig” runs LinearOperator.symeig.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• method (str, optional) – Specify the method to use (“lanczos” or “symeig”). The method will be determined based on size if not specified.

Return type:

tuple of torch.Tensor

Returns:

eigenvalues and eigenvectors representing the diagonalization.
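The “symeig” route corresponds to a full eigendecomposition; a dense sketch of the identity it satisfies:

```python
import torch

# For symmetric positive definite A, eigh yields A = Q diag(evals) Q^T.
A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
evals, evecs = torch.linalg.eigh(A)
reconstructed = evecs @ torch.diag(evals) @ evecs.T
```

The “lanczos” route produces the same kind of (eigenvalues, eigenvectors) pair, but from a partial tridiagonalization rather than a full decomposition.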

gpytorch.inv_quad(input, inv_quad_rhs, reduce_inv_quad=True)[source]

Computes an inverse quadratic form (w.r.t. the matrix input) with several right hand sides, i.e.:

$$$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right),$$$

where $$\mathbf A$$ is a positive definite matrix (or batch of matrices) and $$\mathbf R$$ represents the right hand sides (inv_quad_rhs).

If reduce_inv_quad is set to False (and inv_quad_rhs is supplied), the function instead computes

$$$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right).$$$

Parameters:
• input (LinearOperator or torch.Tensor) – $$\mathbf A$$ - the positive definite matrix (… x N x N)

• inv_quad_rhs (torch.Tensor) – $$\mathbf R$$ - the right hand sides of the inverse quadratic term (… x N x M)

• reduce_inv_quad (bool) – Whether to compute $$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$ or $$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$.

Return type:

torch.Tensor

Returns:

The inverse quadratic term. If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
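A dense reference computation for both reduction modes (gpytorch avoids forming $$\mathbf A^{-1}$$ explicitly; this is just the definition):

```python
import torch

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])   # positive definite
R = torch.randn(2, 5)
AinvR = torch.linalg.solve(A, R)              # A^{-1} R
quad = R.T @ AinvR                            # R^T A^{-1} R, shape (M x M)
trace_term = quad.trace()                     # reduce_inv_quad=True analogue
diag_term = quad.diagonal()                   # reduce_inv_quad=False analogue
```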

gpytorch.inv_quad_logdet(input, inv_quad_rhs=None, logdet=False, reduce_inv_quad=True)[source]

Computes both an inverse quadratic term and a logdet term on a positive definite matrix (or batch) $$\mathbf A$$. Calling this method is far more efficient and stable than calling inv_quad() and logdet() independently.

Parameters:
• input (LinearOperator or torch.Tensor) – $$\mathbf A$$ - the positive definite matrix (… x N x N)

• inv_quad_rhs (torch.Tensor, optional) – $$\mathbf R$$ - the right hand sides of the inverse quadratic term (… x N x M)

• logdet (bool) – Whether or not to compute the logdet term $$\log \vert \mathbf A \vert$$.

• reduce_inv_quad (bool) – Whether to compute $$\text{tr}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$ or $$\text{diag}\left( \mathbf R^\top \mathbf A^{-1} \mathbf R \right)$$.

Return type:

tuple of torch.Tensor

Returns:

The inverse quadratic term (or None), and the logdet term (or None). If reduce_inv_quad=True, the inverse quadratic term is of shape (…). Otherwise, it is (… x M).
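A dense sketch of why computing both terms jointly is cheap: a single Cholesky factorization serves both quantities (gpytorch uses iterative methods rather than a dense Cholesky, but the sharing of work is analogous):

```python
import torch

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
R = torch.randn(2, 3)
L = torch.linalg.cholesky(A)
logdet = 2.0 * torch.log(L.diagonal()).sum()   # log|A| from the factor
AinvR = torch.cholesky_solve(R, L)             # reuse the same factor
inv_quad = (R * AinvR).sum()                   # tr(R^T A^{-1} R)
```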

gpytorch.pivoted_cholesky(input, rank, error_tol=None, return_pivots=False)[source]

Performs a partial pivoted Cholesky factorization $$\mathbf L \mathbf L^\top \approx \mathbf A$$ of a positive definite matrix (or batch of matrices). The partial pivoted Cholesky factor $$\mathbf L \in \mathbb R^{N \times \text{rank}}$$ forms a low-rank approximation to the LinearOperator.

The pivots are selected greedily, corresponding to the maximum diagonal element in the residual after each Cholesky iteration. See Harbrecht et al., 2012.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rank (int) – The size of the partial pivoted Cholesky factor.

• error_tol (float, optional) – Defines an optional stopping criterion. If the residual of the factorization is less than error_tol, then the factorization will exit early, yielding a factor with fewer than rank columns.

• return_pivots (bool) – Whether or not to return the pivots alongside the partial pivoted Cholesky factor.

Return type:

torch.Tensor (or a tuple of torch.Tensor if return_pivots=True)

Returns:

The … x N x rank factor (and optionally the … x N pivots if return_pivots is True).
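A minimal dense sketch of the greedy iteration described above (the hypothetical `dense_pivoted_cholesky` helper below is for illustration only; gpytorch's implementation handles batching, early exit, and LinearOperator inputs):

```python
import torch

def dense_pivoted_cholesky(A, rank):
    """Greedy pivoted Cholesky: pivot on the largest residual diagonal."""
    n = A.shape[-1]
    L = torch.zeros(n, rank)
    d = A.diagonal().clone()          # residual diagonal
    pivots = []
    for k in range(rank):
        i = int(torch.argmax(d))      # greedy pivot choice
        pivots.append(i)
        L[:, k] = (A[:, i] - L[:, :k] @ L[i, :k]) / torch.sqrt(d[i])
        d = d - L[:, k] ** 2          # update the residual diagonal
        d[i] = 0.0                    # guard against round-off
    return L, pivots

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
L, pivots = dense_pivoted_cholesky(A, rank=2)  # full rank: L L^T = A
```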

gpytorch.root_decomposition(input, method=None)[source]

Returns a (usually low-rank) root decomposition linear operator of the positive definite matrix (or batch of matrices) $$\mathbf A$$. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• method (str, optional) – Which method to use to perform the root decomposition. Choices are: “cholesky”, “lanczos”, “symeig”, “pivoted_cholesky”, or “svd”.

Return type:

LinearOperator

Returns:

A tensor $$\mathbf R$$ such that $$\mathbf R \mathbf R^\top \approx \mathbf A$$.
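A dense sketch of the “symeig” route: scaling the eigenvectors by the square roots of the eigenvalues yields a root:

```python
import torch

# R = Q diag(sqrt(evals)) satisfies R R^T = A for symmetric PD A.
A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
evals, evecs = torch.linalg.eigh(A)
R = evecs * torch.sqrt(evals)        # scales column j by sqrt(evals[j])
```

Such a root can be used to draw samples: if z ~ N(0, I), then R @ z has covariance $$\mathbf R \mathbf R^\top \approx \mathbf A$$.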

gpytorch.root_inv_decomposition(input, initial_vectors=None, test_vectors=None, method=None)[source]

Returns a (usually low-rank) inverse root decomposition linear operator of the PSD LinearOperator $$\mathbf A$$. This can be used for sampling from a Gaussian distribution, or for obtaining a low-rank version of a matrix.

The root_inv_decomposition is performed using a partial Lanczos tridiagonalization.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• initial_vectors (torch.Tensor, optional) – Vectors used to initialize the Lanczos decomposition. The best initialization vector (determined by test_vectors) will be chosen.

• test_vectors (torch.Tensor, optional) – Vectors used to test the accuracy of the decomposition.

• method (str, optional) – Root decomposition method to use (“symeig”, “diagonalization”, “lanczos”, or “cholesky”).

Return type:

LinearOperator

Returns:

A tensor $$\mathbf R$$ such that $$\mathbf R \mathbf R^\top \approx \mathbf A^{-1}$$.
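The same eigendecomposition sketch with inverted eigenvalues gives a root of the inverse:

```python
import torch

# R = Q diag(1/sqrt(evals)) satisfies R R^T = A^{-1} for symmetric PD A.
A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
evals, evecs = torch.linalg.eigh(A)
R = evecs * evals.rsqrt()            # scales column j by 1/sqrt(evals[j])
```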

gpytorch.solve(input, rhs, lhs=None)[source]

Given a positive definite matrix (or batch of matrices) $$\mathbf A$$, computes a linear solve with right hand side $$\mathbf R$$:

$$$\mathbf A^{-1} \mathbf R,$$$

where $$\mathbf R$$ is rhs and $$\mathbf A$$ is input.

Note

Unlike torch.linalg.solve(), this function can take an optional lhs argument. If this is supplied, gpytorch.solve() computes

$$$\mathbf L \mathbf A^{-1} \mathbf R,$$$

where $$\mathbf L$$ is lhs. Supplying this can reduce the number of solver calls required in the backward pass.

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rhs (torch.Tensor) – $$\mathbf R$$ - the right hand side

• lhs (torch.Tensor, optional) – $$\mathbf L$$ - the left hand side

Return type:

torch.Tensor

Returns:

$$\mathbf A^{-1} \mathbf R$$ or $$\mathbf L \mathbf A^{-1} \mathbf R$$.
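A dense analogue of the two forms (the combined form is mathematically just a matmul after the solve; gpytorch's advantage is in how the backward pass reuses solver work):

```python
import torch

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
R = torch.randn(2, 3)
Lhs = torch.randn(5, 2)
x = torch.linalg.solve(A, R)          # A^{-1} R
y = Lhs @ x                           # L A^{-1} R, shape (5 x 3)
```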

gpytorch.sqrt_inv_matmul(input, rhs, lhs=None)[source]

Given a positive definite matrix (or batch of matrices) $$\mathbf A$$ and a right hand side $$\mathbf R$$, computes

$$$\mathbf A^{-1/2} \mathbf R.$$$

If lhs is supplied, computes

$$$\mathbf L \mathbf A^{-1/2} \mathbf R,$$$

where $$\mathbf L$$ is lhs. (Supplying lhs can reduce the number of solver calls required in the backward pass.)

Parameters:
• input (LinearOperator or torch.Tensor) – The matrix (or batch of matrices) $$\mathbf A$$ (… x N x N).

• rhs (torch.Tensor) – $$\mathbf R$$ - the right hand side

• lhs (torch.Tensor, optional) – $$\mathbf L$$ - the left hand side

Return type:

torch.Tensor

Returns:

$$\mathbf A^{-1/2} \mathbf R$$ or $$\mathbf L \mathbf A^{-1/2} \mathbf R$$.
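A dense reference for $$\mathbf A^{-1/2} \mathbf R$$ via an eigendecomposition (gpytorch instead uses an iterative scheme that never forms $$\mathbf A^{-1/2}$$ explicitly):

```python
import torch

A = torch.tensor([[4.0, 1.0], [1.0, 3.0]])
R = torch.randn(2, 3)
evals, evecs = torch.linalg.eigh(A)
A_inv_sqrt = (evecs * evals.rsqrt()) @ evecs.T   # A^{-1/2}
out = A_inv_sqrt @ R
```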