gpytorch.utils

Utilities

gpytorch.utils.cached(method=None, name=None, ignore_args=False)[source]

A decorator for specifying the name of a cache, so that the cached value can be modified elsewhere.
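
Example (a minimal sketch of typical usage; the Covariance class is hypothetical):
>>> import torch
>>> from gpytorch.utils import cached
>>>
>>> class Covariance(object):
>>>     def __init__(self, mat):
>>>         self.mat = mat
>>>
>>>     @cached(name="root")
>>>     def root(self):
>>>         # Computed on the first call; later calls reuse the value cached
>>>         # under the name "root", which other code can modify or evict
>>>         return torch.linalg.cholesky(self.mat)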

gpytorch.utils.contour_integral_quad(lazy_tensor, rhs, inverse=False, weights=None, shifts=None, max_lanczos_iter=20, num_contour_quadrature=None, shift_offset=0)[source]

Performs \(\mathbf K^{1/2} \mathbf b\) or \(\mathbf K^{-1/2} \mathbf b\) using contour integral quadrature.

Parameters:
  • lazy_tensor (gpytorch.lazy.LazyTensor) – LazyTensor representing \(\mathbf K\)
  • rhs (torch.Tensor) – Right hand side tensor \(\mathbf b\)
  • inverse (bool) – (default False) whether to compute \(\mathbf K^{1/2} \mathbf b\) (if False) or \(\mathbf K^{-1/2} \mathbf b\) (if True)
  • max_lanczos_iter (int) – (default 20) Maximum number of Lanczos iterations to run (to estimate eigenvalues)
  • num_contour_quadrature (int) – How many quadrature samples to use for the approximation. If set to None, the default from gpytorch.settings is used.
Return type:

torch.Tensor

Returns:

Approximation to \(\mathbf K^{1/2} \mathbf b\) or \(\mathbf K^{-1/2} \mathbf b\).
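
Example (a rough sketch of the documented call, assuming gpytorch.lazy.NonLazyTensor wraps a dense SPD matrix in versions matching this API):
>>> import torch
>>> from gpytorch.lazy import NonLazyTensor
>>> from gpytorch.utils import contour_integral_quad
>>>
>>> mat = torch.randn(50, 50)
>>> K = NonLazyTensor(mat @ mat.transpose(-1, -2) + 50 * torch.eye(50))  # SPD
>>> b = torch.randn(50, 1)
>>> res = contour_integral_quad(K, b)  # per the docs above, approximates K^{1/2} b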

gpytorch.utils.linear_cg(matmul_closure, rhs, n_tridiag=0, tolerance=None, eps=1e-10, stop_updating_after=1e-10, max_iter=None, max_tridiag_iter=None, initial_guess=None, preconditioner=None)[source]

Implements the linear conjugate gradients method for (approximately) solving systems of the form

lhs result = rhs

for positive definite and symmetric matrices.

Args:
  • matmul_closure - a function that performs matrix multiplication by the left-hand-side matrix lhs_mat
  • rhs - the right-hand side of the equation
  • n_tridiag - if greater than 0, also return a tridiagonalization of the first n_tridiag columns of rhs
  • tolerance - stop the solve when the max residual is less than this
  • eps - noise to add to prevent division by zero
  • stop_updating_after - will stop updating a vector after this residual norm is reached
  • max_iter - the maximum number of CG iterations
  • max_tridiag_iter - the maximum size of the tridiagonalization matrix
  • initial_guess - an initial guess at the solution result
  • preconditioner - a function that left-preconditions a supplied vector
Returns:
  • result - a solution to the system (if n_tridiag is 0)
  • result, tridiags - a solution to the system and the corresponding tridiagonal matrices (if n_tridiag > 0)
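
Example (a minimal sketch; the bound method A.matmul serves as matmul_closure):
>>> import torch
>>> from gpytorch.utils import linear_cg
>>>
>>> mat = torch.randn(100, 100)
>>> A = mat @ mat.transpose(-1, -2) + 100 * torch.eye(100)  # symmetric positive definite
>>> b = torch.randn(100, 1)
>>> x = linear_cg(A.matmul, b, max_iter=100)
>>> print((A @ x - b).norm())  # residual should be small
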
class gpytorch.utils.StochasticLQ(max_iter=15, num_random_probes=10)[source]

Implements an approximate log determinant calculation for symmetric positive definite matrices using stochastic Lanczos quadrature. For efficient calculation of derivatives, we additionally compute the trace of the inverse using the same probe vectors with which the log determinant was computed. For more details, see Dong et al., 2017.

evaluate(matrix_shape, eigenvalues, eigenvectors, funcs)[source]

Computes \(\mathrm{tr}(f(A))\) for an arbitrary list of functions, where \(f(A)\) is equivalent to applying the function elementwise to the eigenvalues of \(A\): if \(A = V \Lambda V^{T}\), then \(f(A) = V f(\Lambda) V^{T}\), where \(f(\Lambda)\) is applied elementwise. Note that calling this function with a list of functions to apply is significantly more efficient than calling it multiple times with one function: each additional function after the first requires negligible additional computation.

Args:
  • matrix_shape (torch.Size()) - size of underlying matrix (not including batch dimensions)
  • eigenvalues (Tensor n_probes x …batch_shape x k) - batches of eigenvalues from the Lanczos tridiagonal matrices
  • eigenvectors (Tensor n_probes x …batch_shape x k x k) - batches of eigenvectors from the Lanczos tridiagonal matrices
  • funcs (list of closures) - A list of functions [f_1,…,f_k]. tr(f_i(A)) is computed for each function.
    Each function in the list should expect a torch vector of eigenvalues as input and apply itself elementwise. For example, to compute \(\log\det(A) = \mathrm{tr}(\log(A))\), [lambda x: x.log()] would be a reasonable value of funcs.
Returns:
  • results (list of scalars) - The trace of each supplied function applied to the matrix, e.g.,
    [tr(f_1(A)),tr(f_2(A)),…,tr(f_k(A))].
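
Example (a sketch of the full stochastic Lanczos quadrature pipeline, using the Lanczos utilities documented below; an illustration, not the library's exact internal usage):
>>> import torch
>>> from gpytorch.utils import StochasticLQ
>>> from gpytorch.utils.lanczos import lanczos_tridiag, lanczos_tridiag_to_diag
>>>
>>> mat = torch.randn(40, 40)
>>> K = mat @ mat.transpose(-1, -2) + 40 * torch.eye(40)  # SPD
>>> # Lanczos from 10 random probe vectors, then eigendecompose each t_mat
>>> q_mat, t_mat = lanczos_tridiag(K.matmul, 15, K.dtype, K.device, K.shape, num_init_vecs=10)
>>> evals, evecs = lanczos_tridiag_to_diag(t_mat)
>>> (logdet_est,) = StochasticLQ().evaluate(K.shape, evals, evecs, [lambda x: x.log()])
>>> print(logdet_est, torch.logdet(K))  # stochastic estimate vs. exact value
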
gpytorch.utils.minres(matmul_closure, rhs, eps=1e-25, shifts=None, value=None, max_iter=None, preconditioner=None)[source]

Perform MINRES to find solutions to \((\mathbf K + \alpha \sigma \mathbf I) \mathbf x = \mathbf b\). Will find solutions for multiple shifts \(\sigma\) at the same time.

Parameters:
  • matmul_closure (callable) – Function to perform matmul with.
  • rhs (torch.Tensor) – The vector \(\mathbf b\) to solve against.
  • shifts (torch.Tensor) – (default None) The shift \(\sigma\) values. If set to None, then \(\sigma=0\).
  • value (float) – (default None) The multiplicative constant \(\alpha\). If set to None, then \(\alpha=0\).
  • max_iter (int) – (default None) The maximum number of minres iterations. If set to None, then uses the constant stored in gpytorch.settings.max_cg_iterations.
Return type:

torch.Tensor

Returns:

The solves \(\mathbf x\). The shape will correspond to the size of rhs and shifts.
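
Example (a minimal sketch; value=1.0 is passed so the shifts enter the system with \(\alpha = 1\)):
>>> import torch
>>> from gpytorch.utils import minres
>>>
>>> mat = torch.randn(30, 30)
>>> K = mat @ mat.transpose(-1, -2) + 30 * torch.eye(30)  # SPD
>>> b = torch.randn(30, 1)
>>> shifts = torch.tensor([0.0, 0.1, 1.0])
>>> x = minres(K.matmul, b, shifts=shifts, value=1.0)  # one solve per shift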

gpytorch.utils.prod(items)[source]

Returns the product of the given items.

gpytorch.utils.stable_pinverse(A: torch.Tensor) → torch.Tensor[source]

Compute a pseudoinverse of a matrix. Employs a stabilized QR decomposition.
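
Example (a sketch; the result should agree with torch.linalg.pinv):
>>> import torch
>>> from gpytorch.utils import stable_pinverse
>>>
>>> A = torch.randn(10, 4)
>>> print(torch.allclose(stable_pinverse(A), torch.linalg.pinv(A), atol=1e-5))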

gpytorch.utils.stable_qr(mat)[source]

Performs a QR decomposition on the batched matrix mat. We need this function because of:

  1. slow batched QR in pytorch (pytorch/pytorch#22573)
  2. possible singularity in R
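
Example (a sketch on a batch of matrices; Q @ R should reconstruct the input):
>>> import torch
>>> from gpytorch.utils import stable_qr
>>>
>>> mat = torch.randn(3, 6, 4)  # batch of three 6 x 4 matrices
>>> Q, R = stable_qr(mat)
>>> print(torch.allclose(Q @ R, mat, atol=1e-4))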

Lanczos Utilities

gpytorch.utils.lanczos.lanczos_tridiag(matmul_closure, max_iter, dtype, device, matrix_shape, batch_shape=torch.Size([]), init_vecs=None, num_init_vecs=1, tol=1e-05)[source]
gpytorch.utils.lanczos.lanczos_tridiag_to_diag(t_mat)[source]

Given a num_init_vecs x num_batch x k x k tridiagonal matrix t_mat, returns a num_init_vecs x num_batch x k set of eigenvalues and a num_init_vecs x num_batch x k x k set of eigenvectors.

TODO: perform the eigenvalue computations in batch mode.
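
Example (a sketch with the default single init vector; the Ritz values obtained from the tridiagonal matrix approximate the extreme eigenvalues of the operator):
>>> import torch
>>> from gpytorch.utils.lanczos import lanczos_tridiag, lanczos_tridiag_to_diag
>>>
>>> mat = torch.randn(40, 40)
>>> K = mat @ mat.transpose(-1, -2) + 40 * torch.eye(40)  # SPD
>>> q_mat, t_mat = lanczos_tridiag(K.matmul, 20, K.dtype, K.device, K.shape)
>>> evals, evecs = lanczos_tridiag_to_diag(t_mat)
>>> print(evals.max(), torch.linalg.eigvalsh(K).max())  # Ritz vs. true top eigenvalue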

Pivoted Cholesky Utilities

Quadrature Utilities

class gpytorch.utils.quadrature.GaussHermiteQuadrature1D(num_locs=None)[source]

Implements Gauss-Hermite quadrature for integrating a function with respect to several 1D Gaussian distributions in batch mode. Within GPyTorch, this is useful primarily for computing expected log likelihoods for variational inference.

This is implemented as a Module because Gauss-Hermite quadrature has a set of locations and weights that it should initialize once, but that should obey parent calls to .cuda(), .double(), etc.

forward(func, gaussian_dists)[source]

Runs Gauss-Hermite quadrature on the callable func, integrating against the Gaussian distributions specified by gaussian_dists.

Args:
  • func (callable): Function to integrate
  • gaussian_dists (Distribution): Either a MultivariateNormal whose covariance is assumed to be diagonal
    or a torch.distributions.Normal.
Returns:
  • Result of integrating func against each univariate Gaussian in gaussian_dists.
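
Example (a sketch: E[x^2] under N(mu, sigma^2) equals mu^2 + sigma^2, so the quadrature result should be close to [1., 5.]):
>>> import torch
>>> from gpytorch.utils.quadrature import GaussHermiteQuadrature1D
>>>
>>> quad = GaussHermiteQuadrature1D()
>>> dists = torch.distributions.Normal(torch.tensor([0.0, 1.0]), torch.tensor([1.0, 2.0]))
>>> print(quad(lambda x: x ** 2, dists))  # approximately tensor([1., 5.])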

Sparse Utilities

gpytorch.utils.sparse.bdsmm(sparse, dense)[source]

Batch dense-sparse matrix multiply
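
Example (a sketch of the non-batched case):
>>> import torch
>>> from gpytorch.utils.sparse import bdsmm
>>>
>>> indices = torch.tensor([[0, 1, 2], [2, 0, 1]])
>>> values = torch.tensor([1.0, 2.0, 3.0])
>>> sparse = torch.sparse_coo_tensor(indices, values, (3, 3))
>>> dense = torch.randn(3, 4)
>>> out = bdsmm(sparse, dense)  # same result as sparse.to_dense() @ dense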

gpytorch.utils.sparse.make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows)[source]

This produces a sparse tensor with a fixed number of non-zero entries in each column.

Args:
  • interp_indices - Tensor (batch_size) x num_cols x n_nonzero_entries
    A tensor containing the indices of the nonzero entries for each column
  • interp_values - Tensor (batch_size) x num_cols x n_nonzero_entries
    The corresponding values
  • num_rows - the number of rows in the result matrix
Returns:
  • SparseTensor - (batch_size) x num_cols x num_rows
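
Example (a sketch with hypothetical values: three columns with two nonzero entries each, in a matrix with four rows):
>>> import torch
>>> from gpytorch.utils.sparse import make_sparse_from_indices_and_values
>>>
>>> interp_indices = torch.tensor([[0, 1], [1, 2], [2, 3]])  # num_cols x n_nonzero_entries
>>> interp_values = torch.tensor([[0.7, 0.3], [0.7, 0.3], [0.7, 0.3]])
>>> sparse = make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows=4)
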
gpytorch.utils.sparse.sparse_eye(size)[source]

Returns the identity matrix as a sparse matrix

gpytorch.utils.sparse.sparse_getitem(sparse, idxs)[source]
gpytorch.utils.sparse.sparse_repeat(sparse, *repeat_sizes)[source]
gpytorch.utils.sparse.to_sparse(dense)[source]

Grid Utilities

class gpytorch.utils.grid.ScaleToBounds(lower_bound, upper_bound)[source]

Scale the input data so that it lies in between the lower and upper bounds.

In training (self.train()), this module adjusts the scaling factor to the minibatch of data. During evaluation (self.eval()), this module uses the scaling factor from the previous minibatch of data.

Parameters:
  • lower_bound (float) – lower bound of scaled data
  • upper_bound (float) – upper bound of scaled data
Example:
>>> train_x = torch.randn(10, 5)
>>> module = gpytorch.utils.grid.ScaleToBounds(lower_bound=-1., upper_bound=1.)
>>>
>>> module.train()
>>> scaled_train_x = module(train_x)  # Data should be between -0.95 and 0.95
>>>
>>> module.eval()
>>> test_x = torch.randn(10, 5)
>>> scaled_test_x = module(test_x)  # Scaling is based on train_x
gpytorch.utils.grid.choose_grid_size(train_inputs, ratio=1.0, kronecker_structure=True)[source]

Given some training inputs, determine a good grid size for KISS-GP.

Parameters:
  • train_inputs (torch.Tensor (… x n x d)) – the input data
  • ratio (float, optional) – number of grid points per data point (default: 1.)
  • kronecker_structure (bool, optional) – Whether or not the model will use Kronecker structure in the grid (set to True unless there is an additive or product decomposition in the prior)
Returns:

Grid size

Return type:

int
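
Example (a sketch; with Kronecker structure the suggested size appears to scale like ratio * n^(1/d) points per dimension):
>>> import torch
>>> from gpytorch.utils.grid import choose_grid_size
>>>
>>> train_x = torch.randn(100, 2)
>>> grid_size = choose_grid_size(train_x, ratio=1.0)
>>> print(grid_size)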

gpytorch.utils.grid.create_data_from_grid(grid: List[torch.Tensor]) → torch.Tensor[source]

Parameters:
  • grid (List[torch.Tensor]) – Each Tensor is a 1D set of increments for the grid in that dimension
Returns:

The set of points on the grid going by column-major order

Return type:

torch.Tensor

gpytorch.utils.grid.create_grid(grid_sizes: List[int], grid_bounds: List[Tuple[float, float]], extend: bool = True, device='cpu', dtype=torch.float32) → List[torch.Tensor][source]

Creates a grid, represented as a list of 1D Tensors giving the projections of the grid onto each dimension.

If extend=True, the grid is extended two points past the specified boundary, which can be important for getting good grid interpolations.

Parameters:
  • grid_sizes (List[int]) – Sizes of each grid dimension
  • grid_bounds (List[Tuple[float, float]]) – Lower and upper bounds of each grid dimension
  • device (torch.device, optional) – target device for output (default: cpu)
  • dtype (torch.dtype, optional) – target dtype for output (default: torch.float)
Returns:

Grid points for each dimension. Element i of the list is a 1D torch.Tensor with grid_sizes[i] points.

Return type:

List[torch.Tensor]
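
Example (a sketch: a 3 x 4 grid over [0, 1] x [-1, 1], materialized into points):
>>> import torch
>>> from gpytorch.utils.grid import create_grid, create_data_from_grid
>>>
>>> grid = create_grid(grid_sizes=[3, 4], grid_bounds=[(0.0, 1.0), (-1.0, 1.0)])
>>> points = create_data_from_grid(grid)
>>> print([g.shape for g in grid], points.shape)  # [torch.Size([3]), torch.Size([4])] and torch.Size([12, 2])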

gpytorch.utils.grid.scale_to_bounds(x, lower_bound, upper_bound)[source]

DEPRECATED: Use ScaleToBounds instead.

Parameters:
  • x (torch.Tensor (… x n x d)) – the input data
  • lower_bound (float) – lower bound of scaled data
  • upper_bound (float) – upper bound of scaled data
Returns:

scaled data

Return type:

torch.Tensor (… x n x d)