Exact GPs with Scalable (GPU) Inference

In GPyTorch, Exact GP inference is still our preferred approach to large regression datasets. By coupling GPU acceleration with BlackBox Matrix-Matrix Inference and Lanczos Variance Estimates (LOVE), GPyTorch can perform inference on datasets with over 1,000,000 data points while making very few approximations.

BlackBox Matrix-Matrix Inference (BBMM)

BlackBox Matrix-Matrix Inference (introduced by Gardner et al., 2018) computes the GP marginal log likelihood using only matrix multiplication. The computation is stochastic, but it scales exact GPs to millions of data points.
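
Below is a minimal sketch (illustrative, not taken from the paper or the notebooks) of a standard exact GP training loop in GPyTorch; the model class, data, and hyperparameters are assumptions. Setting max_cholesky_size to 0 forces the iterative, matrix-multiplication-based (BBMM) code path even for small kernel matrices.

```python
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Toy 1D regression data (illustrative only)
train_x = torch.linspace(0, 1, 1000)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(train_x.size(0))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# max_cholesky_size(0) forces GPyTorch onto its iterative (BBMM) routines
with gpytorch.settings.max_cholesky_size(0):
    for _ in range(50):
        optimizer.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward()
        optimizer.step()
```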

Lanczos Variance Estimates (LOVE)

Lanczos Variance Estimates (LOVE) (introduced by Pleiss et al., 2018) is a technique for dramatically speeding up predictive variance computation and posterior sampling. Check out the GP Regression with Fast Variances and Sampling (LOVE) notebook to see how to use LOVE in GPyTorch, and how it compares to standard variance computations.
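
As a minimal sketch (assuming the trained `model` and `likelihood` from the snippet above), LOVE is enabled at prediction time through the fast_pred_var and fast_pred_samples settings:

```python
import torch
import gpytorch

model.eval(); likelihood.eval()
test_x = torch.linspace(0, 1, 500)

# LOVE-accelerated predictive variances
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    preds = likelihood(model(test_x))
    variances = preds.variance

# LOVE-accelerated posterior sampling
with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.fast_pred_samples():
    samples = model(test_x).rsample(torch.Size([64]))
```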

Exact GPs with GPU Acceleration

Here are examples of Exact GPs using GPU acceleration.
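
As a rough illustration (assuming the model and data sketched earlier and a CUDA-capable GPU), GPU acceleration only requires moving the model, likelihood, and tensors onto the device; the training and prediction code is otherwise unchanged:

```python
# Move data and model to the GPU; all kernel algebra then runs on the GPU
train_x, train_y = train_x.cuda(), train_y.cuda()
model = model.cuda()
likelihood = likelihood.cuda()
```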

Scalable Posterior Sampling with CIQ

Here we provide a notebook demonstrating the use of Contour Integral Quadrature with msMINRES as described in the CIQ paper. For the most dramatic results, we recommend combining this technique with others in this section, such as kernel checkpointing with KeOps, which allows posterior sampling on up to hundreds of thousands of test examples.
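
As a hedged sketch (assuming a trained `model`, and that the ciq_samples and num_contour_quadrature settings behave as in recent GPyTorch releases), posterior samples can be drawn through the CIQ + msMINRES path like this:

```python
import torch
import gpytorch

model.eval()
test_x = torch.linspace(0, 1, 50000)  # large test set (illustrative)

# ciq_samples(True) routes posterior sampling through Contour Integral Quadrature;
# num_contour_quadrature controls the number of quadrature points.
with torch.no_grad(), gpytorch.settings.ciq_samples(True), \
        gpytorch.settings.num_contour_quadrature(15):
    samples = model(test_x).rsample(torch.Size([16]))
```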

Scalable Kernel Approximations

While exact computations are our preferred approach, GPyTorch offers approximate kernels to reduce the asymptotic complexity of inference.
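
One such approximation is the inducing-point (SGPR-style) kernel. The sketch below is illustrative rather than taken from the docs; it assumes train_x is an (n, d) tensor and uses 500 inducing points. Wrapping a base kernel in InducingPointKernel inside an otherwise standard ExactGP model reduces the cost of inference at the price of an approximate covariance:

```python
import gpytorch

class SGPRModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        base_kernel = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # 500 inducing locations, initialized from the training inputs and learned jointly
        self.covar_module = gpytorch.kernels.InducingPointKernel(
            base_kernel,
            inducing_points=train_x[:500, :].clone(),
            likelihood=likelihood,
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```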

Structure-Exploiting Kernels

If your data lies on a Euclidean grid, and your GP uses a stationary kernel, the computations can be sped up dramatically. See the Grid Regression example for more info.
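
As a hedged sketch of the idea (the grid utilities and the GridKernel signature have changed across GPyTorch versions, so treat the exact calls as assumptions), a model for data lying on a 2D Euclidean grid might look like:

```python
import torch
import gpytorch

# A 25 x 25 Euclidean grid and the corresponding training inputs
grid_size = 25
grid = [torch.linspace(0, 1, grid_size), torch.linspace(0, 2, grid_size)]
train_x = gpytorch.utils.grid.create_data_from_grid(grid)

class GridGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # GridKernel exploits the structure induced by the grid together with a
        # stationary base kernel to speed up kernel algebra
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.GridKernel(gpytorch.kernels.RBFKernel(), grid=grid)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```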