# Deep Gaussian Processes¶

## Introduction¶

In this notebook, we provide a GPyTorch implementation of deep Gaussian processes, where training and inference are performed using the method of Salimbeni et al., 2017 (https://arxiv.org/abs/1705.08933) adapted to CG-based inference.

We’ll be training a simple two-layer deep GP on the elevators UCI dataset.

[1]:

%set_env CUDA_VISIBLE_DEVICES=0

import torch
import tqdm
import gpytorch
from gpytorch.means import ConstantMean, LinearMean
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.variational import VariationalStrategy, CholeskyVariationalDistribution
from gpytorch.distributions import MultivariateNormal
from gpytorch.models import ApproximateGP, GP
from gpytorch.likelihoods import GaussianLikelihood


env: CUDA_VISIBLE_DEVICES=0

[2]:

from gpytorch.models.deep_gps import DeepGPLayer, DeepGP
from gpytorch.mlls import DeepApproximateMLL, VariationalELBO


For this example notebook, we’ll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. We’ll split the data simply, using the first 80% as the training set and the last 20% as the test set.

Note: Running the next cell will attempt to download a ~400 KB dataset file to the current directory.

[3]:

import urllib.request
import os
from scipy.io import loadmat
from math import floor

# this is for running the notebook in our testing framework
smoke_test = ('CI' in os.environ)

if not smoke_test and not os.path.isfile('../elevators.mat'):
    print('Downloading \'elevators\' UCI dataset...')
    urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat')

if smoke_test:  # this is for running the notebook in our testing framework
    X, y = torch.randn(1000, 3), torch.randn(1000)
else:
    data = torch.Tensor(loadmat('../elevators.mat')['data'])
    X = data[:, :-1]
    X = X - X.min(0)[0]
    X = 2 * (X / X.max(0)[0]) - 1
    y = data[:, -1]

train_n = int(floor(0.8 * len(X)))
train_x = X[:train_n, :].contiguous()
train_y = y[:train_n].contiguous()

test_x = X[train_n:, :].contiguous()
test_y = y[train_n:].contiguous()

if torch.cuda.is_available():
    train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()

Downloading 'elevators' UCI dataset...

[4]:

from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)


## Defining GP layers¶

In GPyTorch, defining a GP involves extending one of our abstract GP models and defining a forward method that returns the prior. For deep GPs, things are similar, but there are two abstract GP models that must be overridden: one for hidden layers and one for the deep GP model itself.

In the next cell, we define an example deep GP hidden layer. This looks very similar to every other variational GP you might define. However, there are a few key differences:

1. Instead of extending ApproximateGP, we extend DeepGPLayer.
2. DeepGPLayers need a number of input dimensions and a number of output dimensions. This is analogous to a linear layer in a standard neural network: input_dims defines how many inputs this hidden layer expects, and output_dims defines how many hidden GPs to create outputs for.

In this example, we make a fancier DeepGPLayer that supports “skip connections” with previous layers, similar to a ResNet.

[5]:

class ToyDeepGPHiddenLayer(DeepGPLayer):
    def __init__(self, input_dims, output_dims, num_inducing=128, mean_type='constant'):
        if output_dims is None:
            inducing_points = torch.randn(num_inducing, input_dims)
            batch_shape = torch.Size([])
        else:
            inducing_points = torch.randn(output_dims, num_inducing, input_dims)
            batch_shape = torch.Size([output_dims])

        variational_distribution = CholeskyVariationalDistribution(
            num_inducing_points=num_inducing,
            batch_shape=batch_shape
        )

        variational_strategy = VariationalStrategy(
            self,
            inducing_points,
            variational_distribution,
            learn_inducing_locations=True
        )

        super(ToyDeepGPHiddenLayer, self).__init__(variational_strategy, input_dims, output_dims)

        if mean_type == 'constant':
            self.mean_module = ConstantMean(batch_shape=batch_shape)
        else:
            self.mean_module = LinearMean(input_dims)
        self.covar_module = ScaleKernel(
            RBFKernel(batch_shape=batch_shape, ard_num_dims=input_dims),
            batch_shape=batch_shape, ard_num_dims=None
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return MultivariateNormal(mean_x, covar_x)

    def __call__(self, x, *other_inputs, **kwargs):
        """
        Overriding __call__ isn't strictly necessary, but it lets us add concatenation based skip connections
        easily. For example, hidden_layer2(hidden_layer1_outputs, inputs) will pass the concatenation of the first
        hidden layer's outputs and the input data to hidden_layer2.
        """
        if len(other_inputs):
            x = x.rsample()

            processed_inputs = [
                inp.unsqueeze(0).expand(gpytorch.settings.num_likelihood_samples.value(), *inp.shape)
                for inp in other_inputs
            ]

            x = torch.cat([x] + processed_inputs, dim=-1)

        return super().__call__(x, are_samples=bool(len(other_inputs)))


## Building the deep GP¶

Now that we’ve defined a class for our hidden layers and a class for our output layer, we can build our deep GP. To do this, we create a Module whose forward is simply responsible for forwarding through the various layers.

This also makes various network connectivities easy to express. For example, calling

hidden_rep2 = self.second_hidden_layer(hidden_rep1, inputs)


in forward would cause the second hidden layer to use both the output of the first hidden layer and the input data as inputs, concatenating the two together.
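Before wiring layers together this way, it can help to see the tensor shapes such a concatenation-based skip connection produces. Below is a minimal sketch with made-up shapes (4 likelihood samples, a batch of 8, 3 input features, 10 hidden GPs); it uses plain tensors rather than the classes above, and all names are illustrative:

```python
import torch

num_samples, batch, in_dim, hidden_dim = 4, 8, 3, 10

# Stand-in for sampled outputs of the first hidden layer: (num_samples, batch, hidden_dim)
hidden_samples = torch.randn(num_samples, batch, hidden_dim)
# Raw input data: (batch, in_dim)
inputs = torch.randn(batch, in_dim)

# Expand the raw inputs across the sample dimension, then concatenate along the
# feature dimension -- the same shape manipulation as in __call__ above.
expanded = inputs.unsqueeze(0).expand(num_samples, batch, in_dim)
skip_inputs = torch.cat([hidden_samples, expanded], dim=-1)
print(skip_inputs.shape)  # torch.Size([4, 8, 13])
```

The second layer then sees hidden_dim + in_dim features per point, with the raw inputs repeated identically across the sample dimension.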

[6]:

num_output_dims = 2 if smoke_test else 10


class DeepGP(DeepGP):
    def __init__(self, train_x_shape):
        hidden_layer = ToyDeepGPHiddenLayer(
            input_dims=train_x_shape[-1],
            output_dims=num_output_dims,
            mean_type='linear',
        )

        last_layer = ToyDeepGPHiddenLayer(
            input_dims=hidden_layer.output_dims,
            output_dims=None,
            mean_type='constant',
        )

        super().__init__()

        self.hidden_layer = hidden_layer
        self.last_layer = last_layer
        self.likelihood = GaussianLikelihood()

    def forward(self, inputs):
        hidden_rep1 = self.hidden_layer(inputs)
        output = self.last_layer(hidden_rep1)
        return output

    def predict(self, test_loader):
        with torch.no_grad():
            mus = []
            variances = []
            lls = []
            for x_batch, y_batch in test_loader:
                preds = self.likelihood(self(x_batch))
                mus.append(preds.mean)
                variances.append(preds.variance)
                lls.append(self.likelihood.log_marginal(y_batch, self(x_batch)))

        return torch.cat(mus, dim=-1), torch.cat(variances, dim=-1), torch.cat(lls, dim=-1)


[7]:

model = DeepGP(train_x.shape)
if torch.cuda.is_available():
    model = model.cuda()


## Objective function (approximate marginal log likelihood/ELBO)¶

Because deep GPs use some amount of internal sampling (even in the stochastic variational setting), we need to handle the objective function (e.g. the ELBO) in a slightly different way. To do this, wrap the standard objective function (e.g. gpytorch.mlls.VariationalELBO) with a gpytorch.mlls.DeepApproximateMLL.
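Roughly speaking, the wrapper averages the base objective over the likelihood-sample dimension that the internal sampling introduces. A toy illustration of that reduction with made-up per-sample ELBO values (not real model output):

```python
import torch

# Hypothetical ELBO terms, one per likelihood sample drawn through the deep GP
per_sample_elbo = torch.tensor([-1.2, -0.8, -1.0])

# DeepApproximateMLL-style reduction: average over the sample dimension
deep_elbo = per_sample_elbo.mean(0)
print(deep_elbo.item())  # ≈ -1.0
```

The actual wrapping (mll = DeepApproximateMLL(VariationalELBO(...))) appears in the training cell below.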

## Training/Testing¶

The training loop for a deep GP looks similar to a standard GP model with stochastic variational inference.

[8]:

# this is for running the notebook in our testing framework
num_epochs = 1 if smoke_test else 10
num_samples = 3 if smoke_test else 10

optimizer = torch.optim.Adam([
    {'params': model.parameters()},
], lr=0.01)
mll = DeepApproximateMLL(VariationalELBO(model.likelihood, model, train_x.shape[-2]))

epochs_iter = tqdm.notebook.tqdm(range(num_epochs), desc="Epoch")
for i in epochs_iter:
    # Within each iteration, we will go over each minibatch of data
    minibatch_iter = tqdm.notebook.tqdm(train_loader, desc="Minibatch", leave=False)
    for x_batch, y_batch in minibatch_iter:
        with gpytorch.settings.num_likelihood_samples(num_samples):
            optimizer.zero_grad()
            output = model(x_batch)
            loss = -mll(output, y_batch)
            loss.backward()
            optimizer.step()

            minibatch_iter.set_postfix(loss=loss.item())




The output distribution of a deep GP in this framework is actually a mixture of num_samples Gaussians for each output. We get predictions the same way as with all GPyTorch models, but we do currently need to do some reshaping to get the means and variances in a reasonable form.
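For reference, the moments of that equally weighted Gaussian mixture can be recovered directly from the stacked per-sample moments via the law of total variance. A sketch with random placeholder values (the mus/variances names mirror the predict method above, but the data here is made up):

```python
import torch

torch.manual_seed(0)
num_samples, num_test = 10, 5

# Hypothetical per-sample predictive moments, shape (num_samples, num_test)
mus = torch.randn(num_samples, num_test)
variances = torch.rand(num_samples, num_test)

# Mixture mean: average of the component means
mixture_mean = mus.mean(0)
# Mixture variance (law of total variance): E[Var] + Var[E]
mixture_var = variances.mean(0) + mus.var(0, unbiased=False)
```

Note the mixture variance is never smaller than the average component variance: disagreement between samples adds predictive uncertainty.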

Note that you may have to do more epochs of training than in this example to get optimal performance; however, the performance on this particular dataset is pretty good after 10.

[9]:

import gpytorch
import math

test_dataset = TensorDataset(test_x, test_y)
test_loader = DataLoader(test_dataset, batch_size=1024)

model.eval()
predictive_means, predictive_variances, test_lls = model.predict(test_loader)

rmse = torch.mean(torch.pow(predictive_means.mean(0) - test_y, 2)).sqrt()
print(f"RMSE: {rmse.item()}, NLL: {-test_lls.mean().item()}")

RMSE: 0.1075204536318779, NLL: 0.08116623759269714
