# gpytorch.optim

## NGD

class gpytorch.optim.NGD(params, num_data, lr=0.1)[source]

Implements a natural gradient descent step. It can only be used in conjunction with a _NaturalVariationalDistribution.

Example

>>> ngd_optimizer = gpytorch.optim.NGD(model.variational_parameters(), num_data=train_y.size(0), lr=0.1)
>>> ngd_optimizer.zero_grad()
>>> loss = -mll(gp_model(input), target)
>>> loss.backward()
>>> ngd_optimizer.step()

Parameters

* **params** (iterable) – iterable of parameters to optimize, or dicts defining parameter groups
* **num_data** (int) – number of data points in the training set
* **lr** (float) – learning rate (default: 0.1)
step(closure=None)[source]

Performs a single optimization step.

(Note that the closure argument is not used by this optimizer; it is simply included to be compatible with the PyTorch optimizer API.)

Parameters

closure (callable, optional) – a closure that reevaluates the model and returns the loss; ignored by this optimizer.

Return type

NoneType
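To make the update concrete, here is a minimal dependency-free sketch of what a natural gradient descent step looks like, assuming the update rule is `p ← p − lr · num_data · grad` (the scaling that arises when the variational distribution is stored in its natural parameterization). The `MinimalNGD` class and its dict-based "parameters" are hypothetical stand-ins for illustration, not GPyTorch's actual implementation:

```python
class MinimalNGD:
    """Hypothetical sketch of a natural-gradient-descent step.

    Assumes the update p <- p - lr * num_data * grad; the dict-based
    parameter representation below is a stand-in, not GPyTorch API.
    """

    def __init__(self, params, num_data, lr=0.1):
        self.params = list(params)
        self.num_data = num_data
        self.lr = lr

    def step(self, closure=None):
        # `closure` is ignored; it is accepted only for parity with the
        # PyTorch optimizer API, mirroring NGD.step above.
        for p in self.params:
            if p.get("grad") is not None:
                p["value"] -= self.lr * self.num_data * p["grad"]


# Usage: one scalar parameter with value 1.0 and gradient 0.5,
# trained on 100 data points with lr=0.01.
param = {"value": 1.0, "grad": 0.5}
opt = MinimalNGD([param], num_data=100, lr=0.01)
opt.step()
# value becomes 1.0 - 0.01 * 100 * 0.5 = 0.5
```

Note how `num_data` enters the update multiplicatively: because a stochastic variational ELBO is typically computed on minibatches and scaled by the dataset size, the step size on the natural parameters must account for that same factor.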