GPflow
sgptools.utils.gpflow
TraceInducingPts
Bases: MonitorTask
GPflow monitoring task used to trace the state of the inducing points at every step during optimization.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model | sgpr | GPflow GP/SGP model | required |
get_trace()
Returns the inducing points collected at each optimization step
Returns:
Name | Type | Description |
---|---|---|
trace | ndarray | (n, m, d); Array with the inducing points |
run(**kwargs)
Extracts the inducing points and applies the IPP fixed-points transform, if one is available.
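A minimal usage sketch: the toy data, SGPR setup, and training loop below are illustrative and assume GPflow's standard monitoring API (`Monitor`, `MonitorTaskGroup`); only `TraceInducingPts` and `get_trace` come from this module.

```python
import numpy as np
import tensorflow as tf
import gpflow
from gpflow.monitor import Monitor, MonitorTaskGroup
from sgptools.utils.gpflow import TraceInducingPts

# Toy dataset and a small SGPR model (illustrative values)
X = np.random.rand(100, 2)
y = np.sin(3 * X[:, :1]) + 0.1 * np.random.randn(100, 1)
model = gpflow.models.SGPR(
    (X, y),
    kernel=gpflow.kernels.RBF(lengthscales=1.0),
    inducing_variable=X[:10].copy(),
)

# Wrap the task in GPflow's monitoring utilities and invoke it every step
task = TraceInducingPts(model)
monitor = Monitor(MonitorTaskGroup(task, period=1))

optimizer = tf.optimizers.Adam(learning_rate=0.01)
for step in range(100):
    with tf.GradientTape() as tape:
        loss = model.training_loss()
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    monitor(step)

# (n, m, d) array: the inducing points recorded at each of the n monitored steps
inducing_pts_trace = task.get_trace()
```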
get_model_params(X_train, y_train, max_steps=1500, lr=0.01, print_params=True, lengthscales=1.0, variance=1.0, noise_variance=0.1, kernel=None, return_gp=False, train_inducing_pts=False, num_inducing_pts=500, **kwargs)
Trains a GP on the given training set. A sparse GP is used if the training set contains more than 1000 samples.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
X_train | ndarray | (n, d); Training set inputs | required |
y_train | ndarray | (n, 1); Training set labels | required |
max_steps | int | Maximum number of optimization steps | 1500 |
lr | float | Optimization learning rate | 0.01 |
print_params | bool | If True, prints the optimized GP parameters | True |
lengthscales | float or list | Kernel lengthscale(s); if passed as a list, each element corresponds to one data dimension | 1.0 |
variance | float | Kernel variance | 1.0 |
noise_variance | float | Data noise variance | 0.1 |
kernel | Kernel | GPflow kernel function | None |
return_gp | bool | If True, returns the trained GP model | False |
train_inducing_pts | bool | If True, trains the inducing points when using a sparse GP model | False |
num_inducing_pts | int | Number of inducing points to use when training a sparse GP model | 500 |
Returns:
Name | Type | Description |
---|---|---|
loss | list | Loss values obtained during training |
variance | float | Optimized data noise variance |
kernel | Kernel | Optimized GPflow kernel function |
gp | GPR | Optimized GPflow GP model; returned only if return_gp=True |
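A minimal usage sketch based on the signature above; the toy data and hyperparameter values are illustrative, and the four-way unpacking assumes the return order in the table above when `return_gp=True`.

```python
import numpy as np
from sgptools.utils.gpflow import get_model_params

# Toy training set (illustrative values)
X_train = np.random.rand(200, 2)
y_train = np.sin(3 * X_train[:, :1]) + 0.05 * np.random.randn(200, 1)

# Fit the GP hyperparameters; pass return_gp=True to also get the fitted model
losses, noise_variance, kernel, gp = get_model_params(
    X_train, y_train,
    max_steps=500,
    lr=0.01,
    lengthscales=[1.0, 1.0],  # one lengthscale per input dimension
    print_params=False,
    return_gp=True,
)
```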
optimize_model(model, max_steps=2000, kernel_grad=True, lr=0.01, optimizer='scipy', method=None, verbose=False, trace_fn=None, convergence_criterion=True, trainable_variables=None, tol=None)
Trains a GP/SGP model
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model | models | GPflow GP/SGP model to train | required |
max_steps | int | Maximum number of training steps | 2000 |
kernel_grad | bool | If False, the kernel parameters are not trained | True |
lr | float | Optimization learning rate | 0.01 |
optimizer | str | Optimizer to use for training ('scipy' or 'tf') | 'scipy' |
method | str | Optimization method; refer to scipy minimize and tf optimizers for the full list | None |
verbose | bool | If True, prints the training progress | False |
trace_fn | str | Function to trace metrics during training | None |
convergence_criterion | bool | If True, stops optimization early once the loss converges | True |
trainable_variables | list | List of model variables to train | None |
tol | float | Convergence tolerance to decide when to stop optimization | None |
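A minimal usage sketch based on the parameters above; the SGPR setup is illustrative, and the `'tf'` alternative in the trailing comment assumes TensorFlow is the other supported optimizer.

```python
import numpy as np
import gpflow
from sgptools.utils.gpflow import optimize_model

# Toy dataset and SGPR model (illustrative values)
X = np.random.rand(500, 2)
y = np.sin(4 * X[:, :1]) + 0.1 * np.random.randn(500, 1)
model = gpflow.models.SGPR(
    (X, y),
    kernel=gpflow.kernels.RBF(),
    inducing_variable=X[:50].copy(),
)

# Train all model parameters with the default scipy optimizer (L-BFGS-B here)
optimize_model(model, max_steps=2000, optimizer='scipy', method='L-BFGS-B')

# Alternatively, use a TensorFlow optimizer with a fixed learning rate
# optimize_model(model, max_steps=2000, optimizer='tf', lr=0.01)
```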
plot_loss(losses, save_file=None)
Helper function to plot the training loss
Parameters:
Name | Type | Description | Default |
---|---|---|---|
losses | list | List of loss values | required |
save_file | str | If passed, the loss plot is saved to the given file path | None |
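A short usage sketch, reusing the `losses` list returned by `get_model_params` above; the save path is just an example.

```python
from sgptools.utils.gpflow import plot_loss

# Plot the training loss curve and optionally save it to disk
plot_loss(losses, save_file='loss.png')
```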