Surrogate Models

Base Class
class autooed.mobo.surrogate_model.base.SurrogateModel(problem, **kwargs)

    Bases: abc.ABC

    Base class of surrogate models.

    __init__(problem, **kwargs)

        Initialize a surrogate model.

        Parameters: problem (autooed.problem.Problem) – The optimization problem.
    abstract _evaluate(X, std, gradient, hessian)

        Predict the performance given a set of normalized and continuous design variables.

        Parameters:
        - X (np.array) – Input design variables (normalized, continuous).
        - std (bool) – Whether to calculate the standard deviation of the prediction.
        - gradient (bool) – Whether to calculate the gradient of the prediction.
        - hessian (bool) – Whether to calculate the Hessian of the prediction.

        Returns: out – An output dictionary containing the following properties of the performance:
        - out['F']: mean, shape (N, n_obj)
        - out['dF']: gradient of the mean, shape (N, n_obj, n_var)
        - out['hF']: Hessian of the mean, shape (N, n_obj, n_var, n_var)
        - out['S']: standard deviation, shape (N, n_obj)
        - out['dS']: gradient of the standard deviation, shape (N, n_obj, n_var)
        - out['hS']: Hessian of the standard deviation, shape (N, n_obj, n_var, n_var)

        Return type: dict
    abstract _fit(X, Y)

        Fit a surrogate model to normalized and continuous data.

        Parameters:
        - X (np.array) – Input design variables (normalized, continuous).
        - Y (np.array) – Input objective values (normalized).
    evaluate(X, dtype='raw', std=False, gradient=False, hessian=False)

        Predict the performance given a set of design variables.

        Parameters:
        - X (np.array) – Input design variables.
        - std (bool) – Whether to calculate the standard deviation of the prediction.
        - gradient (bool) – Whether to calculate the gradient of the prediction.
        - hessian (bool) – Whether to calculate the Hessian of the prediction.

        Returns: out – An output dictionary containing the following properties of the performance:
        - out['F']: mean, shape (N, n_obj)
        - out['dF']: gradient of the mean, shape (N, n_obj, n_var)
        - out['hF']: Hessian of the mean, shape (N, n_obj, n_var, n_var)
        - out['S']: standard deviation, shape (N, n_obj)
        - out['dS']: gradient of the standard deviation, shape (N, n_obj, n_var)
        - out['hS']: Hessian of the standard deviation, shape (N, n_obj, n_var, n_var)

        Return type: dict
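A custom surrogate is written by subclassing SurrogateModel and implementing _fit and _evaluate. The sketch below is a hypothetical, minimal stand-in (not part of the library): an ordinary least-squares linear model that returns the documented output dictionary with the correct shapes. The class and helper names here are illustrative assumptions; only the output-dictionary contract follows the docs above.

```python
import numpy as np

class LinearSurrogate:
    """Least-squares linear surrogate mimicking the SurrogateModel interface."""

    def __init__(self, n_var, n_obj):
        self.n_var, self.n_obj = n_var, n_obj
        self.W = None  # (n_var + 1, n_obj) weights, including a bias row

    def _fit(self, X, Y):
        # Append a bias column and solve the least-squares problem.
        Xb = np.hstack([X, np.ones((len(X), 1))])
        self.W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

    def _evaluate(self, X, std=False, gradient=False, hessian=False):
        N = len(X)
        Xb = np.hstack([X, np.ones((N, 1))])
        out = {'F': Xb @ self.W}                      # mean, (N, n_obj)
        if std:
            # A plain linear model has no predictive uncertainty.
            out['S'] = np.zeros((N, self.n_obj))
        if gradient:
            # dF/dx is constant for a linear model: (N, n_obj, n_var)
            out['dF'] = np.broadcast_to(
                self.W[:-1].T, (N, self.n_obj, self.n_var)).copy()
        if hessian:
            out['hF'] = np.zeros((N, self.n_obj, self.n_var, self.n_var))
        return out

# Usage: fit on random data and inspect the documented output shapes.
rng = np.random.default_rng(0)
X, Y = rng.random((20, 3)), rng.random((20, 2))
model = LinearSurrogate(n_var=3, n_obj=2)
model._fit(X, Y)
out = model._evaluate(X[:5], std=True, gradient=True, hessian=True)
```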
Gaussian Process

class autooed.mobo.surrogate_model.gp.GaussianProcess(problem, nu=1, **kwargs)

    Bases: autooed.mobo.surrogate_model.base.SurrogateModel

    Gaussian process.

    __init__(problem, nu=1, **kwargs)

        Initialize a Gaussian process.

        Parameters:
        - problem (autooed.problem.Problem) – The optimization problem.
        - nu (int) – The parameter nu controlling the type of the Matérn kernel. Choices are 1, 3, 5 and -1.
    _evaluate(X, std, gradient, hessian)

        Predict the performance given a set of normalized and continuous design variables.

        Parameters:
        - X (np.array) – Input design variables (normalized, continuous).
        - std (bool) – Whether to calculate the standard deviation of the prediction.
        - gradient (bool) – Whether to calculate the gradient of the prediction.
        - hessian (bool) – Whether to calculate the Hessian of the prediction.

        Returns: out – An output dictionary containing the following properties of the performance:
        - out['F']: mean, shape (N, n_obj)
        - out['dF']: gradient of the mean, shape (N, n_obj, n_var)
        - out['hF']: Hessian of the mean, shape (N, n_obj, n_var, n_var)
        - out['S']: standard deviation, shape (N, n_obj)
        - out['dS']: gradient of the standard deviation, shape (N, n_obj, n_var)
        - out['hS']: Hessian of the standard deviation, shape (N, n_obj, n_var, n_var)

        Return type: dict
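To illustrate where the mean ('F') and standard deviation ('S') come from, here is a self-contained sketch of Gaussian-process regression with a Matérn-5/2 kernel. This is a generic textbook implementation, not the class's internals; the mapping of the nu choices (1, 3, 5) to Matérn smoothness ν = 1/2, 3/2, 5/2 is an assumption based on a common convention.

```python
import numpy as np

def matern52(A, B, length=0.3):
    """Matérn-5/2 covariance between the row vectors of A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) / length
    return (1 + np.sqrt(5) * d + 5 * d**2 / 3) * np.exp(-np.sqrt(5) * d)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and std of a zero-mean GP with fixed hyperparameters."""
    K = matern52(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = matern52(X_test, X_train)
    Kss = matern52(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                                   # corresponds to out['F']
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(Kss) - np.sum(v**2, axis=0), 0.0, None)
    return mean, np.sqrt(var)                           # sqrt(var) ~ out['S']

# Usage: the GP interpolates its training data, so the predictive std
# collapses toward zero at the training points.
X = np.linspace(0, 1, 8)[:, None]
y = np.sin(4 * X[:, 0])
mean, std = gp_predict(X, y, X)
```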
Neural Network

class autooed.mobo.surrogate_model.nn.NeuralNetwork(problem, hidden_size=50, hidden_layers=3, activation='tanh', lr=0.001, weight_decay=0.0001, n_epoch=100, **kwargs)

    Bases: autooed.mobo.surrogate_model.base.SurrogateModel

    Simple neural network.

    __init__(problem, hidden_size=50, hidden_layers=3, activation='tanh', lr=0.001, weight_decay=0.0001, n_epoch=100, **kwargs)

        Initialize a neural network as the surrogate model.

        Parameters:
        - problem (autooed.problem.Problem) – The optimization problem.
        - hidden_size (int) – Size of each hidden layer of the neural network.
        - hidden_layers (int) – Number of hidden layers of the neural network.
        - activation (str) – Type of activation function.
        - lr (float) – Learning rate.
        - weight_decay (float) – Weight decay.
        - n_epoch (int) – Number of training epochs.
    _evaluate(X, std, gradient, hessian)

        Predict the performance given a set of normalized and continuous design variables.

        Parameters:
        - X (np.array) – Input design variables (normalized, continuous).
        - std (bool) – Whether to calculate the standard deviation of the prediction.
        - gradient (bool) – Whether to calculate the gradient of the prediction.
        - hessian (bool) – Whether to calculate the Hessian of the prediction.

        Returns: out – An output dictionary containing the following properties of the performance:
        - out['F']: mean, shape (N, n_obj)
        - out['dF']: gradient of the mean, shape (N, n_obj, n_var)
        - out['hF']: Hessian of the mean, shape (N, n_obj, n_var, n_var)
        - out['S']: standard deviation, shape (N, n_obj)
        - out['dS']: gradient of the standard deviation, shape (N, n_obj, n_var)
        - out['hS']: Hessian of the standard deviation, shape (N, n_obj, n_var, n_var)

        Return type: dict
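A neural-network surrogate can return the input gradient 'dF' analytically via the chain rule. The numpy sketch below mirrors the documented defaults (3 hidden layers of width 50, tanh activation) but uses random weights standing in for a trained network; it is an illustration of the gradient computation, not the class's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_var, n_obj, hidden_size, hidden_layers = 4, 2, 50, 3

# Random weights with 1/sqrt(fan_in) scaling; a trained net would go here.
sizes = [n_var] + [hidden_size] * hidden_layers + [n_obj]
Ws = [rng.normal(0, 1 / np.sqrt(m), (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

def forward(X):
    """Return F with shape (N, n_obj) and dF with shape (N, n_obj, n_var)."""
    h = X
    # J accumulates d(hidden)/d(input), starting from the identity.
    J = np.broadcast_to(np.eye(n_var), (len(X), n_var, n_var)).copy()
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = h @ W + b
        if i < len(Ws) - 1:                 # hidden layers: tanh
            h = np.tanh(z)
            # d tanh(z)/dz = 1 - tanh(z)^2, applied elementwise
            J = (1 - h**2)[:, None, :] * (J @ W)
        else:                               # linear output layer
            h, J = z, J @ W
    return h, J.transpose(0, 2, 1)          # reorder to (N, n_obj, n_var)

# Usage: check the analytic gradient against a finite difference in x_0.
X = rng.random((3, n_var))
F, dF = forward(X)
Xp = X.copy()
Xp[:, 0] += 1e-6
num = (forward(Xp)[0] - F) / 1e-6           # numerical dF/dx_0
```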
Bayesian Neural Network

class autooed.mobo.surrogate_model.bnn.BayesianNeuralNetwork(problem, hidden_size=50, hidden_layers=3, activation='tanh', lr=0.001, weight_decay=0.0001, n_epoch=100, **kwargs)

    Bases: autooed.mobo.surrogate_model.nn.NeuralNetwork

    Deep Networks for Global Optimization [1]: Bayesian linear regression with basis functions extracted from a neural network.

    [1] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. M. A. Patwary, Prabhat, and R. P. Adams. Scalable Bayesian Optimization Using Deep Neural Networks. Proc. of ICML '15.

    __init__(problem, hidden_size=50, hidden_layers=3, activation='tanh', lr=0.001, weight_decay=0.0001, n_epoch=100, **kwargs)

        Initialize a Bayesian neural network as the surrogate model.

        Parameters:
        - problem (autooed.problem.Problem) – The optimization problem.
        - hidden_size (int) – Size of each hidden layer of the neural network.
        - hidden_layers (int) – Number of hidden layers of the neural network.
        - activation (str) – Type of activation function.
        - lr (float) – Learning rate.
        - weight_decay (float) – Weight decay.
        - n_epoch (int) – Number of training epochs.
    _evaluate(X, std, gradient, hessian)

        Predict the performance given a set of normalized and continuous design variables.

        Parameters:
        - X (np.array) – Input design variables (normalized, continuous).
        - std (bool) – Whether to calculate the standard deviation of the prediction.
        - gradient (bool) – Whether to calculate the gradient of the prediction.
        - hessian (bool) – Whether to calculate the Hessian of the prediction.

        Returns: out – An output dictionary containing the following properties of the performance:
        - out['F']: mean, shape (N, n_obj)
        - out['dF']: gradient of the mean, shape (N, n_obj, n_var)
        - out['hF']: Hessian of the mean, shape (N, n_obj, n_var, n_var)
        - out['S']: standard deviation, shape (N, n_obj)
        - out['dS']: gradient of the standard deviation, shape (N, n_obj, n_var)
        - out['hS']: Hessian of the standard deviation, shape (N, n_obj, n_var, n_var)

        Return type: dict
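The cited DNGO approach can be sketched as Bayesian linear regression on fixed basis functions phi(x). In the class itself these are presumably the activations of the trained network's last hidden layer; random tanh features stand in here as an assumption. Unlike the plain NeuralNetwork, the posterior yields a predictive standard deviation ('S') alongside the mean ('F').

```python
import numpy as np

rng = np.random.default_rng(2)
n_feat = 50
Wf = rng.normal(0.0, 3.0, size=(1, n_feat))   # random feature weights
bf = rng.uniform(-np.pi, np.pi, n_feat)

def phi(X):
    """Fixed random tanh features standing in for learned NN features."""
    return np.tanh(X @ Wf + bf)

def blr_predict(X_train, y_train, X_test, alpha=1.0, beta=100.0):
    """Bayesian linear regression on phi(x).

    alpha is the prior precision on the weights, beta the noise precision.
    """
    P, Ps = phi(X_train), phi(X_test)
    # Weight posterior: A m = beta * Phi^T y, with A = alpha I + beta Phi^T Phi
    A = alpha * np.eye(P.shape[1]) + beta * P.T @ P
    m = beta * np.linalg.solve(A, P.T @ y_train)
    mean = Ps @ m                                          # ~ out['F']
    # Predictive variance: noise floor 1/beta plus posterior weight uncertainty
    var = 1 / beta + np.sum(Ps * np.linalg.solve(A, Ps.T).T, axis=1)
    return mean, np.sqrt(var)                              # sqrt(var) ~ out['S']

# Usage: fit a 1-D function; the mean tracks it and std stays positive.
X = np.linspace(-1, 1, 30)[:, None]
y = np.sin(3 * X[:, 0])
mean, std = blr_predict(X, y, X)
```

A practical advantage of this construction is that, for a fixed network, adding data only requires re-solving the linear system above rather than retraining the whole network.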