RoBO Gaussian Process

experiment:
    algorithms:
        RoBO_GP:
            seed: 0
            n_initial_points: 20
            maximizer: 'random'
            acquisition_func: 'log_ei'
            normalize_input: True
            normalize_output: False
class orion.algo.robo.gp.RoBO_GP(space: Space, seed: int | Sequence[int] | None = 0, n_initial_points: int = 20, maximizer: MaximizerName = 'random', acquisition_func: AcquisitionFnName = 'log_ei', normalize_input: bool = True, normalize_output: bool = False)[source]

Wrapper for RoBO with Gaussian processes.

Parameters
space: ``orion.algo.space.Space``

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed used to sample initial points and candidate points. Defaults to 0.

n_initial_points: int

Number of initial points sampled randomly. If new points are requested while fewer than n_initial_points trials have been observed, the next points will also be sampled randomly instead of being suggested by the model. Defaults to 20.

maximizer: str

The optimizer for the acquisition function. Can be one of {"random", "scipy", "differential_evolution"}. Defaults to 'random'.

acquisition_func: str

Name of the acquisition function. Can be one of ['ei', 'log_ei', 'pi', 'lcb']. Defaults to 'log_ei'.

normalize_input: bool

Normalize the input based on the provided bounds (zero mean and unit standard deviation). Defaults to True.

normalize_output: bool

Normalize the output based on data (zero mean and unit standard deviation). Defaults to False.

Methods

build_model()

Builds the model for the optimisation.
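
For context, here is a minimal sketch of running this algorithm through Orion's Python client. It assumes the standard orion.client API (build_experiment and ExperimentClient.workon); the experiment name, search space, and toy objective are hypothetical.

from orion.client import build_experiment

# Hypothetical one-dimensional search space, using Orion's prior syntax.
experiment = build_experiment(
    name="robo_gp_example",
    space={"x": "uniform(-5, 5)"},
    algorithms={
        "RoBO_GP": {
            "seed": 0,
            "n_initial_points": 20,  # the first 20 suggestions are plain random samples
            "maximizer": "random",
            "acquisition_func": "log_ei",
        }
    },
)

def objective(x):
    # Toy quadratic objective; Orion expects a list of result dicts.
    return [{"name": "objective", "type": "objective", "value": (x - 1) ** 2}]

experiment.workon(objective, max_trials=30)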

RoBO Gaussian Process with MCMC

experiment:
    algorithms:
        RoBO_GP_MCMC:
            seed: 0
            n_initial_points: 20
            maximizer: 'random'
            acquisition_func: 'log_ei'
            normalize_input: True
            normalize_output: False
            chain_length: 2000
            burnin_steps: 2000
class orion.algo.robo.gp.RoBO_GP_MCMC(space: Space, seed: int | Sequence[int] | None = 0, n_initial_points: int = 20, maximizer: MaximizerName = 'random', acquisition_func: AcquisitionFnName = 'log_ei', normalize_input: bool = True, normalize_output: bool = False, chain_length=2000, burnin_steps=2000)[source]

Wrapper for RoBO with Gaussian processes, using Markov chain Monte Carlo to marginalize out the hyperparameters of the Gaussian process.
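
Concretely, rather than optimizing the acquisition function under a single point estimate of the Gaussian process hyperparameters, the acquisition value is averaged over MCMC samples drawn from the hyperparameter posterior. As a sketch, with theta_1, ..., theta_K denoting the K sampled hyperparameter settings:

    \hat{a}(x) = \frac{1}{K} \sum_{k=1}^{K} a(x \mid \theta_k)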

Parameters
space: ``orion.algo.space.Space``

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed used to sample initial points and candidate points. Defaults to 0.

n_initial_points: int

Number of initial points sampled randomly. If new points are requested while fewer than n_initial_points trials have been observed, the next points will also be sampled randomly instead of being suggested by the model. Defaults to 20.

maximizer: str

The optimizer for the acquisition function. Can be one of {"random", "scipy", "differential_evolution"}. Defaults to 'random'.

acquisition_func: str

Name of the acquisition function. Can be one of ['ei', 'log_ei', 'pi', 'lcb']. Defaults to 'log_ei'.

normalize_input: bool

Normalize the input based on the provided bounds (zero mean and unit standard deviation). Defaults to True.

normalize_output: bool

Normalize the output based on data (zero mean and unit standard deviation). Defaults to False.

chain_length: int

The length of the MCMC chain. We run n_hypers walkers for chain_length steps and use the last sample of each chain as a hyperparameter sample. n_hypers is automatically inferred from the dimensionality of the search space. Defaults to 2000.

burnin_steps: int

The number of burnin steps before the actual MCMC sampling starts. Defaults to 2000.

Methods

build_model()

Builds the model for the optimisation.

RoBO Random Forest

experiment:
    algorithms:
        RoBO_RandomForest:
            seed: 0
            n_initial_points: 20
            maximizer: 'random'
            acquisition_func: 'log_ei'
            num_trees: 30
            do_bootstrapping: True
            n_points_per_tree: 0
            compute_oob_error: False
            return_total_variance: True
class orion.algo.robo.randomforest.RoBO_RandomForest(space: Space, seed: int | Sequence[int] | None = 0, n_initial_points=20, maximizer: MaximizerName = 'random', acquisition_func: AcquisitionFnName = 'log_ei', num_trees: int = 30, do_bootstrapping: bool = True, n_points_per_tree: int = 0, compute_oob_error: bool = False, return_total_variance: bool = True)[source]

Wrapper for RoBO with random forests.

Parameters
space: ``orion.algo.space.Space``

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed used to sample initial points and candidate points. Defaults to 0.

n_initial_points: int

Number of initial points sampled randomly. If new points are requested while fewer than n_initial_points trials have been observed, the next points will also be sampled randomly instead of being suggested by the model. Defaults to 20.

maximizer: str

The optimizer for the acquisition function. Can be one of {"random", "scipy", "differential_evolution"}. Defaults to 'random'.

acquisition_func: str

Name of the acquisition function. Can be one of ['ei', 'log_ei', 'pi', 'lcb']. Defaults to 'log_ei'.

num_trees: int

The number of trees in the random forest. Defaults to 30.

do_bootstrapping: bool

Turns on / off bootstrapping in the random forest. Defaults to True.

n_points_per_tree: int

Number of data points per tree. If set to 0, all data points are used in each tree. Defaults to 0.

compute_oob_error: bool

Turns on / off calculation of out-of-bag error. Defaults to False.

return_total_variance: bool

If True, return the law of total variance (mean of variances plus variance of means); if False, return only the explained variance (variance of means). Defaults to True.
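
To make the variance options concrete, here is a minimal NumPy sketch of both estimates at a single query point; the per-tree predictions are made-up numbers.

import numpy as np

# Hypothetical predictive means and variances from each tree at one query point.
tree_means = np.array([0.9, 1.1, 1.0])
tree_vars = np.array([0.20, 0.25, 0.15])

# return_total_variance=True: law of total variance
# (mean of variances + variance of means).
total_variance = tree_vars.mean() + tree_means.var()

# return_total_variance=False: explained variance (variance of means only).
explained_variance = tree_means.var()

print(total_variance, explained_variance)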

Methods

build_model()

Build the model that will be registered as self.model.

RoBO DNGO

experiment:
    algorithms:
        RoBO_DNGO:
            seed: 0
            n_initial_points: 20
            maximizer: 'random'
            acquisition_func: 'log_ei'
            normalize_input: True
            normalize_output: False
            chain_length: 2000
            burnin_steps: 2000
            batch_size: 10
            num_epochs: 500
            learning_rate: 1e-2
            adapt_epoch: 5000
class orion.algo.robo.dngo.RoBO_DNGO(space: Space, seed: int | Sequence[int] | None = 0, n_initial_points: int = 20, maximizer: MaximizerName = 'random', acquisition_func: AcquisitionFnName = 'log_ei', normalize_input: bool = True, normalize_output: bool = False, chain_length: int = 2000, burnin_steps: int = 2000, batch_size: int = 10, num_epochs: int = 500, learning_rate: float = 0.01, adapt_epoch: int = 5000)[source]

Wrapper for RoBO with DNGO.

For more information on the algorithm, see original paper at http://proceedings.mlr.press/v37/snoek15.html.

J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. M. A. Patwary, Prabhat, R. P. Adams. Scalable Bayesian Optimization Using Deep Neural Networks. Proc. of ICML'15.

Parameters
space: ``orion.algo.space.Space``

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed used to sample initial points and candidate points. Defaults to 0.

n_initial_points: int

Number of initial points sampled randomly. If new points are requested while fewer than n_initial_points trials have been observed, the next points will also be sampled randomly instead of being suggested by the model. Defaults to 20.

maximizer: str

The optimizer for the acquisition function. Can be one of {"random", "scipy", "differential_evolution"}. Defaults to 'random'.

acquisition_func: str

Name of the acquisition function. Can be one of ['ei', 'log_ei', 'pi', 'lcb']. Defaults to 'log_ei'.

normalize_input: bool

Normalize the input based on the provided bounds (zero mean and unit standard deviation). Defaults to True.

normalize_output: bool

Normalize the output based on data (zero mean and unit standard deviation). Defaults to False.

chain_length: int

The chain length of the MCMC sampler. Defaults to 2000.

burnin_steps: int

The number of burnin steps before the sampling procedure starts. Defaults to 2000.

batch_size: int

Batch size for training the neural network. Defaults to 10.

num_epochs: int

Number of epochs for training the neural network. Defaults to 500.

learning_rate: float

Initial learning rate for the Adam optimizer. Defaults to 1e-2.

adapt_epoch: int

Number of epochs after which the learning rate is decayed by a factor of 10. Defaults to 5000.
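
As an illustration only (the exact schedule is internal to the DNGO implementation), a step decay by a factor of 10 every adapt_epoch epochs could look like this:

def decayed_lr(initial_lr: float, epoch: int, adapt_epoch: int = 5000) -> float:
    # Hypothetical reading of the schedule above: divide the learning
    # rate by 10 once every adapt_epoch epochs.
    return initial_lr * 0.1 ** (epoch // adapt_epoch)

print(decayed_lr(1e-2, 0))      # 0.01
print(decayed_lr(1e-2, 5000))   # ~0.001
print(decayed_lr(1e-2, 10000))  # ~0.0001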

Methods

build_model()

Build the model.

RoBO BOHAMIANN

experiment:
    algorithms:
        RoBO_BOHAMIANN:
            seed: 0
            n_initial_points: 20
            maximizer: 'random'
            acquisition_func: 'log_ei'
            normalize_input: True
            normalize_output: False
            burnin_steps: 2000
            sampling_method: "adaptive_sghmc"
            use_double_precision: True
            num_steps: null
            keep_every: 100
            learning_rate: 1e-2
            batch_size: 20
            epsilon: 1e-10
            mdecay: 0.05
            verbose: False
class orion.algo.robo.bohamiann.RoBO_BOHAMIANN(space: Space, seed: int | Sequence[int] | None = 0, n_initial_points: int = 20, maximizer: MaximizerName = 'random', acquisition_func: AcquisitionFnName = 'log_ei', normalize_input: bool = True, normalize_output: bool = False, burnin_steps: int | None = None, sampling_method: SamplingMethod = 'adaptive_sghmc', use_double_precision: bool = True, num_steps: int | None = None, keep_every: int = 100, learning_rate: float = 0.01, batch_size: int = 20, epsilon: float = 1e-10, mdecay: float = 0.05, verbose: bool = False)[source]

Wrapper for RoBO with BOHAMIANN.

For more information on the algorithm, see original paper at https://papers.nips.cc/paper/2016/hash/a96d3afec184766bfeca7a9f989fc7e7-Abstract.html.

Springenberg, Jost Tobias, et al. “Bayesian optimization with robust Bayesian neural networks.” Advances in neural information processing systems 29 (2016): 4134-4142.

Parameters
space: ``orion.algo.space.Space``

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed used to sample initial points and candidate points. Defaults to 0.

n_initial_points: int

Number of initial points sampled randomly. If new points are requested while fewer than n_initial_points trials have been observed, the next points will also be sampled randomly instead of being suggested by the model. Defaults to 20.

maximizer: str

The optimizer for the acquisition function. Can be one of {"random", "scipy", "differential_evolution"}. Defaults to 'random'.

acquisition_func: str

Name of the acquisition function. Can be one of ['ei', 'log_ei', 'pi', 'lcb']. Defaults to 'log_ei'.

normalize_input: bool

Normalize the input based on the provided bounds (zero mean and unit standard deviation). Defaults to True.

normalize_output: bool

Normalize the output based on data (zero mean and unit standard deviation). Defaults to False.

burnin_steps: int or None

The number of burnin steps before the sampling procedure starts. If None, burnin_steps = n_dims * 100 where n_dims is the dimensionality of the search space. Defaults to None.

sampling_method: str

Can be one of ['adaptive_sghmc', 'sgld', 'preconditioned_sgld', 'sghmc']. Defaults to "adaptive_sghmc". See PyBNN samplers’ code for more information.

use_double_precision: bool

Use double precision for the BOHAMIANN model. Note that it can run faster on GPU with single precision. Defaults to True.

num_steps: int or None

Number of sampling steps to perform after burn-in is finished. In total, num_steps // keep_every network weights will be sampled. If None, num_steps = n_dims * 100 + 10000 where n_dims is the dimensionality of the search space. Defaults to None.

keep_every: int

Number of sampling steps (after burn-in) to perform before keeping a sample. In total, num_steps // keep_every network weights will be sampled. Defaults to 100 (see the sketch after this parameter list).

learning_rate: float

Learning rate. Defaults to 1e-2.

batch_size: int

Batch size for training the neural network. Defaults to 20.

epsilon: float

Epsilon for numerical stability. Defaults to 1e-10.

mdecay: float

Momentum decay. Defaults to 0.05.

verbose: bool

Write progress logs to stdout. Defaults to False.
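
A short sketch of the default burn-in and sampling budgets described above, for a hypothetical 5-dimensional search space:

n_dims = 5  # hypothetical dimensionality of the search space

burnin_steps = n_dims * 100        # default when burnin_steps is None -> 500
num_steps = n_dims * 100 + 10000   # default when num_steps is None -> 10500
keep_every = 100

# Number of network weight samples retained after burn-in.
n_weight_samples = num_steps // keep_every  # 10500 // 100 = 105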

Methods

build_model()

Build the model that will be registered as self.model.