bojaxns

Subpackages

Submodules

Package Contents

class Trial[source]

Bases: pydantic.BaseModel

trial_id: str
create_dt: datetime.datetime
param_values: bojaxns.common.ParamValues
U_value: bojaxns.common.UValue
trial_updates: Dict[str, TrialUpdate]
class TrialUpdate[source]

Bases: pydantic.BaseModel

ref_id: str
measurement_dt: datetime.datetime
objective_measurement: float
class OptimisationExperiment[source]

Bases: pydantic.BaseModel

experiment_id: str
parameter_space: bojaxns.parameter_space.ParameterSpace
trials: Dict[str, Trial]
ensure_parameters_match_space(value, values)[source]
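
The job of `ensure_parameters_match_space` can be sketched as a plain function (a hypothetical stand-in for the pydantic validator, not the bojaxns code): every parameter name recorded in a trial must be declared in the experiment's parameter space.

```python
# Illustrative sketch (hypothetical names, not the bojaxns implementation):
# reject any trial that references a parameter the space does not declare.
def ensure_parameters_match_space(space_names, trials):
    """space_names: declared parameter names; trials: trial_id -> param dict."""
    declared = set(space_names)
    for trial_id, param_values in trials.items():
        unknown = set(param_values) - declared
        if unknown:
            raise ValueError(
                f"trial {trial_id} uses parameters not in the space: {sorted(unknown)}"
            )
    return trials
```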
class NewExperimentRequest[source]

Bases: pydantic.BaseModel

parameter_space: bojaxns.parameter_space.ParameterSpace
init_explore_size: conint(ge=1)
class AbstractAcquisition[source]

A class that represents any acquisition function. An acquisition function takes a point in the U-domain and returns a metric that serves as a proxy for how valuable it would be to try that point. Acquisition values are only meaningful relative to one another.

abstract __call__(u_star)[source]
Parameters:

u_star (jax.numpy.ndarray) –
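
The contract above can be sketched in plain Python (a hypothetical stand-in, not the bojaxns class): any acquisition is a callable from a U-domain point to a relative score.

```python
# Illustrative sketch of the acquisition-function contract (hypothetical
# stand-in): a callable mapping a point in the U-domain to a scalar proxy
# for how valuable evaluating that point would be.
from abc import ABC, abstractmethod

class Acquisition(ABC):
    @abstractmethod
    def __call__(self, u_star):
        """Return a relative 'value of trying u_star' score."""

class DistanceAcquisition(Acquisition):
    """Toy pure-exploration acquisition: prefer points far from the incumbent."""
    def __init__(self, u_best):
        self.u_best = u_best

    def __call__(self, u_star):
        # Squared Euclidean distance to the best point seen so far.
        return sum((a - b) ** 2 for a, b in zip(u_star, self.u_best))
```

Since only relative values matter, any monotone rescaling of an acquisition leaves the chosen point unchanged.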

class MarginalisedAcquisitionFunction(key, ns_results, acquisition_factory, S)[source]

Bases: AbstractAcquisition

A class that represents the marginalisation of an acquisition function over posterior samples.

__call__(u_star)[source]
Parameters:

u_star (jax.numpy.ndarray) –
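
Marginalisation here amounts to averaging: one acquisition is built per posterior sample via a factory, and their values at `u_star` are averaged. A minimal unweighted sketch with hypothetical names (the real class draws its `S` samples from nested-sampling results):

```python
# Hedged sketch (not the bojaxns implementation): marginalise an acquisition
# over posterior samples by building one acquisition per sample and taking
# the plain mean of their values at u_star.
def marginalised_acquisition(samples, acquisition_factory):
    """samples: list of parameter dicts; factory maps a sample to a callable."""
    acquisitions = [acquisition_factory(**s) for s in samples]

    def acquisition(u_star):
        return sum(a(u_star) for a in acquisitions) / len(acquisitions)

    return acquisition
```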

class MarginalisationData[source]

Bases: NamedTuple

samples: Dict[str, chex.Array]
log_dp_mean: chex.Array
class GaussianProcessData[source]

Bases: NamedTuple

U: jax.numpy.ndarray
Y: jax.numpy.ndarray
Y_var: jax.numpy.ndarray
sample_size: jax.numpy.ndarray
class GaussianProcessConditionalPredictiveFactory(data)[source]

Bases: bojaxns.base.ConditionalPredictiveFactory

Parameters:

data (GaussianProcessData) –

ndims()[source]
build_prior_model()[source]
psd_kernels()[source]
Return type:

List[Type[tensorflow_probability.substrates.jax.math.psd_kernels.PositiveSemidefiniteKernel]]

__call__(**samples)[source]
Return type:

GaussianProcessConditionalPredictive

class ExpectedImprovementAcquisitionFactory(conditional_predictive_factory)[source]

Bases: bojaxns.base.AcquisitionFactory

Parameters:

conditional_predictive_factory (GaussianProcessConditionalPredictiveFactory) –

__call__(**sample)[source]
Return type:

bojaxns.base.AbstractAcquisition
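
Expected improvement under a Gaussian predictive has a standard closed form (a generic sketch of the idea, not necessarily the library's exact parameterisation): for predictive mean `mu`, standard deviation `sigma`, and incumbent best `f_best` (maximisation), EI = sigma * (z * Phi(z) + phi(z)) with z = (mu - f_best) / sigma.

```python
# Generic closed-form expected improvement (hedged sketch, not bojaxns code).
import math

def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        # Degenerate predictive: improvement is deterministic.
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return sigma * (z * Phi + phi)
```

At `mu == f_best` this reduces to `sigma / sqrt(2 * pi)`, so broader predictive uncertainty alone raises the score, which is what drives exploration.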

class TopTwoAcquisitionFactory(conditional_predictive_factory, u1)[source]

Bases: bojaxns.base.AcquisitionFactory

__call__(**sample)[source]
Return type:

bojaxns.base.AbstractAcquisition

run_multi_lookahead(rng_key, data, ns_results, batch_size, max_depth, num_actions, num_simulations, S)[source]
Return type:

Tuple[chex.Array, mctx.PolicyOutput[mctx.GumbelMuZeroExtraData]]

convert_tree_to_graph(tree, action_labels=None, batch_index=0)[source]

Converts a search tree into a Graphviz graph.

Parameters:
  • tree (mctx.Tree) – A Tree containing a batch of search data.

  • action_labels (Optional[Sequence[str]]) – Optional labels for edges, defaults to the action index.

  • batch_index (int) – Index of the batch element to plot.

Returns:

A Graphviz graph representation of tree.

tfpb[source]
class BayesianOptimiser(experiment, num_parallel_solvers=1, beta=0.5, S=512)[source]
posterior_solve(key)[source]
Parameters:

key (chex.PRNGKey) –

Return type:

jaxns.internals.types.NestedSamplerResults

search_U_top1(key, ns_results, batch_size, num_search)[source]
search_U_top2(key, ns_results, u1, batch_size, num_search)[source]
choose_next_U_toptwo(key, batch_size, num_search)[source]
Parameters:
  • key (chex.PRNGKey) –

  • batch_size (int) –

  • num_search (int) –

choose_next_U_multistep(key, batch_size, max_depth, num_simulations, branch_factor)[source]
Parameters:
  • key (chex.PRNGKey) –

  • batch_size (int) –

  • max_depth (int) –

  • num_simulations (int) –

  • branch_factor (int) –

class ContinuousPrior[source]

Bases: pydantic.BaseModel

type: Literal[continuous_prior] = 'continuous_prior'
lower: float
upper: float
mode: float
uncert: confloat(gt=0.0)
class IntegerPrior[source]

Bases: pydantic.BaseModel

type: Literal[integer_prior] = 'integer_prior'
lower: int
upper: int
mode: float
uncert: confloat(gt=0.0)
class CategoricalPrior[source]

Bases: pydantic.BaseModel

type: Literal[categorical_prior] = 'categorical_prior'
probs: List[confloat(ge=0.0)]
class Parameter[source]

Bases: pydantic.BaseModel

name: str
prior: ParamPrior
class ParameterSpace[source]

Bases: pydantic.BaseModel

parameters: List[Parameter]
unique_parameters(value)[source]
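
The prior models above compose into a parameter space. A hypothetical plain-dict analogue of the pydantic models (values and field names chosen for illustration), together with the uniqueness check that `unique_parameters` performs:

```python
# Hypothetical plain-dict analogue of ParameterSpace / Parameter / priors
# (not validated by pydantic here; field names mirror the models above).
parameter_space = {
    "parameters": [
        {"name": "learning_rate",
         "prior": {"type": "continuous_prior",
                   "lower": 1e-4, "upper": 1e-1, "mode": 1e-2, "uncert": 0.5}},
        {"name": "num_layers",
         "prior": {"type": "integer_prior",
                   "lower": 1, "upper": 8, "mode": 3, "uncert": 1.0}},
        {"name": "activation",
         "prior": {"type": "categorical_prior", "probs": [0.5, 0.25, 0.25]}},
    ]
}

def unique_parameters(parameters):
    """Reject duplicate parameter names, as the pydantic validator would."""
    names = [p["name"] for p in parameters]
    if len(names) != len(set(names)):
        raise ValueError("parameter names must be unique")
    return parameters
```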
build_prior_model(parameter_space)[source]

Constructs a prior model given the parameter space.

Parameters:

parameter_space (ParameterSpace) –

Returns:

A prior model over the given parameter space.
exception InvalidTrial[source]

Bases: Exception


class BayesianOptimisation(experiment)[source]
Parameters:

experiment (bojaxns.experiment.OptimisationExperiment) –

property experiment
classmethod create_new_experiment(new_experiment)[source]
Parameters:

new_experiment (bojaxns.experiment.NewExperimentRequest) –

Return type:

BayesianOptimisation

add_trial_from_data(key, param_values)[source]
Parameters:
  • key (jax._src.random.PRNGKey) –

  • param_values (bojaxns.common.ParamValues) –

Return type:

str

create_new_trial(key, random_explore=False, beta=0.5)[source]
Parameters:
  • key (jax._src.random.PRNGKey) –

  • random_explore (bool) –

  • beta (float) –

Return type:

str

get_trial(trial_id)[source]
Parameters:

trial_id (str) –

Return type:

bojaxns.experiment.Trial

delete_trial(trial_id)[source]
Parameters:

trial_id (str) –

post_measurement(trial_id, trial_update)[source]
trial_size(trial_id)[source]
Parameters:

trial_id (str) –
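
The trial lifecycle (create a trial, run the experiment, post measurements, query how many measurements a trial has) can be mimicked with an in-memory toy. All names and storage here are hypothetical stand-ins; no bojaxns calls are made.

```python
# Toy stand-in for the create-trial / post-measurement / trial-size workflow.
import itertools

class ToyOptimiser:
    def __init__(self):
        self._ids = itertools.count()
        self.trials = {}

    def create_new_trial(self, param_values):
        """Register a new trial and return its id."""
        trial_id = f"trial-{next(self._ids)}"
        self.trials[trial_id] = {"param_values": param_values, "updates": []}
        return trial_id

    def post_measurement(self, trial_id, objective_measurement):
        """Attach one objective measurement to an existing trial."""
        self.trials[trial_id]["updates"].append(objective_measurement)

    def trial_size(self, trial_id):
        """Number of measurements posted against this trial."""
        return len(self.trials[trial_id]["updates"])
```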

visualise(main_color='#7e97bf', grid_color='#969396')[source]

Constructs a visual breakdown of the experiment's current condition.

Parameters:
  • main_color – color of main axes

  • grid_color – color of grid

Returns:

a pylab Figure

Raises:

NotEnoughData – if there is not enough data to compute a breakdown

Return type:

pylab.Figure

latin_hypercube(seed, num_samples, num_dim)[source]

Sample from the latin-hypercube, defined as the continuous analogue of the discrete latin-hypercube: if you partition each dimension into num_samples equal-volume intervals, then there is exactly one point in each interval per dimension. Uniformity is guaranteed by randomly assigning the permutation of each dimension. The degree of randomness is controlled by cube_scale: a value of 0 places each sample at the centre of its grid cell, and a value of 1 places it uniformly at random inside the grid cell.

Parameters:
  • seed (int) – PRNG seed

  • num_samples (int) – number of samples in total to draw

  • num_dim (int) – number of dimensions in each sample

  • cube_scale – the scale of randomness, in (0, 1)

Returns:

latin-hypercube samples of shape [num_samples, num_dim]
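
The construction described above can be sketched in pure Python (a stand-in for the jax implementation; `cube_scale` is exposed as a keyword argument here):

```python
# Latin-hypercube sampler sketch: per dimension, a random permutation assigns
# exactly one sample to each of num_samples equal intervals; cube_scale
# interpolates between interval centres (0) and uniform positions within
# each interval (1).
import random

def latin_hypercube(seed, num_samples, num_dim, cube_scale=1.0):
    rng = random.Random(seed)
    samples = [[0.0] * num_dim for _ in range(num_samples)]
    for d in range(num_dim):
        perm = list(range(num_samples))
        rng.shuffle(perm)
        for i in range(num_samples):
            # Offset within the interval: 0.5 is the centre; cube_scale widens
            # the jitter symmetrically around it.
            u = 0.5 + cube_scale * (rng.random() - 0.5)
            samples[i][d] = (perm[i] + u) / num_samples
    return samples
```

Whatever `cube_scale` is, each of the `num_samples` intervals in every dimension contains exactly one sample.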

build_example(model)[source]
Parameters:

model (Type[_T]) –

Return type:

_T

current_utc()[source]
Return type:

datetime.datetime