Nestle: nested sampling routines to evaluate Bayesian evidence.

nestle.sample(loglikelihood, prior_transform, ndim, npoints=100, method='single', update_interval=None, npdim=None, maxiter=None, maxcall=None, dlogz=None, decline_factor=None, rstate=None, callback=None, **options)

Perform nested sampling to evaluate Bayesian evidence.

Parameters:

loglikelihood : function

Function returning log(likelihood) given parameters as a 1-d numpy array of length ndim.

prior_transform : function

Function translating a unit cube to the parameter space according to the prior. The input is a 1-d numpy array with length ndim, where each value is in the range [0, 1). The return value should also be a 1-d numpy array with length ndim, where each value is a parameter. The return value is passed to the loglikelihood function. For example, for a 2 parameter model with flat priors in the range [0, 2), the function would be:

def prior_transform(u):
    return 2.0 * u
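Priors need not be identical across parameters; each component of the unit-cube vector can be mapped independently. As an illustrative sketch (not taken from the nestle docs), a two-parameter model with one parameter flat on [-5, 5) and the other flat on [0, 10) could use:

```python
import numpy as np

def prior_transform(u):
    """Map the unit cube to the prior: x[0] flat on [-5, 5),
    x[1] flat on [0, 10)."""
    x = np.empty_like(u)
    x[0] = 10.0 * u[0] - 5.0  # [0, 1) -> [-5, 5)
    x[1] = 10.0 * u[1]        # [0, 1) -> [0, 10)
    return x
```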

ndim : int

Number of parameters returned by prior_transform and accepted by loglikelihood.

npoints : int, optional

Number of active points. Larger numbers result in a more finely sampled posterior (more accurate evidence), but also a larger number of iterations required to converge. Default is 100.

method : {‘classic’, ‘single’, ‘multi’}, optional

Method used to select new points. Choices are classic nested sampling (‘classic’), single-ellipsoidal (‘single’), or multi-ellipsoidal (‘multi’). Default is ‘single’.

update_interval : int, optional

Only update the new point selector every update_interval-th likelihood call. Update intervals larger than 1 can be more efficient when the likelihood function is very fast, particularly when using the multi-ellipsoid method. Default is round(0.6 * npoints).

npdim : int, optional

Number of parameters accepted by prior_transform. This might differ from ndim in the case where a parameter of loglikelihood is dependent upon multiple independently distributed parameters, some of which may be nuisance parameters.

maxiter : int, optional

Maximum number of iterations. Iteration may stop earlier if termination condition is reached. Default is no limit.

maxcall : int, optional

Maximum number of likelihood evaluations. Iteration may stop earlier if termination condition is reached. Default is no limit.

dlogz : float, optional

If supplied, iteration will stop when the estimated contribution of the remaining prior volume to the total evidence falls below this threshold. Explicitly, the stopping criterion is log(z + z_est) - log(z) < dlogz where z is the current evidence from all saved samples, and z_est is the estimated contribution from the remaining volume. This option and decline_factor are mutually exclusive. If neither is specified, the default is dlogz=0.5.

decline_factor : float, optional

If supplied, iteration will stop when the weight (likelihood times prior volume) of newly saved samples has been declining for decline_factor * nsamples consecutive samples. A value of 1.0 seems to work pretty well. This option and dlogz are mutually exclusive.
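The dlogz stopping criterion above can be evaluated stably in log space with numpy.logaddexp. A minimal sketch (the names logz and logz_remain are illustrative, not nestle's internals):

```python
import numpy as np

def should_stop(logz, logz_remain, dlogz=0.5):
    """Check the dlogz stopping criterion in a numerically stable way.

    logz is log(z), the evidence from all saved samples so far;
    logz_remain is log(z_est), the estimated remaining contribution.
    """
    # log(z + z_est) - log(z), computed without leaving log space
    return np.logaddexp(logz, logz_remain) - logz < dlogz
```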

rstate : RandomState, optional

RandomState instance. If not given, the global random state of the numpy.random module will be used.

callback : function, optional

Callback function to be called at each iteration. A single argument, a dictionary, is passed to the callback. The keys include 'it', the current iteration number, and 'logz', the current total log evidence of all saved points. To simply print these at each iteration, use the convenience function: callback=nestle.print_progress.
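Any function accepting the info dictionary works here, not just nestle.print_progress. For example, a hypothetical callback that records the running evidence trace for later inspection or plotting:

```python
logz_trace = []

def record_progress(info):
    """Store (iteration, running log-evidence) pairs at each iteration."""
    logz_trace.append((info['it'], info['logz']))

# would be passed as: nestle.sample(..., callback=record_progress)
```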

Returns:

result : Result

A dictionary-like object with attribute access; attributes can be accessed as either result['niter'] or result.niter, for example. Attributes:

niter (int)

Number of iterations.

ncall (int)

Number of likelihood calls.

logz (float)

Natural logarithm of evidence (integral of posterior).

logzerr (float)

Estimated numerical (sampling) error on logz.

h (float)

Information. This is a measure of the “peakiness” of the likelihood function. A constant likelihood has zero information.

samples (ndarray)

Parameter values of each sample. Shape is (nsamples, ndim).

logvol (ndarray)

Natural log of the prior volume corresponding to each sample. Shape is (nsamples,).

logl (ndarray)

Natural log of the likelihood for each sample, as returned by user-supplied logl function. Shape is (nsamples,).

weights (ndarray)

Weight corresponding to each sample, normalized to unity. These are proportional to exp(logvol + logl). Shape is (nsamples,).
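The normalization of these weights can be done stably by shifting the log-weights by their maximum before exponentiating. A sketch of how such weights might be computed from the logvol and logl arrays (illustrative only, not nestle's internal code):

```python
import numpy as np

def normalized_weights(logvol, logl):
    logw = logvol + logl  # unnormalized log-weights
    logw -= logw.max()    # shift for numerical stability before exp
    w = np.exp(logw)
    return w / w.sum()    # normalize to unit sum
```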

Other Parameters:

steps : int, optional

For the ‘classic’ method, the number of steps to take when selecting a new point. Default is 20.

enlarge : float, optional

For the ‘single’ and ‘multi’ methods, enlarge the ellipsoid(s) by this fraction in volume. Default is 1.2.


nestle.print_progress(info)

Callback function that prints a running total on a single line.

Parameters:

info : dict

Dictionary containing keys 'it' and 'logz'.

nestle.mean_and_cov(x, weights)

Compute weighted sample mean and covariance.

Parameters:

x : ndarray

2-D array containing data samples. Shape is (M, N) where N is the number of variables and M is the number of samples or observations. This ordering is equivalent to using rowvar=0 in numpy.cov.

weights : ndarray

1-D array of sample weights. Shape is (M,).

Returns:

mean : ndarray

Weighted average of samples, with shape (N,).

cov : ndarray

The covariance matrix of the variables with shape (N, N).

Notes:

Implements formula described here: https://en.wikipedia.org/wiki/Sample_mean_and_sample_covariance (see “weighted samples” section)
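A sketch of that weighted-samples formula in numpy (the name weighted_mean_and_cov is illustrative; nestle provides this as nestle.mean_and_cov):

```python
import numpy as np

def weighted_mean_and_cov(x, weights):
    w = weights / weights.sum()  # normalize the weights
    mean = w @ x                 # weighted average, shape (N,)
    dx = x - mean                # deviations from the mean, shape (M, N)
    # unbiased weighted covariance ("weighted samples" correction factor)
    cov = (w[:, None] * dx).T @ dx / (1.0 - np.sum(w ** 2))
    return mean, cov
```

With equal weights this reduces to the ordinary sample mean and the unbiased sample covariance (numpy.cov with rowvar=0).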

nestle.resample_equal(samples, weights, rstate=None)

Resample the samples so that the final samples all have equal weight.

Each input sample appears in the output array either floor(weights[i] * N) or ceil(weights[i] * N) times, with floor or ceil randomly selected (weighted by proximity).

Parameters:

samples : ndarray

Unequally weighted samples returned by the nested sampling algorithm. Shape is (N, ...), with N the number of samples.

weights : ndarray

Weight of each sample. Shape is (N,).

Returns:

equal_weight_samples : ndarray

Samples with equal weights, same shape as input samples.

Notes:

Implements the systematic resampling method. Another way to sample according to weights would be:

N = len(weights)
new_samples = samples[np.random.choice(N, size=N, p=weights)]

However, the method used in this function is less “noisy”.


>>> x = np.array([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
>>> w = np.array([0.6, 0.2, 0.15, 0.05])
>>> nestle.resample_equal(x, w)
array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 3.,  3.]])
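For illustration, systematic resampling can be sketched as follows (a simplified stand-in that may differ from nestle's exact implementation in ordering and tie-breaking): a single uniform offset generates N evenly spaced positions in [0, 1), which are then located in the cumulative sum of the weights.

```python
import numpy as np

def systematic_resample(samples, weights, rstate=None):
    if rstate is None:
        rstate = np.random
    N = len(weights)
    # one uniform offset shared by N evenly spaced positions in [0, 1)
    positions = (rstate.random_sample() + np.arange(N)) / N
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    # each position picks the sample whose cumulative weight covers it
    return samples[np.searchsorted(cumulative, positions)]
```

Because all positions share one random offset, each sample appears floor(weights[i] * N) or ceil(weights[i] * N) times, matching the behavior described above.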
class nestle.Result

Represents a sampling result.

Since this class is essentially a subclass of dict with attribute accessors, one can see which attributes are available using the keys() method.


summary()

Return a nicely formatted string giving a summary of the run.