Modelling Pipeline
Our current modelling pipeline has three steps.
- class gigalens.inference.ModellingSequenceInterface(phys_model, prob_model, sim_config)
Defines the three steps in modelling:
1. Multi-start gradient descent to find the maximum a posteriori (MAP) estimate. See Martí [Marti03], György and Kocsis [GyorgyK11].
2. Variational inference (VI) using the MAP as a starting point. See Hoffman et al. [HBWP13], Blei et al. [BKM17]. The implementation is stochastic variational inference, so VI and SVI are used interchangeably.
3. Hamiltonian Monte Carlo (HMC) using the inverse of the VI covariance matrix as the mass matrix \(M\). See Duane et al. [DKPR87], Neal [Neal11].
- Parameters
  - phys_model (PhysicalModel) – The physical model of the lensing system that we want to fit
  - prob_model (ProbabilisticModel) – The probabilistic model of the data we are fitting
  - sim_config (SimulatorConfig) – Parameters for image simulation (e.g., pixel scale)
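For orientation, here is a minimal sketch of wiring the pipeline together. The concrete backend class (a TensorFlow implementation is assumed here), the import paths, and the SimulatorConfig arguments are illustrative assumptions, not verbatim API:

```python
# Sketch only: import paths, the concrete ModellingSequence subclass, and
# the SimulatorConfig arguments are assumptions for illustration.
from gigalens.simulator import SimulatorConfig        # assumed path
from gigalens.tf.inference import ModellingSequence   # assumed TF backend subclass

phys_model = ...  # a PhysicalModel for the lensing system
prob_model = ...  # a ProbabilisticModel wrapping the observed image
sim_config = SimulatorConfig(delta_pix=0.05, num_pix=80)  # pixel scale, image size (assumed)

fitter = ModellingSequence(phys_model, prob_model, sim_config)
# fitter.MAP(...), fitter.SVI(...), fitter.HMC(...) run the three steps in order.
```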
- abstract MAP(optimizer, start, n_samples, num_steps, seed)
Finds maximum a posteriori (MAP) estimates for the parameters. See Section 2.3 in our paper.
- Parameters
  - optimizer – An optimizer object with which to run MAP. Adam or variants thereof are recommended, with a decaying learning rate.
  - start (Optional[Any]) – Samples from which to start optimization. If none are provided, optimization starts from samples drawn directly from the prior.
  - n_samples (int) – Number of samples with which to run multi-start gradient descent
  - num_steps (int) – Number of gradient descent steps
  - seed (Optional[Any]) – A random seed (only necessary if start is not specified)
- Returns
  The unconstrained parameters of all n_samples samples after running num_steps of optimization.
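As a sketch, the MAP step might be driven with the recommended decaying-learning-rate Adam; the TensorFlow schedule shown and all argument values are illustrative assumptions:

```python
import tensorflow as tf

# Adam with an exponentially decaying learning rate, per the recommendation above.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2, decay_steps=100, decay_rate=0.9
)
opt = tf.keras.optimizers.Adam(schedule)

# With start=None, the n_samples starting points are drawn from the prior,
# so a seed must be supplied. Returns unconstrained parameters for all starts.
map_samples = fitter.MAP(opt, start=None, n_samples=500, num_steps=350, seed=0)
```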
- abstract SVI(optimizer, start, n_vi, num_steps, init_scales, seed)
Runs stochastic variational inference (SVI) to characterize the posterior scales; currently, only a multivariate Gaussian ansatz is supported. Because the implementation is stochastic, VI and SVI are used interchangeably. The result is roughly equivalent to taking the Hessian of the log posterior at the MAP, but in our experience the Hessian can become unstable in high dimensions (when some eigenvalues are very small). See Section 2.4 in our paper.
- Parameters
  - optimizer – An optimizer with which to minimize the ELBO loss. Adam or variants thereof are recommended, with slow learning-rate warm-up.
  - start – Initial guess for the posterior mean. Must have shape (1, d), where d is the number of parameters; by convention it lies in unconstrained parameter space.
  - n_vi (int) – Number of samples with which to approximate the ELBO loss
  - num_steps (int) – Number of optimization steps
  - init_scales (float or np.array) – Initial guess for the VI standard deviations
  - seed (Any) – A random seed for drawing samples from the posterior ansatz
- Returns
  The fitted posterior in unconstrained space
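Continuing the sketch, SVI can be seeded with the best MAP sample from the previous step (how that sample is selected is not shown; the warm-up schedule and all values are assumptions):

```python
import tensorflow as tf

# Slow learning-rate warm-up: ramp from a small rate up to the working rate.
warmup = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-4, decay_steps=200, end_learning_rate=1e-2
)
opt = tf.keras.optimizers.Adam(warmup)

# `best_map` is a single (1, d) point in unconstrained space, e.g. the
# lowest-loss MAP sample from the previous step (selection not shown).
q_z = fitter.SVI(opt, start=best_map, n_vi=500, num_steps=1000, init_scales=1e-3, seed=1)
```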
- abstract HMC(q_z, init_eps, init_l, n_hmc, num_burnin_steps, num_results, max_leapfrog_steps, seed)
Runs Hamiltonian Monte Carlo (HMC) to draw posterior samples. See Section 2.5 in our paper.
- Parameters
q_z – Fitted posterior from SVI. Used to calculate the mass matrix \(M\) for preconditioned HMC. Convention is that
q_z
is an approximation of the unconstrained posterior.init_eps (float) – Initial step size \(\epsilon\)
init_l (int) – Initial number of leapfrog steps \(L\)
n_hmc (int) – Number of HMC chains to run in parallel
num_burnin_steps (int) – Number of burn-in steps
num_results (int) – Number of samples to draw from each chain (after burning in)
max_leapfrog_steps (int) – Maximum number of leapfrog steps if \(L\) is tuned automatically
seed (
Any
) – A random seed
- Returns
Posterior chains in unconstrained space
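Finally, a sketch of the sampling step, reusing q_z from SVI; every numeric value below is a placeholder, not a recommendation:

```python
# Preconditioned HMC: q_z supplies the mass matrix M (inverse VI covariance).
chains = fitter.HMC(
    q_z,
    init_eps=0.3,           # initial leapfrog step size (epsilon)
    init_l=3,               # initial number of leapfrog steps (L)
    n_hmc=50,               # number of parallel chains
    num_burnin_steps=250,   # discarded warm-up iterations
    num_results=750,        # retained samples per chain
    max_leapfrog_steps=30,  # cap if L is tuned automatically
    seed=2,
)
# `chains` contains posterior samples in unconstrained space; mapping back to
# physical (constrained) parameters is model-specific and not shown here.
```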