Approximate Bayesian inference using Expectation-Propagation (EP)
For applications where exact Bayesian inference is impractical, we have developed and applied approximate message-passing methods for scalable inference and uncertainty quantification. Among other applications, EP enables fast(er) inference for:
- Large-scale (e.g., imaging) inverse problems, as in AbdulazizEUSIPCO_2021, YaoTIP_2022 or YaoOptExp_2023
- Sparse regression problems, as in ZengTGRS_2022 or AltmannSciRep_2020
- Fast and/or online 3D imaging (using single-photon Lidar), as in AltmannTIP202 or DrummondSSPD_2021
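To illustrate the mechanics behind EP (cavity distributions, moment matching of tilted distributions, site updates), here is a minimal toy sketch, not the method of any of the papers above: estimating a scalar mean from Gaussian observations under a positivity constraint. The Gaussian factors are handled exactly and the constraint becomes a single EP site; because only one factor is non-Gaussian here, the moment-matched result coincides with the exact posterior moments. All function and variable names are illustrative.

```python
import math

def phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ep_positive_mean(y, noise_var, prior_var=1.0, n_sweeps=10):
    """EP for x ~ N(0, prior_var), y_i ~ N(x, noise_var), constrained x > 0.

    Toy sketch: Gaussian factors are exact; the indicator I(x > 0) is a
    single EP site, refined by matching the moments of its tilted
    distribution (a truncated Gaussian, available in closed form).
    """
    # Natural parameters (precision tau, precision-mean nu) of the exact
    # Gaussian part: prior times all Gaussian likelihood factors.
    tau_base = 1.0 / prior_var + len(y) / noise_var
    nu_base = sum(y) / noise_var
    # Site approximation of I(x > 0), initialised to a flat Gaussian.
    tau_site, nu_site = 0.0, 0.0
    for _ in range(n_sweeps):
        # Cavity: remove the site from the approximation q = base * site.
        tau_cav, nu_cav = tau_base, nu_base
        m, v = nu_cav / tau_cav, 1.0 / tau_cav
        # Moment-match the tilted distribution N(m, v) * I(x > 0).
        s = math.sqrt(v)
        alpha = -m / s
        lam = phi(alpha) / Phi(-alpha)
        mean_t = m + s * lam
        var_t = v * (1.0 - lam * (lam - alpha))
        # New site = moment-matched Gaussian divided by the cavity.
        tau_site = 1.0 / var_t - tau_cav
        nu_site = mean_t / var_t - nu_cav
    tau_q, nu_q = tau_base + tau_site, nu_base + nu_site
    return nu_q / tau_q, 1.0 / tau_q  # posterior mean and variance

mu, var = ep_positive_mean(y=[0.3, -0.2], noise_var=0.5)
```

In the large-scale problems cited above, the same cavity/moment-matching loop runs over many high-dimensional factors, which is where the scalability of message passing comes from.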
Variational autoencoders (VAEs) and Bayesian deep learning
For systems whose states can be effectively represented in lower-dimensional latent spaces, we have developed various VAE architectures for scalable inference and uncertainty quantification. Compared to other deep learning methods, these VAEs are more robust to changes in the system and provide enhanced uncertainty quantification. Example applications include:
- Imaging within dynamic systems, such as multimode fibres, as in AbdulazizSciRep_2023
- Identification of hazardous release parameters from imagery, as in AbdulazizSSPD_2023
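The uncertainty quantification mentioned above comes from the VAE objective itself: the encoder outputs a posterior mean and variance over the latent code, and training maximises the evidence lower bound (ELBO). As a generic NumPy sketch (not the architectures of the cited papers; all names are illustrative, and a unit-variance Gaussian decoder and standard-normal prior are assumed):

```python
import numpy as np

def gaussian_elbo(x, enc_mu, enc_logvar, decode, rng, n_samples=1):
    """Monte-Carlo ELBO for one datum, with Gaussian encoder
    q(z|x) = N(enc_mu, diag(exp(enc_logvar))) and prior N(0, I).

    `decode` maps a latent z to the mean of a unit-variance Gaussian
    likelihood. exp(enc_logvar) is the latent posterior variance, i.e.
    the encoder's uncertainty about the code.
    """
    elbo = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(enc_mu.shape)
        z = enc_mu + np.exp(0.5 * enc_logvar) * eps  # reparameterisation trick
        recon = decode(z)
        # log p(x|z) for a unit-variance Gaussian decoder, up to constants
        elbo += -0.5 * np.sum((x - recon) ** 2)
    elbo /= n_samples
    # Analytic KL( q(z|x) || N(0, I) ), per latent dimension
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu ** 2 - 1.0 - enc_logvar)
    return elbo - kl

# Toy usage: 2-D datum, hypothetical identity decoder
x = np.array([0.5, -1.0])
elbo = gaussian_elbo(x, enc_mu=np.zeros(2), enc_logvar=np.zeros(2),
                     decode=lambda z: z, rng=np.random.default_rng(0))
```

In a real system the encoder and decoder are neural networks trained by gradient ascent on this quantity; the reparameterisation step is what makes the sampling differentiable with respect to the encoder outputs.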