search_query=cat:astro-ph.*+AND+lastUpdatedDate:[202603182000+TO+202603242000]&start=0&max_results=5000

New astro-ph.* submissions cross-listed on cs.AI, stat.*, cs.LG, or physics.data-an, starting 202603182000 and ending 202603242000

Feed last updated: 2026-03-24T05:18:39Z

A plug-and-play approach with fast uncertainty quantification for weak lensing mass mapping

Authors: Hubert Leterme, Andreas Tersenov, Jalal Fadili, Jean-Luc Starck
Comments: No comment found
Primary Category: astro-ph.CO
All Categories: astro-ph.CO, astro-ph.IM, cs.LG, stat.ME

Upcoming stage-IV surveys such as Euclid and Rubin will deliver vast amounts of high-precision data, opening new opportunities to constrain cosmological models with unprecedented accuracy. A key step in this process is the reconstruction of the dark matter distribution from noisy weak lensing shear measurements. Current deep learning-based mass mapping methods achieve high reconstruction accuracy, but either require retraining a model for each new observed sky region (limiting practicality) or rely on slow MCMC sampling. Efficient exploitation of future survey data therefore calls for a new method that is accurate, flexible, and fast at inference. In addition, uncertainty quantification with coverage guarantees is essential for reliable cosmological parameter estimation. We introduce PnPMass, a plug-and-play approach for weak lensing mass mapping. The algorithm produces point estimates by alternating between a gradient descent step with a carefully chosen data fidelity term, and a denoising step implemented with a single deep learning model trained on simulated data corrupted by Gaussian white noise. We also propose a fast, sampling-free uncertainty quantification scheme based on moment networks, with calibrated error bars obtained through conformal prediction to ensure coverage guarantees. Finally, we benchmark PnPMass against both model-driven and data-driven mass mapping techniques. PnPMass achieves performance close to that of state-of-the-art deep-learning methods while offering fast inference (converging in just a few iterations) and requiring only a single training phase, independently of the noise covariance of the observations. It therefore combines flexibility, efficiency, and reconstruction accuracy, while delivering tighter error bars than existing approaches, making it well suited for upcoming weak lensing surveys.
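As a rough illustration of the plug-and-play scheme the abstract describes, the sketch below alternates a gradient step on a Gaussian data-fidelity term with a call to a pretrained denoiser, and adds a split-conformal rescaling of moment-network error bars. All names here (forward_op, denoiser, noise_cov_inv, pred_std, ...) are placeholders assumed for illustration only; they are not PnPMass's actual interface, and the paper's specific data-fidelity term and denoiser are not reproduced.

    import numpy as np

    def pnp_mass_map(shear_obs, forward_op, adjoint_op, denoiser,
                     noise_cov_inv, step=0.5, n_iter=20):
        """Plug-and-play forward-backward iteration (illustrative sketch).

        shear_obs     : observed (noisy) shear field y
        forward_op    : convergence -> shear linear operator A
        adjoint_op    : its adjoint A^T
        denoiser      : pretrained Gaussian denoiser acting as the prior step
        noise_cov_inv : inverse noise covariance, applied pixel-wise (diagonal)
        """
        kappa = adjoint_op(shear_obs)          # simple initial guess
        for _ in range(n_iter):
            # gradient step on 0.5 * (A k - y)^T C^{-1} (A k - y)
            residual = forward_op(kappa) - shear_obs
            kappa = kappa - step * adjoint_op(noise_cov_inv * residual)
            # denoising (prior) step with the single pretrained network
            kappa = denoiser(kappa)
        return kappa

    def conformal_scale(pred_std, abs_errors_calib, alpha=0.05):
        """Split-conformal rescaling of predicted error bars (sketch).

        Scores |error| / predicted_std are computed on a held-out
        calibration set; the returned quantile q rescales the
        moment-network std so that kappa_hat +/- q * pred_std reaches
        roughly 1 - alpha coverage.
        """
        scores = np.ravel(abs_errors_calib / pred_std)
        n = scores.size
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        return np.quantile(scores, level)

The conformal step is the generic split-conformal recipe; how PnPMass defines its scores and intervals may differ in detail.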


ALABI: Active Learning for Accelerated Bayesian Inference

Authors: Jessica Birky, Rory K. Barnes
Comments: Submitted to PASP, comments welcome
Primary Category: astro-ph.IM
All Categories: astro-ph.IM, physics.data-an

We present Active Learning for Accelerated Bayesian Inference (\texttt{alabi}): an open-source Python package for performing Bayesian inference with computationally expensive models. Given a forward model and observational data to construct a likelihood and priors, \texttt{alabi}\ uses a Gaussian Process (GP) surrogate model trained to predict posterior probability as a function of input parameters, and employs active learning to iteratively improve GP predictive performance in high-likelihood regions where the GP is most uncertain. \texttt{alabi}\ provides a uniform interface for using Markov chain Monte Carlo (MCMC) with different packages, including the affine-invariant sampler \texttt{emcee}, and nested samplers \texttt{dynesty}, \texttt{multinest}, and \texttt{ultranest}. This approach facilitates accurate estimation of the desired posterior distribution, while reducing the number of computationally expensive model evaluations required by factors of thousands. We demonstrate the performance of \texttt{alabi}\ on a variety of test cases, including where inference is challenging due to complex posterior structure or high dimensionality. We show that \texttt{alabi}\ offers a substantial improvement for likelihood functions with evaluation times $\gtrsim 1$\,s, speeding up MCMC computations by a factor of $10-1000\times$ when tested on problems with up to 64 dimensions.
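The toy sketch below illustrates the general idea described in the abstract, a Gaussian Process surrogate of the log-posterior refined by active learning, using scikit-learn. It is not alabi's actual interface, and the mean-plus-standard-deviation acquisition rule is an assumption standing in for whatever criterion the package uses to target high-likelihood, high-uncertainty regions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def active_gp_surrogate(log_post, bounds, n_init=20, n_active=100,
                            n_cand=2000, rng=None):
        """Active-learning GP surrogate of a log-posterior (illustrative sketch).

        log_post : expensive callable theta -> log posterior density
        bounds   : (ndim, 2) array giving a box over the parameters
        Candidate points are scored with an upper-confidence-bound rule
        (predicted mean + std), which favours probable regions where the
        GP is still uncertain; the expensive model is evaluated only at
        the winning candidate on each iteration.
        """
        rng = np.random.default_rng(rng)
        bounds = np.asarray(bounds)
        ndim = len(bounds)
        lo, hi = bounds[:, 0], bounds[:, 1]

        # initial training set drawn uniformly from the box
        X = rng.uniform(lo, hi, size=(n_init, ndim))
        y = np.array([log_post(x) for x in X])

        kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(ndim))
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

        for _ in range(n_active):
            gp.fit(X, y)
            cand = rng.uniform(lo, hi, size=(n_cand, ndim))
            mu, sd = gp.predict(cand, return_std=True)
            x_new = cand[np.argmax(mu + sd)]       # acquisition: mean + std
            X = np.vstack([X, x_new])
            y = np.append(y, log_post(x_new))

        gp.fit(X, y)
        # The cheap surrogate can now stand in for log_post inside any MCMC
        # sampler, e.g. emcee.EnsembleSampler(nwalkers, ndim,
        #                                     lambda t: gp.predict(t[None])[0])
        return gp

Because every MCMC step then costs a GP prediction instead of a full forward-model run, this is where the reported factor-of-thousands reduction in expensive model evaluations comes from.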