We introduce the Morph approximation, a class of product approximations of probability densities that selects low-order disjoint parameter blocks by maximizing the sum of their total correlations. We use the posterior approximation via Morph as the importance distribution in optimal bridge sampling. We denote this procedure by MorphZ; it serves as a post-processing estimator of the marginal likelihood. The MorphZ estimator requires only posterior samples together with the prior and likelihood, and is fully agnostic to the choice of sampler. We evaluate MorphZ's performance across statistical benchmarks, pulsar timing array (PTA) models, compact binary coalescence (CBC) gravitational-wave (GW) simulations, and the GW150914 event. Across these applications, spanning low to high dimensionalities, MorphZ yields accurate evidence estimates at substantially reduced computational cost relative to standard approaches, and can improve these estimates even when posterior coverage is incomplete. Its bridge-sampling relative-error diagnostic provides conservative uncertainty estimates. Because MorphZ operates directly on posterior draws, it complements exploration-oriented samplers by enabling fast and reliable evidence estimation, and it can be seamlessly integrated into existing inference workflows.
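As an aside, the bridge-sampling core of such a post-processing estimator can be sketched in a few lines. The Python/NumPy snippet below implements the standard iterative optimal bridge-sampling update for log Z, assuming an importance distribution q has already been fitted to the posterior draws (the Morph block construction itself is not shown); the function name and arguments are illustrative, not MorphZ's actual interface.

import numpy as np

def bridge_sampling_logZ(log_post_at_post, log_prop_at_post,
                         log_post_at_prop, log_prop_at_prop,
                         n_iter=200, tol=1e-10):
    """Iterative optimal bridge-sampling estimate of log Z (illustrative sketch).

    log_post_*  : log of the *unnormalized* posterior (prior x likelihood)
    log_prop_*  : log of the *normalized* proposal (importance) density
    *_at_post   : evaluated at draws from the posterior
    *_at_prop   : evaluated at draws from the proposal
    """
    n1 = len(log_post_at_post)   # number of posterior draws
    n2 = len(log_post_at_prop)   # number of proposal draws
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

    # log ratios l = log[ p_unnorm / q ] at each set of draws
    l1 = log_post_at_post - log_prop_at_post
    l2 = log_post_at_prop - log_prop_at_prop

    log_r = np.median(l2)        # rough initial guess for log Z
    for _ in range(n_iter):
        # numerator: average of l / (s1*l + s2*r) over proposal draws, in log space
        num = -np.logaddexp(np.log(s1) + l2, np.log(s2) + log_r)
        log_num = np.logaddexp.reduce(l2 + num) - np.log(n2)
        # denominator: average of 1 / (s1*l + s2*r) over posterior draws
        den = -np.logaddexp(np.log(s1) + l1, np.log(s2) + log_r)
        log_den = np.logaddexp.reduce(den) - np.log(n1)
        new_log_r = log_num - log_den
        if abs(new_log_r - log_r) < tol:
            break
        log_r = new_log_r
    return log_r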
Estimating the auto power spectrum of cosmological tracers from line-intensity mapping (LIM) data is often limited by instrumental noise, residual foregrounds, and systematics. Cross-power spectra between multiple lines offer a robust alternative, mitigating noise bias and systematics. However, inferring the auto spectrum from cross-correlations relies on two key assumptions: that all tracers are linearly biased with respect to the matter density field, and that they are strongly mutually correlated. In this work, we introduce a new diagnostic statistic, \(\mathcal{Q}\), which serves as a data-driven null test of these assumptions. Constructed from combinations of cross-spectra between four distinct spectral lines, \(\mathcal{Q}\) identifies regimes where cross-spectrum-based auto-spectrum reconstruction is unbiased. We validate its behavior using both analytic toy models and simulations of LIM observables, including star formation lines ([CII], [NII], [CI], [OIII]) and the 21-cm signal. We explore a range of redshifts and instrumental configurations, incorporating noise from representative surveys. Our results demonstrate that the criterion \( \mathcal{Q} \approx 1 \) reliably selects the modes where cross-spectrum estimators are valid, while significant deviations indicate that the key assumptions have been violated. The \( \mathcal{Q} \) diagnostic thus provides a simple yet powerful data-driven consistency check for multi-tracer LIM analyses.
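The abstract does not spell out the explicit form of \(\mathcal{Q}\); as a hedged illustration, the Python sketch below builds one four-line cross-spectrum ratio that equals unity whenever every tracer is linearly biased and perfectly correlated with the matter field, since then \(P_{ij}(k) = b_i b_j P_m(k)\). The definition used in the paper may well differ, and all inputs here are toy quantities.

import numpy as np

def q_like_ratio(P12, P34, P13, P24):
    """Illustrative four-line cross-spectrum ratio (hypothetical form).

    If every tracer i is linearly biased, delta_i = b_i * delta_m, and the tracers
    are perfectly mutually correlated, then P_ij(k) = b_i b_j P_m(k) and this
    ratio equals 1 at every k; deviations from unity flag modes where the
    assumptions behind cross-spectrum reconstruction break down.
    """
    return (np.asarray(P12) * np.asarray(P34)) / (np.asarray(P13) * np.asarray(P24))

# toy check: factorized spectra give exactly 1
k = np.linspace(0.05, 1.0, 20)
Pm = k**-2.0                                      # toy matter power spectrum
b = {1: 1.5, 2: 2.0, 3: 0.8, 4: 3.0}              # toy linear biases
P = {(i, j): b[i] * b[j] * Pm for i in b for j in b if i < j}
print(np.allclose(q_like_ratio(P[(1, 2)], P[(3, 4)], P[(1, 3)], P[(2, 4)]), 1.0))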
Semi-analytic models are a widely used approach to simulate galaxy properties within a cosmological framework, relying on simplified yet physically motivated prescriptions. They have also proven to be an efficient alternative for generating accurate galaxy catalogs, offering a faster and less computationally expensive option compared to full hydrodynamical simulations. In this paper, we demonstrate that using only galaxy $3$D positions and radial velocities, we can train a graph neural network coupled to a moment neural network to obtain a robust machine-learning-based model capable of estimating the matter density parameter, $\Omega_{\rm m}$, with a precision of approximately 10%. The network is trained on ($25 h^{-1}$Mpc)$^3$ volumes of galaxy catalogs from L-Galaxies and can successfully extrapolate its predictions to other semi-analytic models (GAEA, SC-SAM, and Shark) and, more remarkably, to hydrodynamical simulations (Astrid, SIMBA, IllustrisTNG, and SWIFT-EAGLE). Our results show that the network is robust to variations in astrophysical and subgrid physics, cosmological and astrophysical parameters, and the different halo-profile treatments used across simulations. This suggests that the physical relationships encoded in the phase-space of semi-analytic models are largely independent of their specific physical prescriptions, reinforcing their potential as tools for the generation of realistic mock catalogs for cosmological parameter inference.
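For concreteness, a minimal PyTorch sketch of the kind of architecture described, a message-passing network over galaxy positions and radial velocities feeding a moment head that outputs a mean and scatter for $\Omega_{\rm m}$, is given below. The linking radius, layer sizes, and loss form are assumptions chosen for illustration and are not taken from the paper.

import torch
import torch.nn as nn

def radius_graph(pos, r_link=2.0):
    """Edge index (2, E) linking galaxy pairs closer than r_link.
    Toy O(N^2) builder; periodic boundaries and self-loops are ignored."""
    d = torch.cdist(pos, pos)                          # (N, N) pairwise distances
    src, dst = torch.nonzero((d < r_link) & (d > 0), as_tuple=True)
    return torch.stack([src, dst])

class TinyGNN(nn.Module):
    """One message-passing layer plus a moment head predicting (mu, sigma) for Omega_m."""
    def __init__(self, hidden=64):
        super().__init__()
        # node feature: radial velocity (1); edge message also uses the relative position (3)
        self.msg = nn.Sequential(nn.Linear(1 + 1 + 3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))  # -> (mu, log sigma)

    def forward(self, pos, vrad, edge_index):
        src, dst = edge_index
        feats = torch.cat([vrad[src, None], vrad[dst, None], pos[src] - pos[dst]], dim=1)
        m = self.msg(feats)                              # messages on edges
        agg = m.new_zeros(pos.shape[0], m.shape[1]).index_add_(0, dst, m)
        graph = agg.mean(dim=0, keepdim=True)            # global mean pool over galaxies
        mu, log_sigma = self.head(graph).unbind(dim=1)
        return mu, log_sigma.exp()

def moment_loss(mu, sigma, target):
    """Moment-network style loss (one common variant): fit the mean and its scatter."""
    return ((target - mu)**2 + ((target - mu)**2 - sigma**2)**2).mean()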
Radio emission from pulsars exhibits a diverse range of phenomena, among which nulling, where the emission becomes temporarily undetectable, is particularly intriguing. Observations suggest nulling is prevalent in many long-period pulsars and must be understood to obtain a more comprehensive picture of pulsar emission and its evolution. One limitation in the observational characterisation of nulling is the low signal-to-noise ratio, which often makes individual pulses hard to distinguish from noise or from putative faint emission. Although some approaches in the published literature attempt to address this, they lose efficacy when individual pulses are indistinguishable from the noise and, as a result, can yield less accurate measurements. Here we develop a new method (the $\mathbb{N}$sum algorithm) that uses sums of pulses to improve distinguishability from noise and thus measures the nulling fraction more robustly. It can be employed for measuring nulling fractions in weaker pulsars and in observations with a limited number of pulses. We compare our algorithm with the recently developed Gaussian Mixture Modelling approach, using both simulated and real data, and find that our approach yields consistent results for generic and weaker pulsars. We also explore quasi-periodicity in nulling and measure the related parameters for five pulsars, including PSRs~J1453$-$6413, J0950$+$0755, and J0026$-$1955, for which these are the first such measurements. We compare and contrast our analysis of quasi-periodic nulling with previously published work and explore the use of spin-down energy loss ($\dot E$) to distinguish between different types of modulation behaviour.
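The statistical motivation for summing pulses, that the signal grows linearly with the number of summed pulses while the noise grows only as its square root, can be illustrated with a short synthetic experiment in NumPy. The sketch below conveys only this underlying idea, not the authors' $\mathbb{N}$sum algorithm, and every number in it is simulated.

import numpy as np

rng = np.random.default_rng(0)

n_pulses, snr_single = 20_480, 0.5          # very weak single-pulse S/N
null_fraction = 0.3

# simulate long null/burst runs so that blocks of consecutive pulses mostly share one state
state = np.repeat(rng.random(n_pulses // 512) > null_fraction, 512).astype(float)
energy = state * snr_single + rng.normal(size=n_pulses)   # on-pulse energies (noise rms = 1)

def separation(e, N):
    """Separation (in units of the noise rms) between the emission and null
    energy distributions after summing N consecutive pulses."""
    blocks = e[: len(e) // N * N].reshape(-1, N).sum(axis=1)
    s = state[: len(e) // N * N].reshape(-1, N).mean(axis=1)
    on, off = blocks[s > 0.5], blocks[s < 0.5]
    return (on.mean() - off.mean()) / off.std()

for N in (1, 16, 64, 256):
    print(N, round(separation(energy, N), 2))   # grows roughly as sqrt(N)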
Telescope bibliographies record the pulse of astronomy research by capturing publication statistics and citation metrics for telescope facilities. Robust and scalable bibliographies ensure that we can measure the scientific impact of our facilities and archives. However, the growing rate of publications threatens to outpace our ability to manually label astronomical literature. We therefore present the Automated Mission Classifier (amc), a tool that uses large language models (LLMs) to identify and categorize telescope references by processing large quantities of paper text. A modified version of amc performs well on the TRACS Kaggle challenge, achieving a macro $F_1$ score of 0.84 on the held-out test set. amc is valuable for telescopes beyond TRACS: the initial software was developed to identify papers featuring scientific results from NASA missions. We also investigate how amc can be used to interrogate historical datasets and surface potential label errors. Our work demonstrates that LLM-based applications offer powerful and scalable assistance for library sciences.
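A hedged sketch of such an LLM-based labeling loop is given below in Python, with a placeholder label set and a generic call_llm hook standing in for whatever model amc actually wraps, plus the macro $F_1$ evaluation quoted above; the prompt, labels, and helper names are hypothetical and do not reflect amc's real schema.

from sklearn.metrics import f1_score

LABELS = ["science", "mention", "unrelated"]   # placeholder label set, not the TRACS schema

PROMPT = """You are labeling astronomy papers for a telescope bibliography.
Telescope: {telescope}
Paper text: {text}
Answer with exactly one label from: {labels}."""

def classify_paper(text, telescope, call_llm):
    """call_llm is any function str -> str wrapping an LLM of choice
    (hypothetical placeholder; amc's actual prompts and pipeline may differ)."""
    reply = call_llm(PROMPT.format(telescope=telescope, text=text[:8000],
                                   labels=", ".join(LABELS))).strip().lower()
    return reply if reply in LABELS else "unrelated"   # fall back on unparseable output

def evaluate(y_true, y_pred):
    """Macro F1 averages the per-class F1 scores, so rare classes count equally."""
    return f1_score(y_true, y_pred, labels=LABELS, average="macro")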