search_query=cat:astro-ph.*+AND+lastUpdatedDate:[202604032000+TO+202604092000]&start=0&max_results=5000
We present a framework to compute non-Gaussian likelihoods for two-point correlation functions. The non-Gaussianity is most pronounced on large scales that will be well-measured by stage-IV weak-lensing surveys. We show how such a multivariate likelihood can be constructed and efficiently evaluated using a copula approach by incorporating exact one-dimensional marginals and a dependence structure derived from the exact multivariate likelihood. The copula likelihood is found to be in better agreement with simulated sampling distributions of correlation functions than Gaussian likelihoods, particularly on large scales. We furthermore investigate the effect of the non-Gaussian copula likelihood on posterior inference, including sampling the full parameter space of contemporary weak-lensing analyses. We find parameter shifts in $S_8$ on the order of one standard deviation for $1\,000\,\mathrm{deg}^2$ surveys but negligible shifts for areas of $10\,000\,\mathrm{deg}^2$, suggesting Gaussian likelihoods are sufficient for stage-IV surveys, though results depend on the detailed mask geometry and data-vector structure.
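The core of the copula construction described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: hypothetical gamma distributions stand in for the exact one-dimensional correlation-function marginals, and a small assumed correlation matrix stands in for the dependence structure derived from the exact multivariate likelihood.

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglike(x, marginals, corr):
    """Log-likelihood of a Gaussian copula with prescribed 1D marginals.

    x         : data vector, shape (d,)
    marginals : frozen scipy.stats distributions (the exact 1D marginals)
    corr      : (d, d) correlation matrix encoding the dependence structure
    """
    # Map the data through the marginal CDFs, then to Gaussian scores
    u = np.array([m.cdf(xi) for m, xi in zip(marginals, x)])
    z = stats.norm.ppf(u)
    # Copula density: correlated Gaussian over z minus independent Gaussians
    mvn = stats.multivariate_normal(mean=np.zeros(len(x)), cov=corr)
    log_copula = mvn.logpdf(z) - stats.norm.logpdf(z).sum()
    # Add the exact marginal densities to obtain the full log-likelihood
    log_marg = sum(m.logpdf(xi) for m, xi in zip(marginals, x))
    return log_copula + log_marg

# Example: skewed (gamma) marginals mimicking the non-Gaussianity of a
# correlation-function estimator on large scales (illustrative only)
marg = [stats.gamma(a=3.0), stats.gamma(a=5.0)]
R = np.array([[1.0, 0.6], [0.6, 1.0]])
ll = gaussian_copula_loglike(np.array([2.5, 4.0]), marg, R)
```

With an identity correlation matrix the copula term vanishes and the expression reduces to the product of the marginals, which provides a quick sanity check of any implementation.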
Stellar astrophysics relies critically on accurate descriptions of the physical conditions inside stars. Traditional solvers such as \texttt{MESA} (Modules for Experiments in Stellar Astrophysics), which employ adaptive finite-difference methods, can become computationally expensive and challenging to scale for large stellar population synthesis ($>10^9$ stars). In this work, we present a self-supervised physics-informed neural network (PINN) framework that provides a mesh-free and fully differentiable approach to solving the stellar structure equations under hydrostatic and thermal equilibrium. The model takes as input the stellar boundary conditions (at the center and surface) together with the chemical composition, and learns continuous radial profiles for mass $M_r(r)$, pressure $P(r)$, density $\rho(r)$, temperature $T(r)$, and luminosity $L_r(r)$ by enforcing the governing structure equations through physics-based loss terms. To incorporate realistic microphysics, we introduce auxiliary neural networks that approximate the equation of state and opacity tables as smooth, differentiable functions of the local thermodynamic state. These surrogates replace traditional tabulated inputs and enable end-to-end training. Once trained for a given star, the model produces continuous solutions across the entire radial domain without requiring discretization or interpolation. Validation against benchmark \texttt{MESA} models across a range of stellar masses yields a mean relative absolute error of $3.06\%$ and an average $R^2$ score of $99.98\%$. To our knowledge, this is the first demonstration that the stellar structure equations can be solved in a fully self-supervised and data-free fashion employing PINNs. This work establishes a foundation for scalable, physics-informed emulation of stellar interiors and opens the door to future extensions toward time-dependent stellar evolution.
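The physics-based loss terms mentioned above penalize the residuals of the structure equations. A minimal sketch, under simplifying assumptions not taken from the paper: derivatives are approximated by finite differences on a grid (in a PINN they would come from automatic differentiation of the network), and the analytic $n=1$ polytrope stands in for the network's continuous radial profiles.

```python
import numpy as np

G = 6.674e-8  # gravitational constant, CGS

def structure_residuals(r, m, P, rho):
    """Residuals of mass conservation and hydrostatic equilibrium:
    dm/dr - 4*pi*r^2*rho  and  dP/dr + G*m*rho/r^2."""
    res_mass = np.gradient(m, r) - 4.0 * np.pi * r**2 * rho
    res_hse = np.gradient(P, r) + G * m * rho / r**2
    return res_mass, res_hse

def physics_loss(r, m, P, rho):
    """Mean-squared physics residuals, the core of a PINN loss."""
    rm, rh = structure_residuals(r, m, P, rho)
    return np.mean(rm**2) + np.mean(rh**2)

# Analytic n = 1 polytrope (rho = rho_c sin(xi)/xi, with alpha = rho_c = 1)
# as a stand-in for learned continuous profiles; it satisfies both
# equations exactly, so the loss should be near machine/grid precision.
xi = np.linspace(0.1, 3.0, 2000)
rho = np.sin(xi) / xi
m = 4.0 * np.pi * (np.sin(xi) - xi * np.cos(xi))
P = 2.0 * np.pi * G * rho**2  # P = K*rho^2 with K = 2*pi*G for n = 1
```

During training, minimizing `physics_loss` (plus boundary-condition terms) drives the network toward profiles that satisfy the structure equations everywhere, without any precomputed solution data.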
Bayesian inference of nanohertz gravitational-wave background models in pulsar timing array analyses often relies on Gaussian-process interpolators to avoid repeated, computationally expensive strain-spectrum calculations. However, Gaussian-process training becomes a bottleneck for large training sets. We test whether probabilistic neural networks can replace Gaussian processes in this role for both a self-interacting dark matter model and a phenomenological environmental model. We find that neural networks recover consistent posteriors while significantly reducing both training and Markov chain Monte Carlo runtime, with the largest gains for the more computationally demanding model.
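The surrogate pattern described above (precompute expensive spectra on a parameter grid, then train a fast probabilistic emulator for use inside the sampler) can be sketched as follows. This is a deliberately lightweight stand-in, not the paper's architecture: a one-hidden-layer random-feature network whose output weights are fit in closed form, with a bootstrap ensemble supplying predictive uncertainty, and a toy one-parameter function replacing the strain-spectrum calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an expensive strain-spectrum calculation
# (one model parameter in, one characteristic-strain value out).
def expensive_spectrum(theta):
    return np.sin(3.0 * theta) + 0.5 * theta

# Precomputed training set, analogous to the GP-interpolator setup.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = expensive_spectrum(X[:, 0])

# Random tanh features; each ensemble member's output weights are fit
# by ridge regression on a bootstrap resample of the training set.
H, lam, n_members = 64, 1e-4, 10
W1 = rng.normal(0.0, 3.0, (1, H))
b1 = rng.uniform(-np.pi, np.pi, H)

def features(x):
    return np.tanh(x @ W1 + b1)

members = []
for _ in range(n_members):
    idx = rng.integers(0, len(X), len(X))  # bootstrap resample
    Phi = features(X[idx])
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(H), Phi.T @ y[idx])
    members.append(w)

def surrogate(theta):
    """Predictive mean and spread; cheap enough to call inside MCMC."""
    preds = features(theta) @ np.stack(members, axis=1)
    return preds.mean(axis=1), preds.std(axis=1)

mu, sigma = surrogate(np.array([[0.3]]))
```

The key property being exercised is that the trained surrogate is orders of magnitude cheaper than the original calculation while reporting a predictive uncertainty, which is what lets it replace the Gaussian process in the likelihood evaluation.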
We present a geometric framework for adaptive whitening in gravitational-wave detectors, reformulating the problem from a sequence of spectral factorizations to parallel transport on a principal bundle. We identify the whitening filter as a section over the manifold of power spectra and derive the minimum-phase connection as the unique geometric structure that enforces signal causality while preserving signal-to-noise ratio. This construction yields a rigorous definition of geometric drift, a coordinate-independent scalar measuring the intrinsic instability of the detector noise floor. The central result is the flatness theorem, which proves that the curvature of the connection vanishes for scalar fields. This establishes a holonomic update law, guaranteeing that the optimal filter correction is path-independent and determined solely by the instantaneous noise state, free from geometric phase or hysteresis. This framework unifies the static theory of Wiener-Hopf factorization with the dynamic requirements of real-time control, providing a rigorous certification for the stability of zero-latency calibration routines and establishing a foundation for gauge-theoretic signal processing (GTSP) in next-generation detector networks.
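At the computational level, the minimum-phase whitening filter that this framework treats geometrically is obtained by spectral factorization of the noise power spectrum. A minimal sketch of the standard real-cepstrum (Kolmogorov) factorization, with an assumed toy colored-noise spectrum; this illustrates the underlying Wiener-Hopf-type factorization, not the paper's bundle construction.

```python
import numpy as np

def minimum_phase_whitener(psd):
    """Causal (minimum-phase) whitening filter from a two-sided PSD
    sampled on a full FFT grid of even length (psd must be an even array).
    Real-cepstrum method: the unique causal spectral factor with
    magnitude 1/sqrt(psd)."""
    n = len(psd)
    log_mag = -0.5 * np.log(psd)          # log |W(f)| of the whitener
    cep = np.fft.ifft(log_mag).real       # real cepstrum (even sequence)
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]  # fold anticausal part forward
    fold[n // 2] = cep[n // 2]
    W = np.exp(np.fft.fft(fold))          # minimum-phase frequency response
    h = np.fft.ifft(W).real               # causal impulse response
    return W, h

# Example: a smooth colored-noise spectrum on an even FFT grid
n = 1024
f = np.fft.fftfreq(n)
psd = 1.0 / (0.05 + np.abs(f)) ** 2 + 0.1
W, h = minimum_phase_whitener(psd)
```

By construction $|W(f)|^2 = 1/S(f)$, so the filter whitens exactly, while the cepstral folding selects the causal (minimum-phase) factor; in the paper's language, recomputing this factorization as the spectrum drifts is the operation that the connection and holonomic update law organize.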