search_query=cat:astro-ph.*+AND+lastUpdatedDate:[202603282000+TO+202604032000]&start=0&max_results=5000
Large Language Models (LLMs) are now widely used in astrophysics, but do they actually make our lives easier, or do they merely invent new physics with enough confidence to hide a minus sign? In a specialized field where checking fluent hallucinations is itself labor-intensive, AI assistance can demand as much work as the task it claims to simplify. To evaluate where AI genuinely improves scientific workflows, we bypassed human trials and instead forced AI agents to cosplay as astrophysicists. We simulated 144 synthetic researchers, varying in career stage, AI awareness, and willingness to verify outputs, across 2,592 daily astrophysics research assignments. Comparing solo work against four styles of AI assistance produced 12,960 scored episodes. No assisted policy universally outperformed unassisted work in the primary Qwen production run. Instead, performance depends strongly on the task, the style of AI use, and the identity of the actor. While cautious assistance helps on creative, extractive, and critique-oriented tasks, it can fail catastrophically on derivation-heavy physics. A full actor-swap DeepSeek rerun changes that picture materially: verification-heavy use becomes the strongest assisted policy, two assisted modes enter the higher-utility/lower-risk quadrant, and the derivation-heavy fragility that dominates the Qwen production run largely disappears. In its current form, AI is useful, but only conditionally: its value is uneven, task-specific, and shaped jointly by workflow, usage policy, and which LLM you are using.
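As a quick consistency check on those counts, the sketch below enumerates the implied (researcher, assignment, policy) grid: 2,592 assignments over 144 researchers gives 18 per researcher, and five policies (solo plus four assisted) yield 12,960 episodes. The persona factor structure and the policy names are not spelled out in the abstract, so they appear here only as placeholders.

```python
# Minimal bookkeeping sketch implied by the abstract's numbers; persona
# factors and policy names are placeholders, not the paper's definitions.
from itertools import product

N_RESEARCHERS = 144
N_ASSIGNMENTS = 2_592                      # daily research assignments in total
POLICIES = ["solo", "assisted_1", "assisted_2", "assisted_3", "assisted_4"]

per_researcher = N_ASSIGNMENTS // N_RESEARCHERS        # 18 assignments each
episodes = list(product(range(N_RESEARCHERS), range(per_researcher), POLICIES))

assert len(episodes) == 12_960             # matches the scored-episode count
```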
Modern numerical models are increasingly complex, opaque, and computationally expensive, yet frequently fail to predict even qualitative features of observed phenomena. We propose a new paradigm, Declarative Bespoke Modelling, in which the modeller explicitly declares the relationship between model inputs and outputs. We demonstrate that this approach achieves perfect predictive accuracy, unconditional numerical stability, and complete interpretability. It represents a natural endpoint of contemporary modelling practice and achieves near-zero CO$_2$ emissions.
TESS (Transiting Exoplanet Survey Satellite) has produced long-term photometry for millions of stars across the sky. In this work, we present an asteroseismic catalogue of 19,151 red giants in the TESS Continuous Viewing Zones using sectors 1--87 (Years 1--7). We visually assessed the power spectra for oscillations and then applied the computationally efficient nuSYD method to confirm their reliability. This represents an 80% increase in the number of previously known oscillating red giants at TESS magnitudes $>$ 8. We determined the frequency of maximum power ($\nu_{\rm max}$) and the large frequency separation ($\Delta\nu$) using the pySYD pipeline, achieving typical precisions of 1.5% and 1%, respectively. We classified the stars into Red Giant Branch (RGB) and Core Helium Burning (CHeB) classes using a Convolutional Neural Network. Using spectroscopic data for 10,298 stars with reliable asteroseismic measurements, we measured stellar masses and radii with precisions of 7.5% and 2.8%, respectively, comparable to those from 4-yr $Kepler$ data. A comparison of the seismic radii with Gaia radii shows excellent agreement. With three years of TESS data, the asteroseismic parameters are precise enough to identify the RGB bump and delineate the Zero Age Helium Burning edge. Combined with astrometric data, these parameters reveal established trends across the Galactic plane, providing a valuable set of uniformly determined asteroseismic parameters for Galactic Archaeology.
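For readers outside asteroseismology, masses and radii of this kind are conventionally obtained from $\nu_{\rm max}$, $\Delta\nu$, and a spectroscopic $T_{\rm eff}$ through the solar-scaled relations sketched below; the abstract does not state which calibration or $\Delta\nu$ corrections the catalogue adopts, and the solar reference values quoted here are only nominal.

$$\frac{M}{M_\odot} \simeq \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)^{3}\left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-4}\left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{3/2},\qquad \frac{R}{R_\odot} \simeq \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)\left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-2}\left(\frac{T_{\rm eff}}{T_{\rm eff,\odot}}\right)^{1/2},$$

with $\nu_{\rm max,\odot} \approx 3090\,\mu$Hz, $\Delta\nu_\odot \approx 135.1\,\mu$Hz, and $T_{\rm eff,\odot} \approx 5777$ K.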
As the population of anthropogenic space objects transitions from sparse clusters to mega-constellations exceeding 100,000 satellites, traditional orbital propagation techniques face a critical bottleneck. Standard CPU-bound implementations of the Simplified General Perturbations 4 (SGP4) algorithm are ill suited to the requisite scale of collision-avoidance and Space Situational Awareness (SSA) tasks. This paper introduces \texttt{jaxsgp4}, an open-source, high-performance reimplementation of SGP4 utilising the \texttt{JAX} library. \texttt{JAX} has gained traction in the landscape of computational research, offering an easy mechanism for Just-In-Time (JIT) compilation, automatic vectorisation, and automatic optimisation of code for CPU, GPU and TPU hardware. By refactoring the algorithm into a pure functional paradigm, we leverage these transformations to execute massively parallel propagations on modern GPUs. We demonstrate that \texttt{jaxsgp4} can propagate the entire Starlink constellation (9,341 satellites) to 1,000 future time steps each in under 4 ms on a single A100 GPU, representing a speedup of $1500\times$ over traditional C++ baselines. Furthermore, we argue that the use of 32-bit precision for SGP4 propagation offers a principled trade-off: a negligible loss of precision for a substantial gain in throughput on hardware accelerators.
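To make the batching strategy concrete, the sketch below shows the generic \texttt{jit} + \texttt{vmap} pattern described above, with a toy Keplerian drift standing in for SGP4. It is not the \texttt{jaxsgp4} API; the element layout, function names, and values are illustrative assumptions.

```python
# Illustration of vectorising a per-satellite propagator over satellites and
# epochs with JAX; the propagator here is a toy stand-in, NOT SGP4 or jaxsgp4.
import jax
import jax.numpy as jnp

def propagate_one(elements, t):
    """Advance the mean anomaly of a single satellite to time t (minutes)."""
    mean_motion, mean_anomaly0 = elements            # placeholder 2-element state
    return jnp.mod(mean_anomaly0 + mean_motion * t, 2.0 * jnp.pi)

# vmap over time steps (inner) and satellites (outer), then JIT-compile the
# whole batched computation for whichever backend (CPU/GPU/TPU) is available.
batched = jax.jit(jax.vmap(jax.vmap(propagate_one, in_axes=(None, 0)),
                           in_axes=(0, None)))

elements = jnp.stack([jnp.full(9341, 0.06), jnp.zeros(9341)], axis=1)  # (9341, 2)
times = jnp.linspace(0.0, 1440.0, 1000)                                # 1,000 steps
anomalies = batched(elements, times)                                   # (9341, 1000)
```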
Scientific multi-label text classification suffers from extreme class imbalance, where specialized terminology exhibits severe power-law distributions that challenge standard classification approaches. Existing scientific corpora lack comprehensive controlled vocabularies, focusing instead on broad categories and limiting systematic study of extreme imbalance. We introduce AstroConcepts, a corpus of English abstracts from 21,702 published astrophysics papers, labeled with 2,367 concepts from the Unified Astronomy Thesaurus. The corpus exhibits severe label imbalance, with 76% of concepts having fewer than 50 training examples. By releasing this resource, we enable systematic study of extreme class imbalance in scientific domains and establish strong baselines across traditional, neural, and vocabulary-constrained LLM methods. Our evaluation reveals three key patterns that provide new insights into scientific text classification. First, vocabulary-constrained LLMs achieve competitive performance relative to domain-adapted models in astrophysics classification, suggesting a potential for parameter-efficient approaches. Second, domain adaptation yields relatively larger improvements for rare, specialized terminology, although absolute performance remains limited across all methods. Third, we propose frequency-stratified evaluation to reveal performance patterns that are hidden by aggregate scores, thereby making robustness assessment central to scientific multi-label evaluation. These results offer actionable insights for scientific NLP and establish benchmarks for research on extreme imbalance.
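A minimal sketch of the frequency-stratified evaluation idea follows: group labels by their training-set frequency and report per-stratum micro-F1 rather than a single aggregate score. The bin edges (the 50-example breakpoint echoes the 76% figure above) and the toy data are illustrative assumptions, not the paper's protocol.

```python
# Frequency-stratified micro-F1 for multi-label classification; bins and toy
# data are assumptions for illustration, not the AstroConcepts protocol.
import numpy as np
from sklearn.metrics import f1_score

def stratified_f1(y_true, y_pred, train_counts,
                  bins=((1, 9), (10, 49), (50, None))):
    """y_true, y_pred: (n_docs, n_labels) binary arrays; train_counts: label index -> frequency."""
    scores = {}
    for lo, hi in bins:
        cols = [j for j, c in train_counts.items()
                if c >= lo and (hi is None or c <= hi)]
        if cols:
            scores[(lo, hi)] = f1_score(y_true[:, cols], y_pred[:, cols],
                                        average="micro", zero_division=0)
    return scores

# Toy usage: three labels whose training frequencies fall into different strata.
counts = {0: 5, 1: 30, 2: 400}
y_true = np.array([[1, 0, 1], [0, 1, 1]])
y_pred = np.array([[1, 1, 1], [0, 0, 1]])
print(stratified_f1(y_true, y_pred, counts))
```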
The direct imaging of potentially habitable exoplanets is a prime science case for high-contrast imaging (HCI) instruments on extremely large telescopes. Most such exoplanets orbit close to their host stars, where their observation is limited by fast-moving atmospheric speckles and quasi-static non-common-path aberrations (NCPA). Conventional NCPA correction methods often use mechanical mirror probes, which compromise performance during operation. This work presents machine-learning-based NCPA control methods that automatically detect and correct both dynamic and static NCPA errors by leveraging sequential phase diversity. We extend previous work in reinforcement learning (RL) for adaptive optics (AO) to focal-plane control. A new model-based RL algorithm, Policy Optimization for NCPAs (PO4NCPA), takes the focal-plane image as input and, through sequential phase diversity, determines phase corrections that optimize both non-coronagraphic and post-coronagraphic PSFs without prior system knowledge. Further, we demonstrate the effectiveness of this approach by numerically simulating static NCPA errors on a ground-based telescope and an infrared imager affected by water-vapor-induced seeing (dynamic NCPAs). Simulations show that PO4NCPA robustly compensates static and dynamic NCPAs. In static cases, it achieves near-optimal focal-plane light suppression with a coronagraph and near-optimal Strehl without one. With dynamic NCPAs, it matches the performance of the modal least-squares reconstruction combined with a 1-step delay integrator in these metrics. The method remains effective for the ELT pupil, the vector vortex coronagraph, and under photon and background noise. PO4NCPA is model-free and can be directly applied to standard imaging as well as to any coronagraph. Its sub-millisecond inference times and performance also make it suitable for real-time low-order correction of atmospheric turbulence beyond HCI.