Surface information derived from exospheric measurements at planetary bodies complements surface mapping provided by dedicated imagers, offering critical insights into surface release processes, interactions within the planetary environment, space weathering, and planetary evolution. This study explores the feasibility of deriving Mercury's regolith elemental composition from in-situ measurements of its neutral exosphere using deep neural networks (DNNs). We present a supervised feed-forward DNN architecture, a multilayer perceptron (MLP), that predicts the chemical elements of the surface regolith below from exospheric densities and proton precipitation fluxes. It serves as an estimator for the surface-exosphere interaction and the processes leading to exosphere formation. Because the DNN requires a comprehensive exospheric dataset not available from previous missions, this study uses simulated exosphere components and simulated drivers. Extensive training and testing campaigns demonstrate the MLP's ability to accurately predict and reconstruct surface composition maps from these simulated measurements. Although this initial version does not aim to reproduce Mercury's actual surface composition, it provides a proof of concept, showcasing the algorithm's robustness and its capacity to handle complex datasets and create estimators for exospheric generation models. Moreover, our tests reveal substantial potential for further development, suggesting that this method could significantly enhance the analysis of complex surface-exosphere interactions and complement planetary exosphere models. This work anticipates applying the approach to data from the BepiColombo mission, specifically the SERENA package, whose nominal phase begins in 2027.
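As a rough illustration of the kind of estimator this abstract describes, and not the authors' actual architecture, the following PyTorch sketch maps a vector of exospheric observables to surface elemental fractions. The feature count, element count, layer widths, and softmax output head are all illustrative assumptions.

```python
# Minimal sketch (not the paper's architecture): a feed-forward MLP mapping
# exospheric observables to surface elemental fractions.
import torch
import torch.nn as nn

N_FEATURES = 9   # hypothetical: densities of several exospheric species + proton flux
N_ELEMENTS = 6   # hypothetical: fractions of a few regolith elements

class SurfaceCompositionMLP(nn.Module):
    def __init__(self, n_in=N_FEATURES, n_out=N_ELEMENTS, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x):
        # Softmax keeps predicted elemental fractions positive and summing to 1.
        return torch.softmax(self.net(x), dim=-1)

model = SurfaceCompositionMLP()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One supervised step on a simulated batch (random stand-ins for real data).
x = torch.rand(32, N_FEATURES)                          # exospheric inputs
y = torch.softmax(torch.rand(32, N_ELEMENTS), dim=-1)   # simulated surface fractions
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```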
Interplanetary coronal mass ejections (ICMEs) are major drivers of space weather disturbances, posing risks to both technological infrastructure and human activities. Automatic detection of ICMEs in solar wind in situ data is essential for early warning systems. While several methods have been proposed to identify these structures in time series data, robust real-time detection remains a significant challenge. In this work, we present ARCANE, the first framework explicitly designed for early ICME detection in streaming solar wind data under realistic operational constraints, enabling event identification without requiring observation of the full structure. Our approach evaluates the strengths and limitations of detection models by comparing a machine learning-based method against a threshold-based baseline. The ResUNet++ model, previously validated on science data, significantly outperforms the baseline, particularly on high-impact events, while retaining solid performance on lower-impact cases. Notably, we find that using real-time solar wind (RTSW) data instead of high-resolution science data leads to only minimal performance degradation. Despite the challenges of the operational setting, our detection pipeline achieves an F1 score of 0.53 with an average detection delay of 21.5% of an event's duration, while observing only a small fraction of the incoming data; performance increases significantly as more data become available. These results mark a substantial step forward in automated space weather monitoring and lay the groundwork for enhanced real-time forecasting capabilities.
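A threshold-based baseline of the general kind compared against above can be sketched as follows; the choice of variable (magnetic field magnitude), the threshold value, and the persistence window are assumptions made for illustration and do not reproduce ARCANE's actual baseline.

```python
# Hedged sketch of a simple persistence-threshold detector for ICME-like
# intervals in a solar wind time series (illustrative parameters only).
import numpy as np

def threshold_baseline(b_mag, b_thresh=10.0, min_points=30):
    """Flag intervals where |B| stays above b_thresh for at least
    min_points consecutive samples."""
    above = b_mag > b_thresh
    flags = np.zeros_like(above, dtype=bool)
    start = None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_points:
                flags[start:i] = True
            start = None
    if start is not None and len(above) - start >= min_points:
        flags[start:] = True
    return flags

# Example on synthetic magnetic field magnitude data (nT).
rng = np.random.default_rng(0)
b = rng.normal(5.0, 1.0, 1000)
b[400:500] += 12.0                   # injected ICME-like enhancement
print(threshold_baseline(b).sum())   # ~100 flagged samples
```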
The characterization of exoplanetary atmospheres through spectral analysis is a complex challenge. The NeurIPS 2024 Ariel Data Challenge, in collaboration with the European Space Agency's (ESA) Ariel mission, provided an opportunity to explore machine learning techniques for extracting atmospheric compositions from simulated spectral data. In this work, we adopt a data-centric, business-oriented approach, prioritizing generalization over competition-specific optimization. We briefly outline several experimental axes, including feature extraction, signal transformation, and heteroskedastic uncertainty modeling. Our experiments demonstrate that uncertainty estimation plays a crucial role in the Gaussian Log-Likelihood (GLL) score, affecting performance by several percentage points. Despite improving the GLL score by 11%, our results highlight the inherent limitations of tabular modeling and feature engineering for this task, as well as the constraints of a business-driven approach within a Kaggle-style competition framework. Our findings emphasize the trade-offs between model simplicity, interpretability, and generalization in astrophysical data analysis.
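The heteroskedastic uncertainty enters the score through a per-point Gaussian likelihood. Below is a minimal sketch of such a Gaussian log-likelihood; the challenge's official GLL metric applies its own weighting and normalization, which are omitted here.

```python
# Sketch of a heteroskedastic Gaussian log-likelihood: mu and sigma are the
# model's per-wavelength predicted mean and uncertainty. Underestimating sigma
# is penalized by the squared residual term; overestimating it by the log term.
import numpy as np

def gaussian_log_likelihood(y_true, mu, sigma):
    sigma = np.clip(sigma, 1e-12, None)  # guard against zero uncertainty
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - 0.5 * ((y_true - mu) / sigma) ** 2)
```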
Two maximum likelihood-based algorithms for unfolding or deconvolution are considered: the Richardson-Lucy method and the Data Unfolding method with Mean Integrated Square Error (MISE) optimization [10]. Unfolding is viewed as a procedure for estimating an unknown probability density function. Both external and internal quality assessment methods can be applied for this purpose. In some cases, external criteria exist to evaluate deconvolution quality. A typical example is the deconvolution of a blurred image, where the sharpness of the restored image serves as an indicator of quality. However, defining such external criteria can be challenging, particularly when a measurement has not been performed previously. In such instances, internal criteria are necessary to assess the quality of the result independently of external information. The article discusses two internal criteria: MISE for the unfolded distribution and the condition number of the correlation matrix of the unfolded distribution. These internal quality criteria are applied to a comparative analysis of the two methods using identical numerical data. The results of the analysis demonstrate the superiority of the Data Unfolding method with MISE optimization over the Richardson-Lucy method.
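For reference, the classical Richardson-Lucy update for a binned, discretized problem takes only a few lines; the response matrix, flat starting estimate, and fixed iteration count in this sketch are illustrative, and the MISE-optimized Data Unfolding method of [10] is not reproduced here.

```python
# Minimal Richardson-Lucy iteration for 1D unfolding: d is the measured
# spectrum, R the response (smearing) matrix, u the unfolded estimate.
import numpy as np

def richardson_lucy(d, R, n_iter=100):
    u = np.full(R.shape[1], d.sum() / R.shape[1])   # flat starting estimate
    eff = R.sum(axis=0)                             # per-bin efficiency
    for _ in range(n_iter):
        folded = R @ u
        folded[folded == 0] = 1e-12                 # avoid division by zero
        u *= (R.T @ (d / folded)) / eff
    return u

# Example: Gaussian smearing of a two-peak spectrum, then unfolding.
n = 40
x = np.arange(n)
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0)                                  # column-normalized response
truth = np.exp(-0.5 * ((x - 12) / 2) ** 2) + np.exp(-0.5 * ((x - 28) / 3) ** 2)
d = R @ truth
u = richardson_lucy(d, R, n_iter=200)               # approximates `truth`
```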
Precise and accurate estimation of cosmological parameters is crucial for understanding the Universe's dynamics and addressing cosmological tensions. In this methods paper, we explore bio-inspired metaheuristic algorithms, including the Improved Multi-Operator Differential Evolution scheme and the Philippine Eagle Optimization Algorithm (PEOA), alongside the more widely known genetic algorithm, for cosmological parameter estimation. Using mock data generated from a known fiducial cosmology, we test each optimization method's ability to recover the input cosmological parameters, with confidence regions generated by bootstrapping on top of the optimization. We compare the results with Markov chain Monte Carlo (MCMC) in terms of accuracy and precision, and show that PEOA performs comparably well under the specific circumstances considered. Of course, Bayesian inference and optimization serve distinct purposes, but comparing them highlights the potential of nature-inspired algorithms in cosmological analysis, offering alternative pathways to explore parameter spaces and validate standard results.
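A minimal sketch of bootstrapping on top of optimization is shown below, with SciPy's differential evolution standing in for the bio-inspired optimizers and a hypothetical H(z) chi-square as the objective; none of this reproduces the paper's actual setup.

```python
# Sketch: refit a model to resampled mock data and take the spread of
# best-fit parameters as a confidence region. differential_evolution is a
# stand-in for the bio-inspired optimizers (IMODE, PEOA) discussed above.
import numpy as np
from scipy.optimize import differential_evolution

def H_model(zz, H0, Om):
    # Flat LCDM expansion rate (illustrative likelihood, not the paper's).
    return H0 * np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))

rng = np.random.default_rng(1)
z = np.linspace(0.05, 1.5, 30)
data = H_model(z, 70.0, 0.3) + rng.normal(0.0, 2.0, z.size)  # mock H(z) data
err = np.full(z.size, 2.0)

def chi2(theta, zz, yy, ee):
    H0, Om = theta
    return np.sum(((yy - H_model(zz, H0, Om)) / ee) ** 2)

fits = []
for _ in range(30):                          # bootstrap resamples
    idx = rng.integers(0, z.size, z.size)    # resample data points with replacement
    res = differential_evolution(chi2, bounds=[(50, 90), (0.05, 0.6)],
                                 args=(z[idx], data[idx], err[idx]),
                                 maxiter=50, tol=0.01, seed=2)
    fits.append(res.x)

fits = np.array(fits)
print(fits.mean(axis=0), fits.std(axis=0))   # central values and 1-sigma spread
```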