We introduce a novel estimator to quantify statistical tensions among multiple cosmological datasets simultaneously. This estimator generalizes the Difference-in-Means statistic, $Q_{\rm DM}$, to the multi-dataset regime. Our framework enables the detection of dominant tension directions in the shared parameter space. It further provides a geometric interpretation of the tension for the two- and three-dataset cases in two dimensions. According to this approach, the previously reported increase in tension between DESI and Planck from $1.9\sigma$ (DR1) to $2.3\sigma$ (DR2) is reinterpreted as a more modest shift from $1.18\sigma^{\rm eff}$ (DR1) to $1.45\sigma^{\rm eff}$ (DR2). These new tools may also prove valuable across research fields where dataset discrepancies arise.
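As a point of reference for the quoted tensions, the sketch below computes the standard two-dataset Difference-in-Means statistic $Q_{\rm DM}$ under a Gaussian approximation and converts it to an effective number of sigma. The abstract's multi-dataset generalization and its geometric interpretation are not reproduced here, and the means and covariances are placeholder numbers, not DESI or Planck constraints.

```python
# Minimal sketch of the two-dataset Difference-in-Means statistic under a
# Gaussian approximation; inputs below are placeholders, not real posteriors.
import numpy as np
from scipy import stats

def q_dm(mean1, cov1, mean2, cov2):
    """Q_DM = (theta1 - theta2)^T (C1 + C2)^{-1} (theta1 - theta2)."""
    delta = np.asarray(mean1, dtype=float) - np.asarray(mean2, dtype=float)
    joint_cov = np.asarray(cov1, dtype=float) + np.asarray(cov2, dtype=float)
    return float(delta @ np.linalg.solve(joint_cov, delta))

def effective_sigma(q, n_params):
    """Map the chi^2-distributed Q_DM to a two-tailed Gaussian significance."""
    p_value = stats.chi2.sf(q, df=n_params)
    return float(stats.norm.isf(p_value / 2.0))

# Toy example with two 2D Gaussian posteriors (placeholder numbers).
m1, c1 = np.array([0.31, 0.81]), np.diag([0.010, 0.020]) ** 2
m2, c2 = np.array([0.33, 0.78]), np.diag([0.015, 0.025]) ** 2
q = q_dm(m1, c1, m2, c2)
print(f"Q_DM = {q:.2f}, effective tension = {effective_sigma(q, 2):.2f} sigma")
```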
Ultracool dwarfs comprise the lowest-mass stars and brown dwarfs. Their interiors are fully convective, unlike those of partly convective Sun-like stars. The magnetic field generation process beneath the surfaces of ultracool dwarfs remains poorly understood and controversial. To significantly enlarge the sample of active ultracool dwarfs, we have identified 962 ultracool dwarfs in the latest LAMOST data release, DR11. We also simulate Chinese Space Station Survey Telescope (CSST) low-resolution slitless spectra by degrading the LAMOST spectra. A semi-supervised machine learning approach based on an autoencoder model is built to identify ultracool dwarfs in the simulated CSST spectra, demonstrating the capability of the CSST all-sky slitless spectroscopic survey to detect ultracool dwarfs. The magnetic activity of the ultracool dwarfs is investigated using H$\alpha$ line emission as a proxy. The rotational periods of 82 ultracool dwarfs are derived from the Kepler/K2 light curves. We also derive the activity-rotation relation of the ultracool dwarfs, which saturates around a Rossby number of 0.12.
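As an illustration of the period-measurement step, the sketch below recovers a rotation period from a single light curve with a Lomb-Scargle periodogram via astropy. The synthetic light curve, normalization, and frequency grid are assumptions, not the pipeline actually applied to the Kepler/K2 data.

```python
# Minimal sketch: rotation-period recovery with a Lomb-Scargle periodogram,
# using a synthetic light curve in place of real K2 data.
import numpy as np
from astropy.timeseries import LombScargle

# Toy K2-like light curve: 80-day baseline sampled every ~30 minutes, with a
# 1.3-day spot-modulation signal plus white noise (all placeholder numbers).
rng = np.random.default_rng(0)
time = np.arange(0.0, 80.0, 0.0204)                       # days
true_period = 1.3
flux = (0.01 * np.sin(2.0 * np.pi * time / true_period)
        + 0.002 * rng.standard_normal(time.size))

# Periodogram over periods between ~2.4 hours and 50 days.
frequency, power = LombScargle(time, flux).autopower(
    minimum_frequency=1.0 / 50.0,
    maximum_frequency=1.0 / 0.1,
)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"Recovered rotation period: {best_period:.3f} d (injected {true_period} d)")
```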
The interpretation of the origin of observed exoplanets is usually done only qualitatively, owing to uncertainties in key parameters of planet formation models. To enable a quantitative methodology that traces planets back in time to their birth locations, we train recently developed conditional invertible neural networks (cINN) on synthetic data from a global planet formation model that tracks growth from dust grains to evolved final giant planets. In addition to deterministic single-planet formation runs, we also include gravitationally interacting planets in multiplanetary systems, which introduce a measure of chaos. In the latter case, we either treat the planets individually or select the two or three planets most likely to be discovered by telescopes. We find that training on multiplanetary data, with each planet treated as an individual point, is promising. The single-planet data cover only a small range of planets and do not extrapolate well to planet properties not included in the training data. Extending the approach to full planetary systems will require more training data owing to the higher dimensionality of the problem.
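The sketch below illustrates the core building block of a cINN in plain PyTorch: a conditional affine coupling layer that maps formation parameters to a latent vector given observable planet properties, and can be inverted to draw posterior samples. The layer sizes, toy dimensions, and maximum-likelihood loss are assumptions; a real cINN stacks many such blocks with permutations, and the paper's actual training setup is not reproduced here.

```python
# Minimal sketch of a conditional affine coupling block, the invertible
# building block behind cINN-style inference; all sizes are placeholders.
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One conditional affine coupling block: invertible in x given condition c."""

    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x, c):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                              # bound log-scales for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)  # transformed vector, log|det J|

    def inverse(self, z, c):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(torch.cat([z1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)

# Toy maximum-likelihood step: push formation parameters x to a latent z that
# should look standard-normal given the observables c.
dim, cond_dim = 4, 3
layer = ConditionalCoupling(dim, cond_dim)
x = torch.randn(256, dim)                              # stand-in for formation parameters
c = torch.randn(256, cond_dim)                         # stand-in for observed planet properties
z, log_det = layer(x, c)
loss = (0.5 * (z ** 2).sum(dim=1) - log_det).mean()    # negative log-likelihood under N(0, I)
loss.backward()

# Inference: sample latents and invert to obtain posterior draws for one observation.
with torch.no_grad():
    samples = layer.inverse(torch.randn(1000, dim), c[:1].expand(1000, -1))
```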
Strong gravitational lensing can reveal the influence of dark-matter substructure in galaxies, but analyzing these effects from noisy, low-resolution images poses a significant challenge. In this work, we propose a masked autoencoder (MAE) pretraining strategy on simulated strong-lensing images from the DeepLense ML4SCI benchmark to learn generalizable representations for two downstream tasks: (i) classifying the underlying dark matter model (cold dark matter, axion-like, or no substructure) and (ii) enhancing low-resolution lensed images via super-resolution. We pretrain a Vision Transformer encoder using a masked image modeling objective, then fine-tune the encoder separately for each task. Our results show that MAE pretraining, when combined with appropriate mask ratio tuning, yields a shared encoder that matches or exceeds a ViT trained from scratch. Specifically, at a 90% mask ratio, the fine-tuned classifier achieves a macro AUC of 0.968 and an accuracy of 88.65%, compared to the scratch baseline (AUC 0.957, accuracy 82.46%). For super-resolution ($16\times16$ to $64\times64$), the MAE-pretrained model reconstructs images with a PSNR of ~33 dB and an SSIM of 0.961, modestly improving over scratch training. We ablate the MAE mask ratio, revealing a consistent trade-off: higher mask ratios improve classification but slightly degrade reconstruction fidelity. Our findings demonstrate that MAE pretraining on physics-rich simulations provides a flexible, reusable encoder for multiple strong-lensing analysis tasks.
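To make the masked-image-modeling objective concrete, the sketch below implements random patch masking of the kind used in MAE pretraining. The 90% mask ratio follows the abstract, while everything else (tensor layout, patch-grid size, helper name) is an assumption rather than the DeepLense pipeline.

```python
# Minimal sketch of MAE-style random patch masking; shapes are placeholders.
import torch

def random_masking(patches, mask_ratio=0.9):
    """Randomly hide a fraction of patch embeddings, as in MAE pretraining.

    patches: (B, N, D) sequence of patch embeddings.
    Returns the visible subset, a binary mask (0 = kept, 1 = masked),
    and the indices needed to restore the original patch order.
    """
    B, N, D = patches.shape
    n_keep = max(1, int(N * (1.0 - mask_ratio)))
    noise = torch.rand(B, N, device=patches.device)    # per-patch random scores
    ids_shuffle = noise.argsort(dim=1)                 # lowest scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=patches.device)
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask, ids_restore

# Toy usage: 64 patch embeddings per image (e.g. an 8x8 patch grid), 90% masked,
# so the encoder only sees ~6 visible patches per image.
tokens = torch.randn(16, 64, 192)                      # (batch, patches, embed_dim)
visible, mask, ids_restore = random_masking(tokens, mask_ratio=0.9)
print(visible.shape, mask.sum(dim=1)[0])               # torch.Size([16, 6, 192]) tensor(58.)
```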