
A research team from the Xinjiang Astronomical Observatory (XAO) of the Chinese Academy of Sciences has developed an interpretable artificial intelligence (AI) framework named Convolutional Kolmogorov–Arnold Network (CKAN), which sheds new light on the properties of dark matter at galaxy-cluster scales.
The work, led by master’s student Huang Zhenyang under the supervision of Prof. Wang Na and Prof. Liu Zhiyong, was published in The Astronomical Journal.
Challenges in dark matter research
The nature of dark matter is one of the most pressing open questions in modern astrophysics. Although the cold dark matter (CDM) paradigm successfully explains the large-scale structure of the universe, it faces a number of tensions on smaller scales, such as in galaxy cluster cores. Self-interacting dark matter models provide a compelling alternative explanation for these small-scale discrepancies. With the rapid development of machine learning, AI has become an increasingly important tool for exploring the universe.
Previous influential studies by researchers at the École Polytechnique Fédérale de Lausanne have demonstrated that convolutional neural networks (CNNs) can extract extremely subtle structural features from galaxy cluster simulations that include complex baryonic physics, thereby efficiently distinguishing between different dark matter models. This laid the foundation for using AI to address this long-standing physical problem.
Introducing the CKAN framework
However, despite their impressive performance, conventional CNNs are large, black-box models whose internal decision-making is difficult to interpret, which to some extent limits further progress in this area.
To overcome this limitation, the XAO researchers developed the CKAN framework, which is based on the Kolmogorov–Arnold representation theorem. In CKAN, learnable activation functions replace traditional fixed activation forms. While maintaining high classification accuracy, the internal structure of CKAN can be further cast into a symbolic representation, which substantially enhances the network’s interpretability.
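The core idea of replacing fixed activations with learnable univariate functions can be sketched in a few lines. The snippet below is a minimal illustration, not the published CKAN implementation: it uses a simple polynomial basis for each learnable activation (the actual network uses spline bases and convolutional structure), and all class names and parameters here are invented for the example. Because each activation is an explicit sum of basis functions, its learned coefficients can later be matched against closed-form expressions, which is the route to the symbolic representation described above.

```python
import numpy as np

class KANEdge:
    """One learnable univariate activation, phi(x) = sum_k c_k * x**k.

    A polynomial basis is used purely for illustration; KAN-style
    networks typically use spline bases instead.
    """
    def __init__(self, degree=3, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # Learnable coefficients, one per basis function.
        self.coeffs = rng.normal(scale=0.1, size=degree + 1)

    def __call__(self, x):
        # Evaluate phi elementwise on an input array.
        return sum(c * x**k for k, c in enumerate(self.coeffs))

class KANLayer:
    """Maps n_in inputs to n_out outputs: each output is a sum of n_in
    learnable univariate activations, per the Kolmogorov-Arnold
    representation, instead of a weight matrix plus fixed nonlinearity."""
    def __init__(self, n_in, n_out, degree=3, seed=0):
        rng = np.random.default_rng(seed)
        self.edges = [[KANEdge(degree, rng) for _ in range(n_in)]
                      for _ in range(n_out)]

    def __call__(self, x):
        # x has shape (batch, n_in); output has shape (batch, n_out).
        return np.stack(
            [sum(edge(x[:, i]) for i, edge in enumerate(row))
             for row in self.edges],
            axis=1)

layer = KANLayer(n_in=4, n_out=2)
out = layer(np.zeros((3, 4)))
```

In a trained network, the `coeffs` of each edge would be optimized by gradient descent; interpretability comes from inspecting or symbolically fitting each learned `phi` after training.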
Key findings and implications
Analysis of the symbolic representation shows that the network spontaneously focuses on key physical quantities, such as the miscentering between the dark matter halo center and the cluster center, as well as thermal conduction features in the cluster core region. These automatically extracted features are qualitatively consistent with existing theoretical expectations and help researchers begin to understand the internal decision-making mechanisms of the neural network.
Building on this, the researchers combined network performance tests with interpretability diagnostics to obtain a quantitative inference: on galaxy cluster scales, in order for the signatures of dark matter self-interactions to be reliably identified in observations, the self-interaction cross section must be at least on the order of 0.1–0.3 cm² g⁻¹. This threshold is consistent with recent independent analyses based on galaxy cluster simulations.
Furthermore, the researchers incorporated simulated observational noise, constructed using the instrumental characteristics of facilities such as the James Webb Space Telescope and Euclid, to test the robustness of CKAN. Even under these more realistic conditions, CKAN maintained strong model discrimination and feature-identification capabilities.
Together, these results demonstrate that CKAN offers an efficient and interpretable new tool for studying dark matter with next-generation survey data. More broadly, this study represents a step toward bridging the gap between idealized numerical simulations and real observations, highlighting the potential of interpretable AI to extract physically meaningful features and uncover new insights from astrophysical data.
More information:
Zhenyang Huang et al, An Interpretable AI Framework to Disentangle Self-interacting and Cold Dark Matter in Galaxy Clusters: The CKAN Approach, The Astronomical Journal (2025). DOI: 10.3847/1538-3881/ae0476
Provided by
Chinese Academy of Sciences
Citation:
Interpretable neural networks help reveal the nature of dark matter (2025, December 18)
retrieved 18 December 2025
from https://phys.org/news/2025-12-neural-networks-reveal-nature-dark.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.