A0156
Title: Unveiling the latent space: Dimension reduction for concept discovery in GNN-based combinatorial optimization
Authors: Havana Rika - The Academic College of Tel Aviv - Yaffo (Israel) [presenting]
Dan Vilenchik - Ben-Gurion University (Israel)
Joowon Lee - University of Wisconsin-Madison (United States)
Abstract: In recent years, the intersection of machine learning and the optimization of NP-hard problems has seen significant advances through the application of Graph Neural Networks (GNNs). Despite these successes, fundamental questions remain about how and why these models make their decisions. In classical combinatorial optimization, experts have long relied on interpretable heuristics (such as notions of vertex support or degree) to guide problem-solving. By contrast, neural networks typically operate as black boxes, obscuring the intuitive concepts that may emerge within their high-dimensional latent spaces. Explainable AI (XAI) methods provide a pathway to peer inside the network and identify the ideas that drive its performance. Among the diverse XAI techniques, the dimension-reduction approach known as Principal Component Analysis (PCA) has proven invaluable for revealing the low-dimensional structures that underlie a GNN's learned combinatorial concepts. The aim is to explore the insights and learned concepts of GNNs applied to NP-hard problems such as Boolean satisfiability (SAT), Graph Coloring, and Max-Clique, while comparing them to known combinatorial heuristics.
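The following is a minimal sketch, not the authors' pipeline, of the kind of analysis the abstract describes: take node embeddings from a GNN's latent space, reduce them with PCA, and test whether leading components correlate with a classical heuristic such as vertex degree. The random graph and the planted degree-like direction are illustrative assumptions that stand in for a trained model, purely to keep the script self-contained.

import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for trained GNN embeddings: random 64-dimensional
# vectors with a degree-correlated direction planted, so the script runs
# without an actual trained model (an assumption, not the authors' setup).
rng = np.random.default_rng(0)
G = nx.gnp_random_graph(200, 0.05, seed=0)
degrees = np.array([d for _, d in G.degree()], dtype=float)
embeddings = rng.normal(size=(G.number_of_nodes(), 64))
embeddings[:, 0] += degrees  # plant a degree-like concept in the latent space

# Reduce the latent space to a handful of principal components.
pca = PCA(n_components=5)
components = pca.fit_transform(embeddings)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))

# Correlate each component with the heuristic; a high |r| would suggest the
# network has (re)discovered a degree-like combinatorial concept.
for k in range(components.shape[1]):
    r = np.corrcoef(components[:, k], degrees)[0, 1]
    print(f"PC{k + 1} vs. degree: r = {r:+.2f}")

In practice the embeddings would come from a GNN trained on SAT, Graph Coloring, or Max-Clique instances, and the comparison heuristic could be vertex support or any other interpretable quantity from the classical literature.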