arXiv:2511.20679v1 Announce Type: new
Abstract: Hyperbolic geometry is an effective geometry for embedding hierarchical data structures. Hyperbolic learning has therefore become increasingly prominent in machine learning applications where data is hierarchically organized or governed by hierarchical semantics, ranging from recommendation systems to computer vision. The quality of hyperbolic embeddings is tightly coupled to the structure of the input hierarchy, which is often derived from knowledge graphs or ontologies. Recent work has uncovered that for an optimal hyperbolic embedding, a high branching factor and single inheritance are key, while embedding algorithms are robust to imbalance and hierarchy size. To assist knowledge engineers in reorganizing hierarchical knowledge, this paper investigates whether Large Language Models (LLMs) have the ability to automatically restructure hierarchies to meet these criteria. We propose a prompt-based approach to transform existing hierarchies using LLMs, guided by known desiderata for hyperbolic embeddings. Experiments on 16 diverse hierarchies show that LLM-restructured hierarchies consistently yield higher-quality hyperbolic embeddings across several standard embedding quality metrics. Moreover, we show how LLM-guided hierarchy restructuring enables explainable reorganizations, providing justifications to knowledge engineers.
Expert Commentary: The Power of Hyperbolic Geometry in Machine Learning
Hyperbolic geometry has emerged as a powerful tool in the field of machine learning, particularly in applications where data is organized hierarchically. This type of geometry allows for efficient embedding of hierarchical structures, making it ideal for tasks such as recommendation systems and computer vision. One of the key insights from recent research is that the quality of hyperbolic embeddings is closely linked to the structure of the input hierarchy, often derived from knowledge graphs or ontologies.
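To make the geometric intuition concrete, here is a minimal sketch of the distance function on the Poincaré ball, the model most commonly used in hyperbolic embedding work. The formula is standard; the example points are illustrative and not from the paper.

```python
import math

def poincare_distance(u, v):
    """Distance between two points inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    diff_sq = sum((a - b) ** 2 for a, b in zip(u, v))
    norm_u_sq = sum(a * a for a in u)
    norm_v_sq = sum(b * b for b in v)
    return math.acosh(1 + 2 * diff_sq / ((1 - norm_u_sq) * (1 - norm_v_sq)))

# Distances blow up near the boundary, which is what lets a
# low-dimensional ball hold deep trees: a root placed near the origin
# stays close to all its children, while siblings pushed toward the rim
# spread far apart from one another.
root = (0.0, 0.0)
leaf_a = (0.9, 0.0)
leaf_b = (-0.9, 0.0)
print(poincare_distance(root, leaf_a))
print(poincare_distance(leaf_a, leaf_b))
```

Running this shows the root-to-leaf distance is much smaller than the leaf-to-leaf distance, mirroring the parent-child versus sibling structure of a tree.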
What sets hyperbolic learning apart is its ability to capture complex relationships in hierarchical data. Recent research has found that two structural properties of the input hierarchy matter most for embedding quality: a high branching factor and single inheritance (each node having only one parent). The embedding algorithms themselves, by contrast, are robust to imbalance and to the overall size of the hierarchy. This understanding is crucial for knowledge engineers looking to reorganize their hierarchical knowledge structures for better machine learning performance.
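A knowledge engineer can check these two desiderata directly. The sketch below, written for a hierarchy given as (parent, child) edges, computes the average branching factor of internal nodes and tests for single inheritance; the function name and edge-list format are illustrative assumptions, not the paper's tooling.

```python
from collections import defaultdict

def hierarchy_diagnostics(edges):
    """For a hierarchy given as (parent, child) pairs, return the
    average branching factor of internal nodes and whether every node
    has at most one parent (single inheritance)."""
    children = defaultdict(set)
    parents = defaultdict(set)
    for p, c in edges:
        children[p].add(c)
        parents[c].add(p)
    internal = [n for n in children if children[n]]
    avg_branching = sum(len(children[n]) for n in internal) / len(internal)
    single_inheritance = all(len(ps) <= 1 for ps in parents.values())
    return avg_branching, single_inheritance

# "dog" has two parents here, so single inheritance is violated.
edges = [("animal", "dog"), ("animal", "cat"), ("pet", "dog"), ("dog", "puppy")]
print(hierarchy_diagnostics(edges))
```

A hierarchy failing either check (low branching, multiple inheritance) is exactly the kind of input the paper proposes to hand to an LLM for restructuring.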
This paper takes a novel approach by exploring the use of Large Language Models (LLMs) to automatically restructure hierarchies in a way that improves their hyperbolic embeddings. Using a prompt-based method guided by the known desiderata, the authors transform 16 diverse hierarchies and show that the restructured versions consistently yield higher-quality embeddings across several standard embedding quality metrics.
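The paper's exact prompts are not reproduced here, but a prompt-based restructuring step might look like the following sketch. The wording, parameter names, and defaults are hypothetical assumptions for illustration only.

```python
def restructuring_prompt(edges, max_parents=1, min_branching=3):
    """Build a plain-text prompt asking an LLM to reorganize a concept
    hierarchy toward the hyperbolic-embedding desiderata.

    The phrasing and thresholds here are illustrative, not the paper's
    actual prompts.
    """
    edge_lines = "\n".join(f"{p} -> {c}" for p, c in edges)
    return (
        "You are restructuring a concept hierarchy so that it embeds "
        "well in hyperbolic space.\n"
        f"Constraints: each concept should have at most {max_parents} "
        f"parent(s), and internal nodes should have at least "
        f"{min_branching} children where semantically sensible.\n"
        "For every change you make, give a one-sentence justification.\n"
        "Hierarchy (parent -> child):\n"
        f"{edge_lines}\n"
    )

print(restructuring_prompt([("animal", "dog"), ("animal", "cat")]))
```

Asking for a per-change justification is what makes the pipeline auditable: the LLM's output doubles as a change log the knowledge engineer can review.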
Moreover, the use of LLMs in hierarchy restructuring offers a level of explainability that is often lacking in machine learning models. By providing justifications for the reorganizations, knowledge engineers can gain valuable insights into the decision-making process behind the restructuring, enhancing transparency and understanding.
This study showcases the multi-disciplinary nature of the concepts at play, bringing together hyperbolic geometry, machine learning, and natural language processing. By combining these diverse fields, researchers are pushing the boundaries of what is possible in hierarchical data analysis and knowledge representation, opening up new possibilities for future advancements in the field.