arXiv:2505.15862v1 Announce Type: new
Abstract: Algorithms designed for routing problems typically rely on high-quality candidate edges to guide their search, aiming to reduce the search space and enhance search efficiency. However, many existing algorithms, such as the classical Lin-Kernighan-Helsgaun (LKH) algorithm for the Traveling Salesman Problem (TSP), use predetermined candidate edges that remain static throughout local search. This rigidity can trap the algorithm in local optima, limiting its potential to find better solutions. To address this issue, we propose expanding the candidate sets to include other promising edges, giving them an opportunity to be selected. Specifically, we incorporate multi-armed bandit models to dynamically select the most suitable candidate edges in each iteration, enabling LKH to make smarter choices that lead to improved solutions. Extensive experiments on multiple TSP benchmarks show the excellent performance of our method. Moreover, we apply this bandit-based method to LKH-3, an extension of LKH tailored for solving various TSP variants, and our method also significantly enhances LKH-3's performance across typical TSP variants.

Expert Commentary: Enhancing Routing Algorithms with Multi-Armed Bandit Models

In the field of algorithm design for routing problems, candidate edges play a crucial role in guiding the search toward optimal solutions efficiently. Traditional algorithms, however, often rely on static candidate edges, which can leave the search trapped in local optima and limit its ability to find better solutions.

One innovative approach to this challenge, proposed in this study, is to incorporate multi-armed bandit models into routing algorithms. By dynamically selecting promising candidate edges in each iteration, algorithms like LKH can make smarter choices and reach improved solutions. This dynamic selection adds adaptability and flexibility, allowing the algorithm to explore a wider range of possibilities and avoid getting stuck in suboptimal solutions.
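The commentary does not specify which bandit formulation the authors use, but the core idea of treating each candidate edge as an arm and rewarding edges whose moves improve the tour can be illustrated with a standard UCB1 policy. The sketch below is a minimal, hypothetical illustration: the class name `UCB1`, the reward scheme (1 for an improving move, 0 otherwise), and the simulated improvement probabilities are all assumptions for demonstration, not the paper's actual method.

```python
import math
import random


class UCB1:
    """Minimal UCB1 bandit: each arm stands for one candidate edge.

    select() balances exploitation (mean reward) against exploration
    (an upper-confidence bonus for rarely tried arms).
    """

    def __init__(self, n_arms):
        self.counts = [0] * n_arms     # times each arm was pulled
        self.values = [0.0] * n_arms   # running mean reward per arm
        self.total = 0                 # total pulls so far

    def select(self):
        # Pull each arm once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i
        return max(
            range(len(self.counts)),
            key=lambda i: self.values[i]
            + math.sqrt(2 * math.log(self.total) / self.counts[i]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.total += 1
        # Incremental update of the running mean.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


# Demo (hypothetical): three candidate edges whose local-search moves
# improve the tour with different, unknown probabilities. The bandit
# learns to concentrate its pulls on the most promising edge (arm 1).
random.seed(0)
improve_prob = [0.2, 0.8, 0.5]
bandit = UCB1(len(improve_prob))
for _ in range(2000):
    arm = bandit.select()
    reward = 1.0 if random.random() < improve_prob[arm] else 0.0
    bandit.update(arm, reward)
```

In an LKH-style search, the reward signal would instead come from whether the chosen edge led to a tour improvement in that iteration; the bandit then biases future candidate-edge choices toward edges that have recently paid off.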

The use of multi-armed bandit models in routing algorithms highlights the multi-disciplinary nature of this research, combining concepts from algorithm design, optimization, and machine learning. By leveraging insights from different fields, researchers can develop more robust and efficient algorithms that can adapt to changing environments and problem characteristics.

The experiments conducted on multiple TSP benchmarks demonstrate the effectiveness of incorporating multi-armed bandit models into the LKH algorithm. Furthermore, applying the approach to LKH-3, an extension of LKH designed for TSP variants, showcases its potential to enhance a broader range of routing algorithms.

Overall, this study opens up new possibilities for improving routing algorithms by integrating techniques from diverse disciplines, highlighting the importance of interdisciplinary research in advancing the field of algorithm design and optimization.

Reference: arXiv:2505.15862v1
