🧭 Learned Routing

Two recent papers from Google suggest that the fifteen-year reign of purely combinatorial transit and road routing may be ending.

  1. Algorithmic Foundations
  2. Learning to Score Alternative Routes
  3. Routes as Token Sequences

In November 2022, Daniel Delling left Apple to join Google Maps as Distinguished Engineer. Delling is one of a small number of researchers whose work underpins essentially all modern route planning: he co-invented RAPTOR, the algorithm behind OpenTripPlanner and Navitia, and co-authored the Customizable Route Planning framework that enables fast hierarchical search on road networks. His move to Google, after eight years at Apple, likely went unnoticed outside the routing community, the one group that could have foreseen its implications.

Eighteen months later, two papers appeared with Delling as co-author, both from Google Research. One, published at AISTATS 2024 by Alex Zhai, Dee Guo, Kostas Kollias, Sreenivas Gollapudi, and Delling, addresses alternative route generation. The other, published at NeurIPS 2024 by Eric Zhao, Pranjal Awasthi, Zhengdao Chen, Gollapudi, and Delling, proposes something more radical: replacing the routing algorithm itself with an autoregressive transformer that predicts paths edge by edge ("next edge prediction"), conditioned on natural language queries. Together they suggest a strategic direction at Google Maps that goes well beyond incremental improvement.

To understand why these papers matter, one needs some background on what they are departing from. The routing algorithms that power every major mapping service emerged from a specific research community, largely centred at the Karlsruhe Institute of Technology and Microsoft Research Silicon Valley, between roughly 2009 and 2015. On the public transit side, four algorithm families dominate: Transfer Patterns, which Google has used since 2010; RAPTOR, which became the open-source standard; the Connection Scan Algorithm; and Trip-Based Routing. These are beautiful pieces of engineering. They solve their target problems exactly, in milliseconds, on networks with hundreds of millions of connections.
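To make the flavour of that paradigm concrete, here is a minimal sketch of the simplest of the four, the Connection Scan Algorithm, in its basic earliest-arrival form. It ignores footpaths and minimum change times, and the Connection layout is illustrative rather than taken from any particular implementation:

```python
from collections import namedtuple

# Illustrative connection layout: one vehicle moving between two stops.
Connection = namedtuple(
    "Connection", ["dep_stop", "arr_stop", "dep_time", "arr_time", "trip_id"]
)

def csa_earliest_arrival(connections, source, target, start_time):
    """Earliest-arrival CSA. `connections` must be sorted by departure time.
    Footpaths and minimum change times are omitted for brevity."""
    INF = float("inf")
    arrival = {source: start_time}
    boarded = set()  # trips we could already be sitting on
    for c in connections:
        if c.dep_time >= arrival.get(target, INF):
            break  # pruning: no later connection can improve the target
        if c.trip_id in boarded or arrival.get(c.dep_stop, INF) <= c.dep_time:
            boarded.add(c.trip_id)
            if c.arr_time < arrival.get(c.arr_stop, INF):
                arrival[c.arr_stop] = c.arr_time
    return arrival.get(target, INF)
```

The entire query is a single linear scan over a sorted array, which is much of the reason these methods answer in milliseconds on country-scale timetables.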

That said, they share a limitation that has become harder to ignore. They optimise for objectives that can be stated precisely and evaluated efficiently: earliest arrival, fewest transfers, shortest distance. Real users increasingly want things these algorithms cannot express: stopping at a coffee shop, gym, or supermarket en route; avoiding a particular tube line; or planning a day trip to several places within a time budget. These requests either require bolting ad hoc extensions onto systems that were not designed for them, or they require something structurally different.
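The coffee-shop request shows what such bolting-on looks like in practice. A standard construction, not drawn from either paper, runs one forward search from the origin and one backward search from the destination, then minimises the combined distance over candidate stops; a sketch:

```python
import heapq

def dijkstra(adjacency, source):
    """One-to-all Dijkstra; adjacency maps node -> [(neighbour, edge_weight)]."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adjacency.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_via_stop(adjacency, reverse_adjacency, source, target, candidates):
    """Pick the candidate stop minimising dist(source, stop) + dist(stop, target).
    `reverse_adjacency` is the same graph with every edge reversed, so a
    forward search from `target` in it yields distances *to* the target."""
    from_source = dijkstra(adjacency, source)
    to_target = dijkstra(reverse_adjacency, target)
    inf = float("inf")
    return min(
        candidates,
        key=lambda s: from_source.get(s, inf) + to_target.get(s, inf),
    )
```

Each new request type needs its own such construction, and the constructions do not compose; that is the structural pressure described above.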

The two papers examined in this series represent two distinct responses to that limitation. The first stays close to the classical paradigm, using a neural network (BERT) to approximate an expensive quality metric that traditional heuristics handle poorly. The second departs from it more completely, framing routing as a sequence prediction problem amenable to the same autoregressive methods that power language models. They are not interchangeable ideas, and they are at different stages of maturity.
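To see what "routing as sequence prediction" means mechanically, consider greedy next-edge decoding constrained to the graph's adjacency structure. Everything below is a hypothetical stand-in: the `score_edge` callable plays the role of the trained transformer, and the actual paper's architecture and decoding strategy differ.

```python
from typing import Callable, Optional

def decode_route(
    adjacency: dict[int, list[int]],  # road graph: node -> outgoing neighbours
    score_edge: Callable[[str, list[int], int], float],  # stand-in for the model
    query: str,        # natural-language request conditioning the decoder
    source: int,
    target: int,
    max_steps: int = 512,
) -> Optional[list[int]]:
    """Greedy autoregressive decoding over graph edges: at each step, score
    every legal outgoing edge given the query and the path so far, then
    extend the path with the highest-scoring one."""
    path = [source]
    node = source
    for _ in range(max_steps):
        if node == target:
            return path
        candidates = adjacency.get(node, [])
        if not candidates:
            return None  # dead end: no legal continuation
        node = max(candidates, key=lambda nxt: score_edge(query, path, nxt))
        path.append(node)
    return None  # step budget exhausted without reaching the target
```

The structural difference from the classical algorithms is that correctness is no longer guaranteed by construction: the decoder can wander, and the model must be good enough that constrained greedy (or beam) search still reaches the target.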

This series has three parts. The first surveys the algorithmic foundations, with attention to the actual trade-offs practitioners face rather than textbook descriptions. The second examines the alternative route paper and its careful integration of learned components with the existing CRP infrastructure. The third analyses the semantic routing paper, which is more ambitious, more speculative, and more interesting in what it reveals about where Google thinks routing needs to go.
