Breaking the sorting barrier for directed single-source shortest paths - https://news.ycombinator.com/item?id=44812695 - Aug 2025 (51 comments)
It might optimize internal routing, but getting this standardised across vendors is another matter: not impossible, but it would probably take a long time to standardise and govern.
The Thresh X2 [0] algorithm - for example - does away with the priority queue that is the bottleneck in Dijkstra. Instead, it iteratively runs a "label-correcting" routine over increasing search radii until the target is hit. I only learnt about this algorithm this year and can't find much about it online, although I've heard that it's sometimes used in videogames.
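For anyone curious, here's a rough Python sketch of the general threshold idea as I understand it (I can't vouch that it matches Thresh X2 exactly, and all the names are mine): labels at or below the current threshold are scanned from a plain FIFO, labels are simply corrected again if they later improve, and the threshold rises whenever the queue drains.

    from collections import deque

    def threshold_sssp(graph, source):
        # graph: u -> list of (v, weight), weights non-negative
        nodes = set(graph)
        for u in graph:
            for v, _ in graph[u]:
                nodes.add(v)
        INF = float("inf")
        dist = {u: INF for u in nodes}
        dist[source] = 0.0
        threshold = 0.0
        now = deque([source])   # labels <= threshold, scanned FIFO
        in_now = {source}
        later = set()           # labels above the current threshold
        while now or later:
            if not now:
                # NOW drained: raise the threshold to the smallest
                # deferred label and promote everything that qualifies.
                threshold = min(dist[u] for u in later)
                promoted = {u for u in later if dist[u] <= threshold}
                later -= promoted
                for u in promoted:
                    now.append(u)
                    in_now.add(u)
            u = now.popleft()
            in_now.discard(u)
            for v, w in graph.get(u, ()):
                nd = dist[u] + w
                if nd < dist[v]:   # label-correcting: revisit on improvement
                    dist[v] = nd
                    if v in in_now or v in later:
                        continue   # already queued; its label just improved
                    if nd <= threshold:
                        now.append(v)
                        in_now.add(v)
                    else:
                        later.add(v)
        return dist

    g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}
    print(threshold_sssp(g, "a"))   # a: 0.0, b: 2.0, c: 3.0

Note there's no priority queue anywhere; the only global work is picking the next threshold.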
Then there's Contraction Hierarchies [1], used by many modern routing engines (such as OSRM [2] or GraphHopper [3]). This involves a slow pre-processing step in which nodes are put into a hierarchy of "importance", allowing a modified query-time routine which is orders of magnitude faster than Dijkstra. Recent work on this has also resulted in query-time routines that eliminate priority queues entirely. However, this assumes a fairly static road graph over which many requests are run.
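To give a flavour of the query phase: assuming the preprocessing has already produced shortcut-augmented "upward" and "downward" adjacency maps (the names here are mine, not from OSRM or GraphHopper), the query is just two small Dijkstra runs that only ever climb the hierarchy and meet in the middle:

    import heapq

    def ch_query(up, down, s, t):
        # up[u]:   (v, w) edges from u to higher-ranked v, shortcuts included
        # down[u]: the same, but in the reversed graph (for the backward search)
        def upward_dijkstra(adj, start):
            dist = {start: 0}
            pq = [(0, start)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue        # stale queue entry
                for v, w in adj.get(u, ()):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist
        fwd = upward_dijkstra(up, s)    # climbs the hierarchy from s
        bwd = upward_dijkstra(down, t)  # climbs the hierarchy from t
        # The shortest s-t path goes up from s and down into t, meeting
        # at some node both searches reached.
        meet = set(fwd) & set(bwd)
        return min((fwd[u] + bwd[u] for u in meet), default=float("inf"))

Real implementations add early termination and "stall-on-demand" pruning on top of this skeleton, but the searches stay tiny because each one only ever moves up the hierarchy.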
In the linked algorithm, they seem to have an iteratively increasing radius and a routine that applies Bellman-Ford to identify "important" nodes. As I understand it, this decreases the number of nodes that need to be inserted into the priority queue.
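Roughly, as I read it (this is my paraphrase in Python, not the paper's pseudocode, and the names and thresholds are illustrative):

    def find_pivots(graph, dist, frontier, B, k):
        # Run k rounds of Bellman-Ford relaxation out of the frontier,
        # bounded by distance B, remembering a tree of useful relaxations.
        # Assumes dist[u] is already set for every frontier vertex.
        INF = float("inf")
        touched = set(frontier)
        parent = {}
        layer = set(frontier)
        for _ in range(k):
            nxt = set()
            for u in layer:
                for v, w in graph.get(u, ()):
                    nd = dist[u] + w
                    if nd < dist.get(v, INF) and nd < B:
                        dist[v] = nd
                        parent[v] = u
                        nxt.add(v)
            touched |= nxt
            layer = nxt
        if len(touched) > k * len(frontier):
            return set(frontier)        # relaxation blew up: keep everyone
        # Keep as pivots only the frontier vertices whose relaxation
        # trees collected at least k vertices.
        tree_size = {u: 0 for u in frontier}
        for v in touched:
            r = v
            while r in parent:          # walk up to the frontier root
                r = parent[r]
            if r in tree_size:
                tree_size[r] += 1
        return {u for u, c in tree_size.items() if c >= k}

Only the pivots then go into the queue-like structure, which (if I've understood the paper correctly) is what shrinks it by roughly a factor of k per level of the recursion.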
[0] https://dlnext.acm.org/doi/10.1016/0167-6377%2887%2990053-8
[1] https://en.wikipedia.org/wiki/Contraction_hierarchies
[2] https://project-osrm.org/
[3] https://www.graphhopper.com/
Hope you don't mind, but I took a little look at your posting history and saw this: https://news.ycombinator.com/item?id=41954120
I've been researching this lately, as we've recently implemented traffic patterns in our routing model and are just now working on live traffic updates. The easiest way to adapt our existing code looks to be Customizable Contraction Hierarchies. There's a really nice review paper here: https://arxiv.org/abs/2502.10519. The technique is to apply nested dissection to build a "metric-independent" hierarchy based purely on connectivity, which gives a decent quality of contraction regardless of transit times. Is that what you mean by decomposing the network into "cells"?
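To illustrate the nested dissection part, here's a toy version of the ordering step in Python. A real implementation would compute balanced separators with something like METIS or InertialFlowCutter; the BFS-layer "separator" below is just a crude stand-in, and all names are mine:

    def nested_dissection_order(adj, vertices=None):
        # Toy nested dissection: order the split pieces recursively, then
        # the separator, so separator vertices get the highest ranks.
        if vertices is None:
            vertices = set(adj)
        if len(vertices) <= 2:
            return sorted(vertices)
        # Crude "separator": the middle BFS layer from an arbitrary vertex.
        start = next(iter(vertices))
        layers, seen, frontier = [], {start}, [start]
        while frontier:
            layers.append(frontier)
            nxt = []
            for u in frontier:
                for v in adj.get(u, ()):
                    if v in vertices and v not in seen:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        sep = set(layers[len(layers) // 2])
        order = []
        remaining = vertices - sep
        while remaining:
            # Peel off one connected component of (vertices - sep), recurse.
            comp, stack = set(), [next(iter(remaining))]
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                stack += [v for v in adj.get(u, ()) if v in remaining and v not in comp]
            order += nested_dissection_order(adj, comp)
            remaining -= comp
        return order + sorted(sep)   # separator vertices are ranked highest

The point is that the order depends only on the graph's structure, so when the edge weights (transit times) change you only re-run the cheap customization step, not the ordering.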
There are many similarities between this approach and Customisable Contraction Hierarchies. The latter allows a particularly elegant query-time algorithm involving only a couple of linear sweeps, I suspect even in the many-to-many scenario.
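Something like this, if I understand the elimination-tree query correctly (a sketch under my own naming: etree maps each vertex to its elimination-tree parent, and up/down hold the customized upward/downward weights):

    def cch_query(etree, up, down, s, t):
        INF = float("inf")
        def tree_path(v):
            # Path from v up to the elimination-tree root.
            path = [v]
            while etree.get(v) is not None:
                v = etree[v]
                path.append(v)
            return path
        ds = {v: INF for v in tree_path(s)}
        dt = {v: INF for v in tree_path(t)}
        ds[s] = 0
        dt[t] = 0
        # Every edge of the contracted graph runs from a vertex to one of
        # its elimination-tree ancestors, so a single ordered sweep along
        # each root path propagates the labels, with no priority queue.
        for u in tree_path(s):
            for v, w in up.get(u, ()):
                if v in ds:   # always true for a valid elimination tree
                    ds[v] = min(ds[v], ds[u] + w)
        for u in tree_path(t):
            for v, w in down.get(u, ()):
                if v in dt:
                    dt[v] = min(dt[v], dt[u] + w)
        common = set(ds) & set(dt)   # the shared ancestors of s and t
        return min((ds[v] + dt[v] for v in common), default=INF)

For many-to-many you would cache the sweep results per tree path, but I believe the basic shape stays the same.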
I'm pretty confident they're heavily (if not fully) relying on LLM-generated text. Maybe they're drafting it themselves first and getting an LLM to refine it. I found some recent articles by the same author which gave me the same reaction:
https://medium.com/@kanishks772/computer-scientists-just-bro...
and:
https://medium.com/@kanishks772/why-your-next-gps-might-use-...
They seem to have a process for grabbing a research paper, getting an LLM to summarise it, adding AI-generated images and pseudocode, and publishing it. Both articles lean on the same framing of a fundamental breakthrough overturning decades of conventional wisdom, the same clipped bullet lists of sound-bite phrases, and the same slideshow-style headings. It's extremely reminiscent of what happens when I ask Claude to give me a summary of something.
As a general note, I do think it best to take any article written by AI with a pinch of salt, much as you should closely review any code it writes. It's not at the level of a human expert, but it's trained to convince you that it is one.
It should make no difference whether it's written by AI or not. One should evaluate and criticize content without regard to who, or what, wrote it. Conversely, the mere fact that a human wrote something should not, and does not, make it superior.
People like you shamelessly attempt to suppress a broad spectrum of writing whenever you find something to disagree with, by blaming it on AI. That's what's evil about it.
Secondly, almost all content is going to be produced in part with assistance from AI tools, if not to write the content itself, then at least to discover and understand source materials for it. Would you demand a disclosure that a search engine was used? AI is the new search engine too.
Your assertion that using AI makes content worse is a false one. Many use it to make their content better.
Also, you originally perpetuated a false dichotomy. Content is typically going to be produced in collaboration with AI.
Instead of resorting to this lazy attack, if you actually see a real issue with some content, why not document it in good faith, as if the content had been written by a human?
Consider what you are fighting. Do you really think this fight has a future in which your side comes out victorious and humans go back to writing content without AI?
> It should make no difference it's written by AI or not.
It absolutely should.
It is plain wrong to make an unsubstantiated and unproven accusation, and even if it were true, it's irrelevant to the topic at hand. Moreover, it demonstrates an unjustified anti-AI bias, which is a separate problem.
Some of us think that anything created by an AI should be labeled as such. Is that an inherently evil belief?
It is a little dumbfounding to compare the apparent emotionality and outrage of your posts with the comment that spurred them.