This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".
Yes, but it doesn't bring home the sweet promotion, unfortunately. Ironically, if 99% of the links don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).
This beancounting really makes me sad.
Doing things for fun isn't in Google's remit
Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.
If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.
It's less "data-driven decisions", more "how to lie with statistics".
Many videos I uploaded in 4k are now only available in 480p, after about a decade.
Better to have a short URL and not need it, than need a short URL and not have it IMO.
* as long as humanly possible, as is archive.org's mission.
Us preserving digital archives is a good step. I guess making hard copies would be the next step.
How will they know a short link to a random PDF on S3 is potentially sensitive info?
Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.
People also upload unlisted YouTube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.
Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)
I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.
Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.
Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.
https://x.com/elithrar/status/1948451254780526609
Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.
My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.
I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.
It's still running. I have no idea why.
Next time? I guess there’s a wave of new people that haven’t learned that lesson yet.
https://9to5google.com/2018/03/30/google-url-shortener-shut-...
> URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links
Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).
Is that the same shortening platform running it?
No one wants to own this product.
- The code could be partially frozen, but large-scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.
- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs, so they manage the service themselves.
So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.
This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).
This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.
I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.
Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.
Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.
But some people are motivated to work on internet infrastructure, and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could of course quit): you're supposed to stay with a team a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land. The archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.
I think the harder thing is getting management buy-in, even from the front-line managers.
While maintenance and ownership are clearly still the major problem, one could easily imagine that deploying something similar, especially read-only, on GCP's Cloud Run and Bigtable would be far less work to maintain, as you're not chasing anywhere near such a moving target.
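As a rough sketch of what that could look like (not how Google actually runs goo.gl; the project, instance, table, and column-family names below are invented for illustration), a read-only redirector is essentially one Bigtable row lookup per request followed by a 301:

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"

        "cloud.google.com/go/bigtable"
    )

    func main() {
        ctx := context.Background()
        // Hypothetical project and instance names, just for the sketch.
        client, err := bigtable.NewClient(ctx, "example-project", "shortener")
        if err != nil {
            log.Fatalf("bigtable.NewClient: %v", err)
        }
        links := client.Open("links") // assumed table: one row per short code

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            code := r.URL.Path[1:] // e.g. "abc123"
            row, err := links.ReadRow(r.Context(), code)
            items := row["target"] // assumed column family holding the long URL
            if err != nil || len(items) == 0 {
                http.NotFound(w, r)
                return
            }
            http.Redirect(w, r, string(items[0].Value), http.StatusMovedPermanently)
        })

        port := os.Getenv("PORT") // Cloud Run supplies the port via $PORT
        if port == "" {
            port = "8080"
        }
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

Point the table at a one-time export of the existing mappings and the remaining maintenance is basically whatever the managed services themselves demand.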
Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".
Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.
[1] https://0x.co
When you blame your customer, you have failed.
No search engine or crawler person will ever recommend using a shortener for any reason.
edent•17h ago
Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...
And for what? The cost of keeping a few TB online and a little bit of CPU power?
An absolute act of cultural vandalism.
djfivyvusn•17h ago
toomuchtodo•17h ago
api•17h ago
The simplicity of the web is one of its virtues but also leaves a lot on the table.
toomuchtodo•17h ago
https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)
How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
(edit: i see jaydenmilne commented about this further down thread, mea culpa)
pentagrama•15h ago
I wanted to help and did that using VMware.
For curious people, here is what the UI looks like: you have a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab which shows the project activity.
Project list: https://imgur.com/a/peTVzyw
Current project: https://imgur.com/a/QVuWWIj
progbits•11h ago
Going to run the warrior over the weekend to help out a bit.
epolanski•17h ago
Even worse, if your resource is a shortened link from some other service, you've just added yet another layer of unreliable indirection.
whatevaa•17h ago
ceejayoz•16h ago
epolanski•15h ago
ceejayoz•15h ago
diatone•15h ago
ceejayoz•15h ago
jeeyoungk•14h ago
epolanski•10h ago
Just buy any scientific book and try to navigate to its own errata linked in the book. It's always dead.
IanCal•15h ago
ceejayoz•15h ago
It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.
SR2Z•15h ago
This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.
This kind of luddite behavior sometimes makes using this site exhausting.
andrepd•14h ago
SR2Z•14h ago
Anyone who is savvy enough to put a link in a document is well aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore; the internet has accumulated plenty of dead links.
andrepd•9h ago
ceejayoz•14h ago
This is by no means a universal experience.
People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.
SR2Z•14h ago
Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.
ceejayoz•14h ago
That probably depends on the link's purpose.
"The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.
epolanski•10h ago
In any case, a paper should not rely on ephemeral resources like internet links.
Have you ever tried to navigate to the errata of a computer science book? It's one single book, with one single link, and it's dead anyway.
JumpCrisscross•10h ago
There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs over why one untrustworthy provider is more untrustworthy than another is silly.
ycombinatrix•8h ago
jtuple•14h ago
Reading on paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.
Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?
Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read 1-5 papers a day tops, which is small enough to just do on a computer screen (with less need to annotate, etc.). Quite different from the 50-100 papers/week plus deep analysis expected in academia.
Incipient•6h ago
I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!
reaperducer•10h ago
We have many paper documents from over 1,000 years ago.
The vast majority of what was on the internet 25 years ago is gone forever.
epolanski•10h ago
Try going back 6-7 years on this very website; half the links are dead.
eviks•2h ago
leumon•15h ago
zffr•17h ago
I’m genuinely asking. It seems like it’s hard to trust that any service will remain running for decades.
toomuchtodo•17h ago
It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).
(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)
Hyperlisk•17h ago
ruined•17h ago
other readers may be specifically interested in their contingency plan
https://perma.cc/contingency-plan
whoahwio•16h ago
toomuchtodo•16h ago
This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.
whoahwio•16h ago
toomuchtodo•15h ago
danelski•17h ago
edent•17h ago
You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.
A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and, finally, look it up in an archive.
firefax•17h ago
I thought that was the standard in academia? I've had reviewers chastise me when I did not use the Wayback Machine to archive a citation and link to that, since listing a "date retrieved" doesn't do jack if there's no IA copy.
Short links were usually in addition to full URLs, and more in conference presentations than in the papers themselves.
grapesodaaaaa•14h ago
We’ve learned over the years that they can be unreliable, security risks, etc.
I just don’t see a major use-case for them anymore.
kazinator•17h ago
jeffbee•17h ago
nikanj•16h ago
crossroadsguy•16h ago
BobaFloutist•16h ago
SoftTalker•15h ago
SirMaster•15h ago
jlarocco•14h ago
It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?
And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.
gmerc•14h ago
FallCheeta7373•14h ago
hammyhavoc•3h ago
I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.
kazinator•13h ago
The authors just had their heads too far up their academic asses to have heard of this.
justin66•13h ago
It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.
Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.
dingnuts•10h ago
nly•9h ago
QuantumGood•12h ago
justinmayer•11h ago
Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33
Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...
asdll•6h ago
It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.
lubujackson•3h ago
eviks•2h ago
For the immeasurable benefits of educating the public.