This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".
Yes, but it doesn't bring home the sweet promotion, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).
This beancounting really makes me sad.
Doing things for fun isn't in Google's remit
Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.
If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.
It's less "data-driven decisions", more "how to lie with statistics".
Why does Google kill any project? The people who made it moved on, and the new people don't care because it doesn't make their resumes look any better.
Basically, nobody wants to own this service, and it requires upkeep to maintain alongside other Google services.
Google's history shows a clear pattern of rewarding new projects, not old ones.
Many videos I uploaded in 4k are now only available in 480p, after about a decade.
Better to have a short URL and not need it, than need a short URL and not have it IMO.
* as long as humanly possible, as is archive.org's mission.
Us preserving digital archives is a good step. I guess making hard copies would be the next step.
So, at minimum, assuming there are 2 people maintaining this at Google, that probably means it would cost them $250k/yr in payroll alone to keep this going. That's probably a very lowball estimate of the people involved, but it still shows how expensive these old products can be.
How will they know a short link to a random PDF on S3 is potentially sensitive info?
Forcing bad actors to brute-force the key space to find unlisted URLs could be a better scenario for most people.
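To put rough numbers on that brute-force cost (a sketch assuming codes drawn from the 62 alphanumeric characters; the exact alphabet and lengths goo.gl used are assumptions here, not confirmed parameters):

```python
# Rough size of the key space an attacker would have to scan to
# enumerate unlisted short links. Alphabet and code lengths are
# illustrative assumptions, not confirmed goo.gl parameters.
ALPHABET = 62  # a-z, A-Z, 0-9

def keyspace(length: int) -> int:
    """Number of distinct codes of the given length."""
    return ALPHABET ** length

for n in (6, 8, 10):
    print(f"{n}-char codes: {keyspace(n):,}")
```

Even at 6 characters that's on the order of 5.7e10 candidates, which makes naive enumeration expensive compared to links that were simply published.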
People also upload unlisted YouTube videos and cloud docs so that they can easily share them with family. That doesn't mean content they thought was private might as well be shared publicly.
Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)
I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.
Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.
This seems to be echoed by the archiveteam scrambling to get this archived. I figure they would have backed these up years ago if it had been better known.
Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.
https://x.com/elithrar/status/1948451254780526609
Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.
My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.
I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.
It's still running. I have no idea why.
Next time? I guess there's a wave of new people who haven't learned that lesson yet.
https://9to5google.com/2018/03/30/google-url-shortener-shut-...
> URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links
Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).
Is that the same shortening platform running it?
But maybe a YouTube disruption would be good for video on the internet. Or it might be bad, idk.
No one wants to own this product.
- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.
- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs, so they manage the service themselves.
So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, no manager is going to care about it. No director is going to want to put staff there rather than a project that's alive. No VP sees any benefit here - there's only costs and risks.
This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).
This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.
I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.
Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.
Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.
But some people are motivated to work on internet infrastructure and would be interested. First, you wouldn't be stuck for 10 years. That's not how Google works (and you could, of course, quit); you're expected to stay with a team for a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land. The archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.
I think the harder thing is getting management buy-in, even from the front-line managers.
[1] Almost the simplest possible service you can imagine (sans the scale, I guess), short of a simple static webpage
[2] The original product included some sort of traffic counter, etc. IIRC
While maintenance and ownership are clearly still a major problem, one could easily imagine that deploying something similar, especially read-only, on GCP's Cloud Run and Bigtable would be less work to maintain, as you're not chasing anywhere near such a moving target.
Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".
Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.
[1] https://0x.co
When you blame your customer, you have failed.
No search engine or crawler person will ever recommend using a shortener for any reason.
They are saving pennies but reminding everyone one more time that Google cannot be relied upon.
edent•6mo ago
Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...
And for what? The cost of keeping a few TB online and a little bit of CPU power?
An absolute act of cultural vandalism.
djfivyvusn•6mo ago
toomuchtodo•6mo ago
api•6mo ago
The simplicity of the web is one of its virtues but also leaves a lot on the table.
toomuchtodo•6mo ago
https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)
How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
(edit: i see jaydenmilne commented about this further down thread, mea culpa)
pentagrama•6mo ago
I wanted to help and did that using VMware.
For curious people, here is what the UI looks like: you have a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab that shows the project's activity.
Project list: https://imgur.com/a/peTVzyw
Current project: https://imgur.com/a/QVuWWIj
addandsubtract•6mo ago
progbits•6mo ago
Going to run the warrior over the weekend to help out a bit.
raybb•6mo ago
xingped•6mo ago
Scoundreller•6mo ago
epolanski•6mo ago
Even worse if your resource is a shortened link by some other service, you've just added yet another layer of unreliable indirection.
whatevaa•6mo ago
ceejayoz•6mo ago
epolanski•6mo ago
ceejayoz•6mo ago
diatone•6mo ago
ceejayoz•6mo ago
jeeyoungk•6mo ago
epolanski•6mo ago
Just buy any scientific book and try to navigate to its own errata linked in the book. It's always dead.
ceejayoz•6mo ago
IanCal•6mo ago
ceejayoz•6mo ago
It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.
SR2Z•6mo ago
This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.
This kind of luddite behavior sometimes makes using this site exhausting.
andrepd•6mo ago
SR2Z•6mo ago
Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.
andrepd•6mo ago
ceejayoz•6mo ago
This is by no means a universal experience.
People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.
SR2Z•6mo ago
Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.
ceejayoz•6mo ago
That probably depends on the link's purpose.
"The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.
epolanski•6mo ago
In any case, a paper should not rely on an ephemeral resource like internet links.
Have you ever tried to navigate to the errata of a computer science book? It's one single book, with one single link, and it's dead anyway.
JumpCrisscross•6mo ago
There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs between why one untrustworthy provider is more untrustworthy than another is silly.
ycombinatrix•6mo ago
jtuple•6mo ago
Reading on paper was more comfortable than reading on a screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.
Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?
Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day, which is small enough to just do on a computer screen (and I have less need to annotate, etc.). Quite different from the 50-100 papers/week plus deep analysis expected in academia.
Incipient•6mo ago
I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!
IanCal•6mo ago
reaperducer•6mo ago
We have many paper documents from over 1,000 years ago.
The vast majority of what was on the internet 25 years ago is gone forever.
epolanski•6mo ago
Try going back 6 or 7 years on this very website; half the links are dead.
eviks•6mo ago
SR2Z•6mo ago
leumon•6mo ago
IanCal•6mo ago
zffr•6mo ago
I’m genuinely asking. It seems like it's hard to trust that any service will remain running for decades.
toomuchtodo•6mo ago
It is built for the task, and assuming the worst-case scenario of sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).
(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for adhoc DOI registration, although I have not had time to research further)
Hyperlisk•6mo ago
ruined•6mo ago
other readers may be specifically interested in their contingency plan
https://perma.cc/contingency-plan
whoahwio•6mo ago
toomuchtodo•6mo ago
This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.
whoahwio•6mo ago
toomuchtodo•6mo ago
afandian•6mo ago
That’s not to say that DOIs aren’t registered for all kinds of urls. I found the likes of YouTube etc when I researched this about 10 years ago.
toomuchtodo•6mo ago
afandian•6mo ago
Crossref isn’t the only DOI registration agency. DataCite may be more relevant, although both require membership. Part of this is the commitment to maintaining the content.
You could look at Figshare or Zenodo? https://docs.github.com/en/repositories/archiving-a-github-r...
Then Rogue Scholar is worth a mention. https://rogue-scholar.org/
Sorry that doesn’t answer your question but maybe that’s a clue that DOIs might not be right for your use case?
N19PEDL2•6mo ago
Until the Cocos Islands are annexed by Australia.
danelski•6mo ago
edent•6mo ago
You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.
A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com?), look at the path to hazard a guess at the metadata and contents, and, finally, look it up in an archive.
firefax•6mo ago
I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.
Short links were usually in addition to full URLs, and appeared more in conference presentations than in the papers themselves.
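That archive-before-citing step can be scripted against the public Wayback Machine endpoints; a minimal sketch (the endpoints are real, the helper names are mine):

```python
# Helpers for the citation-archiving workflow: check whether the
# Wayback Machine already has a snapshot of a URL, and build the
# "save" URL that requests a fresh capture. These build request
# URLs only; fetching them is left to the caller.
from urllib.parse import urlencode

AVAILABILITY_API = "https://archive.org/wayback/available"
SAVE_ENDPOINT = "https://web.archive.org/save/"

def availability_query(url: str) -> str:
    """URL returning JSON that describes the closest snapshot, if any."""
    return f"{AVAILABILITY_API}?{urlencode({'url': url})}"

def save_request(url: str) -> str:
    """URL that asks the Wayback Machine to capture a snapshot now."""
    return SAVE_ENDPOINT + url
```

Citing the snapshot URL the save request returns, alongside the original, is what makes "date retrieved" actually mean something.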
afandian•6mo ago
grapesodaaaaa•6mo ago
We’ve learned over the years that they can be unreliable, security risks, etc.
I just don’t see a major use-case for them anymore.
AbstractH24•6mo ago
Say the interview of a person, a niche publication, a local pamphlet?
Maybe to certify that your article is of a certain level of credibility you need to manually preserve all the cited works yourself in an approved way.
kazinator•6mo ago
jeffbee•6mo ago
nikanj•6mo ago
crossroadsguy•6mo ago
BobaFloutist•6mo ago
SoftTalker•6mo ago
SirMaster•6mo ago
spixy•6mo ago
jlarocco•6mo ago
It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?
And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.
gmerc•6mo ago
FallCheeta7373•6mo ago
hammyhavoc•6mo ago
I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.
kazinator•6mo ago
The authors just had their heads too far up their academic asses to have heard of this.
bbuut•6mo ago
If you want it archived do it. You seem to want someone else to take up your concerns.
An HN genius should be able to crawl this and fix it.
But you're not geniuses; actual geniuses are too busy to be low-affect whiners on social media.
jlarocco•6mo ago
Who's lost out at the end of the day? People who didn't understand the free market and lost access to these "free" services? Or people who knew what would happen and avoided them? My links are still working...
There are digital public goods (like Wikipedia) that are intended to stick around forever with free access, but Google isn't one of them.
justin66•6mo ago
It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.
Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.
dingnuts•6mo ago
nly•6mo ago
justin66•6mo ago
HaZeust•6mo ago
???
DOI and ORCID sponsored link-shortening with Goo.gl. Authors did what they were told would be optimal, and ORCID was probably told by Google that it'd hone its link-shortening service for long-term reliability. What a crazy victim-blame.
QuantumGood•6mo ago
justinmayer•6mo ago
Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33
Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...
asdll•6mo ago
It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.
lubujackson•6mo ago
eviks•6mo ago
For the immeasurable benefits of educating the public.