As we already have a PostgreSQL database server, the cost of running this is extremely low, and we aren't concerned about GDPR (etc.) issues with using a third-party site.
- Tinyurl.com, launched in 2002, currently 23 years old
- Urly.it, launched in 2009, currently 16 years old
- Bitly.com, also launched in 2009
So yes, some services survived a long time.
I think I might be doing a self plug here, so pardon me, but I am pretty sure I can create something like a link shortener that lasts essentially forever. It has to do with crypto (which I don't adore as an investment, I must make that absolutely clear).
But basically I have created nanotimestamps, which can embed some data in the Nano blockchain, and that data could theoretically be a link.
Now the problem is that the link would at least be either a transaction ID, which is big, or some sort of seed passphrase...
So no, it's not as easy as some short passphrase, but I am pretty sure Nano isn't going to dissolve; last time I checked it has 60 nodes, anyone can host a node, and did I mention all of this is completely free? (There are no gas fees in Nano, which is why I picked it.)
I am not associated with the Nano team, and using it this way would actually put their system under some strain, but their system allows for it... so why not cheat the system.
Tl;dr: I am pretty sure I can build a decentralized link shortener that can survive a really long time, but the trade-off is that the shortened link might actually end up longer than the original link. I can still think of a way to actually shorten it, though.
For instance, Nano has a way to catalogue transactions in time, so it's theoretically possible to catalogue certain transactions by time, and then the link is just the nth transaction, where n could be something like 1000232.
So test.org/1000232 could lead to something like a YouTube rickroll. It could theoretically work. If literally anybody is interested, I can create a basic prototype, since I am honestly quite proud that I came up with some decent "innovation" in a space I am not even familiar with (I ain't no crypto wizard).
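A minimal sketch of how the resolver side of this idea could look, purely as an assumption-laden illustration: fetch_nth_tagged_block(n) is a made-up placeholder for whatever Nano node query would return the nth catalogued block, and the packing of a URL into the block's data is an assumption, not real Nano API:

    # Hypothetical sketch of the "nth catalogued transaction" shortener idea.
    # fetch_nth_tagged_block() is a stand-in for whatever Nano node query
    # would return the nth block that follows the shortener's tagging scheme.
    from urllib.parse import urlparse

    def fetch_nth_tagged_block(n: int) -> bytes:
        """Placeholder: ask a Nano node for the nth tagged block and return
        the raw bytes embedded in it (e.g. via the nanotimestamp scheme)."""
        raise NotImplementedError("depends on how the data is embedded on-chain")

    def resolve(slug: str) -> str:
        """Turn a slug like '1000232' into the URL embedded in that block."""
        n = int(slug)                      # the slug is just an index into the catalogue
        raw = fetch_nth_tagged_block(n)    # bytes recovered from the blockchain
        url = raw.decode("utf-8").strip()  # assume the payload is a UTF-8 URL
        if urlparse(url).scheme not in ("http", "https"):
            raise ValueError("embedded payload is not a web URL")
        return url

    # A gateway such as test.org/1000232 would call resolve("1000232") and
    # issue an HTTP 301/302 redirect to the returned URL.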
Data stored in a blockchain isn't any more permanent than data stored in a well-seeded SQLite torrent: it's got the same failure modes (including "yes, technically there are a thousand copies… somewhere; but we're unlikely to get hold of one any time in the next 3 years").
But yes, you have correctly used the primitives to construct a system. (It's hardly your fault people undersell the leakiness of the abstraction.)
But compared to the SQLite torrent, I think the difference with crypto might be that since people's real money is involved (for better or for worse), the data stored in the blockchain really does stay permanent. And like I said, I can use those 60 nodes for absolutely free, thanks to zero gas fees, unlike an SQLite torrent.
If you want to use blockchain for this, I advise properly using a dedicated new blockchain, not spamming the Nano network.
With the system I am presenting, it would be possible to have a website like redirect.com/<some-gibberish>, and even if redirect.com goes down then yes, that link would stop working, but what redirect.com does under the hood can be done by anybody. That being said,
it would be possible for someone to archive the redirect.com main site, which could give instructions pointing to a recently updated list, on GitHub or some other place, of the top working web gateways.
Then anybody could go to archive.org, see what was meant, and try it. Or maybe we could have some sort of slug like redirect.com/block/<random-gibberish>, where "block" is understood to mean this is just a gateway (a better, more niche word would help).
But still, at the end of the day there would be some way of using that shortened link forever, so it is permanent in some sense.
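A rough sketch of that gateway fallback, assuming (hypothetically) a community-maintained list of gateway base URLs published at a well-known address and a /block/<slug> convention; both the list URL and the path are made up for illustration:

    # Sketch of resolving a shortened slug through whichever gateway is still up.
    # GATEWAY_LIST_URL and the /block/<slug> path are hypothetical conventions.
    import json
    import urllib.request

    GATEWAY_LIST_URL = "https://example.org/gateways.json"  # e.g. a raw GitHub file

    def live_gateways() -> list[str]:
        """Fetch the community-maintained list of known gateway base URLs."""
        with urllib.request.urlopen(GATEWAY_LIST_URL, timeout=10) as resp:
            return json.load(resp)  # e.g. ["https://redirect.com", "https://mirror.example"]

    def resolve(slug: str) -> str:
        """Ask each gateway in turn to expand the slug; return the first answer."""
        for base in live_gateways():
            try:
                # The gateway redirects /block/<slug> to the target; follow the
                # redirect and report where we ended up.
                with urllib.request.urlopen(f"{base}/block/{slug}", timeout=10) as resp:
                    return resp.geturl()
            except OSError:
                continue  # this gateway is down or broken; try the next one
        raise LookupError(f"no gateway could resolve {slug!r}")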
Imagine that someone uses a goo.gl link for some extremely important document and then it somehow becomes inaccessible, and now... it's just gone?
I think a way to recover that could really help. But honestly, I am all in for feedback. Since there are zero fees, I would most likely open source it completely, and since I am not involved in this crypto project I will most likely never earn anything even if I do build it. I just hope I could help make the internet a little less of a graveyard of dead links.
The same would have to be done for the node running the service, and it too has been prefunded with a sitting balance.
Granted, there still exist failure modes, so the bus factor needs to be more than one, but the above setup could in all probability ride out a few decades even with the original person forgetting about it. In principle, a prefunded LLM with access to appropriate tooling and a headless browser could even be put in charge to address common administrative concerns.
2) You don't actually want things to be permanent - users will inevitably shorten strings they didn't mean to / don't want to, so there needs to be a way to scrub them.
Shortened or not, they change, disappear, get redirected, all the time. There was once an idea that a URL was (or should be) a permanent reference, but to the extent that was ever true it's long in the past.
The closest thing we might have to that is an Internet Archive link.
Otherwise, don't cite URLs. Cite authors, titles, keywords, and dates, and maybe a search engine will turn up the document, if it exists at all.
Fwiw, I wrote and hosted my own URL shortener, also embeddable in applications.
I don't know if anyone should use a URL shortener or not ... but if you do ...
"Oh By"[1] will be around in thirty years.
Links will not be "purged". Users won't be tracked. Ads won't be served.
[1] https://0x.co
How can you (or I) know that?
What normal person would find this glove and actually manage to return it to its owner? Even if "0x.co" were written on it too, I think most people wouldn't recognize it as a URL.
> We understand these links are embedded in countless documents, videos, posts and more, and we appreciate the input received.
How did they think the links were being used?
Also helps that they are in a culture which does not mind killing services on a whim.
Then they should also be okay with keeping the goo.gl links, honestly.
Sounds kinda bad for goodwill, but this is literally Google; the one thing Google is notorious for is killing their products.
Hey, let's also dump $100 billion into this AI thing this year without any business plan or ideas to back it up. HOW FAST CAN YOU ACCEPT MY CHECK!
I doubt it was a cost-driven decision on the basis of running the servers. My guess would be that it was a security and maintenance burden that nobody wanted.
They also might have wanted to use the domain for something else.
The nature of something like this is that the cost to run it naturally goes down over time. Old links get clicked less so the hardware costs would be basically nothing.
As for the actual software security, it's a URL shortener. They could rewrite the entire thing in almost no time with just a single dev. Especially since it's strictly hosting static links at this point.
It probably took them more time and money to find inactive links than it'd take to keep the entire thing running for a couple of years.
How? Barring a database leak, I don't see a way for someone to simply scan all the links. Putting something like Cloudflare in front of the shortener with a rate limit would prevent brute-force scanning. I assume Google built the shortener semi-competently (using a random number generator), which would make it pretty hard to find links in the first place.
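Some back-of-the-envelope support for the rate-limiting point; the 6-character base-62 slugs and the 100 requests/second ceiling are assumptions for illustration, not goo.gl's actual parameters:

    # Rough estimate of how long a brute-force scan of random slugs would take.
    # Slug length and allowed request rate are illustrative assumptions only.
    keyspace = 62 ** 6          # 6 case-sensitive alphanumeric characters
    rate_limit = 100            # requests per second a rate limiter might allow
    seconds = keyspace / rate_limit
    years = seconds / (60 * 60 * 24 * 365)

    print(f"{keyspace:,} possible slugs")                              # 56,800,235,584
    print(f"~{years:.0f} years to enumerate at {rate_limit} req/s")    # ~18 years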
Removing inactive links also doesn't solve this problem. You can still have active links to secret docs.
If they have a (passwordless) URL, they're not secret.
My understanding from conversations I've seen about Google Reader is that the problem with Google is that every few years they have a new wave of infrastructure, which necessitates upgrading a bunch of things about all of their products.
I guess that might be things like some new version of BigTable or whatever coming along, so you need to migrate everything from the previous versions.
If a product has an active team maintaining it they can handle the upgrade. If a product has no team assigned there's nobody to do that work.
Arrival of the new does not necessitate migration.
Only departure of the old does.
But it's worse than that because they'll bring up whole new datacenters without ever bringing the deprecated service up, and they also retire datacenters with some regularity. So if you run a service that depends on deprecated services you could quickly find yourself in a situation where you have to migrate to maintain N+2 redundancy but there's hardly any datacenter with capacity available in the deprecated service you depend on.
Also, how many man-years of engineering do you want to spend on keeping goo.gl running? If you were an engineer, would you want to be assigned this project? What are you going to put in your perf packet? "Spent 6 months of my time and also bothered engineers in other teams to keep a service that makes us no money running"?
If you're a high flyer trying to be the next Urs or Jeff Dean or Ian Goodfellow, you wouldn't, but I'm sure there are many thousands of people able to do the job who would just love to work for Google, collect a paycheck on a $150k/yr job, and do that for the rest of their lives.
Best analogy I can think of is log-rolling (as in the lumberjack competition).
What does happen is APIs are constantly upgraded and rewritten and deprecated. Eventually projects using the deprecated APIs need to be upgraded or dropped. I don't really understand why developers LOVE to deprecate shit that has users but it's a fact of life.
Second-hand info about Google only, so take it with a grain of salt.
As such, developing a new API gets more brownie points than rebuilding a service that does a better job of providing an existing API.
To be more charitable, having learned lessons from an existing API, a new one might incorporate those lessons learned and be able to do a better job serving various needs. At some point, it stops making sense to support older versions of an API as multiple versions with multiple sets of documentation can be really confusing.
I'm personally cynical enough to believe more in the less charitable version, but it's not impossible.
This seems like a good eval case for autonomous coding agents.
You know how Google deprecating stuff externally is a (deserved) meme? Things get deprecated internally even more frequently and someone has to migrate to the new thing. It's a huge pain in the ass to keep up with for teams that are fully funded. If something doesn't have a team dedicated to it eventually someone will decide it's no longer worth that burden and shut it down instead.
Edit: nevermind, I had no idea Dynamic Links is deprecated and will be shutting down.
It's a really ridiculous decision though. There's not a lot that goes into a link redirection service.
Yeah I can't imagine it being a huge cost saver? But guessing that the people who developed it long moved on, and it stopped being a cool project. And depending on the culture inside Google it just doesn't pay career-wise to maintain someone else's project.
Cloudflare offered to run it and Google turned them down:
Here is a service that basically makes Google $0 and confuses a non-zero number of non-technical users when it sends them to a scam website.
Also, in the age of OCR on every device they make basically no sense. You can take a picture of a long URL on a piece of paper then just copy and paste the text instantly. The URL shortener no longer serves a discernible purpose.
That way they'd make money, the service wouldn't have to shut down, and there wouldn't be any linkrot.
Given those options, an ad seems like a trivial annoyance to anyone who very much needs a very old link to work. Anyone who still has the ability to update their pages can always update their links.
"Here's a permanent (*) link".
[*] Definitions of permanent may vary wildly.
I will be honest: I was never in an environment that would benefit from link shortening, so I don't really know if any end users actually wanted them (my guess is mainly Twitter), and I always viewed these hashed links with extreme suspicion.
Like other things spun down, there must not be much value in the links.
For a company running GCP and giving away things like Colab TPUs for free, the cost of running a URL service would be a trivial rounding error at best.
Can't dig up the document right now, but in their Chrome dev process they say something along these lines: "even if a feature is used by 0.01% of users, at scale that's a lot of users. Don't remove it until you've made sure the impact is negligible."
At Google scale I'm surprised [1] this is not applied everywhere.
[1] Well, not that surprised
This is exactly why many big companies like Amazon, Google and Mozilla still support TLSv1.0, for example, whereas all the fancy websites would return an error unless you're using TLSv1.3 as if their life depends on it.
In fact, I just checked a few seconds ago with `lynx`, and Google Search even still works on plain old HTTP without the "S", too — no TLS required whatsoever to start with.
Most people are very surprised by this revelation, and many don't even believe it, because it's difficult to reproduce this with a normal desktop browser, apart from lynx.
But this also shows just how out of touch Walmart's digital presence really is, because somehow they deem themselves important enough to mandate TLSv1.2 and the very latest browsers, unlike all the major ecommerce heavyweights, and deny service to anyone who doesn't have the latest device with all the latest updates installed, breaking even slightly outdated browsers that do support TLSv1.2.
https://www.auslogics.com/en/articles/is-it-bad-that-google-...
Not only are things evolving internally within Google, laws are evolving externally and must be followed.
But just a guess.
Apparently they measured it once by running a map-reduce or equivalent.
I don’t see why they couldn’t measure it again. Maybe they don’t want it to be gamed, but why?
Google's shortened goo.gl links will stop working next month - https://news.ycombinator.com/item?id=44683481 - July 2025 (219 comments)
Google URL Shortener links will no longer be available - https://news.ycombinator.com/item?id=40998549 - July 2024 (49 comments)
Not knowing all the details motivating this surprising decision, from the outside, I'd expect this to be an easy "Don't Be Evil" call:
"If we don't want to make new links, we can stop taking them (with advance warning, for any automation clients). But we mustn't throw away this information that was entrusted to us, and must keep it organized/accessible. We're Google. We can do it. Oddly, maybe even with less effort than shutting it down would take."
That someone made a poor decision to rely on anything made by Google.
Go look at a decade+ old webpage. So many of the links to specific resources (as in, not just a link to a domain name with no path) simply don't work anymore.
That would come off far less user hostile than this move while still achieving the goal of trimming truly unnecessary bloat from their database. It also doesn't require you to keep track of how often a link is followed, which incurs its own small cost.
That actually seems just as bad to me, since the URL often has enough data to figure out what was being pointed to even if the exact URL format of a site has changed or even if a site has gone offline. It might be like:
kmart dot com / product.aspx?SKU=12345678&search_term=Staplers or /products/swingline-red-stapler-1235467890
Those URLs would now be dead and kmart itself will soon be fully dead but someone can still understand what was being linked to.
Even if the URL is 404, it's still possibly useful information for someone looking at some old resource.
I'm completely serious, and I have a PhD thesis with such links to back it up. Just in some footnotes, but still.
Yes, maybe this shows how naive we were/I was. But it definitely also shows how far Google has fallen: it had so much trust and completely betrayed it.
Then, even as that was eroding, they were still seen as reliable, IIRC.
The killedbygoogle reputation was more recent. And still I think isn't common knowledge among non-techies.
And even today, if you ask a techie which companies have certain reliability capabilities, Google would be at the top of some lists (e.g., keeping certain sites running under massive demand, and securing data against attackers).
Look at what happened to their search results over the years and you'll understand.
Google has a number of internal processes that effectively make it impossible to run legacy code without an engineering team just to integrate breaking upstream API changes, of which there are many. Imagine Google as an OS, and every few years you need to upgrade from, say, Google 8 to Google 9, and there's zero API or ABI stability so you have to rewrite every app built on Google. Everyone is on an upgrade treadmill. And you can't decide not to get on that treadmill either because everything built at Google is expected to launch at scale on Google's shitty[0]-ass infrastructure.
[0] In the same sense that Intel's EDA tools were absolutely fantastic when they made them and are holding the company back now
It's just not an accurate view of how the world works.
Is that the same shortening platform running it?
Also, does this have something to do with the .gl TLD? Greenland? A redirect to share.google would be fine.
modeless•12h ago
Retr0id•12h ago
dietr1ch•12h ago
Retr0id•11h ago
smaudet•11h ago
10 * 4 * 3,000,000,000 / (1024^3)
Ten 4-byte characters per link, times 3 billion links, divided by the number of bytes in a GB...
Roughly 111 GB of RAM.
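Spelling out that napkin math (same assumptions as above: ~10 characters of 4 bytes each per link, 3 billion links; this is per copy of the data, before any replication):

    # Napkin math from the lines above: slug storage alone, per copy of the table.
    bytes_per_link = 10 * 4                  # ~40 bytes per shortened slug
    links = 3_000_000_000
    total_gib = bytes_per_link * links / 1024**3

    print(f"~{total_gib:.1f} GiB per copy")  # ~111.8 GiB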
Which is like nothing to a search giant.
To put that into perspective, my desktop's motherboard maxes out at 128 GB of memory, so saying this has to do with RAM is like saying they needed to shut off a couple of servers... and save maybe a thousand dollars.
This reeks of something else, if not just sheer ineptitude...
dietr1ch•11h ago
You are forgetting job replication. A global service can easily have 100s of jobs across 10-20 datacenters. Saving 111 TiB of RAM can probably pay your salary forever; I think I paid mine with smaller savings while I was there. During covid there was also a RAM shortage, enough that there was a call to prefer trading CPU for RAM, with changes to the rule-of-thumb resource costs.
nomel•10h ago
There's obviously something in between maintaining the current latency with 20 datacenters, increasing the latency a bit by reducing hosting to a couple hundred dollars' worth of servers, and setting the latency to infinity, which was the original plan.
dietr1ch•10h ago
A service like this is probably in maintenance mode too, so simplifying it to use fewer resources probably makes sense, and I bet the PMs are happy about shorter links, since at some point you are better off not using a link shortener at all and just using a QR code instead, out of fear of inconvenience and typos.
18172828286177•12h ago
zarzavat•11h ago
deelowe•11h ago
afavour•11h ago
I’m not defending it, just that I can absolutely imagine Google PMs making a chart of “$ saved vs clicks” and everyone slapping each other on the back and saying good job well done.
OutOfHere•11h ago
42lux•11h ago
maven29•11h ago
imchillyb•11h ago
mystifyingpoi•11h ago
MajimasEyepatch•10h ago
1. Years ago, Acme Corp sets up an FAQ page and creates a goo.gl link to the FAQ.
2. Acme goes out of business. They take the website down, but the goo.gl link is still accessible on some old third-party content, like social media posts.
3. Eventually, the domain registration lapses, and a bad actor takes over the domain.
4. Someone stumbles across a goo.gl link in a reddit thread from a decade ago and clicks it. Instead of going to Acme, they now go to a malicious site full of malware.
With the new policy, if enough time has passed without anyone clicking on the link, then Google will deactivate it, and the user in step 4 would now get a 404 from Google instead.
dundarious•9h ago
xp84•9h ago
e.g. Imagine an SMS or email saying "We've received your request to delete your Google account, effective (insert 1 hour's time). To cancel your request, just click here and log into your account: https://goo.gl/ASDFjkl"
This was a very popular strategy for phishing and it's still possible if you can find old links that go to hosts that are NXDOMAIN and unregistered, of which there are no doubt millions.
NewJazz•8h ago
dundarious•5h ago
mattmaroon•2h ago
Presumably Acme used the link shortener because they wanted to put the shortened link somewhere, so someone is going to click links like these. If Google can just delete a lot of them, why not?