
The critical window of shadow libraries (2024)

https://annas-archive.org/blog/critical-window.html
1•PaulHoule•55s ago•0 comments

Christian Democracy's pact with the far right

https://theloop.ecpr.eu/christian-democracys-pact-with-the-far-right/
1•kome•1m ago•0 comments

Prepaid Phone Plans: Everything You Need to Know About MVNOs

https://www.wired.com/story/prepaid-phone-plans-and-mvnos/
1•quapster•2m ago•0 comments

Which AI Voice Agents to use in 2025?

1•Olivia8•2m ago•0 comments

Who Said Neural Networks Aren't Linear?

https://assafshocher.github.io/linearizer/
1•fofoz•3m ago•0 comments

Barbarians at the Gate: How AI Is Upending Systems Research

https://arxiviq.substack.com/p/barbarians-at-the-gate-how-ai-is
1•che_shr_cat•4m ago•0 comments

F5 says hackers stole undisclosed BIG-IP flaws, source code

https://www.bleepingcomputer.com/news/security/f5-says-hackers-stole-undisclosed-big-ip-flaws-sou...
2•WalterSobchak•4m ago•0 comments

Zuban Beta Release: High-Performance Python Type Checker

https://zubanls.com/blog/beta/
2•davidhalter•5m ago•0 comments

Uber Losses

https://uberlosses.com/
1•thelastgallon•5m ago•0 comments

Customer Service Firm 5CA Denies Responsibility for Discord Data Breach

https://www.securityweek.com/customer-service-firm-5ca-denies-responsibility-for-discord-data-bre...
1•Bender•6m ago•0 comments

Improving User Interaction

1•anandagali•8m ago•0 comments

Faroese, Croatian, Slovenian and Vietnamese might be removed from GUI

https://github.com/bit-team/backintime/issues/2080
1•buhtz•9m ago•0 comments

What AI Hype Misses About Real Software Engineering Work

https://davidadamojr.com/what-ai-hype-misses-about-real-software-engineering-work/
1•dtgeadamo•10m ago•1 comments

Ask HN: What's working for founders raising without warm intros in 2025?

3•paulwilsonn•11m ago•0 comments

Data darkness in US spreads a global shadow

https://www.reuters.com/world/asia-pacific/data-darkness-us-spreads-global-shadow-2025-10-15/
4•geox•12m ago•0 comments

Building a Container from Scratch with Bash (No Docker, No Magic)

https://www.youtube.com/watch?v=FNfNxoOIZJs
1•birdculture•15m ago•0 comments

I have built an LLM API logger with rate limits and request scoping

https://vonwerk.com/prelaunch/ai-gateway
1•mxmzb•15m ago•1 comments

AI startup Augment scraps 'unsustainable' pricing, users say new model 10x worse

https://www.theregister.com/2025/10/15/augment_pricing_model/
3•rntn•16m ago•0 comments

Wheretowatch.stream:Stream availability by season with dubbed and subtitled ver

https://www.wheretowatch.stream
1•ericrenan•17m ago•1 comments

Electric Truck Charging Is Here, and Governments Want More

https://www.bloomberg.com/news/newsletters/2025-10-14/electric-truck-charging-is-here-and-governm...
1•toomuchtodo•17m ago•1 comments

Show HN: GenAI Test Case Generator – Reduces QA time by 80% using GPT-4

https://genai-test-gen.streamlit.app/
1•ashish_sharda•19m ago•0 comments

Solution notes: stop repeating past mistakes

https://henko.net/blog/solution-notes/
2•henrikje•21m ago•1 comments

Ask HN: How to sanity check an ambitious autocoder for enterprise systems?

1•tjmills111•23m ago•0 comments

China Can't Win

https://www.campbellramble.ai/p/china-cant-win
2•ironyman•24m ago•0 comments

Show HN: BrowserPod – In-browser Node.js, Vite, and Svelte with full networking

https://vitedemo.browserpod.io
2•yuri91•25m ago•0 comments

As Windows 10 signs off, ReactOS exploring long-awaited feature in WDDM support

https://www.tomshardware.com/software/windows/as-windows-10-signs-off-reactos-devs-are-exploring-...
1•rbanffy•25m ago•0 comments

M5 iPad Pro

https://www.apple.com/ipad-pro/
2•tosh•25m ago•0 comments

Apple Vision Pro

https://www.apple.com/apple-vision-pro/
11•meetpateltech•26m ago•0 comments

Apple introduces the powerful new iPad Pro with the M5 chip

https://www.apple.com/newsroom/2025/10/apple-introduces-the-powerful-new-ipad-pro-with-the-m5-chip/
4•chasingbrains•27m ago•0 comments

M5 MacBook Pro

https://www.apple.com/macbook-pro/
6•tambourine_man•27m ago•1 comments

Why We're Leaving Serverless

https://www.unkey.com/blog/serverless-exit
105•vednig•2h ago

Comments

pjmlp•1h ago
Their problem isn't serverless, rather Cloudflare Workers and WebAssembly.

All major cloud vendors have serverless solutions based on containers, with longer managed lifetimes between requests, and naturally the ability to use properly AOT-compiled languages in the containers.

OvervCW•1h ago
Agree, it seems like they decided to use Cloudflare Workers and then fought them every step of the way instead of going back and evaluating if it actually fit the use case properly.

It reminds me of the companies that start building their application using a NoSQL database and then start building their own implementation of SQL on top of it.

zaphirplane•1h ago
Hey! Bet I can guess who
CuriouslyC•1h ago
Ironically, I really like cloudflare but actively dislike workers and avoid them when possible. R2/KV/D1 are all fantastic and being able to shard customer data via DOs is huge, but I find myself fighting workers when I use them for non-trivial cases. Now that Cloudflare has containers I'm pushing people that way.
keyle•1h ago
You're saying serverless can have really low latency and stay fast 24/7?

Isn't serverless basically the old model of shared VMs, except with a ton of people?

I'm old school I guess, baremetal for days...

pjmlp•1h ago
Yes, check Cloud Run, AWS Lambda, Azure Functions with containers.
fabian2k•1h ago
At that point, why should I use serverless at all? If I have to think about the lifetime of the servers running my serverless functions?
OvervCW•1h ago
Serverless only makes sense if the lifetime doesn't matter to your application, so if you find that you need to think about lifetimes, then serverless is simply not the right technology for your use case.
pjmlp•1h ago
Because it is still less management effort than taking full control of the whole infrastructure.

Usually a decision factor between more serverless, or more DevOps salaries.

fabian2k•1h ago
I would doubt that this is categorically true. Serverless inherently makes the whole architecture more complex with more moving parts in most cases compared to classical web applications.
pjmlp•1h ago
Depends pretty much on where those classical web applications are hosted, how big the infrastructure taking care of security, backups, scalability, and failovers is, and the amount of salaries being paid, including on-call bonuses.
9rx•32m ago
> Serverless inherently makes the whole architecture more complex with more moving parts

Why's that? Serverless is just the generic name for CGI-like technologies, and CGI is exactly how classical web applications were typically deployed historically, until Rails became such a large beast that it was too slow to continue using CGI; running your application as a server to work around that problem in Rails then pushed it to become the norm across the industry, at least until serverless became cool again.

Making your application the server is what is more complex with more moving parts. CGI was so much simpler, albeit with the performance tradeoff.

Perhaps certain implementations make things needlessly complex, but it is not clear why you think serverless must fundamentally be that way.
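To make the contrast concrete: both models still exist in Go's standard library, so a minimal sketch looks roughly like this (the RUN_AS_CGI environment switch is purely illustrative):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "net/http/cgi"
        "os"
    )

    func hello(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from", r.URL.Path)
    }

    func main() {
        if os.Getenv("RUN_AS_CGI") != "" {
            // CGI model: the web server execs this binary once per request; it
            // answers over stdin/stdout and exits, so no in-process state survives.
            if err := cgi.Serve(http.HandlerFunc(hello)); err != nil {
                log.Fatal(err)
            }
            return
        }
        // Application-as-server model: one long-lived process owns the socket and
        // can keep caches and connection pools in memory between requests.
        log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(hello)))
    }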

array_key_first•1h ago
There's a huge gap between serverless and full infra management. Also, IMO, serverless still requires engineers just to manage that. Your concerns shift, but then you need platform experts.
pjmlp•1h ago
A smaller team, and from a business point of view others take care of SLAs, which matters in cost center budgets.
ramraj07•56m ago
Serverless is not a panacea. And the alternative isn't always "multiple devops salaries" - unless the only two options you see are serverless vs. an outrageously, stupidly complicated Kubernetes cluster to host a website.
johannes1234321•1h ago
For a thing which permanently has load, it makes little sense.

It can make sense if you have very uneven load with a few notable spikes, or if you're all in on managed services, where serverless functions act as event collectors from other services ("new file in object store" - trigger a function to update some index).

CuriouslyC•1h ago
Cloudflare has containers now too, and having used App Runner and Cloud Run, I find Cloudflare's much easier to work with. Once they get rid of the container caps and add more flexibility in terms of container resources, I would never go back to the big clouds' containers; the price and ease of use of Cloudflare's containers just destroy them.
pjmlp•1h ago
I doubt that the bill would be that much cheaper, nonetheless thanks for making me aware they are a thing now.
CuriouslyC•1h ago
They're much cheaper, they're just DOs, and they get billed as such. They also have faster cold start times and automatic multi-region support.
iainmerrick•21m ago
In that scenario, how do you keep cold startup as fast as possible?

The nice thing about JS workers is that they can start really fast from cold. If you have low or irregular load, but latency is important, Cloudflare Workers or equivalent is a great solution (as the article says towards the end).

If you really need a full-featured container with AOT compiled code, won't that almost certainly have a longer cold startup time? In that scenario, surely you're better off with a dedicated server to minimise latency (assuming you care about latency). But then you lose the ability to scale down to zero, which is the key advantage of serverless.

pjmlp•13m ago
Apparently not nice enough, given that they rewrote the application in Go.

Serverless with containers is basically managed Kubernetes, where someone else has the headache to keep the whole infrastructure running.

muragekibicho•1h ago
Interesting writeup. The serverless approach helped with GTM. (I speculate) raising capital afforded them extra devs who noticed the cache latency.
saidinesh5•1h ago
> The serverless approach helped with GTM

Unlikely? They could've just as well deployed their single Go binary to a VM from day 1 and it would've been smooth sailing for their use case while they acquired customers.

The Cloudflare Workers they chose aren't really suited for the latency-critical, high-throughput APIs they were designing.

seethishat•1h ago
Linux servers running Go apps? Would be nice to see server cost and specs, backup strategy, etc.
wltr•1h ago
Backup strategy? What do you mean by that?
seethishat•1h ago
Servers go down. What is the plan to get them "backup" and running ;)
fabian2k•1h ago
They probably don't need one for the application servers. And they probably already have a backup strategy for their DBs.
illuminator83•1h ago
I'm assuming "High Availability" is what is really meant here.
gethly•1h ago
What do you find so peculiar about it? A lot of people are running Go apps on VPSs.
ape4•57m ago
Next article - why we switched from our own servers to serverless for reliability. A small performance hit was worth it.
sgarland•21m ago
TFA states that they’re running on AWS Fargate.

That said, as an example, an m8g.8xlarge gives you 32 vCPU / 128 GiB RAM for about $1000/month in us-east-1 for current on-demand pricing, and that drops to just under $700 if you can do a 1-year RI. I’m guessing this application isn’t super memory-heavy, so you could save even more by switching to the c-family: same vCPU, half the RAM.

Stick two of those behind a load balancer, and you have more compute than a lot of places actually need.

Or, if you have anything resembling PMF, spend $10K or so on a few used servers and put them into some good colo providers. They’ll do hardware replacement for you (for a fee).

1GZ0•1h ago
Somewhere in Denmark, DHH is smiling
noir_lord•1h ago
Gives him a break from writing out of touch screeds about countries he knows nothing about I guess.
hshdhdhehd•1h ago
30ms P99 does not a cache make.

Source: I work somewhere where you easily get 1ms cached relational DB reads from outside the service.

30ms makes me suspect it went cross-region.

kunley•1h ago
For the best price-to-performance ratio, create your own instances and do whatever is needed on them. Software stacks are not so complicated that you have to delegate everything to the Wizards of Cloud Overcharging.
kburman•1h ago
The takeaway here isn’t that serverless doesn’t work, it’s that the authors didn’t understand what they were building on. Putting a latency-critical API on a stateless edge runtime was a rookie mistake, and the pain they describe was entirely predictable.
nougati•1h ago
The takeaway isn't that they didn't understand, it's that they are sharing information which you agree is valuable
kburman•1h ago
What's valuable about rediscovering that stateless architectures requiring network round-trips for state access are slower than in-memory state? This isn't new information, it's a predictable consequence of their architecture choice that anyone with distributed systems experience could have told them on day zero.
chronark_•57m ago
Not everyone is born with experience in distributed systems
sgarland•50m ago
Sure, but there are some fundamentals about latency that any programmer should know [0] (absolute values outdated, but still useful as relative comparisons), like “network calls are multiple orders of magnitude slower than IPC.”

I’m assuming you’re an employee of the company based on your comments, so please don’t take this poorly - I applaud any and all public efforts to bring back sanity to modern architecture, especially with objective metrics.

0: https://gist.github.com/hellerbarde/2843375
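For a rough sense of those orders of magnitude, here is a small sketch using figures paraphrased from the linked list (the absolute values are dated; the ratios are the point):

    package main

    import (
        "fmt"
        "time"
    )

    const (
        mainMemoryRead  = 100 * time.Nanosecond  // in-process access
        sameDCRoundTrip = 500 * time.Microsecond // network hop inside one datacenter
        crossRegionTrip = 150 * time.Millisecond // e.g. CA -> Netherlands -> CA
    )

    func main() {
        fmt.Printf("same-DC hop  is ~%.0fx a memory read\n", float64(sameDCRoundTrip)/float64(mainMemoryRead))
        fmt.Printf("cross-region is ~%.0fx a same-DC hop\n", float64(crossRegionTrip)/float64(sameDCRoundTrip))
    }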

chronark_•37m ago
I cofounded it yeah

And yeah you’re right in hindsight it was a terrible idea to begin with

I thought it could work but didn’t benchmark it enough and didn’t plan enough. It all looked great in early POCs and all of these issues cropped up as we built it

kburman•48m ago
That's fair, but then the framing matters. The article criticizes serverless architecture rather than acknowledging an evaluation failure.

"Serverless was fighting us" vs "We didn't understand serverless tradeoffs" - one is a learning experience, the other is misdirected criticism.

chronark_•36m ago
Yeah that’s fair
lossolo•12m ago
You don't need experience, and there is not really a lot to know about "distributed systems" in this case; it's basic CS knowledge about networks, latency, and what "serverless" actually is, and you can read up on it. To be honest, to me it reads like people who don't understand the problem they're solving and haven't acquired the necessary knowledge to solve it (either by learning it themselves or by asking/hiring people who have it), and seeing such an amateurish mistake doesn't inspire confidence for the future. You should either hire people who know what they are doing or upgrade your knowledge about the systems you are using before deciding to use them.
ramraj07•1h ago
Bo Burnham said, "self-awareness does not absolve anyone of anything."

But here I don't think they (or their defenders) are yet aware of the real lesson.

There's literally zero information that's valuable here. It's like saying "we used an 18-wheeler as our family car and then we switched over to a regular Camry and solved all our problems." What is the lesson to be learned from that statement?

The really interesting post-mortem would be if they went, "God, in retrospect, what a stupid decision; what were we thinking? Why did we not take a step back earlier and ask why we were doing it this way?" If they wrote a blog post that way, it would likely have amazing takeaways.

chronark_•58m ago
I can assure you that was pretty close to the internal conversation lol

Not sure what the different takeaways would be though?

ramraj07•52m ago
What did your internal discussion conclude for the question "Why did we not take a step back earlier and think, why are we doing it this way?"

I'm genuinely curious, because this is not singling out your team or org; this is a very common occurrence among modern engineering teams, and I've often found myself on the losing end of such arguments. So I am all ears to hear at least one such team tell what goes on in their minds when they make terrible architecture decisions, and whether they learned anything philosophical that would prevent a repeat.

hrimfaxi•40m ago
I have had CTOs (two in my career) tell me we had to use our AWS credits since they were going to expire worthless. Both experiences were at vc-backed startups.
chronark_•39m ago
Oh we had it coming for quite some time and knew we would need to rebuild it; we just didn't have the capacity to do it, unfortunately.

I was working on it on and off, moving one endpoint at a time, but it was very slow going until we hired someone who was able to focus on it.

It didn't feel good at all. We knew the product had massive flaws due to the latency but couldn't address it quickly. Especially cause we had to build more workarounds as time went on. Workarounds we knew would be made redundant by the reimplementation.

I think we had that "wtf are we doing here" discussion pretty early, but we didn't act on it in the beginning; instead we tried different approaches to make it work within the serverless constraints, cause that's what we knew well.

czhu12•1h ago
> Putting a latency-critical API on a stateless edge runtime

Isn’t this the whole point of serverless edge?

It's understood to be more complex, with more vendor lock-in, and more expensive.

The trade-off is that it's better supported and faster by being on the edge.

Why would anyone bother to learn a proprietary platform for a non-critical, latency-agnostic service?

kburman•1h ago
You're confusing network proximity with application architecture. Edge deployment helps connection latency. Stateless runtime destroys it by forcing every cache access through the network.

The whole point of edge is NOT to make latency-critical APIs with heavy state requirements faster. It's to make stateless operations faster. Using it for the former is exactly the mismatch I'm describing.

Their 30ms+ cache reads vs sub-10ms target latency proves this. Edge proximity can't save you when your architecture adds 3x your latency budget per cache hit.

osigurdson•37m ago
Realistically, they should be able to do sub-ms cache hits which land in the same datacenter. I know Cloudflare doesn't have "named" datacenters like other providers, but at the end of the day there are servers somewhere, and if your lambda runs twice in the same one there is no reason why a pull-through cache can't experience a standard intra-datacenter latency hit.

I wonder if there is anything other than good engineering getting in the way of this, and even sub-µs in-process pull-through caches for busy lambda functions. After all, if my lambda is getting called 1000x per second from the same point of presence, why wouldn't they keep the process in memory?
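A sketch of the kind of in-process pull-through cache being described, assuming the platform actually keeps the process warm between invocations (all names illustrative):

    package cache

    import (
        "sync"
        "time"
    )

    type entry struct {
        val     string
        expires time.Time
    }

    // PullThroughCache serves hot keys from process memory and only falls back
    // to the remote store (KV, Redis, origin DB, ...) on a miss or expiry.
    type PullThroughCache struct {
        mu     sync.RWMutex
        ttl    time.Duration
        items  map[string]entry
        loader func(key string) (string, error) // remote fetch on miss
    }

    func New(ttl time.Duration, loader func(string) (string, error)) *PullThroughCache {
        return &PullThroughCache{ttl: ttl, items: map[string]entry{}, loader: loader}
    }

    func (c *PullThroughCache) Get(key string) (string, error) {
        c.mu.RLock()
        e, ok := c.items[key]
        c.mu.RUnlock()
        if ok && time.Now().Before(e.expires) {
            return e.val, nil // warm path: a plain map read, well under a microsecond
        }
        val, err := c.loader(key) // miss path: the network round trip
        if err != nil {
            return "", err
        }
        c.mu.Lock()
        c.items[key] = entry{val: val, expires: time.Now().Add(c.ttl)}
        c.mu.Unlock()
        return val, nil
    }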

whynotmaybe•43m ago
On serverless, whenever you call your code, the infrastructure first has to find a place to run it, and if there's no running instance available, it must fire up a new instance to run your code.

That's hot start vs. cold start.
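On AWS Lambda with Go, for example, the usual way to soften that difference is to pay one-off setup costs at package scope, which only runs on a cold start, rather than inside the handler. A sketch assuming the aws-lambda-go SDK:

    package main

    import (
        "context"

        "github.com/aws/aws-lambda-go/lambda"
    )

    // Package-level initialization runs once per cold start, when a new instance
    // is fired up. Warm invocations reuse the same process and skip this cost,
    // so expensive setup (config, DB pools, caches) belongs here.
    var config = loadExpensiveConfig()

    func loadExpensiveConfig() map[string]string {
        // stand-in for reading secrets, opening connection pools, warming caches
        return map[string]string{"region": "us-east-1"}
    }

    func handler(ctx context.Context, event map[string]any) (string, error) {
        // runs on every invocation, hot or cold
        return "configured region: " + config["region"], nil
    }

    func main() {
        lambda.Start(handler)
    }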

torginus•1h ago
My personal experience is that if you want guaranteed anything (quick scaling, latency, CPU, disk or network throughput), your best bet is to manually provision EC2 instances (or use some API that does). Once you give up control hoping to gain performance for free, you usually end up with an unfixable bottleneck.
randomtoast•57m ago
If you're looking for a middle ground between VMs and serverless, ECS Fargate is a good option. Because a container is always running, you won't experience any cold start times.
sgarland•43m ago
Yes, though unless you’re provisioning your own EC2s for them to run on, you have no guarantee about the server generation, and IME AWS tends to provision older stuff for Fargate.

This may or may not matter to you depending on your application’s needs, but there is a significant performance difference between, say, an m4 family (Haswell / Broadwell) and an m7i family (Sapphire Rapids) - literally a decade of hardware improvements. Memory performance in particular can be a huge hit for latency-sensitive applications.

evantbyrne•27m ago
ECS is good, just expensive, and it still requires more devops than it should. Docker Swarm is an easy way to run production container services on VMs. I built a free Go tool called Rove that provisions fresh Ubuntu VMs in one command and diffs updates. It's also easy enough to use Swarm directly.
osigurdson•46m ago
There isn't much for them to mess with in EKS either. It is very close to the metal and easy to reason about.
Esophagus4•1h ago
I’ve found this to be true, with one caveat.

Most cloud pain people experience is from a misunderstanding / abuse of solutions architecture and could have been avoided with a more thoughtful design. It tends to be a people problem, not a tool problem.

However, in my experience cloud vendors sell the snot out of their offerings, and the documentation is closer to marketing than truthful technical documentation. Their products’ genuine performance is a closely guarded proprietary secret, and the only way to find out… e.g. whether Lambdas are fast enough for your use case, or whether AWS RDS cross-region replication is good enough for you… is to run your own performance testing.

I’ve been burned enough times by AWS making it difficult to figure out exactly how performant their services are, and I’ve learned to test everything myself for the workloads I’ll be running.

Danjoe4•1h ago
This is exactly why I'd rather get a fat VPS from a reputable provider. As long as the bandwidth is sufficient the only limitation is vertical scaling.
dlisboa•40m ago
I'm partial to this; the only thing I've found that is harder to achieve is the "edge" part of cloud services. Having a server on each continent is enough for most needs, but having users route to the closest one is not as clear to me.

I know about Anycast but not how to make it operational for dynamic web products (not like CDN static assets). Any tips on this?

kijin•35m ago
Have a server or two on each continent for all your actual computing needs. Slap on a CDN to do the routing and lightweight proxying. They're good at it.
whstl•1h ago
> the documentation is closer to marketing than truthful technical documentation

I participated in AWS training and certification given by AWS for a company to obtain a government contract and I can 100% say that the PAID TRAINING itself is also 100% marketing and developer evangelism.

ivape•16m ago
Infra will always be full of so much nonsense because it’s really hard to tell successful developers their code and system design is unusable. People use it because they are paid to do so usually, but it’s literally some of the worst product development I’ve ever seen.

AWS will hopefully be reduced to natural language soon enough with AI, and their product team can move on (most likely they moved on a long time ago, and the revolving door at the company meant it was going to remain a shittily thought-out platform in long-term maintenance).

gonzo41•57m ago
I feel like every cloud build meeting should have a moment where everyone has to defend the question "Wait! could this be a regular database with a regular app on a server with a regular cache?"
stego-tech•40m ago
You took the words right out of my mouth. Between aggressive salespeople marketing any given product as a panacea for everything and mandates from above to arbitrarily use X thing to do Y, there’s a lot of just plain bad architecture out there.
osigurdson•17m ago
>> is to run your own performance testing

I think they are shooting themselves in the foot with this approach. If you have to run a Monte Carlo simulation on every one of their services, on your own time and at your own expense, just to understand performance and costs, people will naturally shy away from such black boxes.

usui•11m ago
> people will naturally shy away from such black boxes.

I don't think this is true. In fact, it seems that in the industry, many developers don't proceed with caution and go straight into usage, only to find the problems later down the road. This is a result of intense marketing on the part of cloud providers.

ochronus•22m ago
But but it's webscale!
chronark_•1h ago
Author of that blog here, happy to answer any questions :)
flerchin•1h ago
Really great writeup. The charts tell the story beautifully, and the latency gains are surely a win for your company and customers. I always wonder about the tradeoffs. Is there a measurable latency difference for your non-colocated customers? What does maintenance look like for your Go servers? I assume that your Cloudflare costs dropped?
chronark_•28m ago
It's faster for non-colocated customers too, weirdly.

I think it's cause connections can be reused more often. Cloudflare Workers are really prone to doing a lot of TLS handshakes cause they spin up new instances constantly.

Right now we're just running on AWS Fargate for the Go servers, so there really isn't much maintenance at all. We'll be moving that into EKS soon though, cause we are starting to add more stuff and need k8s anyways.

wiether•51m ago
Not a question: thanks for the writeup and for the honesty of saying that serverless is not inherently bad, just not the right fit for your use case!

Unfortunately too many comments here are quick to come to the wrong conclusion, based only on the title. Not a reason to change it though!

chronark_•31m ago
Thanks

It’s totally fair criticism that the title and wording is a bit clickbaity

But that’s ok

torginus•1h ago
I think someone should make a timeline of software technology eras, each beginning with 'why XYZ is the future' and ending with articles like this.
yilugurlu•1h ago
These two have resonated with me deeply.

- Eliminated complex caching workarounds and data pipeline overhead

- Simplified architecture from distributed system to straightforward application

We, as developers/engineers (put whatever title you want), tend to make things complex for no reason sometimes. Not all systems have to follow state-of-the-art best practices. Many times, secure, stable, durable systems outperform these fancy techs and inventions. Don't get me wrong, I love to use all of these technologies and fancy stuff, but sometimes that old, boring, monolithic API running on an EC2 solves 98% of your business problems, so no need to introduce ECS, K8S, Serverless, or whatever.

Anyway, I guess I'm getting old, or I understand the value of a resilient system, and I'm trying to find peace xD.

ramraj07•1h ago
But when were serverless systems like Lambda and Cloudflare Workers "best practices" for low-latency APIs?
1-6•1h ago
I think this is what is being said:

"Down with serverless! Long live serverless!"

voodooEntity•1h ago
As someone who worked with serverless for multiple years (mostly Amazon Lambda, but others too), I can absolutely confirm the author's points.

While it "takes away" some work from you, it adds work elsewhere to solve the "artificially induced problems".

Another example I hit was a hard upload limit. I ported an application to a serverless variant that had an import API for huge customer exports. Shouldn't be a problem, right? Just set up an ingest endpoint and some background workers to process the data.

Though then I learned: I can't upload more than 100 MB at a time through the "API gateway" (basically their proxy to invoke your code), and when I asked if I could change it somehow, I was just told to tell our customers to upload smaller file chunks.

While from a "technical" perspective this sounds logical, our customers aren't gonna start swapping out all their software so that we get a "nicer upload strategy".

For me this is comparable with "it works in a vacuum" type of things. It's cool in theory, but as soon as it hits reality you will realize quite fast that the time and money you saved by changing from permanently running machines to serverless, you will spend in other ways to solve the serverless specialities.

akdev1l•21m ago
The way to work around this issue is to provide a presigned S3 URL.

Have the users upload to S3 directly, and then they can either POST you what they uploaded or you can find some other means of correlating the input (e.g. files in S3 are prefixed with the request ID or something).

I agree this is annoying and maybe I've been in the AWS ecosystem for too long.

However, having an API that accepts an unbounded amount of data is a good recipe for DoS attacks. I suppose the 100 MB limit is outdated as the internet has gotten faster, but eventually we do need some limit.
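A sketch of that flow with the aws-sdk-go-v2 presign client; the bucket and key here are made-up placeholders:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        presigner := s3.NewPresignClient(s3.NewFromConfig(cfg))

        // Hand this URL to the customer; they PUT the large file straight to S3,
        // bypassing the API gateway's payload limit entirely.
        req, err := presigner.PresignPutObject(ctx, &s3.PutObjectInput{
            Bucket: aws.String("customer-imports"),      // hypothetical bucket
            Key:    aws.String("uploads/req-12345.csv"), // e.g. keyed by request id
        }, s3.WithPresignExpires(15*time.Minute))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("upload URL:", req.URL)
    }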

tacker2000•1h ago
Incredible that these kinds of services were hosted like this.

I guess they never came out of MVP, which could warrant using serverless, but in the end it makes 0 sense to use some slow solution like this for the service they are offering.

Why didn't they go with a self-hosted backend right away?

It's funny how nowadays most devs are too scared to roll their own and just go with the cloud offerings that cost them tech debt and actual money down the road.

chronark_•1h ago
We did initially, but thought Cloudflare was a better solution for scalability and latency.

We believed their docs/marketing without doing extensive benchmarks, which is on us.

The appeal was also to use the same typescript stack across everything, which was nice to work with

ramraj07•59m ago
Where did their marketing or documentation say this service is perfect for low latency APIs?
chronark_•51m ago
I doubt they literally said "perfect for low latency APIs", but their messaging is definitely trying to convince you that they're fast globally; just look at the workers.cloudflare.com page.
K0IN•57m ago
After building my first serverless/Cloudflare Workers app, this is why I migrated to Deno. Deno enables you to run the same codebase in Deno (self-hosted/local) and in Deno Deploy (Deno's serverless platform).

I wanted my app to be self-hostable as well, and Cloudflare Workers is a hard ecosystem lock-in to their platform, which makes it undesirable (imo).

Here is a link to my reasoning from back then: https://github.com/K0IN/Notify/pull/77#issuecomment-16776070...

scottydelta•39m ago
I ported my worker project to Django since Cloudflare Workers wouldn't allow selecting the region workers are hosted in, which is generally required for data compliance. This is something all cloud providers offer from day one, yet Cloudflare made it an enterprise feature.

Also, the vendor lock-in doesn't help, with Durable Objects and D1 instead of simply doing what Supabase and others are doing by providing Postgres or standard SQLite as a service.

gloomyday•53m ago
I think developers are drowning in tools to make things "easy", when in truth many problems are already easy with the most basic stuff in our tool belt (a compiler, some bash scripts, and some libraries). You can always build up from there.

This tooling fetish hurts both companies and developers.

bamboozled•49m ago
Except AWS Lambda is stupidly cheap!
Esophagus4•40m ago
For certain workloads :)

And that is actually the advantage of serverless, in my mind. For some low-traffic workloads, you can host for next to nothing. Per invocation, it is expensive, but if you only have a few invocations of a workload that isn't very latency sensitive, you can run an entirely serverless architecture for pennies per month.

Where people get burned is moving high traffic volumes to serverless... then they look at their bill and go, "Oh my god, what have I done!?" Or they try to throw all sorts of duct tape at serverless to make it highly performant, which is a fool's errand.

sgarland•40m ago
It’s that, and the fact that precious few people seem to understand fundamentals anymore, which is itself fed by the desire to outsource everything to 3rd parties. You can build an entire stack where the only thing you’ve actually made is the core application, and even that is likely to be influenced if not built by AI.

The industry is creating learned helplessness.

akdev1l•19m ago
A lot of people don’t know about compilers, bash scripts and libraries.
codegeek•24m ago
"Self-Hosting : Being tied to Cloudflare's runtime meant our customers couldn't self-host Unkey. While the Workers runtime is technically open source, getting it running locally (even in dev mode) is incredibly difficult.

With standard Go servers, self-hosting becomes trivial:"

A key point that I always make: serverless is good if you want a simple periodic task to run intermittently without worrying about a full-time server. The moment things get more complex than that (which in the real world it almost always does), you need a proper server.
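For what it's worth, the "standard Go servers" the quoted section alludes to really are only a few lines to stand up; a generic sketch (not Unkey's actual code):

    package main

    import (
        "log"
        "net/http"
        "os"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })

        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }
        // One static binary plus this process: the same artifact runs on a laptop,
        // a VM, Fargate, or a customer's own hardware.
        log.Fatal(http.ListenAndServe(":"+port, mux))
    }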