In particular, I wanted to make sure Go's goroutines handle fan-out work nicely (for example, receiving a request, then logging it and publishing an event) without notable performance degradation.
I have a fair amount of AWS experience, with serverless architectures in particular. There, scheduling basically means EventBridge (which does have a generous free tier). But what if you're not inside the AWS ecosystem at all? The infrastructure itself can get costly, especially now that they've removed the 12-month free tier.
So I figured something like cron-as-a-service could be a good starting point.
I decided on a straightforward stack: a Go API server, Postgres, and Redis (as both job queue and cache), all running on a single €5/month Hetzner VPS behind Caddy.
I'd read that Redis can double as a queue, so I wanted to try that out too: the job queue runs on LPUSH/BRPOP, and API keys are cached with a 5-minute TTL, which saves a Postgres lookup on every authenticated request.
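The consumer side of that queue is a worker pool. Here's a channel-based sketch of the shape (a Go channel standing in for the Redis list, since the receive blocks the same way BRPOP does); the names and pool size are illustrative, not from the actual service:

```go
// Worker-pool sketch: N workers draining one queue. In the real service
// the channel receive is a Redis BRPOP and enqueueing is an LPUSH; the
// buffered channel here just stands in for the Redis list.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var processed atomic.Int64 // jobs handled, counted for the demo

func worker(id int, jobs <-chan string, done *sync.WaitGroup) {
	defer done.Done()
	for j := range jobs { // blocks like BRPOP until a job is available
		fmt.Printf("worker %d ran %s\n", id, j)
		processed.Add(1)
	}
}

// runPool starts `workers` goroutines, enqueues every job, and waits
// for the pool to drain the queue.
func runPool(workers int, queue []string) {
	jobs := make(chan string, len(queue))
	var done sync.WaitGroup
	for i := 0; i < workers; i++ {
		done.Add(1)
		go worker(i, jobs, &done)
	}
	for _, j := range queue {
		jobs <- j // the LPUSH side
	}
	close(jobs)
	done.Wait()
}

func main() {
	runPool(4, []string{"job-a", "job-b", "job-c"})
	fmt.Println("processed:", processed.Load())
}
```

With Redis in the middle, each worker just loops on BRPOP with a blocking timeout instead of ranging over a channel, so the same pattern survives restarts and works across processes.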
To validate the performance story, I ran a benchmark from within Europe — 10,000 requests, 100 concurrent, against the live API:
Requests/sec: 621
Avg latency: 158ms (network-dominated; server is in Hetzner Germany)
p99: 442ms
Fastest: 84ms
Most of that latency is the round trip across Europe; the actual server-side processing (Go handler plus a local Postgres query) takes far less. And this is on a very basic Hetzner VPS.
Would love to get any feedback — especially on the scheduler design and whether the worker pool approach could handle real load.