frontpage.

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•39s ago•0 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•5m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•7m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•13m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•16m ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•17m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•20m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•22m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•23m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•26m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•28m ago•4 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•29m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
3•1vuio0pswjnm7•31m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•33m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•35m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•38m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•43m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•44m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•48m ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

Running our Docker registry on-prem with Harbor

https://dev.37signals.com/running-our-docker-registry-on-prem-with-harbor/
155•airblade•5mo ago

Comments

AtomicOrbital•5mo ago
Harbor is great ... simple to install ... it's the only container registry I ever use
fuomag9•5mo ago
And in my experience it's the only one that has RBAC, can be deployed on-premises, and actually works. I've tried everything at this point.
cyberpunk•5mo ago
? GitLab?
fuomag9•5mo ago
It doesn't make any sense to deploy a full GitLab just to get a Docker registry. RBAC is also tied to repositories and users in a way that is unconventional to manage.
kirici•5mo ago
I am currently looking into Zot; what were your blockers/hiccups with it?
tetha•5mo ago
When we looked at modernizing our image hosting, it came down to Zot vs. Harbor, and we preferred Zot as it looked easier to deploy. Just a Go binary with a few environment variables connecting to our MinIO; what could be easier?

However, when getting the config prod-ready, we started to trip over one thing after another. First, my colleague was struggling to get the scale-out clustering to work in our container management. Right, use the other deployment method for HA. Then we found that apparently, if you enable OIDC, all other authentication methods get deactivated, so suddenly container hosts would have to log in with tokens... somehow? And better hope your OIDC provider never goes down. And then, on top of that, we found a bug where Zot possibly doesn't remove blobs from MinIO during GC.

At that point we reconsidered and went with Harbor.

silverwind•5mo ago
For simple use cases, the official registry is good enough too.
hn_throw2025•5mo ago
If you use paid S3 as the storage layer, then you want to control size.

With the self-hosted official registry, the stop-GC-restart process is a PITA.

qmarchi•5mo ago
Would be interested to see a cost breakdown of ECR vs. S3 and compute costs.
dwedge•5mo ago
Unless I totally misread it, it's their own S3 cluster
nickjj•5mo ago
ECR is kind of hard to beat if you're ok with being in the cloud.

The last time I used it earlier this year for a company already on AWS, it was ~$3/month per region to store 8 private repos, and it was really painless to have a flexible, automated lifecycle policy that deleted old image tags. It supports cross-region replication too. All of that comes without maintenance or compute costs, and if you're already familiar with Terraform, etc., you can automate all of it in a few hours of dev time.
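
For illustration, the kind of lifecycle policy being described can be set up in a few lines. A minimal sketch using boto3; the repository name and rule values here are made up, and the policy JSON should be checked against the current ECR docs:

    import json
    import boto3

    # Hypothetical repository; adjust region/credentials to your account.
    ecr = boto3.client("ecr", region_name="us-east-1")

    # Expire untagged images quickly and cap the total number of images kept.
    lifecycle_policy = {
        "rules": [
            {
                "rulePriority": 1,
                "description": "Expire untagged images after 14 days",
                "selection": {
                    "tagStatus": "untagged",
                    "countType": "sinceImagePushed",
                    "countUnit": "days",
                    "countNumber": 14,
                },
                "action": {"type": "expire"},
            },
            {
                "rulePriority": 2,
                "description": "Keep only the 20 most recent images",
                "selection": {
                    "tagStatus": "any",
                    "countType": "imageCountMoreThan",
                    "countNumber": 20,
                },
                "action": {"type": "expire"},
            },
        ]
    }

    ecr.put_lifecycle_policy(
        repositoryName="my-app",  # hypothetical repo name
        lifecyclePolicyText=json.dumps(lifecycle_policy),
    )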

benterix•5mo ago
It depends on where the compute resources using these images are located.
fastest963•5mo ago
Why does the Harbor VM need 32 cores and 64GB of RAM? Especially if it's only serving 32,000 pulls over 2 months.
01HNNWZ0MV43FF•5mo ago
I want something like "This could have been an email", but: "This could have been a Caddy instance and static files".

Hell, Git doesn't even need the Git protocol if you run `update-server-info`.

mcpherrinm•5mo ago
It does seem mostly possible to host a Docker registry with static files and a bit of config:

https://github.com/jpetazzo/registrish

I haven't tried running this yet, but it seems worth keeping in mind. It's relatively simple software, so the idea could probably be adapted pretty easily to other situations.
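
To see why static files can be enough: a pull is just a handful of GETs against the registry's v2 API. A minimal sketch, assuming an unauthenticated registry and a Docker schema 2 manifest, with the registry URL and image name as placeholders:

    import requests

    REGISTRY = "https://registry.example.com"  # placeholder host serving the files
    NAME = "library/hello"                      # placeholder image name
    TAG = "latest"

    # 1. Fetch the manifest for the tag.
    manifest = requests.get(
        f"{REGISTRY}/v2/{NAME}/manifests/{TAG}",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    ).json()

    # 2. Fetch the config blob and each layer blob by digest. On a static host,
    #    these are just files named after their digests.
    config = requests.get(
        f"{REGISTRY}/v2/{NAME}/blobs/{manifest['config']['digest']}"
    ).content

    for layer in manifest["layers"]:
        blob = requests.get(f"{REGISTRY}/v2/{NAME}/blobs/{layer['digest']}").content
        print(layer["digest"], len(blob), "bytes")

Pushes are the part that needs dynamic endpoints (uploads, tag writes), which is why a read-only mirror is the easy case.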

arccy•5mo ago
It's certainly possible with S3, so probably for any other static host: https://news.ycombinator.com/item?id=40942732
adolph•5mo ago
I recall ttl.sh being mentioned here; as I looked it up [0], it uses, via Docker, a CNCF project called Distribution Registry [1], which implements the core container registry functions (and appears to have additional utility, like acting as a pull-through cache).

0. https://github.com/replicatedhq/ttl.sh/blob/main/registry/en...

1. https://distribution.github.io/distribution/

nodesocket•5mo ago
Agreed, that seems insanely inefficient. That's less than a pull a minute.
nchmy•5mo ago
It does seem like a lot. Though it's also a rounding error for them. It is probably sized to optimize for something other than just $.
maratc•5mo ago
FWIW,

   > During this time, Harbor has served more than 32,000 pulls under company-wide use in day-to-day business.
It is possible to read this as the "32,000 pulls" being a daily number, not a total one.
marginalia_nu•5mo ago
Even if it is a daily number, it's not very much. There are 86,400 seconds in a day. Even limiting the time to an 8-hour business day, this is only around 1 pull per second.
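
The arithmetic is trivial, but for reference, under the daily-number reading:

    # Reading "32,000 pulls" as a daily figure over an 8-hour business day.
    pulls_per_day = 32_000
    business_day_seconds = 8 * 60 * 60           # 28,800 seconds
    print(pulls_per_day / business_day_seconds)  # ~1.1 pulls per second
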
mystifyingpoi•5mo ago
> Why does the Harbor VM need ...

They never say that it needs these resources; they say that the current VM has this config. Probably overkill by 1000%.

Symbiote•5mo ago
For what it's worth, I have an on-prem Nexus server with a Docker repository. It has 8 cores and 16GB RAM. It has 82,000 hits/day in the webserver log, though 99.9% of them transfer only a few kB, so I assume it's a metadata check and the client already has the correct version.

The same Nexus is also hosting our Maven and NodeJS repositories. That has 1,800,000 hits per day, although all but 120,000 of them return HTTP 404. (I think one of our clients has misconfigured their build server to use our Maven repository as a mirror of Maven Central, but as it's just an AWS IP I don't know who it is.)

I'm sure it's overprovisioned, but the marginal cost of 2, 4 or 8 cores and 4, 8 or 16GB RAM isn't much when we buy our own hardware.

nchmy•5mo ago
> the marginal cost of 2, 4 or 8 cores and 4, 8 or 16GB RAM isn't much when we buy our own hardware

This is the crux of it all. Renting bare metal (e.g. from Hetzner) is 10x cheaper than AWS EC2, so I can only imagine how much cheaper it is when you buy the hardware directly.

Symbiote•5mo ago
At a very rough estimate with Hetzner's calculator, which doesn't have exactly the systems I most recently bought, our servers cost about as much as 18 months of renting the closest Hetzner dedicated server.

Add to that rackspace costs and staff time doing installation (or the cost of remote hands etc.). I don't have our figures for rackspace costs, but staff time is fairly low — an hour or two when the server is purchased and later thrown out, and maybe the same to replace an HDD at some point in the 5-8 years we keep it.

Also add the time to get competing quotes from Dell, HP etc and decide on the configuration, but then a similar process is needed to choose from Hetzner vs their competitors.

nchmy•5mo ago
Amazing. So it isn't so much that Hetzner is crazy cheap, but that everyone else is crazy expensive
reilly3000•5mo ago
I would have expected that they also started saving on bandwidth, although they still have to pay for intra-regional transfer.
easton•5mo ago
Do they? If all this is in their DC then it’s going over their wires, unless their colo provider somehow had visibility into their traffic (which I’d guess they don’t).
E39M5S62•5mo ago
It's in Deft's ORD and IAD data centers, using their network for ingress/egress. Still has to go over transit between those two locations.
reactordev•5mo ago
But that might be baked into their enterprise pricing, since it's still "within" Deft. Site-to-site is common.
E39M5S62•5mo ago
It's not within Deft because they rely on transit between ORD and IAD. That was the case a few years ago when I worked there, it's probably still the same.
CBLT•5mo ago
I also run Harbor. I use the official Helm chart; it's a little janky and doesn't support a couple of things we want: it only works with one of ArgoCD/ExternalSecretsOperator, and it doesn't support Redis TLS.

Contrary to the author of this post, we just run one (the "source of truth") and use caching proxies in other regions. Works fine for us.

BlackjackCF•5mo ago
What’s jank about it?
CBLT•5mo ago
I mentioned two things that were broken:

1. Doesn't work with ExternalSecretsOperator and ArgoCD, which I happen to use. This is because the author of the Harbor chart decided not to use k8s concepts like secretRef in a podTemplate. Instead, at Helm template time, it looks up the secret data and writes it into another secret, which is then included as an envFrom. This interacts poorly with ExternalSecretsOperator in general, because it breaks the lifecycle control that ESO has. It's completely broken with ArgoCD, because ArgoCD disables secret lookups by charts for pretty valid security concerns. No other chart I've come across does secret lookups during Helm template time. Even the Helm docs tell you it's not correct.

2. Harbor requires Redis, but the Helm chart doesn't correctly pipe in the connection configuration. Redis can't be behind TLS, or the chart won't work.

dwroberts•5mo ago
You could always put the Helm chart in a Kustomize and change the things you don't like.

`--enable-helm` isn't supported everywhere, but Argo definitely allows it.

lijok•5mo ago
We just went through this whole Kustomize shenanigan at our company. Seems completely asinine. Why not just fork the chart and fix it yourself?
p_l•5mo ago
... or, the quite common case: have Helm write the template once, fix it, port it to your own process, delete Helm, live happy.
benterix•5mo ago
> live happy

Until the next major upgrade.

p_l•5mo ago
In my experience, an update big enough to require a major rewrite probably warrants redoing a portion of this process anyway, just to figure out what the upgrade path is.
MPSimmons•5mo ago
Is there no Argo plugin for your secret store? In a previous life, we used Argo Vault Plugin to good effect.
muhehe•5mo ago
This looks nice. What would be a good on-prem S3 companion for this? I know of MinIO, but I think there was some recent drama about it (I don't know the specifics, just a feeling).
stephenlf•5mo ago
The article mentions Pure Flashblade. Looks like dedicated hardware. https://www.purestorage.com/products/unstructured-data-stora...
muhehe•5mo ago
Yes, but I'd rather have some self-hosted software solution.
opless•5mo ago
Not S3, but I use Longhorn to store persistent data on my clusters.

Sometimes you'll need admin skills, but only if you spread your cluster out over availability zones or poor connections

redblueflame•5mo ago
If you want to have replication built in, you can give https://garagehq.deuxfleurs.fr/ a try
muhehe•5mo ago
Thanks, that looks good. Do you have some real experience with this in production?
mdaniel•5mo ago
Just make sure the AGPLv3 is compatible with your policies https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/v2.0.0/L...
phireal•5mo ago
MinIO used to be the de facto choice here, but they did a bait-and-switch recently and removed the UI from the free version. Garage is probably the closest to best-in-class for open source on-prem.
muhehe•5mo ago
But isn't it still available separately?

https://github.com/minio/object-browser?tab=readme-ov-file

Jedd•5mo ago
Does garage-dev have a GUI? I thought it did not.

I'm using it in my lab, along with an older instance of minio, and both are excellent choices I think. (I'm running both as HA jobs within a Hashicorp Nomad cluster, which complicates / eases various things.)

I had a vague memory of MinIO losing some favour a while back because they switched their underlying storage from a basic object-to-normal-file mapping to something more complex. A lot of home users, and I guess developers, liked the ability to navigate their S3-esque buckets via normal Linux file system tooling.

arccy•5mo ago
https://github.com/seaweedfs/seaweedfs is much simpler than MinIO
mdaniel•5mo ago
https://github.com/seaweedfs/seaweedfs#compared-to-other-fil... is handy, and I like that they support "sane" metadata stores and don't just do something dumb like make me run etcd
Jedd•5mo ago
Do you actually need an S3-alike?

I run distribution [0] / registry [1] as a Docker (via Nomad) job, and it just uses a shared NFS mount in my cluster, i.e. an ext4 FS underneath.

Currently has 12GB and 52 repositories, and is very performant. It's just a file server.

Unless your size requirements are huge, the complexity of having your registry rely on a self-hosted S3-alike sounds like a lot of pain for future-muhehe to contend with if / when the S3-alike hiccups.

[0] https://github.com/distribution/distribution

[1] https://github.com/Joxit/docker-registry-ui

muhehe•5mo ago
For an image registry... probably not that much. But it is quite common for other software, often software that targets containers/k8s/... I already have a few of those, and it would be quite handy to have some good local S3.
Jedd•5mo ago
Concur - as noted in a sibling comment, I'm using both on Nomad/Docker 5-node clusters, but started with MinIO, then ran up garage-dev to compare (I just haven't migrated data over from MinIO yet).

I use mine primarily for long-term stores for Grafana's Mimir and Loki products.

Anyway, MinIO & Garage are both lightweight and happily run in Docker, so if you've got a small cluster at home it would take maybe an hour or two to install them both from scratch.

ticklyjunk•5mo ago
We are using DeepSpace storage for this. We can get/put objects into the cloud as one of our target volumes. It works as an auto-archive, writing anything in the FS that is over 90 to a compressed object and leaving behind a stub. You can point Harbor (or other tools like Mimir/Loki) to a DeepSpace endpoint, which looks like a standard S3 target. Then add policies, and the files get moved, replicated, and versioned to tape, cloud, or disk array in the background. The users just interact with the file system as usual, and admins have a UI with a catalog that shows where everything actually is.
denysvitali•5mo ago
I'm confused about why they decided to populate the cache by replicating the entirety of Docker Hub instead of using a cache that gets populated on the first pull.
Loic•5mo ago
Because they have money to burn?
justincormack•5mo ago
That was just confusingly worded, but it seems like it was 80 repos: all of their own stuff on Docker Hub, not all of Docker Hub.
lijok•5mo ago
We self-host Harbor as well; it's fairly painless. It has SSO out of the box, a Terraform provider that covers everything, and for the most part it just works.

The issues we’ve had so far:

- No programmatic way to retrieve the token that's required for 'docker login', so we had to create a robot account per user and pop their creds into our secrets store (see the sketch after this list).

- Migrating between sites by cloning the underlying S3 bucket and spinning up the new Harbor instance on top of it does not work. Weird issues with dropped pulls.

- RBAC goes down to the project level, not the repository level, complicating some of our SDLC controls.

- CSRF errors every time you try to do anything in the UI

- A lenient API and a lack of docs meant that figuring out the right syntax for things like tag immutability rules via Terraform was a bit of a PITA.

So some small issues, but definitely a great piece of software.
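
On the first point, the robot-account workaround can at least be scripted against Harbor's REST API. A rough sketch, assuming Harbor 2.x and admin credentials; the endpoint and field names are from memory of the v2.0 robots API and should be checked against your instance's Swagger UI:

    import requests

    HARBOR = "https://harbor.example.com"   # placeholder instance
    AUTH = ("admin", "changeme")            # placeholder admin credentials

    # Create a project-level robot account with pull/push on one project.
    payload = {
        "name": "ci-pusher",                # Harbor prefixes the final name
        "duration": 30,                     # days; -1 for no expiry
        "level": "project",
        "disable": False,
        "permissions": [
            {
                "kind": "project",
                "namespace": "myproject",   # placeholder project name
                "access": [
                    {"resource": "repository", "action": "pull"},
                    {"resource": "repository", "action": "push"},
                ],
            }
        ],
    }

    resp = requests.post(f"{HARBOR}/api/v2.0/robots", json=payload, auth=AUTH)
    resp.raise_for_status()
    robot = resp.json()

    # The generated secret is returned only once; push it straight into your secrets store.
    print(robot["name"], robot["secret"])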

delusional•5mo ago
What's the upgrade story like? Their official website makes it sound like a pain (stopping the software, backing up the database, changing the settings syntax, running some installer). I would expect something built for Kubernetes to just do the right thing on startup (such that upgrading is simply switching out the image).
yorwba•5mo ago
I upgraded Harbor before; it was a pain. I think you're encouraged to use their official Helm chart, and then it's supposed to be fairly seamless https://goharbor.io/docs/2.13.0/administration/upgrade/helm-... but if your predecessor decided against that option, separately adjusting the configuration for all the moving pieces is fairly annoying. Also, I misconfigured something and ended up having to read Harbor source code because the error messages weren't very helpful. Fortunately, I had the presence of mind to first practice on a secondary installation created from a backup. It's definitely not something where you can stop production, install the update, and expect it to come back up in working order.
tedivm•5mo ago
The lack of OIDC support for Harbor has been the biggest annoyance for me. I'd love to be able to push from GitHub Actions to Harbor without needing robot users.
mdaniel•5mo ago
I was shocked to read such a thing in 2025, but either there is some nuance to your observation or your information is outdated https://goharbor.io/docs/2.13.0/administration/#:~:text=or%2...
tedivm•5mo ago
You're mixing up human OIDC and machine-flow OIDC. You can use OIDC to log in as a user, but you can't use OIDC to allow federated trust from something like GitHub Actions.

If you can find an example of OIDC with GitHub Actions and Harbor, I'd love to see it.

vergessenmir•5mo ago
Harbor has its pain points, but it is infinitely easier to get up and running compared to crufty Artifactory.

One glaring omission is the lack of support for proxying docker.io without the project name, i.e. pulling nginx:latest instead of /myproject/nginx/nginx:latest.

The workaround involves URL-rewrite magic in your proxy of choice.

nickjj•5mo ago
> pulling and pushing our images over the internet dozens of times a day caused us to hit the contracted bandwidth limit with our datacenter provider Deft repeatedly

I wonder what they were doing that resulted in blowing out their Docker layer cache on every pull and push.

Normally only a layer diff would be sent over the wire, such as a code change that didn't change your dependencies.
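
For context on how that diffing works: before transferring anything, clients ask the registry whether it already has each layer by digest, and only the missing ones move over the wire. A minimal sketch of that existence check against the v2 API, with placeholder registry, repository, and digest values:

    import requests

    REGISTRY = "https://harbor.example.com"   # placeholder registry
    NAME = "myproject/app"                    # placeholder repository
    DIGEST = "sha256:" + "0" * 64             # placeholder layer digest

    # HEAD /v2/<name>/blobs/<digest> returns 200 if the layer already exists,
    # 404 if it still has to be transferred.
    resp = requests.head(f"{REGISTRY}/v2/{NAME}/blobs/{DIGEST}")
    print("layer already present" if resp.status_code == 200 else "layer missing")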

prmoustache•5mo ago
I'd rather have the agents prune their Docker cache (or destroy and recreate the agent) every night, but it is not uncommon to see pipelines use the --no-cache option on every run to make sure they get the latest security updates.
KronisLV•5mo ago
So what are the thoughts of folks who have used Nexus and moved to Harbor?

In my experience Nexus is a bit weird to administer, and sometimes the Docker cleanup policies straight up don't work (junk left over in the blob stores even if you try to clean everything), but it also supports all sorts of other formats, such as repositories for Maven and NuGet. Kind of hungry in terms of resources, though.

firesteelrain•5mo ago
We run both Nexus and Harbor. I am about to dump Harbor because teams don't use it and, frankly, Nexus provides the same functionality.
mystifyingpoi•5mo ago
Nexus can be flaky, but it's pretty universal as you say. Harbor is a hard sell for me, since generally in any organization you'll need non-OCI artifact storage at some point, and maintaining 2 tools is always a pain.
phillebaba•5mo ago
I am not too familiar with Kamal, but it seems possible to integrate it with my project Spegel to remove some of the load from upstream. Especially if they are running clusters of servers physically located close to each other, they could avoid some of the replication complexity of multiple Harbor instances.
o_m•5mo ago
Naive question: why not put the effort into building the RoR apps into a binary and run all the services with systemd? No need to deal with Docker and the entire ecosystem around it.
inglor_cz•5mo ago
We run our own Docker registry on-prem with Harbor as well.

One issue to solve is auto-deletion of old images, so that the storage does not swell. Any tips?

mdaniel•5mo ago
https://goharbor.io/docs/2.13.0/working-with-projects/workin... seems like what you want, especially in combination with https://goharbor.io/docs/2.13.0/administration/garbage-colle...
tempest_•5mo ago
We run a Docker registry on-prem as a pull-through cache (none of our containers are stored in there) to keep the rate limit reasonable.

It is pretty easy to just run the basic registry for this purpose.

We have a similar setup for npm and PyPI on the same machine. It doesn't really need a lot of attention. Some upgrades every once in a while.

branon•5mo ago
> our new on-premise registry

This is incorrect; the word you are looking for here is "on-premises" - a "premise" is something entirely different.