
A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
1•mmoogle•1m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
1•saikatsg•2m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•3m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•6m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•7m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•8m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•9m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•12m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•15m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•16m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•17m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•17m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•18m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•21m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•21m ago•1 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•23m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•24m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•25m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•25m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
3•Brajeshwar•25m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•25m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•26m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•26m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•28m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•33m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•34m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•34m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
49•bookofjoe•35m ago•23 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•36m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•37m ago•1 comments

Instant database clones with PostgreSQL 18

https://boringsql.com/posts/instant-database-clones/
435•radimm•1mo ago

Comments

mvcosta91•1mo ago
It looks very interesting for integration tests
radimm•1mo ago
OP here - yes, this is my use case too: integration and regression testing, as well as providing learning environments. It makes working with larger datasets a breeze.
febed•1mo ago
If possible, could you share a repo/gist with a working Docker example? I'm curious how the instant clone would work there.
drakyoko•1mo ago
would this work inside test containers?
radimm•1mo ago
OP here - still have to try it (I generally operate at the VM/bare-metal level), but my understanding is that the ioctl call would get passed through to the underlying volume; i.e., you would have to mount a volume.
odie5533•1mo ago
I use CREATE DATABASE dbname TEMPLATE template1; inside test containers. Have not tried this new method yet.
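For reference, a sketch of both approaches (database names here are made up; `STRATEGY` exists from Postgres 15 on, and `file_copy_method` is the Postgres 18 setting the article is about - treat this as my reading of the docs, not gospel):

```sql
-- Classic template clone; fails if anyone is connected to the template:
CREATE DATABASE test_run_1 TEMPLATE app_template;

-- Postgres 15+ defaults to STRATEGY WAL_LOG; FILE_COPY copies the data
-- files directly instead of replaying them through WAL:
CREATE DATABASE test_run_2 TEMPLATE app_template STRATEGY FILE_COPY;

-- Postgres 18: with file_copy_method = clone set (e.g. in postgresql.conf),
-- the FILE_COPY strategy asks the filesystem for copy-on-write reflinks,
-- which is what makes the clone near-instant on XFS/Btrfs:
-- file_copy_method = clone
```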
presentation•1mo ago
We do this, plus preview deploys and migration dry runs, using Neon Postgres's branching functionality. One benefit of that over this approach is that it works even with active connections, which matters when doing these things on live databases.
1f97•1mo ago
AWS supports this as well: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...
horse666•1mo ago
Aurora clones are copy-on-write at the storage layer, which solves part of the problem, but RDS still provisions a new cluster with its own endpoints, etc., which is slow (~10 minutes), so it's not really practical for the integration-testing use case.
nateroling•1mo ago
This is on the cluster level, while the article is talking about the database level, I believe.
TimH•1mo ago
Looks like it would probably be quite useful when setting up git worktrees, to get multiple Claude Code instances spun up a bit more easily.
1a527dd5•1mo ago
Many thanks, this solves integration tests for us!
BenjaminFaal•1mo ago
For anyone looking for a simple GUI for local testing/development of Postgres based applications. I built a tool a few years ago that simplifies the process: https://github.com/BenjaminFaal/pgtt
okigan•1mo ago
Would love to see a snapshot of the GUI as part of the README.md.

Also docker link seems to be broken.

BenjaminFaal•1mo ago
Fixed the package link. Github somehow made it private. I will add a snapshot right now.
peterldowns•1mo ago
Is this basically using templates as "snapshots", and making it easy to go back and forth between them? Little hard to tell from the README but something like that would be useful to me and my team: right now it's a pain to iterate on sql migrations, and I think this would help.
BenjaminFaal•1mo ago
That's exactly what it is. Just try it with the provided docker-compose file and you'll get it.
radarroark•1mo ago
In theory, a database that uses immutable data structures (like the hash array mapped trie popularized by Clojure) could allow instant clones on any filesystem, not just ZFS/XFS, and instant clones of any subset of the data, not just the entire db. I say "in theory", but I actually built this already, so it's not just a theory. I never understood why there aren't more HAMT-based databases.
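The "clone is just a pointer copy" property is easy to demonstrate with a toy persistent trie. This is only an illustration of the idea (nothing to do with the commenter's actual database; all names are invented): updates copy the path from root to leaf and share every other subtree, so snapshotting the whole store is O(1).

```python
# Toy persistent hash trie: keys are routed by successive bits of hash(key).
# insert() never mutates existing nodes, so any old root remains a valid,
# fully intact snapshot of the store at that point in time.

class Leaf:
    def __init__(self, key, value):
        self.key, self.value = key, value

class Branch:
    def __init__(self, zero=None, one=None):
        self.zero, self.one = zero, one

def insert(node, key, value, depth=0):
    """Return a NEW root that shares all unchanged subtrees with `node`."""
    if node is None:
        return Leaf(key, value)
    if isinstance(node, Leaf):
        if node.key == key:
            return Leaf(key, value)  # replace in a fresh leaf
        # Split: push the existing leaf one level down, then retry.
        bit = (hash(node.key) >> depth) & 1
        branch = Branch(**{("one" if bit else "zero"): node})
        return insert(branch, key, value, depth)
    bit = (hash(key) >> depth) & 1
    if bit:
        return Branch(node.zero, insert(node.one, key, value, depth + 1))
    return Branch(insert(node.zero, key, value, depth + 1), node.one)

def lookup(node, key, depth=0):
    while isinstance(node, Branch):
        node = node.one if (hash(key) >> depth) & 1 else node.zero
        depth += 1
    if isinstance(node, Leaf) and node.key == key:
        return node.value
    return None

root = None
for i in range(100):
    root = insert(root, f"k{i}", i)

snapshot = root                  # the "instant clone": an O(1) pointer copy
root = insert(root, "k0", 999)   # new version; the snapshot is untouched

print(lookup(root, "k0"), lookup(snapshot, "k0"))  # 999 0
```

The same structural sharing is what lets such a store clone any nested subset of the data: grab a pointer to the subtree instead of the root.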
zX41ZdbW•1mo ago
This is typical for analytical databases, e.g., ClickHouse (which I'm the author of) uses immutable data parts, allowing table cloning: https://clickhouse.com/docs/sql-reference/statements/create/...
ozgrakkurt•1mo ago
`ClickHouse (which I'm the author of)` just casually dropped that in the middle
orthecreedence•1mo ago
It was so casual I didn't even notice it until you pointed it out XD
nine_k•1mo ago
This is typical HN: everyone is here. I've seen a number of threads that unfold like this: "Lately I hacked up a satellite link to..." → "As an engineer who built the comm equipment of that satellite,..." → "As the astronaut who launched the satellite from the ISS,...", etc.
chamomeal•1mo ago
Does datomic have built in cloning functionality? I’ve been wanting to try datomic out but haven’t felt like putting in the work to make a real app lol
radarroark•1mo ago
Surprisingly, no it does not. Datomic has a more limited feature that lets you make an in-memory clone of the latest copy of the db for speculative writes, which might be useful for tests, but you can't take an arbitrary version of the db with as-of and use it as the basis for a new version on disk. See: https://blog.danieljanus.pl/2025/04/22/datomic-forking-the-p...

There's nothing technically that should prevent this if they are using HAMTs underneath, so I'm guessing they just didn't care about the feature. With HAMT, cloning any part of the data structure, no matter how nested, is just a pointer copy. This is more useful than you'd think but hardly any database makes it possible.

majodev•1mo ago
Uff, I had no idea that Postgres v15 introduced WAL_LOG and changed the default from FILE_COPY. For (parallel CI) test envs, it makes so much sense to switch back to the FILE_COPY strategy ... and I previously actually relied on that behavior.

Raised an issue in my previous pet project for doing concurrent integration tests with real PostgreSQL DBs (https://github.com/allaboutapps/integresql) as well.

christophilus•1mo ago
As an aside, I just jumped around and read a few articles. This entire blog looks excellent. I’m going to have to spend some time reading it. I didn’t know about Postgres’s range types.
pak9rabid•1mo ago
Range types are a godsend when you need to calculate things like overlapping or intersecting time/date ranges.
zachrip•1mo ago
Can you give a real world example?
christophilus•1mo ago
I think the examples here are pretty good: https://boringsql.com/posts/beyond-start-end-columns/
pak9rabid•1mo ago
This is kind of a complicated example, but here goes:

Say we want to create a report that determines how long a machine has been down, but we only want to count time during normal operational hours (aka operational downtime).

Normally this would be as simple as counting the time between when the machine was first reported down, to when it was reported to be back up. However, since we're only allowed to count certain time ranges within a day as operational downtime, we need a way to essentially "mask out" the non-operational hours. This can be done efficiently by finding the intersection of various time ranges and summing the duration of each of these intersections.

In the case of PostgreSQL, I would start by creating a tsrange (timestamp range) that encompasses the entire time range that the machine was down. I would then create multiple tsranges (one for each day the machine was down), limited to each day's operational hours. For each one of these operational hour ranges I would then take the intersection of it against the entire downtime range, and sum the duration of each of these intersecting time ranges to get the amount of operational downtime for the machine.

PostgreSQL has a number of range functions and operators that make this very easy and efficient. In this example I would use the '*' operator to compute the intersection of two time ranges, and then subtract the intersection's lower bound (using the lower() range function) from its upper bound (using the upper() range function) to get the duration of only the "overlapping" part of the two time ranges.

Here's a list of functions and operators that can be used on range types:

https://www.postgresql.org/docs/9.3/functions-range.html

Hope this helps.
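The masking arithmetic described above is easy to sketch outside the database as well. Here is a plain-Python rendering of the same algorithm (the dates and operational hours are invented for the example; `intersect` plays the role of Postgres's `*` range operator, and the subtraction mirrors `upper(x) - lower(x)`):

```python
from datetime import datetime, time, timedelta

def intersect(a_start, a_end, b_start, b_end):
    """Intersection of two half-open [start, end) ranges, or None."""
    start, end = max(a_start, b_start), min(a_end, b_end)
    return (start, end) if start < end else None

def operational_downtime(down_start, down_end, opens, closes):
    """Mask a downtime range against daily operational hours and sum
    the durations of the per-day intersections."""
    total = timedelta()
    day = down_start.date()
    while day <= down_end.date():
        window = (datetime.combine(day, opens), datetime.combine(day, closes))
        hit = intersect(down_start, down_end, *window)
        if hit:
            total += hit[1] - hit[0]
        day += timedelta(days=1)
    return total

# Machine down from 16:00 on Jan 1 until 11:00 on Jan 2;
# operational hours are 09:00-17:00 daily.
down = operational_downtime(
    datetime(2026, 1, 1, 16, 0), datetime(2026, 1, 2, 11, 0),
    time(9, 0), time(17, 0),
)
print(down)  # 3:00:00  (1h on day one + 2h on day two)
```

Doing it in SQL keeps the work next to the data, but the structure of the computation is the same.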

elitan•1mo ago
For those who can't wait for PG18 or need full instance isolation: I built Velo, which does instant branching using ZFS snapshots instead of reflinks.

Works with any PG version today. Each branch is a fully isolated PostgreSQL container with its own port. ~2-5 seconds for a 100GB database.

https://github.com/elitan/velo

Main difference from PG18's approach: you get complete server isolation (useful for testing migrations, different PG configs, etc.) rather than databases sharing one instance.

teiferer•1mo ago
You mean you told Claude a bunch of details and it built it for you?

Mind you, I'm not saying it's bad per se. But shouldn't we be open and honest about this?

I wonder if this is the new normal. Somebody says "I built Xyz" but then you realize it's vibe coded.

earthnail•1mo ago
Not sure why this is downvoted. For a critical tool like DB cloning, I'd very much appreciate it if it were hand-written. Simply because that means it's also been hand-reviewed at least once (by definition).

We wouldn’t have called it reviewed in the old world, but in the AI coding world we’re now in it makes me realise that yes, it is a form of reviewing.

I use Claude a lot btw. But I wouldn’t trust it on mission critical stuff.

ffsm8•1mo ago
Eh, DB branching is mostly only necessary for testing - locally, in CI or quick rollbacks on a shared dev instance.

Or at least I cannot come up with a usecase for prod.

From that perspective, it feels like it'd be a perfect usecase to embrace the LLM guided development jank

notKilgoreTrout•1mo ago
Mostly..

App migrations that may fail and need a rollback have the problem that you may not be allowed to wipe any transactions so you may want to be putting data to a parallel world that didn't migrate.

parthdesai•1mo ago
> App migrations that may fail and need a rollback have the problem that you may not be allowed to wipe any transactions so you may want to be putting data to a parallel world that didn't migrate.

This is why migrations are supposed to be backwards compatible

notKilgoreTrout•1mo ago
https://github.com/flyway/flywaydb.org/blob/gh-pages/documen...

You can certainly bet you followed that advice correctly; now, what are the odds you could test a what-if like that in sufficient depth?

gavinray•1mo ago

> Eh, DB branching is mostly only necessary for testing - locally

For local DB's, when I break them, I stop the Docker image and wipe the volume mounts, then restart + apply the "migrations" folder (minus whatever new broken migration caused the issue).
dpedu•1mo ago
It's being downvoted because the commenter is asking for something that is already in the readme. Furthermore, it's ironic that the person raising the issue is making the same mistake they're calling out: neglecting to read something they didn't write.
earthnail•1mo ago
It's at the very bottom of the readme, below the MIT license mention. Yes, it's there, but very much in the fine print. I think the easier thing to spot is the CLAUDE.md in the code (and in particular how comprehensive it is).

Again, I love Claude, I use it a ton, but a topic like database cloning requires a certain rigour, in my opinion. This repo does not seem to have it. If I had hired a consultant to build a tool like this and received this amount of vibe coding, I'd feel deceived. I wouldn't trust it with my critical data.

rat9988•1mo ago
>Yes, it’s there, but very much in the fineprint.

This is where it belongs, at best. He doesn't even have to disclose it. Prompting so that the AI writes the code faster than you could is okay.

renewiltord•1mo ago
If you don’t read code you execute someone is going to steal everything on your file system one day
dpedu•1mo ago
Huh? It says so right in the README.

https://github.com/elitan/velo/blame/12712e26b18d0935bfb6c6e...

And are we really doing this? Do we need to admit how every line of code was produced? Why? Are you expecting to see "built with the influence of Stackoverflow answers" or "google searches" on every single piece of software ever? It's an exercise of pointlessness.

renewiltord•1mo ago
I think you need to start with the following statement:

> We would like to acknowledge the open source people, who are the traditional custodians of this code. We pay our respects to the stack overflow elders, past, present, and future, who call this place, the code and libraries that $program sits upon, their work. We are proud to continue their tradition of coming together and growing as a community. We thank the search engine for their stewardship and support, and we look forward to strengthening our ties as we continue our relationship of mutual respect and understanding

Then if you would kindly say that a Brazilian invented the airplane that would be good too. If you don’t do this you should be cancelled for your heinous crime.

stronglikedan•1mo ago
> a Brazilian invented the airplane

lol, good one!

hu3•1mo ago
wasn't it?

last I checked, Wright brothers used a catapult while Santos-Dumont made a plane that took off by itself.

pbh101•1mo ago
I think it was the Wright brothers taking off from level ground while Santos-Dumpont got something flying off a cliff earlier.
Izkata•1mo ago
Also it looks like Santos-Dumont's plane was 2-3 years after the Wright brothers. He was doing airships before that though - lighter-than-air craft that rely on a large balloon.

Edit: So it looks like the Wright brothers had catapult but didn't actually need it (their claim-to-fame flights didn't use it), but did otherwise need a "dolly" (a wooden cart, not a catapult) because the plane didn't have wheels attached to it. Then also Santos-Dumont was declared first in Europe because he demonstrated it in Paris during a period bad reporting had people in Europe questioning the legitimacy of the Wright brothers' flight.

teiferer•1mo ago
Indeed. There is a difference between "I have learned by reading a lot of SO" and "I have copied the contents of this file verbatim from SO". Using Claude is very close to the latter without saying it.
elAhmo•1mo ago
It is the new normal, whether you are against it or not.

If someone used AI, whether they should explicitly disclose it is a good discussion to have, but people have been using assistive tools (auto-complete, text expanders, IDE refactoring tools) for a while, and you wouldn't comment that they didn't build it. The lines are becoming blurrier over time, but it is ridiculous to claim that someone didn't build something just because they used AI tools.

pritambarhate•1mo ago
Let's say there is an architect who also owns a construction company. This architect designs a building and gets it built by his employees and contractors.

In such cases the person says, I have built this building. People who found companies, say they have built companies. It's commonly accepted in our society.

So even if Claude built it for GP, as long as GP designed it, paid for the tools (Claude) to build it, and tested it to make sure that it works, I personally think he has the right to say he built it.

If you don't like it, you are not required to use it.

greatgib•1mo ago
The architect knows what they are doing. And the workers are professionals, with supervisors who check that the work is done properly.
pebble•1mo ago
No, it's more like the architect has a cousin who is like "I totally got this bro" and builds the building for them.
foobarbecue•1mo ago
Right and also in this world there are no building codes or building inspections.
testdelacc1•1mo ago
What an outrageously bad analogy. Everyone involved in that building put their professional reputations and licenses on the line. If that building collapses, the people involved will lose their livelihoods and be held criminally liable.

Meanwhile this vibe coded nonsense is provided “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. We don’t even know if he read it before committing and pushing.

pritambarhate•1mo ago
Even billion-dollar software products have similar clauses; it doesn't have anything to do with vibe coding. No educational qualification is needed to build and sell software.

Quality of the software comes from testing. Humans and LLMs both make mistakes while coding.

tracker1•1mo ago
As an autodidact, and someone who has seen plenty of well educated idiots in the software profession, I'm happy there are no such requirements... I think a guild might be more reasonable than a professional org more akin to how it works for other groups (lawyers, doctors, etc).

There are of course projects that operate at higher development specification standards, often in the military or banking. This should be extended to all vehicles and invasive medical devices.

tracker1•1mo ago
Depends on the building type/size/scale and jurisdiction. Modern tract homes are really varied, hit or miss and often don't see any negative outcomes for the builders in question for shoddy craftsmanship.
pbh101•1mo ago
Same with any OSS. Up to you to validate whether or not it is worth depending on, regardless of how built. Social proof is a primary avenue to that and has little to do with how built.
rootnod3•1mo ago
That has to be the worst analogy I have read in a while, and on HN that says something.
fauigerzigerk•1mo ago
I agree that it's ultimately about the product.

But here's the problem. Five years ago, when someone on here said, "I wrote this non-trivial software", the implication was that a highly motivated and competent software engineer put a lot of effort into making sure that the project meets a reasonable standard of quality and will probably put some effort into maintaining the project.

Today, it does not necessarily imply that. We just don't know.

wahnfrieden•1mo ago
Hand-written code never implied much about quality no matter the author, especially as we all use libraries of reusable code of varying quality
foltik•1mo ago
Agree that just being hand-written doesn’t imply quality, but based on my priors, if something obviously looks like vibe-code it’s probably low quality.

Most of the vibe-code I’ve seen so far appears functional to the point that people will defend it, but if you take a closer look it’s a massively over complicated rat’s nest that would be difficult for a human to extend or maintain. Of course you could just use more AI, but that would only further amplify these problems.

fauigerzigerk•1mo ago
Not much, but infinitely more than now.

If someone puts weeks and months of their time into building something, then I'm willing to take that as proof of their motivation to create something good.

I'm also willing to take the existence of non-trivial code that someone wrote manually as proof of some level of competence.

The presence of motivation + competence makes it more likely that the result could be something good.

dabber•1mo ago
The original person didn't say "I wrote this non-trivial software", they said "I built Velo".
fauigerzigerk•1mo ago
...and pointed us to a repository containing non-trivial software.
pritambarhate•1mo ago
Even with LLMs, delivering software that consistently works requires quite a bit of work and, in most cases, a certain level of expertise. Humans also write quite a bit of garbage code.

People using LLMs to code these days is similar to how the majority of people stopped using assembly and moved to C and C++, then to garbage-collected languages and dynamically typed languages. People were always looking for ways to make programmers more productive.

Programming is evolving. LLMs are just next generation programming tools. They make programmers more productive and in majority of the cases people and companies are going to use them more and more.

fauigerzigerk•1mo ago
I'm not opposed to AI generated code in principle.

I'm just saying that we don't know how much effort was put into making this and we don't know whether it works.

The existence of a repository containing hundreds of files, thousands of SLOC, and a folder full of tests tells us less today than it used to.

There's one thing in particular that I find quite astonishing sometimes. I don't know about this particular project, but some people use LLMs to generate both the implementation and the test cases.

What does that mean? The test cases are supposed to be the formal specification of our requirements. If we do not specify formally what we expect a tool to do, how do we know whether the tool has done what we expected, including in edge cases?

teiferer•1mo ago
I fully agree with your overall message and sentiment. But let me be nit-picky for a moment.

> The test cases are supposed to be the formal specification of our requirements

Formal methods folks would strongly disagree with this statement. Tests are informal specifications in the sense that they don't provide a formal (mathematically rigorous) description of the full expected behavior of the system. Instead, they offer a mere glimpse into what we hope the system would do.

And that's an important part, which is where your main point stands. The test is what confirms that the thing the LLM built conforms to the cases the human expected to behave in a certain way. That's why the human needs to provide them.

(The human could take help of an LLM to write the tests, as in they give an even-more-informal natural language description of what the test should do. But the human then needs to make sure that the test really does that and maybe fill in some gaps.)

halfcat•1mo ago
> If we do not specify formally what we expect a tool to do, how do we know whether the tool has done what we expected, including in edge cases?

You don’t. That’s the scary part. Up until now, this was somewhat solved by injecting artificial friction. A bank that takes 5 days for a payment to clear. And so on.

But it’s worse than this, because most problems software solves cannot even be understood until you partially solve the problem. It’s the trying and failing that reveals the gap, usually by someone who only recognizes the gap because they were once embarrassed by it, and what they hear rhymes with their pain. AI doesn’t interface with physical reality, as far as we know, or have any mechanism to course correct like embarrassment or pain.

In the future, we will have flown off the cliff before we even know there was a problem. We will be on a space ship going so fast that we can’t see the asteroid until it’s too la...

heliumtera•1mo ago
We know. It is not difficult to tell them apart. Good taste is apparent and beauty is universal. The amount of care and attention someone put into a craft is universally appreciated. Also, I am 100% confident this comment was the output of a human process. We can tell. There is something more. It is obvious for those that have a soul.
fauigerzigerk•1mo ago
We know if we make the effort to find out. But what we really want to know is not whether AI was used in the process of writing the software. What we want to know is whether it's worth checking out. That's what has become harder to know.
philipallstar•1mo ago
Exactly. It's like looking at assembly that's been written by a person vs by a compiler. There's just no soul in the latter! And that's why compilers never caught on.
pbh101•1mo ago
In general that is all implication and assumption, for any code, especially OSS code.
onion2k•1mo ago
> the implication was that a highly motivated and competent software engineer put a lot of effort into making sure that the project meets a reasonable standard of quality and will probably put some effort into maintaining the project

That is entirely an assumption on the part of the reader. Nothing about someone saying "I built this complicated thing!" implies competence, or any desire to maintain it beyond building it.

The problem you're facing is survivorship bias. You can think of lots of examples of where that has happened, and very few where it hasn't, because when the author of the project is incompetent or unmotivated the project doesn't last long enough for you to hear about it twice.

fauigerzigerk•1mo ago
>Nothing about someone saying "I built this complicated thing!" implies competence, or any desire to maintain it beyond building it.

I disagree. The fact that someone has written a substantial amount of non-trivial code does imply a higher level of competence and motivation compared to not having done that.

dirtbag__dad•1mo ago
> Today we just don’t know

You never knew. There are plenty of intelligent, well-intentioned software engineers that publish FOSS that is buggy and doesn’t meet some arbitrary quality standards.

happymellon•1mo ago
That's a lot of ifs.
risyachka•1mo ago
Asking someone to build a house - and then saying I built it - is "very misleading" to put it nicely.

When you order a website on upwork - you didn't build it. You bought it.

victorbjorklund•1mo ago
Plenty of architects claim "this is my building" even if they didn't pour all the concrete
heliumtera•1mo ago
Every single commit is Claude. No human expert involved. Would you trust your company database to a $25 vibe session? Would you live in a $5 building? Is there any difference between a hand-tailored suit, constructed to your measurements, and a $5 t-shirt? Some people don't want to live in a five-dollar world.
wahnfrieden•1mo ago
Agent authorship doesn't imply unreviewed or underspecified code
heliumtera•1mo ago
Vibe coded means precisely that!
wahnfrieden•1mo ago
Yes but there’s no evidence this is vibe coded or not. You’re cynically claiming it due to agent authorship. As if there is no legitimate use.

> No human expert involved

You don’t know this, you are just hating.

Besides the close review and specification that may be conducted with agents, even if you handwrite / edit code, it will say that it was co-authored by the agent if you have the agent do the commit for you.

pbh101•1mo ago
Most of the OSS projects on HN are not worthy for you to base your company on, especially sight unseen. Using an agent has nothing to do with it.
philipallstar•1mo ago
> In such cases the person says, I have built this building

But this is also bad, because it's wrong. They drew it and maybe got some paperwork through a planning department. They didn't build it.

6r17•1mo ago
There was a recent wave of such comments on the Rust subreddit, in exactly this shape: "Oh, you mean you built this with AI". This is highly toxic, leads to no discussion, and is literally driven by some dark thought in the commenter. I really hope HN will not jump on this bandwagon and will focus instead on creating cool stuff.

Everybody in the industry is vibecoding right now - the things that stick are the ones that had sufficient quality pushed into them. Having a pessimistic, judgmental surface reaction to everything as "AI slop" is not an attitude I want to adopt.

heliumtera•1mo ago
>This is highly toxic, leads to no discussion

Why is good faith a requirement for commenting but not for submissions? I would argue the good-faith assumption should be disproportionately more important for submissions, given the one-to-many relationship. You're not lying; it indeed is toxic and rapidly spreading. I'm glad this is the case.

Most came here for discussion and enlightenment, only to be bombarded by heavily biased, low-effort marketing bullshit. Presenting something that has no value to anyone besides the builder is the opposite of good faith. These submissions bury and neglect useful discussion; it's difficult to claim they are harmless and merely not useful.

Not everyone in the industry is vibe coding; that is simply not true. But that's not the point I want to make. You don't need to be defensive about your use of generative tools; it is ok to use whatever, nobody cares. Just be ready to maintain your position and defend your ideals. Nothing is more frustrating than giving honest attention to a problem, considering someone else's perspective, only to realize it was just words words words spewed by a slop machine. Nobody would give it a second thought if that had been disclosed. You are responsible for your craft. The moment you delegate that responsibility, into the trash you belong. If the slop machine is so great, why in hell would I need you to ask it to help me? Nonsensical.

6r17•1mo ago
Your bias is thinking that, because you can use a bike, my bike efforts are worthless. I often trash what I generate, and I don't generate-then-ship; I have a quality process that validates my work. By itself, the way I reach my goals presents no value to my public.

The reason this discussion is pathetic is that it shifts the conversation from the main topic (here, a database implementation) to a reactionary, emotive performance with no grace or eloquence - one that is mostly driven by pop culture at this point, with a justification that mostly serves your ego.

There is no point in putting yourself above someone else just to justify your behavior - in fact, it only tells me what kind of person you were in the first place - and as I said, this is not the kind of attitude that I look up to.

rileymichael•1mo ago
> Everybody in the industry is vibecoding right now

no, ‘everybody’ is not. A lot of us are using zero LLMs and continuing to build (quality) software just fine.

6r17•1mo ago
Justifiably, there is zero correlation between something being written manually and quality - in fact, I'd argue it's quite the opposite: you were unable to explore as much play and architecture to try and break, you have spent less time experimenting, and more time pushing your ego.
cstrahan•1mo ago
Do you take issue with companies stating that they (the company) built something, instead of stating that their employees built something? Should the architects and senior developers disclaim any credit, because the majority of tickets were completed by junior and mid-level developers?

Do you take issue with a CNC machinist stating that they made something, rather than stating that they did the CAD and CAM work but that it was the CNC machine that made the part?

Non-zero delegation doesn’t mean that the person(s) doing the delegating have put zero effort into making something, so I don’t think that delegation makes it dishonest to say that you made something. But perhaps you disagree. Or, maybe you think the use of AI means that the person using AI isn’t putting any constructive effort into what was made — but then I’d say that you’re likely way overestimating the ability of LLMs.

teiferer•1mo ago
Could we please avoid the strawmen? Nowhere have I claimed that they didn't put work into this. Nowhere did I say that delegation is bad. I'd like to encourage a discussion, but then please counter the opinion that I gave, not a made-up one that I neither stated nor actually hold.
eudoxus•1mo ago
> You mean you told Claude a bunch of details and it built it for you?

> Nowhere have I claimed that they didn't put work into this.

There's some mental gymnastics.

> please counter the opinion that I gave

The reply you're responding to did exactly that, and you just gave more snarky responses.

teiferer•1mo ago
We all agree that crafting the right prompts (or whatever we call the CLAUDE.md instructions) is a lot of work, don't we? Of course they put work into this; it's a file of substantial size. And then Claude used it to build the thing. Where is the contradiction? I don't see the mental gymnastics, sorry.
taude•1mo ago
why does it matter?
whalesalad•1mo ago
Hell yeah. I’ve been meaning to prototype this exact thing but with btrfs.
elitan•1mo ago
interesting, you have a link?
Rovanion•1mo ago
"You" is an interesting word to use, given that you plagiarized it.
anonymars•1mo ago
Do you have a link to the original?
wahnfrieden•1mo ago
Please share the instant Postgres clones tool this copied! I'd love to try it
elitan•1mo ago
Plagiarized from what? Happy to address if you can point to what you're referring to.
eudoxus•1mo ago
I think they may be jumping on the "shit on AI assisted projects" bandwagon. I am by no means reaching for AI tools at every turn, but to suggest it's plagiarized is laughable.

Don't worry about these trolls.

72deluxe•1mo ago
Despite all of the complaints in other comments about the use of Claude Code, it looks interesting and I appreciated the video demo you put on the GitHub page.
buu700•1mo ago
Agentic coding detractors: "If AI is so great, where are all the thriving new open source projects to prove it?"

Also agentic coding detractors: "How dare you use AI to help build a new open source project."

I'm joking and haven't read the comments you're referring to, but whether or not AI was involved is irrelevant per se. If anyone finds themselves having a gut reaction to "AI", just mentally replace it with "an intern" or "a guy from Fiverr". Either way, the buck stops with whoever is taking ownership of the project.

If the code/architecture is buggy or unsafe, call that out. If there's a specific reason to believe no one with sufficient expertise reviewed and signed off on the implementation, call that out. Otherwise, why complain that someone donated their time and expertise to give you something useful for free?

bicx•1mo ago
For real. For someone to even understand why this tool is useful and functions as intended, they need to have some deeper understanding of software development. Who cares if the implementation was done with AI. With Claude Code, I rarely write code by hand these days, yet my brain hurts more than ever from all the actual problem solving I’m able to drill into with all the programming cruft out of the way. I did it by hand for 15 years, and I don’t feel bad at all for handing that part over.
QuercusMax•1mo ago
A decade ago, a senior staff engineer at Google told me that he doesn't mind delegating the data-entry parts of his job to junior SWEs, so he can focus on higher-level problem solving.

This is how I've been treating AI, except instead of assuming your junior SWE is generally sane and has some understanding of what you're doing, you have to make sure you double-check everything.

halfcat•1mo ago
> If anyone finds themselves having a gut reaction to "AI", just mentally replace it with "an intern" or "a guy from Fiverr"

It’s not the guy from Fiverr anyone is annoyed with. It’s the tech CEOs who beat everyone over the head with:

- ”the future will be a-guy-from-Fiverr-native”

- ”we are mandating that 80% of our employees incorporate a-guy-from-Fiverr into their daily workflow by year end”

And everyone pretends this is serious.

Then there are people who are pulling off cool demo stunts that amount to duct taping fireworks to a lawn mower but they post about it on X doing their best Steve Jobs thought leader impersonation.

And again everyone pretends like this is serious.

The annoyance is like that friend you tell about a great new song, and they're excited, but only because it's something they can tell other people about and look cool. Not because they're into music.

buu700•1mo ago
I mean, if the end result is that I get a bunch of guys from Fiverr constantly at my beck and call for pennies on the dollar, I'm not sure why I should care what some CEO thinks they have to say to make money.

(Regarding mandates, of course they're a hamfisted solution, but it's not totally unreasonable that management would attempt to establish an incentive for its workforce to learn and put into practice a valuable new skill.)

Either way, that doesn't address the response to this project. Johan isn't Sam Altman. All Johan is guilty of here is building something useful and giving it to the rest of us for free.

newusertoday•1mo ago
thanks for sharing, it's an interesting approach. I am not sure why people are complaining; most software is written with the help of agents these days.
theturtletalks•1mo ago
It’s rampant. Launch anything these days and it’s bombarded with “vibe-coded” comments.

The issue of quality makes sense, since it's so easy to build these days, but when the product is open source, these vibe-coded comments make no sense. Users can literally go read the code, or - my favorite - Repomix it, pop it into AI Studio, and ask Gemini what this person has built, what value it brings, and whether it solves the problem I have.

For vibe coded proprietary apps, you can’t do that so the comments are sort of justified.

blibble•1mo ago
gee I wonder why people don't want "AI" code anywhere near their single source of truth (database)
theturtletalks•1mo ago
I'm not understanding. No one is saying put unverified code right next to your database. Maybe spin up a replica or mock database and test it there.
anonzzzies•1mo ago
Does it work with other dbs theoretically?
horse666•1mo ago
This is really cool, looking forward to trying it out.

Obligatory mention of Neon (https://neon.com/) and Xata (https://xata.io/) which both support “instant” Postgres DB branching on Postgres versions prior to 18.

oulipo2•1mo ago
Assuming I'd like to replicate my production database for either staging, or to test migrations, etc,

and that most of my data is either:

- business entities (users, projects, etc)

- and "event data" (sent by devices, etc)

where most of the database size is in the latter category, and that I'm fine with "subsetting" those (eg getting only the last month's "event data")

what would be the best strategy to create a kind of "staging clone"? ideally I'd like to tell the database (logically, without locking it expressly): do as though my next operations only apply to items created/updated BEFORE "currentTimestamp", and then:

- copy all my business tables (any update to those after currentTimestamp would be ignored magically even if it happens during the copy)

- copy a subset of my event data (same constraint)

what's the best way to do this?

gavinray•1mo ago
You can use "psql" to dump subsets of data from tables and then later import them.

Something like:

  psql <db_url> -c "\copy (SELECT * FROM event_data ORDER BY created_at DESC LIMIT 100) TO 'event-data-sample.csv' WITH CSV HEADER"
https://www.postgresql.org/docs/current/sql-copy.html

It'd be really nice if pg_dump had a "data sample"/"data subset" option but unfortunately nothing like that is built in that I know of.
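As a sketch of that subset workflow (table and column names are hypothetical, and `DB_URL` stands in for your connection string): dump the schema with pg_dump, copy the small tables whole, and \copy only a window of the big one.

```shell
# Hypothetical table/column names; DB_URL points at the source database.
pg_dump --schema-only "$DB_URL" > schema.sql

# Small "business" tables in full
psql "$DB_URL" -c "\copy users TO 'users.csv' WITH CSV HEADER"

# Only the last month of the big "event" table
psql "$DB_URL" -c "\copy (SELECT * FROM event_data WHERE created_at > now() - interval '1 month') TO 'events.csv' WITH CSV HEADER"

# Restore side: apply schema.sql to the target, then \copy ... FROM each CSV
```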

peterldowns•1mo ago
pg_dump has a few annoyances when it comes to doing stuff like this — tricky to select exactly the data/columns you want, and also the dumped format is not always stable. My migration tool pgmigrate has an experimental `pgmigrate dump` subcommand for doing things like this; it might be useful to you or OP, maybe even just as a reference. The docs are incomplete since this feature is still experimental — file an issue if you have any questions or trouble.

https://github.com/peterldowns/pgmigrate

oulipo2•1mo ago
Indeed, but is there a way to do it as a "point in time", eg do a "virtual checkpoint" at a timestamp, and do all the copy operations from that timestamp, so they are coherent?
francislavoie•1mo ago
Is anyone aware of something like this for MariaDB?

Something we've been trying to solve for a long time is having instant DB resets between acceptance tests (in CI or locally) back to our known fixture state, but right now it takes decently long (like half a second to a couple seconds, I haven't benchmarked it in a while) and that's by far the slowest thing in our tests.

I just want fast snapshotted resets/rewinds to a known DB state, but I need to be using MariaDB since it's what we use in production, we can't switch DB tech at this stage of the project, even though Postgres' grass looks greener.

proaralyst•1mo ago
You could use LVM or btrfs snapshots (at the filesystem level) if you're ok restarting your database between runs
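A rough sketch of the btrfs route (paths are hypothetical; it assumes the fixture data directory lives on a btrfs subvolume and the server's datadir is configured to point at the scratch path):

```shell
# Stop the server so the data files are quiescent
systemctl stop mariadb

# Throw away the scratch copy from the last run (if any)
btrfs subvolume delete /srv/mariadb-scratch 2>/dev/null || true

# O(1) copy-on-write snapshot of the pristine fixture state
btrfs subvolume snapshot /srv/mariadb-base /srv/mariadb-scratch

# Restart against the scratch copy (datadir = /srv/mariadb-scratch)
systemctl start mariadb
```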
francislavoie•1mo ago
Restarting the DB is unfortunately way too slow. We run the DB in a docker container with a tmpfs (in-memory) volume which helps a lot with speed, but the problem is still the raw compute needed to wipe the tables and re-fill them with the fixtures every time.
renewiltord•1mo ago
I have not done this so it’s theorycrafting but can’t you do the following?

1. Have a local data dir with initial state

2. Create an overlayfs with a temporary directory

3. Launch your job in your docker container with the overlayfs bind mount as your data directory

4. That’s it. Writes go to the overlay and the base directory is untouched
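The steps above might look roughly like this (paths and image name are hypothetical; requires root, and upperdir/workdir must live on the same non-overlay filesystem):

```shell
# lower = pristine data dir; upper/work = throwaway scratch dirs
mkdir -p /tmp/db-upper /tmp/db-work /tmp/db-merged

mount -t overlay overlay \
  -o lowerdir=/srv/db-base,upperdir=/tmp/db-upper,workdir=/tmp/db-work \
  /tmp/db-merged

# Run the container with the merged view as its data directory;
# all writes land in the upper dir, the base stays untouched
docker run --rm -v /tmp/db-merged:/var/lib/mysql mariadb:11

# "Reset" = unmount and wipe only the upper layer
umount /tmp/db-merged && rm -rf /tmp/db-upper/*
```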

francislavoie•1mo ago
But how does the reset happen fast? The problem isn't preventing permanent writes or whatever; it's actually resetting for the next test. Also, using overlayfs will immediately be slower at runtime than tmpfs, which we're already doing.
peterldowns•1mo ago
Yeah, unfortunately I think it's not really possible to hit the speed of a TEMPLATE copy with MariaDB. @EvanElias (maintainer of https://github.com/skeema/skeema) was looking into this at one point; might consider reaching out to him — he's the foremost MySQL expert that I know.
evanelias•1mo ago
Thanks for the kind words Peter!

There's actually a potential solution here, but I haven't personally tested it: transportable tablespaces in either MySQL [1] or MariaDB [2].

The basic idea is it allows you to take pre-existing table data files from the filesystem and use them directly for a table's data. So with a bit of custom automation, you could have a setup where you have pre-exported fixture table data files, which you then make a copy of at the filesystem level, and then import as tablespaces before running each test. So a key step is making that fs copy fast, either by having it be in-memory (tmpfs) or by using a copy-on-write filesystem.

If you have a lot of tables then this might not be much faster than the 0.5-2s performance cited above though. iirc there have been some edge cases and bugs relating to the transportable tablespace feature over the years as well, but I'm not really up to speed on the status of that in recent MySQL or MariaDB.

[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-table-import....

[2] https://mariadb.com/docs/server/server-usage/storage-engines...
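A minimal sketch of that flow (table, schema, and path names are hypothetical; note the export lock and the .cfg metadata file only exist while the exporting session stays open, so the copy has to happen inside that session — here via the mysql client's `system` command):

```shell
# One-time export of the fixture table's data files, in a single session
mysql fixtures <<'SQL'
FLUSH TABLES events FOR EXPORT;
system cp /var/lib/mysql/fixtures/events.ibd /var/lib/mysql/fixtures/events.cfg /snapshots/
UNLOCK TABLES;
SQL

# Before each test: swap the files in under a matching empty table definition
mysql test <<'SQL'
ALTER TABLE events DISCARD TABLESPACE;
system cp /snapshots/events.ibd /snapshots/events.cfg /var/lib/mysql/test/
ALTER TABLE events IMPORT TABLESPACE;
SQL
```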

exceptione•1mo ago
Resetting is free if you discard the overlayfs writes, no? I am not sure if one can discard at runtime, or if the next test should be run in a new container. But that should still be fast.

If your db is small enough to fit in tmpfs, then sure, that is hard to beat. But then xfs and zfs are overkill too.

EDIT: I see you mentioning that starting the db is slow due to wiping and filling at runtime. But the idea of a snapshot is that you don't have to do that, unless I misunderstand you.

renewiltord•1mo ago
Ah I was thinking you just start multiple overlays and run tests independent of each other.
ikatson•1mo ago
How about doing the changes, then baking them into the DB docker image, i.e. "docker commit".

Then spin up the DB using that image instead of an empty one for every test run.

This implies starting the DB through docker is faster than what you're doing now of course.
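A sketch of the idea (names and password are hypothetical). One caveat to be aware of: `docker commit` does not capture volume contents, and the official mariadb image declares /var/lib/mysql as a VOLUME, so the datadir presumably has to be moved to a non-volume path for the seeded data to land in the image layer:

```shell
# One-time: start a server with a non-volume datadir, load fixtures,
# then freeze the result into an image
docker run -d --name seed -p 3306:3306 -e MARIADB_ROOT_PASSWORD=pw \
  mariadb:11 --datadir=/var/lib/mysql-baked
mysql -h 127.0.0.1 -u root -ppw < fixtures.sql
docker commit seed db-fixtures:latest
docker rm -f seed

# Per run: boot straight from the pre-seeded image
docker run -d --rm --name testdb -p 3306:3306 \
  db-fixtures:latest --datadir=/var/lib/mysql-baked
```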

francislavoie•1mo ago
Yeah there's absolutely no way restarting the container will be faster.
briffle•1mo ago
LVM snapshots work well. Used it for years with other database tools.. But make sure you allocate enough write space for the COW.. when the write space fills up, LVM just 'drops' the snapshot.
pak9rabid•1mo ago
I was able to accomplish this by doing each test within its own transaction session that gets rolled back after each test. This way I'm allowed to modify the database to suit my needs for each test, and then it gets magically reset back to its known state for the next one. Transaction rollbacks are very quick.
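The shape of the pattern, as SQL (table names hypothetical; in practice the test framework opens the transaction around each test body):

```shell
mysql testdb <<'SQL'
START TRANSACTION;
-- ... the test's writes go here ...
INSERT INTO users (name) VALUES ('temp-user');
-- assertions run against the modified state
ROLLBACK;  -- database is back to the fixture state for the next test
SQL
```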
hu3•1mo ago
As a consultant, I saw many teams doing that and it works well.

The only detail is that autoincrements (SEQUENCEs, for PostgreSQL folks) get bumped even if the transaction rolls back.

So tables tend to get large ids quickly. But it's just dev database so no problem.

fanf2•1mo ago
This doesn’t work for testing migrations because MySQL/MariaDB doesn’t support DDL inside transactions, unlike PostgreSQL.
pak9rabid•1mo ago
Migrations are kind of a different beast. In that case I just stand up a test environment in Docker that does what it needs, then just trash it once things have been tested/verified.
francislavoie•1mo ago
Unfortunately a lot of our tests use transactions themselves because we lock the user row when we do anything to ensure consistency, and I'm pretty sure nested transactions are still not a thing.
hu3•1mo ago
You can emulate nested transactions using save points. A client uses that in production. And others in unit tests.
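The emulation amounts to rewriting the application's inner BEGIN/COMMIT into SAVEPOINT/RELEASE while the test's outer transaction stays open (table names hypothetical):

```shell
mysql testdb <<'SQL'
START TRANSACTION;                    -- outer "test" transaction
SAVEPOINT inner_tx;                   -- stands in for the app's inner BEGIN
UPDATE users SET locked = 1 WHERE id = 42;
ROLLBACK TO SAVEPOINT inner_tx;       -- inner rollback, outer tx still open
-- (an inner COMMIT would instead become: RELEASE SAVEPOINT inner_tx)
ROLLBACK;                             -- test teardown resets everything
SQL
```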
pak9rabid•1mo ago
Bingo...this is how I get around that.
peterldowns•1mo ago
Really interesting article, I didn't know that the template cloning strategy was configurable. Huge fan of template cloning in general; I've used Neon to do it for "live" integration environments, and I have a golang project https://github.com/peterldowns/pgtestdb that uses templates to give you ~unit-test-speed integration tests that each get their own fully-schema-migrated Postgres database.

Back in the day (2013?) I worked at a startup where the resident Linux guru had set up "instant" staging environment databases with btrfs. Really cool to see the same idea show up over and over with slightly different implementations. Speed and ease of cloning/testing is a real advantage for Postgres and Sqlite, I wish it were possible to do similar things with Clickhouse, Mysql, etc.
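The template trick itself is small enough to sketch inline (database and file names hypothetical); the clone is a file-level copy, so it typically takes tens of milliseconds:

```shell
# One-time: build the migrated fixture database
createdb app_template
psql app_template -f schema.sql -f fixtures.sql
# Optional: mark it as a template so non-superusers may clone it
psql -c 'ALTER DATABASE app_template IS_TEMPLATE true;'

# Per test: clone (fails if anything is still connected to the template)
psql -c 'CREATE DATABASE test_1234 TEMPLATE app_template;'

# Teardown
psql -c 'DROP DATABASE test_1234;'
```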

riskable•1mo ago
PostgreSQL seems to have become the be-all, end-all SQL database that does everything and does it all well. And it's free!

I'm wondering why anyone would want to use anything else at this point (for SQL).

wahnfrieden•1mo ago
Can’t really run it on iOS. And its WASM story is weak
efxhoy•1mo ago
It’s the clear OLTP winner but for OLAP it’s still not amazing out of the box.
scottyah•1mo ago
It's heavy, I'd say sqlite3 close to the client and postgres back at the server farm is the combo to use.
aftbit•1mo ago
Once upon a time, MySQL/InnoDB was a better performance choice for UPDATE-heavy workloads. There was a somewhat famous blog post about this from Uber[1]. I'm not sure to what extent this persists today. The other big competitor is sqlite3, which fills a totally different niche for running databases on the edge and in-product.

Personally, I wouldn't use any SQL DB other than PostgreSQL for the typical "database in the cloud" use case, but I have years of experience both developing for and administering production PostgreSQL DBs, going back to 9.5 days at least. It has its warts, but I've grown to trust and understand it.

1: https://www.uber.com/blog/postgres-to-mysql-migration/

vl•1mo ago
“does it all well” is a stretch.

Any non-trivial amount of data and you’ll run into non-trivial problems.

For example, some of our pg databases got into such a state that we had to write a custom migration tool, because we couldn't copy data to a new instance using standard tools. We had to rewrite the schema to use custom partitions, because perf on built-in partitioning degrades as the number of partitions gets high, and so on.

nine_k•1mo ago
Postgres is wonderful, and has great many useful extensions. But:

* MySQL has a much easier story of master-master replication.

* Mongo has a much easier story of geographic distribution and sharding. (I know that Citus exists, and I have used it.)

* No matter how you tune Postgres, columnar databases like Clickhouse are still faster for analytics / time series.

* Write-heavy applications still may benefit from something like Cassandra, or more modern solutions in this space.

(I bet Oracle has something to offer in the department of cluster performance, too, but I did not check it out for a long time.)

pstuart•1mo ago
And SQLite is capable enough in many cases too...
hu3•1mo ago
PostgreSQL has no mature Vitess alternative. Hence, the largest oss OLTP database deployments tend to be MySQL. Like YouTube and Uber for example.
bddicken•1mo ago
This is changing soon with Neki.

https://www.neki.dev

hu3•1mo ago
Nice. But it's not even available yet.

It will take years to call this mature. Certainly not "soon".

oblio•1mo ago
For me that page has basically 0 descriptive content.
dangoodmanUT•1mo ago
YouTube has migrated mostly to Spanner from what I hear
turtles3•1mo ago
To be fair, postgres still suffers from a poor choice of MVCC implementation (copy-on-write rather than an undo log). This one small choice has a huge number of negative knock-on effects once your load becomes non-trivial.
sheepscreek•1mo ago
I set this up for my employer many years ago when they migrated to RDS. We kept bumping into issues on production migrations that would wreck things. I decided to do something about it.

The steps were basically:

1. Clone the AWS RDS db - or spin up a new instance from a fresh backup.

2. Get the ARN and, from that, the CNAME or public IP.

3. Plug that into the DB connection in your app

4. Run the migration on pseudo prod.

This helped us catch many bugs that were specific to the production db or data quirks and would never have been caught locally or even in CI.

Then I created a simple ruby script to automate the above and threw it into our integrity checks before any deployment. Last I heard they were still using that script I wrote in 2016!
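The steps above could be sketched with the AWS CLI along these lines (instance identifiers are hypothetical):

```shell
# 1. Clone the production instance from its latest restorable backup
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier prod-db \
  --target-db-instance-identifier pseudo-prod \
  --use-latest-restorable-time

# 2. Wait for it, then grab the endpoint to plug into the app's DB config
aws rds wait db-instance-available --db-instance-identifier pseudo-prod
aws rds describe-db-instances --db-instance-identifier pseudo-prod \
  --query 'DBInstances[0].Endpoint.Address' --output text

# 4. After running the migration against pseudo-prod, tear it down
aws rds delete-db-instance --db-instance-identifier pseudo-prod \
  --skip-final-snapshot
```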

Tostino•1mo ago
I love those "migration only fails in prod because of data quirks" bugs. They are the freaking worst. Have called off releases in the past because of it.
QuercusMax•1mo ago
You should almost never test in prod, but sometimes testing on [a copy of] prod is useful
leetrout•1mo ago
You should almost never stop testing in prod.

https://www.honeycomb.io/blog/testing-in-production

tehlike•1mo ago
Now i need to find a way to migrate from hydra columnar to pg_lake variants so i can upgrade to PG18.
hmokiguess•1mo ago
I’ve been a fan of Neon and its branching strategy, really handy for stuff like this.
wayeq•1mo ago
we just build the database, commit it to an image (without volumes attached), and programmatically stop and restart the container per test class (testcontainers.org). The overhead is < 5 seconds and our application recovers to the reset database state seamlessly. It's been awesome.
eatsyourtacos•1mo ago
I still cannot reliably restore any Postgres DB with the TimescaleDB extension on it; I've tried a million things, but it still fails every time.
tudorg•1mo ago
This is really cool and I love to see the interest in fast clones / branching here.

We've built Xata with this idea of using copy-on-write database branching for staging and testing setups, where you need to use testing data that's close to the real data. On top of just branching, we also do things like anonymization and scale-to-zero, so the dev branches are often really cheap. Check it out at https://xata.io/

> The source database can't have any active connections during cloning. This is a PostgreSQL limitation, not a filesystem one. For production use, this usually means you create a dedicated template database rather than cloning your live database directly.

This is a key limitation to be aware of. A way to work around it could be to use pgstream (https://github.com/xataio/pgstream) to copy from the production database to a production replica. Pgstream can also do anonymization on the way; this is what we use at Xata.
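As an illustration of the connection limitation (database names hypothetical): a clone only succeeds once nothing is connected to the template, so a dedicated template is typically locked down and any stragglers kicked out before cloning:

```shell
# Optionally forbid new connections to the template outright
psql -d postgres -c 'ALTER DATABASE app_template ALLOW_CONNECTIONS false;'

# Kick out any lingering sessions, then clone
psql -d postgres -c "SELECT pg_terminate_backend(pid)
                     FROM pg_stat_activity
                     WHERE datname = 'app_template'
                       AND pid <> pg_backend_pid();"
psql -d postgres -c 'CREATE DATABASE staging_clone TEMPLATE app_template;'
```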