
The AI Vampire

https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163
47•SilverElfin•1h ago

Comments

Der_Einzige•1h ago
The last time I saw so much blue and orange (see the images) was the era of Battlefield 3/Battlefield 4 box art. I really do miss it tbh.

https://upload.wikimedia.org/wikipedia/en/6/69/Battlefield_3...

https://cdn11.bigcommerce.com/s-yzgoj/images/stencil/1280x12...

Downvoted within 1 minute. This website is a trash fire.

AceJohnny2•46m ago
> Downvoted within 1 minute. This website is a trash fire.

Because your comment was completely off-topic.

kaoD•1h ago
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.

Well, that explains the sloppy results Microsoft is delivering lately!

paganel•1h ago
> Jeffrey Emanuel and his 22 accounts at $4400/month

Paying $4.4k per month for the privilege of writing code is absolute madness. I'm not quite sure how we got to this point, but it's still madness. Maybe Yegge is indeed right, maybe this is just like regular gambling/addiction, which sucks when it comes to being a programmer, but at least it keeps the dopamine levels high.

Smaug123•1h ago
It's not per se madness; companies pay much more than that for code. Instead it's an empirical question about whether they're getting that value from the code.
paganel•58m ago
The difference is that if those companies were to rely only on the AI part, and hence turn us (computer programmers) into mere copy-pasters or less, then in about one to two years the "reasoning" behind the latest AI models would become stale, because there would be no new human input. So good luck with that.

But my comment was not about companies, it was just about writing code, about the freedom that used to come from it, about the agency that we used to have. There's no agency and no freedom left when you start paying that much money in order to write code. I guess that can work for some companies, but for sure it won't work for computer programmers as actual human beings (and imo this blog-post itself tries to touch on that aspect).

Ygg2•1h ago
I'm starting to think Steve Yegge lost it. Or he never had it in the first place.
dgxyz•47m ago
He never had it. To be fair, I haven't worked out whether he's just a highly skilled troll.
iberator•1h ago
AI takes jobs faster than it creates new ones.

It should be banned in its current form. There are no junior positions available; only those who lasted even get the chance to use these tools in commercial settings. After the layoffs you will see how BAD it is :/ (if you are 35+)

Hammershaft•47m ago
Even as a software developer affected by it, I don't think it should be banned. Productivity improvements are how we get richer in aggregate over the long term, even if those impacted (like you & me) might feel the brunt of transitional pain.
mjr00•1h ago
I think Yegge hit the nail on the head: he has an addiction. Opus 4.5 is awesome but the type of stuff Yegge has been saying lately has been... questionable, to say the least. The kids call it getting "one-shotted by AI". Using an AI coding assistant should not be causing a person this much distress.

A lot of smart people think they're "too smart" to get addicted. Plenty of tales of booksmart people who tried heroin and ended up stealing their mother's jewelry for a fix a few months later.

davedx•17m ago
I'm a recovering alcoholic. One thing I learned from therapists etc. along the way is that there are certain personality types with high intelligence, and also higher sensitivity to other things, like noise, emotional challenges, and addictive/compulsive behaviour.

It does not surprise me at all that software engineers are falling into an addiction trap with AI.

rvz•56m ago
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal.

Yeah, it is over for several roles, especially frontend web development, given that Opus 4.6 is able to one-shot your React frontend from a Figma design 90% of the time.

Why would I want to hire 10 senior frontend developers at a $200K asking salary at this point, when one AI can replace 9 of them (yes, it can) and requires only a single junior-level engineer at a significantly lower price?

This idea is very tempting for many companies looking to continue slashing headcount and find 'cost savings' by using AI to do more with fewer employees.

Hammershaft•42m ago
I think Opus 4.5 & 4.6 are an impressive step up in capabilities but I'm really skeptical that this model is replacing the output of 9/10 skilled front end engineers as a project grows beyond the early stages.
epicureanideal•36m ago
> Yeah, it is over for several roles, especially frontend web development

Only if the front end was super simple in the first place, IMO. And also only for the v1, which is still useful, whereas for ongoing development I think AI leads people down a path of tools that cost more to maintain and build on.

It may be that AI leads to framework and architecture choices best suited to AI, with great results up front, and then all the same challenges and costs of quick and dirty development by a human. Except 10x faster so, by the time anyone in management realizes the mess they’re in, and the cost/benefit ratio tilts negative even in the short run as opposed to the obvious to engineers long term, there’s going to be so much more code in that bad style that it’s 10x more expensive for expert humans to fix it.

delichon•53m ago
He talks about this new tech for extracting more value from engineers as if it were fracking. When they become impermeable you can now inject a mixed high pressure cocktail of AI to get their internal hydrocarbons flowing. It works but now he feels all pumped out. But the vampire metaphor is hopefully better in that blood replenishes if you don't take too much. A succubus may be an improved comparison, in that a creative seed is extracted and depleted, then refills over a refractory period.
dwedge•47m ago
Every time I say I don't see the productivity boost from AI, people always say I'm using the wrong tool, or the wrong model. I use Claude with Sonnet, Zed with either Claude Sonnet 4 or Opus 4.6, Gemini, and ChatGPT 5.2. I use these tools daily and I just don't see it.

The vampire in the room, for me, seems to be feeling like I'm the only person in the room that doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore. Articles like this are part of the problem, not the solution.

Sure, they are great for generating some level of code, but the deeper it goes the more it hallucinates. My first or second git commit from these tools is usually closer to a working full solution than the fifth one. The time spent refactoring prompts, testing the code, repeating instructions, refactoring naive architectural decisions, and double-checking hallucinations when it comes to research takes more than the time AI saves me. This isn't free.

A CTO this week told me he can't code or brainstorm anymore without AI. We've had these tools for 4 years, like this guy says - either AI or the competition eats you. So, where is the output? Aside from more AI-tools, what has been released in the past 4 years that makes it obvious looking back that this is when AI became available?

augment_me•31m ago
I am with you on this, and you can't win, because as soon as you voice this opinion you get overwhelmed with "you dont have the sauce/prompt" opinions which hold an inherent fallacy because they assume you are solving the same problems as them.

I work in GPU programming, so there is no way in hell that JavaScript tools and database wrapper tasks can be on equal terms with generating for example Blackwell tcgen05 warp-scheduled kernels.

WithinReason•29m ago
In my experience LLMs are useless for GPU compute code, just not enough in the training set.
augment_me•17m ago
Yeah, the argument here is that once you say this, people will say "you just dont know how to prompt, i pass the PTX docs together with NSight output and my kernel into my agent and run an evaluation harness and beat cuBLAS". And then it turns out that they are making a GEMM on Ampere/Hopper which is an in-distribution problem for the LLMs.

It's the idea/mindset that, since you are working on something where the tool has a good distribution, it's a skill issue or mindset problem for everyone else who is not getting value from the tool.

exfalso•26m ago
Exact same experience.

Here's what I find Claude Code (Opus) useful for:

1. Copy-pasting existing working code with small variations. If the intended variation is bigger then it fails to bring productivity gains, because it's almost universally wrong.

2. Exploring unknown code bases. Previously I had to curse my way through code reading sessions, now I can find information easily.

3. Google Search++, e.g. for deciding on tech choices. Needs a lot of hand holding though.

... that's it? Any time I tried doing anything more complex I ended up scrapping the "code" it wrote. It always looked nice though.

Davidzheng•23m ago
I don't understand what including the timeframe of "4 years" does for your argument here. I don't think anyone is arguing that these AIs were useful for real projects as far back as GPT 3.5/4. Do you think the capabilities of current AIs are approximately the same as GPT 3.5/4 from 4 years ago? (Actually, I think SOTA 4 years ago might have been LaMDA, as GPT 3.5 wasn't out yet.)
enraged_camel•4m ago
Yeah. I started integrating AI into my daily workflows December 2024. I would say AI didn't become genuinely useful until around September 2025, when Sonnet 4.5 came out. The Opus 4.5 release in November was the real event horizon.
Terr_•17m ago
I'm an AI hipster, because I was confusing engagement for productivity before it was cool. :P

TFA mentions the slot machine aspect, but I think there are additional facets: The AI Junior Dev creates a kind of parasocial relationship and a sense of punctuated progress. I may still not have finished with X, but I can remember more "stuff" happening in the day, so it must've been more productive, right?

Contrast this to the archetypal "an idea for fixing the algorithm came to me in the shower."

Scaevolus•12m ago
Many engineers get paid a lot of money to write low-complexity code gluing things together and tweaking features according to customer requirements.

When the difficulty of a task is neatly encompassed in a 200 word ticket and the implementation lacks much engineering challenge, AI can pretty reliably write the code-- mediocre code for mediocre challenges.

A huge fraction of the software economy runs on CRUD and some business logic. There just isn't much complexity inherent in any of the feature sets.
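To make the "mediocre code for mediocre challenges" point concrete, here is a minimal sketch of the ticket-sized CRUD-plus-business-logic work being described (the class, fields, and the "urgent" rule are invented for illustration):

```python
# A hypothetical "200-word ticket" deliverable: an in-memory CRUD store
# with one piece of business logic bolted on. Nothing here is hard;
# the difficulty fits entirely in the ticket description.

class TicketStore:
    def __init__(self):
        self._items = {}
        self._next_id = 1

    def create(self, title, priority="normal"):
        item = {"id": self._next_id, "title": title, "priority": priority}
        self._items[self._next_id] = item
        self._next_id += 1
        return item

    def read(self, item_id):
        return self._items.get(item_id)

    def update(self, item_id, **fields):
        item = self._items.get(item_id)
        if item is not None:
            item.update(fields)
        return item

    def delete(self, item_id):
        return self._items.pop(item_id, None)

    def urgent(self):
        # The "some business logic" part: a simple customer-facing rule.
        return [i for i in self._items.values() if i["priority"] == "high"]
```

Code at this level of complexity is exactly what the comment argues current models handle reliably: the spec is short, the glue is generic, and there is no real engineering decision to get wrong.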

amelius•11m ago
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

― Roy Amara

fileeditview•46m ago
All this praise for AI.. I honestly don't get it. I have used Opus 4.5 for work and private projects. My experience is that all of the AIs struggle as the project grows. They always find some kind of local minimum they cannot get out of, yet tell you that this time their solution will work.. but it doesn't. They waste an enormous amount of my time with this behaviour. In the end I always have to do it myself.

Maybe when AIs are able to say: "I don't know how this works" or "This doesn't work like that at all." they will be more helpful.

What I use AIs for is searching for stuff in large codebases. Sometimes I don't know the name or the file name and describe to them what I am looking for. Or I let them generate some random task python/bash script. Or use them to find specific things in a file that a regex cannot find. Simple small tasks.

It might well be I am doing it totally wrong.. but I have yet to see a medium to large sized project with maintainable code that was generated by AI.

Bishonen88•39m ago
At what point does the project outgrow the AI, in your experience? I have a 70k LOC backend/frontend/database/docker app that Claude still one-shots for most features/tasks I throw at it. Perhaps it's not as good at remembering all the intertwined side-effects between functionalities/UIs, and I have to let it know "in the calendar view, we must hide it as well", but that takes little time/effort.

Does it break down at some point to the extent that it simply does not finish tasks? Honest question as I saw this sentiment stated previously and assumed that sooner or later I'll face it myself but so far I didn't.

mschild•9m ago
I find that with more complex projects (full-stack application with some 50 controllers, services, and about 90 distinct full-feature pages) it often starts writing code that simply breaks functionality.

For example, had to update some more complex code to correctly calculate a financial penalty amount. The amount is defined by law and recently received an overhaul so we had to change our implementation.

Every model we tried (and we have corporate access and legal allowance to use pretty much all of them) failed to update it correctly. Models would start changing parts of the calculation that didn't need to be updated. After saying that the specific parts shouldn't be touched and to retry, most of them would go right back to changing it again. The legal definition of the calculation logic is, surprisingly, pretty clear and we do have rigorous tests in place to ensure the calculations are correct.
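The "rigorous tests" safeguard mentioned above can be sketched as follows. This is a hypothetical penalty function with invented rates and thresholds (not the actual statute or the commenter's code); the point is that pinning exact values makes any model-generated edit to the untouched parts of the formula fail loudly:

```python
# Hypothetical legally-defined calculation, pinned by regression tests.
# Rates, cap, and day-count convention are invented for illustration.

def late_penalty(amount_due, days_late, base_rate=0.08):
    """Penalty = amount * daily rate * days late, capped at 20% of the amount due."""
    if days_late <= 0:
        return 0.0
    penalty = amount_due * (base_rate / 365) * days_late
    return round(min(penalty, 0.20 * amount_due), 2)

# An agent "helpfully" rewriting the cap, the daily rate, or the
# day-count convention breaks these assertions immediately.
assert late_penalty(1000.0, 0) == 0.0
assert late_penalty(1000.0, 30) == round(1000.0 * (0.08 / 365) * 30, 2)
assert late_penalty(1000.0, 5000) == 200.0  # hits the 20% cap
```

With tests like these in place, the failure mode described (models silently changing parts of the calculation that were supposed to stay fixed) is at least caught before review rather than after.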

Beyond that, it was frustrating trying to get the models to stick to our coding standards. Our application has developers from other teams doing work as well. We enforce a minimum standard to ensure code quality doesn't suffer and other people can take over without much issue. This standard is documented in the code itself but also explicitly written out in the repository in simple language. Even when explicitly prompting the models to stick to the standard and copy pasting it into the actual chat, it would ignore 50% of it.

The most apt comparison I can make is that of a consultant who always agrees with you to your face but, when doing the actual work, ignores half of your instructions, so you end up running after them trying to minimize the mess and the cleanup you have to do. It outputs more code, but the code doesn't meet the standards we have. I'd genuinely be happy to offload tasks to AI so I can focus on the more interesting parts of my work, but from my experience and that of my colleagues, it's just not working out for us (yet).

fhd2•38m ago
I think most of us - if not _all_ of us - don't know how to use these things well yet. And that's OK. It's an entirely new paradigm. We've honed our skills and intuition based on humans building software. Humans make mistakes, sure, but humans have a degree and style of learning and failure patterns we are very familiar with. Humans understand the systems they build to a high degree, this knowledge helps them predict outcomes, and even helps them achieve the goals of their organisation _outside_ writing software.

I kinda keep saying this, but in my experience:

1. You trade the time you'd take to understand the system for time spent testing it.

2. You trade the time you'd take to think about simplifying the system (so you have less code to type) into execution (so you build more in less time).

I really don't know if these are _good_ tradeoffs yet, but it's what I observe. I think it'll take a few years until we truly understand the net effects. The feedback cycles for decisions in software development and business can be really long, several years.

I think the net effects will be positive, not negative. I also think they won't be 10x. But that's just me believing stuff, and it is relatively pointless to argue about beliefs.

blahblaher•44m ago
He's totally correct about the extraction that companies do (always has been). What I kinda disagree with is the notion that if a company doesn't go the same path as these others, where everyone is "10x'ing" with AI, it will suddenly disappear. I really don't think it will work that way. Yeah, some might, if another company/startup goes after their business and builds faster, but building faster doesn't mean you're building what people want/need. You might be building bloat (Windows/MS) that no one cares about.

Companies still need to know what to build, not just build something/anything faster.

Bishonen88•42m ago
Some interesting parts in the text. Some not so interesting ones. The author seems to think he's a big deal, though; a month ago, I did not know who he was. My work environment has never heard of him (SDE at FAANG). Maybe I'm an outlier and he indeed influences the whole expectation management at companies with his writing, or maybe the success (?) of gastown got to him and he thinks he's bigger than he actually is. Time will tell. In any case, the glorification of oneself in an article like that throws me off for some reason.
strstr•39m ago
Popular blogger from roughly a decade ago. His rants were frequently cited early in my career. I think he’s fallen off in popularity substantially since.
arjie•5m ago
He's early Amazon and early Google, so he's seen two companies super-scale. Few people last through two paradigm shifts, though that's no guarantee of credentials. At the time, he was famous for a specific accidentally-public post that exposed people to how much Bezos's influence ramified through Amazon, and how his choices contrasted with Google's approach to platforms.

https://news.ycombinator.com/item?id=3101876

TomasBM•40m ago
We're certainly in the middle of a whirlwind of progress. Unfortunately, as AI capabilities increase, so do our expectations.

Suddenly, it's no longer enough to slap something together and call it a project. The better version with more features is just one prompt away. And if you're just a relay for prompts, why not add an agent or two?

I think there won't be a future where the world adapts to a 4-hour day. If your boss or customer also sees you as a relay for prompts, they'll slowly cut you out of the loop, or reduce the amount they pay you. If you instead want to maintain some moat, or build your own money-maker, your working hours will creep up again.

In this environment, I don't see this working out financially for most people. We need to decide which future we want:

1. the one where people can survive (and thrive) without stable employment;

2. the one where we stop automating in favor of stable employment; or

3. the one where only those who keep up stay afloat.

mbgerring•37m ago
This is a good time to repeat that software engineers need a union. We needed this ten years ago, and we need it a lot more now.
bsaul•25m ago
As a european, yes please America, get a union. Get 2 even. You're going too fast, you're way too successful, we can't keep up.

So yes, please adopt our work ethic and legal framework. It's going to help us tremendously.

missingdays•5m ago
Are there any software engineer/quality assurance/other IT-related unions in the EU? How do I join one?
mrkeen•20m ago
What concrete interests would you like such a union to protect?

Should a strike happen if devs are told to use Claude, or should a strike happen if devs aren't given access to Claude?

socialcommenter•36m ago
https://archive.md/ks83q

Let's spare the guy some web traffic.

walthamstow•20m ago
he's hosted on Medium
nhinck3•35m ago
> Let’s start with the root cause, which is that AI does actually make you more 10x productive, once you learn how.

> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.

And what wonders they've achieved with it! Truly innovative enhancements to notepad being witnessed right now! The inability to shut down your computer! I can finally glimpse the 10x productivity I've been missing out on!

amoss•30m ago
Nobody wants to admit that we are living through this: https://xkcd.com/1319/

But at scale. Yegge gets close to it in this blog (which actually made me lol, good to see that he is back on form), but shies away from it.

If AI is producing a real productivity boom then we should be seeing a flood of high-quality non-AI related software. If building and shipping software is now easier and faster then all of the software that we have that doesn't quite work right should be displaced by high quality successors. It should be happening right now.

So where is it? Why is all this velocity going into tooling around AI instead? Face it, an entire industry has fallen into the trap of building the automation instead of the product they were trying to automate the production of.

Where is the new high quality C compiler that actually compiles the linux kernel to a measurably higher quality than gcc? If AI is really increasing productivity shouldn't we have that instead of a press-oriented hype flop?

benreesman•28m ago
I am a long time fan of Steve Yegge but he's too much part of the groupthink at this point.

You can't win with Claude Code. I understand that his API key isn't on the PID controller, so he gets a less bad deal, but he's still breaking even with some gee whizz factor.

Agents are like people on a long enough timeline: they will eventually do the lazy thing. But this happens in minutes not years.

If you don't have them on tracks made of iron, you are on a sugar high that will crash.

Formal methods, zero sorry, or it's another bounty for the vibecode cleanup guys.

koliber•15m ago
Am I getting Steve's point? It's a bit like what happened with the agricultural revolution.

A long time ago, food took effort to find, and calories were expensive. Then we had a breakthrough in cost per calorie. We got fat, because we cannot moderate our food intake. It is killing us.

A long time ago, coding took effort, and programmer productivity was expensive. Then we had a breakthrough in cost per feature. Now we are exhausted, because we cannot moderate our energy and attention expenditure. It is killing us.

juanre•12m ago
I’m in Steve’s demographic, showing similar symptoms, and I’m as worried as he is about how we’re going to cope.

It’s a matter of opportunity cost. It used to be that when I rested for an hour, I lost an hour of output. Now, when I rest for an hour, I lose what used to be a day of output.

I need to rewire my brain and learn how to split the difference. There’s no point in producing a lot of output if I don’t have time to live.

The idea that you’ll get to enjoy the spoils when you grow up is false. You won’t. Just produce 5x and take some time off every day. You may even be more likely to reflect, and end up producing the right thing.

hebrides•12m ago
>With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value.

I keep hearing about this 10x productivity, but where is it materializing? Most developers at my company use Claude Code, but we don't seem to be shipping new features at ten times the rate. In fact, tickets still take roughly the same amount of time to complete.

larodi•11m ago
It's real, and I've been telling all the people around me who get invested in this sort of exponential growth to be very wary of the impending burnout, which spares no soul hungry to get high on information. Getting high on information is now a thing; it is not cyberpunk fiction anymore, and burnout is a real threat, VR or not. Perhaps one can burn out on TikTok these days.
brushfoot•4m ago
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete.

These hyperbolic takes from Steve are wearing thin.

It wasn't my experience that Opus 4.5/4.6 was a sea change. It was a nice incremental improvement.

The same goes for Claude Code. Steve doesn't quantify what differentiates it from the other CLIs for him.

> And unfortunately, all your other tools and models are pretty terrible in comparison.

Personally, I like Copilot CLI. $10 a month for 300 requests. Copilot will keep working until it fulfills your request, no matter how many tokens it uses.

Put a large feature doc in a prompt. Copilot will chug away at it with Opus 4.6 or Sonnet 4.5. It could use millions of tokens. It doesn't matter. If you run out of prompts, each additional one costs just $0.04. The value is ridiculous.

Calling all other tools "pretty terrible" without specifics reminds me of crypto FOMO from the 2010s.
