frontpage.

Why Is the US Sliding Toward Authoritarianism?

https://text.tchncs.de/afoqfda99f
1•doener•1m ago•0 comments

The rise of Chinese memory [video]

https://www.youtube.com/watch?v=qzfhhAfxK-A
1•elestor•1m ago•0 comments

Pondpilot: A lightweight local first SQL analytics tool using DuckDB

https://github.com/pondpilot/pondpilot
1•tanelpoder•2m ago•0 comments

A Screen Time Limiting App That Works

https://apps.apple.com/us/app/appblock-app-usage-hard-stop/id6757147703
1•dev_at•2m ago•1 comments

Fear Is More Dangerous Than Evil [video]

https://www.youtube.com/watch?v=GQ3-_olGe2s&list=PLcs1ZorNr2uTGPZPZnBa408qLVHjbMTzT&index=4
1•eamag•3m ago•0 comments

Wix plans to let AI write most code, leaving engineers to redefine their role

https://www.calcalistech.com/ctechnews/article/r1u0ydglzg
1•myth_drannon•3m ago•0 comments

Ask HN: I am a surgeon/doctor. What ai literature/courses would you recommend?

1•emmabruns•6m ago•0 comments

Launch of "Russian Starlink" Postponed Due to Satellite Production Failure

https://militarnyi.com/en/news/launch-of-russian-starlink-postponed-due-to-satellite-production-f...
1•giuliomagnifico•6m ago•0 comments

git-pkgs: A Git subcommand that indexes dependency changes into a database

https://git-pkgs.dev/
1•zdw•7m ago•0 comments

Thucydides's Trap Case File

https://www.belfercenter.org/programs/thucydidess-trap/thucydidess-trap-case-file
1•doener•9m ago•0 comments

Show HN: PromptUI – AI kept giving me the same boring UI. So I fixed it

https://www.promptui.xyz/
1•exos-xyz•12m ago•0 comments

December in Servo: multiple windows, proxy support, better caching, and more

https://servo.org/blog/2026/01/23/december-in-servo/
1•t-3•15m ago•0 comments

Ask HN: To chase "The Flow" or to avoid it as progress doesn't come easy?

2•gajnadsgjoas•16m ago•1 comments

Tao Te Ching – Translated by Ursula K. Le Guin

https://github.com/nrrb/tao-te-ching/blob/master/Ursula%20K%20Le%20Guin.md
2•andsoitis•17m ago•0 comments

US science after a year of Trump: what has been lost and what remains

https://www.nature.com/immersive/d41586-026-00088-9/index.html
13•Anon84•18m ago•0 comments

Shapeshifting materials could power next generation of soft robots

https://techxplore.com/news/2026-01-shapeshifting-materials-power-generation-soft.html
2•Brajeshwar•24m ago•0 comments

Show HN: Kloak – a privacy-first Discord alternative with no email or passwords

https://kloak.app/
1•lakshikag•24m ago•2 comments

Sylve – Modern Bhyve Virtualization and Clustering on FreeBSD

https://gyptazy.com/blog/sylve-a-proxmox-alike-webui-for-bhyve-on-freebsd/
1•indigodaddy•24m ago•0 comments

The future of work when work is meaningless

https://letters.thedankoe.com/p/the-future-of-work-when-work-is-meaningless
3•saikatsg•25m ago•0 comments

ollama launch

https://ollama.com/blog/launch
3•tosh•26m ago•0 comments

We are experiencing an issue with Gmail beginning on Saturday, 2026-01-24 13:02

https://www.google.com/appsstatus/dashboard/incidents/NNnDkY9CJ36annsfytjQ
16•donutshop•27m ago•8 comments

Agent is building things you'll never use

https://mahdiyusuf.com/your-agent-is-building-things-youll-never-use/
3•googletron•27m ago•1 comments

Discourse Poison Fountain

https://github.com/elmuerte/discourse-poison-fountain
2•atomic128•27m ago•1 comments

I made a CLI tool that turns free Gemini into a local AI agent

https://pypi.org/project/gemcli/
3•bossTheCross•28m ago•1 comments

Surviving the Crawlers

https://chronicles.mad-scientist.club/tales/surviving-the-crawlers
2•homebrewer•32m ago•0 comments

Is Greenland in danger of being overrun by Russia and China?

https://www.thetimes.com/world/europe/article/greenland-russians-chinese-nato-8q6pvgdx6
4•lysace•33m ago•4 comments

Progress in Graph Processing (2015)

https://github.com/frankmcsherry/blog/blob/master/posts/2015-12-24.md
2•tosh•34m ago•0 comments

NES Game Genie Technical Notes (2001)

https://tuxnes.sourceforge.net/gamegenie.html
3•PaulHoule•34m ago•0 comments

We won't need CI in 5 years

https://thefridaydeploy.substack.com/p/we-wont-need-ci-in-5-years
3•telliott1984•35m ago•0 comments

Man shot and killed by federal agents in south Minneapolis this morning

https://www.startribune.com/ice-raids-minnesota/601546426
50•oceansky•35m ago•28 comments

After two years of vibecoding, I'm back to writing by hand [video]

https://www.youtube.com/watch?v=SKTsNV41DYg
74•written-beyond•1h ago

Comments

atq2119•1h ago
Having used Claude Code in anger for a while now, I agree that given the state of these agents, we can't stop writing code by hand. They're just not good enough.

But that also doesn't mean they're useless. Giving comparatively tedious background tasks to the agents that I check in on once or twice an hour does feel genuinely useful to me today.

There's a balance to be found that's probably going to shift slowly over time.

exegete•56m ago
To me the biggest benefit has been getting AI to write scripts that automate some things for me that are tedious but not needed to be deployed. Those scripts don’t have to be production-grade and just have to work.
amarant•30m ago
Similar experience. I just tried Claude for the first time last week, and I gave it very small tasks. "Create a data class myClass with these fields<•••> and set it up to generate a database table using micronaut data" was one example. I still have to know exactly what to do, but I find it very nice that I didn't have to remember how to configure micronaut data (which tbf is really easy); I just had to know that that's what I wanted to use.

It's not as revolutionary as the hype, but it does increase productivity quite a bit, and also makes programming more fun, I think. I get to focus on what I want to build instead of trying to remember jdbc minutiae. Then I just sanity check the generated code and trust that I will spot mistakes in that jdbc connection. It felt like the world's most intuitive abstraction layer between me and the keyboard, a pretty cool feeling.

Just for fun, once I had played a bit with it like that, I just told it to finish the application with some vague Jira-epic level instructions on what I wanted in it and then fed it the errors it got.

It eventually managed to get something working but... Let's just say it's a good thing this was a toy project I did specifically to try out Claude, and not something anyone is going to use, much less maintain!

felixbecker•59m ago
Contrary to what most devs believe: most code is not shipped to hundreds of millions of users, and passing the test and actually implementing a feature is worth more than drowning in backlog.

The video is spot on for codebases of products that are critical systems: payment, ERP, etc. -> single source of truth.

Simple CRUD apps / frontends for e-commerce that abstracted away the critical functionality to backend APIs (ERP, shop system, payment, etc.) benefit from vibe slop vs. no shipping cadence.

jgoodhcg•58m ago
I wonder when I will feel compelled to go back. Right now it just feels too productive for me to let AI write the code.
exegete•58m ago
I think the points about code ownership and responsibility are spot on. Management wants you to increase velocity with these agents so inevitably there is pressure to ship crappy code. You then will be responsible for the code, not the AI. It’s the idea of being a reverse centaur.

I also like the comments on how developers should be frequently reading the entire code and not just the diffs. But again there is probably pressure to speed up and then that practice gets sacrificed.

2OEH8eoCRo0•48m ago
The rush it out the door mentality of this industry is stupid.
CharlesW•43m ago
If your Management thinks it's acceptable to increase velocity by shipping crappy code, I'm surprised they never thought to do this before AI.

From what I've seen, companies using AI well are shipping better code because of all the artifacts (supporting context like Architecture Decision Records, project-specific skills and agents, etc.) and tests needed to support that. I understand that many are not using AI well.

mlinhares•57m ago
While I greatly dislike the hype, and don't believe most of what people say is real (whatever they're building is mostly just bullshit), I can definitely see the improvement in productivity, especially when working with agents.

I think the problem is that people:

* see the hype;

* try to replicate the hype;

* it fails miserably;

* they throw everything away;

I'm on call this week at my job, and one of the issues was adding a quick validation (verifying the length of a thing was exactly 15). I could have sat down and done that, but I just spun up an agent, told it where the change went, told it how to add the change (we always add feature flags for that), read the code, prompted it to fix a thing, and boom, the PR was ready. I wrote 3 paragraphs, didn't have to sit and wait for CI or any of the other bullshit to get it done, and focused on more important stuff but still got the fix out.

Don't believe the hype, but also don't completely discount the tools. They are an incredible help, and while they will not boost your productivity by 500%, they're amazing.
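For readers curious what a change like this actually looks like, here is a minimal sketch of a length check gated behind a feature flag. All of the names (the flag store, the flag key, the function) are hypothetical; the commenter's real codebase and flag system aren't shown in the thread.

```python
# Hypothetical sketch of a "length must be exactly 15" validation gated
# behind a feature flag. FLAGS stands in for a real feature-flag service;
# none of these names come from the commenter's codebase.

FLAGS = {"strict_id_length_check": True}

def validate_id(value: str) -> bool:
    """Accept everything when the flag is off; otherwise require len == 15."""
    if not FLAGS.get("strict_id_length_check", False):
        return True  # flag off: preserve the old behaviour
    return len(value) == 15

print(validate_id("ABC123DEF456GHI"))  # 15 characters -> True
print(validate_id("too-short"))        # -> False
```

Shipping the check behind a flag means the new behaviour can be switched off in production without a rollback, which is presumably why the team "always adds feature flags" for changes like this.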

shuraman7•46m ago
A quick validation for which you wrote a prompt 3 paragraphs long? Sounds like you've wasted time.
joenot443•43m ago
For plenty of people, writing three paragraphs of prose they're already aware of can easily be <2min of work.

This comment took 15s, typing can be very fast.

aszen•28m ago
I bet writing the code directly could have been even faster, llms aren't magically fast
cat-snatcher•20m ago
> llms aren't magically fast

They literally are.

zeroonetwothree•10m ago
I had to make a small CSS change yesterday. I asked the LLM to do it, which took about 2 min. I also did it myself at the same time just to check and it took me 23 seconds.
code51•45m ago
Wait, your job was len(x) == 15?
raddan•28m ago
The hardest part is figuring out how to use leftpad.
kranner•44m ago
With respect for your priorities that may be different from mine, it would sadden me a little to always have to work like this, because it would rob me of the chance to think deeply about the problem, perhaps notice analogies with other processes, or more general or more efficient ways to solve the problem, perhaps even why it was necessary at all. Those are the tasty bits of the work, a little something just for me, not for my employer.

Instead my productivity would be optimised in service of my employer, while I still had to work on other things, the more important work you cite. It's not like I get to finish work early and have more leisure time.

And that's not to mention, as discussed in the video, what happens if the code turns out to be buggy later. The AI gets the credit, I get the blame.

Lerc•7m ago
>the chance to think deeply about the problem, perhaps notice analogies with other processes, or more general or more efficient ways to solve the problem, perhaps even why it was necessary at all. Those are the tasty bits of the work, a little something just for me, not for my employer.

You should be aiming to use AI in a way that the work it does gives you more time to work on these things.

I can see how people could end up in an environment where management expects AI use to simply increase the speed of exactly what you do right now. That's when people expect the automobile to behave like a faster horse. I do not envy people placed in that position. I don't think that is how AI should be used, though.

I have been working on test projects using AI. These are projects where there is essentially no penalty for failure, and I can explore the bounds of what they offer. They are no panacea, and people will be writing code for a long while yet, but the bounds of their capability are certainly growing. Working on ideas with them, I have been able to think more deeply about what the code was doing and what it should do. Quite often a lot of the deep thinking in programming is gaining a greater understanding of what the problem really is. You can benefit from asking AI for a quick solution simply to get a better understanding of why a naive implementation will not work. You don't need to use any of that code at all, but it can easily show you why something is not as simple as it seems at first glance.

I might post a Show HN in a bit on a test project I started over the Christmas break. It's a good example of what I mean. I did it in Claude Artifacts instead of using Claude Code, just to see how well I can develop something non-trivial in this manner. There have certainly been periods of frustration trying to get Claude to understand particular points, but some of those areas of confusion came from my presumptions about what the problem was and how it differed from what the problem actually was. That is exactly the insight that you refer to as the tasty bits.

I think there is some adaptation needed to how you feel about the process of working on a solution. When you are stuck on a problem and are trying things that should make it work, the work can absorb you in the process. AI can diminish this, but I think some of that is precisely because it is giving you more time to think about the hard stuff, and that hard stuff is, well, hard.

pipo234•38m ago
> I think the problem is that people:

> * see the hype;

> * try to replicate the hype;

> * it fails miserably;

> * they throw everything away;

I'm sure doing two years of vibecoding is a considerably more sincere attempt than "trying to replicate the hype and failing at it".

returnInfinity•35m ago
This isn't the usecase that's being criticized. Here you are responsible for the fix. You will validate and ship it and if something goes wrong, you will answer for it. The AI won't answer for it.
Aurornis•13m ago
I've started disregarding any AI take that is all-or-nothing. These tools are useful for certain work once you've learned their limitations. Anyone making sweeping claims about vibecoding-style use being viable at scale or making claims that they're completely useless is just picking a side and running with it.

Different outlets tilt different directions. On HN and some other tech websites it's common to find declarations that LLMs are useless from people who tried the free models on ChatGPT (which isn't the coding model) and jumped to conclusions after the first few issues. On LinkedIn it's common to find influencers who used ChatGPT for a couple of things at work and are ready to proclaim it's going to handle everything in the future (including writing the text of their LinkedIn posts).

The most useful, accurate, and honest LLM information I've gathered comes from spaces where neither extreme prevails. You have to find people who have put in the time and are realistic about what can and cannot be accomplished. That's when you start learning the techniques for using these tools for maximum effect and where to apply them.

tbagman•4m ago
> The most useful, accurate, and honest LLM information I've gathered comes from spaces where neither extreme prevails

Do you have any pointers to good (public) spaces like this? Your take sounds reasonable, and so I'm curious to see that middle-ground expression and discussion.

nicoburns•12m ago
> didn't have to sit and wait for CI or any of the other bullshit to get it done

You run CI on human-generated PRs, but not AI-generated PRs? Why would there be a difference in policy there?

servercobra•4m ago
Same, it's so good for these little things. And once you start adding rules and context for the "how to add the change/feature flags", etc., you get those 3 paragraphs down. Now our head of product is able to fire off small changes instead of asking a dev and making them context switch. Devs still review, but the loop is so much shorter.
hajimuz•45m ago
It ships shit now, but it's the right way to go. We need to figure out how to make it ship code that's as high quality as possible, not just give up on it.
returnInfinity•37m ago
The key word from the video is "responsibility".

Who is going to be responsible for the code? AI is definitely not responsible.

mmaunder•36m ago
Try implementing something that is too hard for you. Usually that'll involve implementing math in a high performance language or with parallelization. Then try going back to "writing by hand".
pessimizer•7m ago
> Try implementing something that is too hard for you.

This is almost the only thing I'm against when it comes to LLMs. You have no ability to figure out if it is right, and you will be overly impressed by garbage because you aren't qualified to judge. Has anybody come up with a pithy way to describe Dunning-Kruger for evaluating the output of LLMs, or are people too busy still denying that Dunning and Kruger noticed anything?

When it comes to implementing math, the main problem is that the tiniest difference can make the entire thing wrong, often to the degree of inverting it. I wouldn't be in any way comfortable in shipping something I didn't even understand. The LLM certainly didn't understand it; somebody should.

dfajgljsldkjag•34m ago
Recently I've seen coworkers frequently turn what should be a <10 line bugfix into a 500+ line refactor across multiple files. I suspect it's due to AI.

There's a time and place for refactoring, but just fixing an isolated bug isn't it. But I've seen that often AI can't help itself from making changes you didn't ask for.

feverzsj•28m ago
I feel like AI is already falling apart dramatically in the first month of 2026, with so much negative news.
jesse_dot_id•9m ago
No.
hsaliak•27m ago
There is a balance to be struck. Not everyone is going to be comfortable with ralph loops. Some are going to be OK with running a single agent, some with advanced code completion or code generation for specific functionality and so on.

The tooling is going to change how we do development no doubt, but people are going to find their comfortable spot, and be productive.

treelover•24m ago
"If you're a software developer and you're worried about your job, you haven't spent enough time actually using these AI agents. Anyone who spent eight hours plus a day over the last year using these agents is not at all scared of these agents taking their jobs. They're not... Your job is not going anywhere."

I agree with this take... for now. I wouldn't be surprised if the AI agents improved exponentially (in the next few years) to the point where his statement is no longer true.

throwup238•21m ago
That’s what I said about self driving cars nearly a decade ago!

The 80/20 rule is a painful lesson to internalize but it’s damn near a universal constant now. That last exponential improvement that takes LLMs over the finish line will take a lot longer than we think.

strange_quark•13m ago
I think self driving cars is a good analog. We got lane centering and adaptive cruise control pretty much universally, and some systems are more advanced, but you cannot buy a fully autonomous car. Sure there’s Waymo and others pushing at the edge in very very limited contexts, but most people are still driving their own cars, just with some additional support. I suspect the same will be true for software engineering.
cess11•11m ago
I'm not sure what "exponential improvement" would mean in this context, but large models have been a massively hyped and invested thing for what, three-four years or so, right?

And what do they run on? Information. The production of which is throttled by the technology itself, in part because the salespeople claim it can (and should) "replace" workers and thinkers, and in part because many people have really low standards for entertainment and accept so-called slop instead of cheap tropes manually stitched together.

So it would seem unlikely that they'll get fed the information that would be needed for them to outpace the public internet and widely pirated books and so on.

teucris•19m ago
Software developers should be worried about their jobs, not because these tools are capable of replacing them or reducing a company’s need for human developers, but rather because the _perception_ that they can/will replace developers is causing a major disruption in hiring practices.

I truly don’t know how this is going to play out. Will the software industry just be a total mess until agents can actually replace developers? Or will companies come to their senses and learn that they still need to hire humans - just humans that know how to use agents to augment their work?

thefourthchime•12m ago
Software development hiring is terrible right now, but hiring has been pretty slow in general. We gained 2 million jobs in 2024 and only 500,000 in 2025.
AstroBen•11m ago
That can't possibly be a long term disruption. If it doesn't work it doesn't work

If AI can't replace developers, companies can't replace developers with it. They can try — and then they'll be met with the reality. Good or bad

rekabis•3m ago
> the _perception_ that they can/will replace developers is causing a major disruption in hiring practices.

Bingo. And it’s causing the careers of a majority of juniors to experience fatal delays. Juniors need to leap into their careers and build up a good head of steam by demonstrating acquired experience, or they will wander off into other industries and fail to acquire said experience. But when no-one is hiring, this “failure to launch” will cause a massive developer shortage in the next 5-15 years, to the point where I believe entire governments will have this as a policy pain point.

After all, when companies are loath to actually conduct any kind of on-the-job training, and demand 2-5 years of experience in a whole IT department's worth of skills for "entry level" jobs, an entire generation of potential applicants with a fraction of that (or none at all) will cause the industry to have figurative kittens.

I mean, it will be the industry’s own footgun that has hurt them so badly. I would posit it may even become a leggun. The schadenfreude will be copious and well-deserved. But it’s going to produce massive amounts of economic pain.

SV_BubbleTime•16m ago
Counterpoint…

I am not worried about losing my programming role to AI.

I am worried about hiring employees and contractors. I haven't had to hire anyone in-office since, but I have specifically avoided Upwork and new contractors. It's too hard to tell if anyone knows anything anymore.

Everyone has the right or right enough answers for an interview or test.

The bar to detect bullshit has been moved deeper into the non-detectable range. It’s like everyone has open-book testing for interviews.

Even if I can suss out who is full of shit in a video or phone interview, the number of people I need to sort through is too large to be effective.

For Upwork specifically, this was an issue for years already. With people buying US accounts and lying about their location or subcontracting to cheaper foreign labor.

So, is vibe coding something I want to hire? Absolutely not. But I don't see how to avoid it, or at least how to avoid suffering from someone cutting corners.

pdpi•14m ago
I expect it'll be a sigmoid curve: we're in the exponential growth phase, but it'll flatten out. Then we'll need to wait for the next Big Idea to give us the next sigmoid.
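The sigmoid shape is easy to see numerically. A minimal sketch using the standard logistic function (parameters here are arbitrary, purely illustrative):

```python
import math

# Logistic (sigmoid) curve: near-exponential growth early on, then it
# flattens toward a ceiling. All parameters are arbitrary illustrations.

def logistic(t: float, cap: float = 1.0, rate: float = 1.0, mid: float = 0.0) -> float:
    return cap / (1.0 + math.exp(-rate * (t - mid)))

print(logistic(-4))  # early phase: tiny, growing roughly like e^t
print(logistic(0))   # inflection point: exactly half the ceiling (0.5)
print(logistic(6))   # late phase: pinned just under the cap
```

Early on, each unit of t multiplies the output by about e (looks exponential); past the inflection point, gains shrink toward zero, which is the "flatten out" phase.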
neomantra•23m ago
I really appreciate all of his message: responsibility and actual engineering are critical and can't be (deceptively) lost even though Pull Request and CI/CD workflows exist. I hate the term "vibe coding" because it seems flippant, and I've leaned into "LLM assistance" to frame it better.
SV_BubbleTime•11m ago
I consider vibe coding and LLM-assistance to be distinctively separate things.

I am vibe coding if I need X: I lay out that task with some degree of specificity and ask for the whole result. Maybe it's good; I gave the LLM a lot of rope to hang me with.

I am using an LLM for assistance if I need something like this file renamed, and all its functions renamed to match, and all the project meta to change, and every comment that mentions the old name to be updated. There is an objectively correct result.

It’s a matter of scope.

fibonachos•19m ago
Multiline autocomplete is still the biggest productivity boost for me. This works well in a familiar codebase with reasonably consistent patterns.

After that it’s the “ask” capability when I need to get oriented in unfamiliar and/or poorly documented code. I can often use the autocomplete pretty effectively once I understand the patterns and naming conventions.

Similarly, agents are good for a first pass triage and plan when troubleshooting tricky bugs.

Still haven’t had a good candidate for going full vibe code. Maybe that’s because I don’t do a lot of greenfield coding outside of work, which seems to be where it shines.

Just my experience. It’s new set of tools in the toolbox, but not always the right one for a given task.

ajross•19m ago
The term "vibe coding" is less than a year old!
lombasihir•17m ago
I am new to LLM-assisted coding, but I don't like the way it tries to add more stuff when fixing things. Instead of going for the simplest solution with the least code possible, it recreates a bunch of already-coded logic, and it also mostly tries to code workarounds and accumulates spaghetti-esque code. Any advice? Appreciate it.
catlifeonmars•3m ago
[delayed]
Vaslo•13m ago
All of these "go back to handcoding" posts seem to be written by very experienced coders. The fact that a less-than-mediocre coder like me can have SQL statements or Python ETL written and tested for me in seconds rather than hours is all I need to see.
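As an illustration of the scale of task being described, here is a tiny hand-rolled ETL sketch: extract rows, clean them, load them into SQLite, and verify with a SQL aggregate. Everything here (the table, columns, and cleaning rules) is made up for the example; it just shows the kind of glue work an LLM can draft in seconds for a non-specialist.

```python
import sqlite3

# Toy ETL: extract rows, transform (drop non-positive amounts, normalise
# region names), load into SQLite, then verify with a SQL aggregate.
# All names and rules are hypothetical, invented for this sketch.

def run_etl(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    cleaned = [(region.strip().lower(), amount)
               for region, amount in rows
               if amount > 0]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
    conn.close()
    return total

print(run_etl([(" EU ", 10.0), ("us", 5.5), ("apac", -3.0)]))  # 15.5
```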
dnautics•9m ago
> "in isolation this code makes a lot of sense"..."what is this junk"

I mean I am left with two thoughts:

1. programming language skill issue. Some languages are simply much better at composition than others. I find that yes, this happens, but actually on the order of a day, and once the code is "good", it doesn't really change that much in the grand scheme of things?

2. Even for languages where composition is better, this is exactly what happens with human development too?

catlifeonmars•7m ago
[delayed]
techmetaphorist•9m ago
From my attempts, it still takes more time to fix the code than it would to write it all by hand... BUT! It's getting better, and I look at it as just the next abstraction layer we will get used to. Think of it... Back in university I had to write code on PAPER! To think about memory allocation manually! Then came managed code... then huge SDKs... then smart IDEs with IntelliJ/IntelliSense...

And some dinosaurs even remember riding the CPU with mov rcx, 5 :D

We were just shifting our focus, byte by byte, from the nuts and bolts towards the actual problem we are solving. In other words, going less "hard"-ware and more and more "soft"-ware.

AI is just continuing this evolution, adding another abstraction layer in soft dev process.