frontpage.

Native all the way, until you need text

https://justsitandgrin.im/posts/native-all-the-way-until-you-need-text/
98•dive•1h ago•54 comments

I don't think AI will make your processes go faster

https://frederickvanbrabant.com/blog/2026-05-15-i-dont-think-ai-will-make-your-processes-go-faster/
56•TheEdonian•1h ago•34 comments

Apple Silicon costs more than OpenRouter

https://www.williamangel.net/blog/2026/05/17/offline-llm-energy-use.html
57•datadrivenangel•1h ago•32 comments

Every AI Subscription Is a Ticking Time Bomb for Enterprise

https://www.thestateofbrand.com/news/ai-subscription-time-bomb
46•mooreds•1h ago•30 comments

Zerostack – A Unix-inspired coding agent written in pure Rust

https://crates.io/crates/zerostack/1.0.0
463•gidellav•15h ago•235 comments

Prolog Basics Explained with Pokémon

https://unplannedobsolescence.com/blog/prolog-basics-pokemon/
58•birdculture•2d ago•8 comments

Mozilla to UK regulators: VPNs are essential privacy and security tools

https://blog.mozilla.org/netpolicy/2026/05/15/mozilla-to-uk-regulators-vpns-are-essential-privacy...
370•WithinReason•7h ago•146 comments

A nicer voltmeter clock

https://lcamtuf.substack.com/p/a-nicer-voltmeter-clock
241•surprisetalk•14h ago•29 comments

Colossus: The Forbin Project

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
137•doener•2d ago•47 comments

Hosting a website on an 8-bit microcontroller

https://maurycyz.com/projects/mcusite/
165•zdw•12h ago•14 comments

Moving away from Tailwind, and learning to structure my CSS

https://jvns.ca/blog/2026/05/15/moving-away-from-tailwind--and-learning-to-structure-my-css-/
592•mpweiher•1d ago•330 comments

OpenAI and Government of Malta partner to roll out ChatGPT Plus to all citizens

https://openai.com/index/malta-chatgpt-plus-partnership/
236•bookofjoe•17h ago•281 comments

How Diamonds Are Made

https://diamond.jaydip.me/
22•lemonberry•1d ago•2 comments

Playing Atari ST Music on the Amiga with Zero CPU

https://arnaud-carre.github.io/2026-05-15-ym-fast-emu/
72•z303•5h ago•24 comments

SANA-WM, a 2.6B open-source world model for 1-minute 720p video

https://nvlabs.github.io/Sana/WM/
363•mjgil•1d ago•142 comments

Mado: Fast Markdown linter written in Rust

https://github.com/akiomik/mado
13•nateb2022•2d ago•2 comments

Illusions of understanding in the sciences

https://link.springer.com/article/10.1007/s42113-026-00271-1
64•sebg•2d ago•31 comments

Twilight of the Velocipede: Typesetting Races Before the Age of Linotype

https://publicdomainreview.org/essay/twilight-of-the-velocipede/
30•benbreen•16h ago•1 comment

We've made the world too complicated

https://user8.bearblog.dev/the-world-is-too-complicated/
340•James72689•1d ago•331 comments

Roman Letters

https://romanletters.org/
53•diodorus•2d ago•10 comments

The Third Hard Problem

https://mmapped.blog/posts/48-the-third-hard-problem
107•surprisetalk•3d ago•51 comments

Accelerando (2005)

https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html
308•eamag•1d ago•176 comments

Frontier AI has broken the open CTF format

https://kabir.au/blog/the-ctf-scene-is-dead
396•frays•1d ago•412 comments

Why did Clovis toolmakers choose difficult quartz crystal?

https://phys.org/news/2026-04-clovis-toolmakers-difficult-quartz-crystal.html
33•PaulHoule•2d ago•21 comments

Halt and Catch Fire

https://unstack.io/halt-and-catch-fire
163•ScottWRobinson•19h ago•81 comments

MCP Hello Page

https://www.hybridlogic.co.uk/blog/2026/05/mcp-hello-page
114•Dachande663•15h ago•36 comments

δ-mem: Efficient Online Memory for Large Language Models

https://arxiv.org/abs/2605.12357
226•44za12•1d ago•58 comments

Unknowable Math Can Help Hide Secrets

https://www.quantamagazine.org/how-unknowable-math-can-help-hide-secrets-20260511/
60•Xcelerate•3d ago•13 comments

Self-Distillation Enables Continual Learning [pdf]

https://arxiv.org/abs/2601.19897
78•teleforce•12h ago•20 comments

A molecule with half-Möbius topology

https://www.science.org/doi/10.1126/science.aea3321
103•bryanrasmussen•4d ago•7 comments

I don't think AI will make your processes go faster

https://frederickvanbrabant.com/blog/2026-05-15-i-dont-think-ai-will-make-your-processes-go-faster/
54•TheEdonian•1h ago

Comments

usernametaken29•49m ago
Instead of mandatory AI workshops, simply cancel all meetings with more than 3 people and no written agenda, and block that meeting time for productive work instead. That’ll be $2,000 in advisory fees for the insane productivity gains I just unlocked for you. You’re welcome
teaearlgraycold•39m ago
If people got paid for telling the truth you’d be rich.
steveBK123•12m ago
Yes, there are MANY in tech/non-tech management that will quietly admit that a lot of this top-down stuff is to create the appearance of motion to appease a higher more tech/AI ignorant authority.
praneetbrar•46m ago
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
Havoc•44m ago
It absolutely will make some things faster. Anyone that has ever churned out some boilerplate code with it knows that.

...but yeah, most organizational processes & people aren't set up to leverage it, and rollout will be slow (same with learning where it does and doesn't work).

sarchertech•17m ago
I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.

I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in Go, which is a language I used to be very familiar with, but that I hadn’t touched in about 12 years when I started this. AI definitely helped me get back into Go faster.

But after the initial speed up, I found myself in the last 10% takes the other 90% of the time phase. And it definitely took longer for me to wrap my head around the code than it would have if I’d skipped the AI. I might have some overall speed up, but if so it’s on the order of 10-20%. Nothing revolutionary.

I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.

This is also not from lack of trying: I spent $1,000 last week during a company-wide “AI week”, mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.

I’d let Claude (Opus 4.7, max effort) crank away overnight only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn’t really cheating to pass my quality tests.

What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.

Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).

echelon•40m ago
It makes small teams without organizational overhead go lightning fast.

It might be the ultimate tool of disruption.

michaelbuckbee•32m ago
Exactly. The larger the organization, the smaller the percentage of time devs actually spend doing dev work, and the less direct benefit there is from AI-assisted coding tools.
tgv•28m ago
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, and that makes cooperation practically impossible. LLMs are not team players.
kj4211cash•38m ago
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.

On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.

cmrdporcupine•27m ago
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.

So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.

So I know what these tools are capable of in a single person's hands. They're amazing.

But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.

I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.

These tools suck for team work or any real team software engineering work.

I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.

In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.

utopiah•24m ago
I disagree. I think the visuals, like Gantt charts, are precisely the kind of "PM speak" that can be understood. Sure, it won't solve anything as long as the C-suite and investors do innovation signaling, but that itself can only last so long.
yakattak•5m ago
Sadly I think you’re right. I even shy away from sharing these types of posts at work because it feels like anything that doesn’t mesh with the status quo isn’t received well.
p2detar•35m ago
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.

No, the code is actually almost always correct. The way it’s added is probably not what you’re going to like, if you know your code base well enough. You know there’s some ceremony about where things are added, how they are named, how many comments you’d like to add and where exactly. Stuff like that seems to irritate people like me when the agent doesn’t do it right, and it seems to fail even if it’s in the AGENTS.md.

> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.

Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare it’s not even worth talking about.
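To make the AGENTS.md point concrete, the kind of ceremony rules agents tend to miss look roughly like this. This is a hypothetical excerpt; the paths, names, and rules are invented for illustration, not from any real repo:

```markdown
# AGENTS.md (hypothetical excerpt)

## Code placement
- New HTTP handlers go in `internal/api/`, one file per resource.
- Shared helpers live in `internal/util/`; never duplicate them inline.

## Naming
- Exported functions use VerbNoun names (`LoadConfig`, not `ConfigLoader`).

## Comments
- Comment only non-obvious invariants; do not restate what the code does.
```

Rules like these are exactly the "where things are added, how they are named" ceremony described above — easy for a reviewer to check, and easy for an agent to quietly ignore.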

adam_patarino•32m ago
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.

So well said.

AI is unveiling how the bureaucracy is the slow part.

jagged-chisel•26m ago
> AI is unveiling how the bureaucracy is the slow part.

Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.

It’s just that now, we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!

They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.

adam_patarino•13m ago
Completely agree. It amazes me how some folks think AI is unlike any other technology revolution. History repeats.
cmrdporcupine•6m ago
You're right it's just like any other mechanization/automation revolution. Except it's not.

It's happening about 10x faster than any other I've seen or read about.

Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it’s taken to get robotics into the manufacturing of cars at scale. I worked through the .com boom and I can tell you that "webification" took 10 years or more for most businesses (and many of them have since given up and just have a Facebook page instead).

This is a little insane what's happening now. It really does change everything. People who don't work in software I don't think have any idea what's coming.

steveBK123•9m ago
Bureaucracy cannot learn from the problems of past bureaucracy, because doing so is against its self-interest.

Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.

CharlieDigital•31m ago

> ...but that doesn’t mean it’s generating the correct code.
Something I'm observing is that now a lot of the pressure moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes now, iterating, finding out they built and shipped the wrong thing, and then unwinding.

Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.

This has shortened the cycle to first working prototype, but I'd guess that over the longer term it extends the time to final product, because more time is wasted shifting the deliverable and the experience on the user during this process of discovery, versus nailing most of the product experience in big, stable chunks through design.

At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and for which humans are responsible for operating. First is the cost to the end users, who have to stop, provide feedback, and then retrain on each cycle. Second, the compounding complexity in the underlying implementation, as product learns requirements and vibe-codes the solution, creates a system that becomes very challenging for humans to operationalize and maintain.

Ultimately, I think the bookends of the software development process are being neglected (as author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.

phyzix5761•28m ago
I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.

When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
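A concrete illustration of that contrast: the same request as the vague ticket text versus as an explicit contract that an LLM (or a human) can actually implement against. All names, fields, and the sort order here are invented for the example:

```python
from dataclasses import dataclass

# Vague requirement: "Get data and give it to the user."
# Precise requirement (hypothetical refinement of the same ask):
#   - "data" = the customer's open orders
#   - returned newest first, as plain JSON-serializable dicts

@dataclass
class Order:
    order_id: str
    placed_at: str   # ISO 8601 timestamp, so string sort == time sort
    status: str      # "open" | "shipped" | "cancelled"

def open_orders_for(customer_orders: list[Order]) -> list[dict]:
    """Return the customer's open orders, newest first, as plain dicts."""
    open_ones = [o for o in customer_orders if o.status == "open"]
    open_ones.sort(key=lambda o: o.placed_at, reverse=True)
    return [vars(o) for o in open_ones]
```

The second form answers up front exactly the questions the product person used to be interrogated about: what the data is, its shape, and how to return it.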

steveBK123•14m ago
> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

This is a big HN LLM discussion divide. I am in the same no-specs camp, so the idea that the humans who feed that kind of input to dev teams are suddenly going to get anything out of an LLM by inputting the same thing directly is laughable. In most orgs in my career there has been no product person and we just talked directly to end users.

For that kind of org, it will accelerate some parts of the SWEs job at different multipliers, but all the non-dev work to get there with discussions, discovery, iteration, rework, etc remains.

If the input to your work is a 20-page specification document to accompany multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.

shalmanese•10m ago
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.

This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullets [1] essay under the sections "Expert Systems" and "Automatic Programming".

In it, he lays out the core features of vibe coding and exactly the experience we are having now with it: initial success in a few carefully chosen domains, and then a reasonable but not groundbreaking increase in productivity as it expands outside of those domains.

[1] https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...

rubyfan•9m ago
We now have product owners trying to farm out their work to an LLM. The process didn’t work before because the person writing the requirements put out either vague or bad requirements, because they didn’t understand the business intent (or were careless).

LLMs just take the same vague or poor requirements and make them look believable until you dig in to them.

r2ob•23m ago
Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.
eddy-sekorti•20m ago
Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating speed for anyone who is not stuck in corporate bureaucratic processes.
sunir•16m ago
Our current most popular methods of using AI in software development are either waterfall or autocomplete. We aren't at a great pair programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.
shalmanese•15m ago
This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.

Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.

The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.

But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.

I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.

pu_pe•13m ago
Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some management overhead on top to check whether their investment is on the right track. This approval process is bound to change given that development is far cheaper and faster now.

Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.

sillysaurusx•11m ago
I actually have data on this. I’ve been building sharc, a Common Lisp port of Hacker News. https://www.github.com/shawwn/sharc

If that sounds familiar, it’s because it’s what dang did over the course of several years.

It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)

A couple of days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple of hours. I kept the experiment on separate branches, because the code is a mess, just like all AI-generated code starts out. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.

The real work is in the specifications. My port of HN is missing around a hundred features. Things from favorited comments, to hiding threads, to being able to unvote and re-vote.

But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.

It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.

AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.

king_geedorah•10m ago
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.

This is how I felt when I first started seeing people discuss things like AGENTS.md etc.

delichon•9m ago
The promise of AI is in doing things that couldn’t be automated before at all, at least not economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process: from whenever Susan has time for it, to right now.
chilmers•7m ago
It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.