frontpage.

Intel Core i9-14900KF reaches 9.2 GHz, setting a new CPU frequency world record

https://www.notebookcheck.net/Intel-Core-i9-14900KF-reaches-9-2Ghz-setting-a-new-CPU-frequency-wo...
1•theanonymousone•2m ago•0 comments

LogTape 2.1.0: Throttling, logfmt, and smarter redaction

https://github.com/dahlia/logtape/discussions/165
2•dahlia•7m ago•0 comments

(VBS-NN) ML – 512k context length pre-training on a 12GB GPU

https://github.com/ega4l/VBS-NN/tree/main/code
2•gromio•7m ago•0 comments

Stochastic Flocks and the Critical Problem of 'Useful' AI

https://www.techpolicy.press/stochastic-flocks-and-the-critical-problem-of-useful-ai/
1•bryanrasmussen•8m ago•0 comments

Construction on Meta's largest data center brings chaos to rural Louisiana

https://lailluminator.com/2025/11/22/meta-data-center-crashes/
2•bwoah•9m ago•0 comments

Coal Makes a Comeback, Fueled by War in the Middle East

https://www.wsj.com/business/energy-oil/coal-makes-a-comeback-fueled-by-war-in-the-middle-east-fb...
1•melling•9m ago•0 comments

AsymFlow: Turning Latent Diffusion Models into Pixel-Space Generators

https://firethering.com/asymflow-pixel-diffusion-image-model/
1•steveharing1•10m ago•0 comments

CUDA Books

https://github.com/alternbits/awesome-cuda-books
2•dariubs•12m ago•0 comments

Astronomers produce most detailed map of the cosmic web, across 13.7B years

https://news.ucr.edu/articles/2026/05/11/astronomers-produce-most-detailed-map-cosmic-web
1•giuliomagnifico•12m ago•0 comments

MatterSim-MT: A multi-task foundation model for materials characterization

https://arxiv.org/abs/2605.07927
1•ttths•14m ago•0 comments

Signals vs. Noise: How to spot architectural shifts

2•moniazamla•16m ago•0 comments

Reducing "show less like this" by 11% with NSFW filtering

https://blog.foryou.club/3mm2fbh4vp22r?auth_completed=true
1•lonk11•18m ago•0 comments

Yes, you can be allergic to water

https://www.popsci.com/health/water-allergy/
1•saikatsg•18m ago•0 comments

Learning-focused CTFs are Facing a Restructure

https://exploiting.systems/posts/2026-05-17-learning-focused-ctfs-are-facing-a-restructure
2•ropbear•18m ago•1 comment

Agentic Trading with Safe Guardrails

https://github.com/ShurikenTrade/shuriken-skills
2•jgan0978•20m ago•1 comment

Self-hosted browser fingerprinting and bot detection with real-world constraints

https://github.com/antoinevastel/fpscanner
1•mmarian•20m ago•0 comments

Show HN: I vibe coded a music box

https://www.quaxio.com/music_box/
1•amenghra•22m ago•0 comments

Hubski

1•Mamimina•24m ago•0 comments

Ask HN: How do you approach a new codebase?

2•praneetbrar•25m ago•3 comments

Post-Quantum JWT Library/Package for Node.js/JS/TypeScript (NIST FIPS 204M-DSA)

https://www.npmjs.com/package/@pq-jwt/core
2•bchain•26m ago•1 comment

The jobs apocalypse: a (very) short history

https://www.economist.com/finance-and-economics/2026/05/14/the-jobs-apocalypse-a-very-short-history
1•dcminter•27m ago•0 comments

What Do You Want?

https://dekodiert.de/en/articles/was-wollt-ihr-eigentlich
2•sdoering•32m ago•0 comments

'Once in a lifetime find': Dinosaur tail discovered trapped in amber (2016)

https://www.cnn.com/2016/12/08/health/dinosaur-tail-trapped-in-amber-trnd
1•downbad_•32m ago•0 comments

Async I/O in Zig 0.16, today

https://lalinsky.com/2026/05/11/async-io-in-zig-016-today.html
1•danborn26•33m ago•0 comments

Refactor: Unified Codebase for Better Performance

https://github.com/thesysdev/openui/pull/517
1•freakynit•35m ago•0 comments

WorkClarity – Free AI tools for freelancers

https://workclarity.co
1•bmackler•35m ago•0 comments

Show HN: CLI for image/video to ASCII art

https://github.com/k-wong/ascii-art-generator
1•kevinwong•36m ago•0 comments

OpenSMTPD Is the Mail Server for the Future

https://bsdly.blogspot.com/2026/05/opensmtpd-is-mail-server-for-future.html
1•peter_hansteen•40m ago•0 comments

How a blind taste competition launched the American wine industry

https://thehustle.co/originals/how-a-blind-taste-competition-launched-the-american-wine-industry
1•Anon84•41m ago•1 comment

GDS weighs in on the NHS's decision to retreat from Open Source

https://shkspr.mobi/blog/2026/05/gds-weighs-in-on-the-nhss-decision-to-retreat-from-open-source/
2•edent•43m ago•1 comment

I don't think AI will make your processes go faster

https://frederickvanbrabant.com/blog/2026-05-15-i-dont-think-ai-will-make-your-processes-go-faster/
38•TheEdonian•50m ago

Comments

usernametaken29•25m ago
Instead of mandatory AI workshops, simply cancel all meetings with more than 3 people and no written agenda, and block that meeting time for productive work instead. That’ll be $2,000 in advisory fees for the insane productivity gains I just unlocked for you. You’re welcome.
teaearlgraycold•16m ago
If people got paid for telling the truth you’d be rich.
praneetbrar•23m ago
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
Havoc•20m ago
It absolutely will make some things faster. Anyone who has ever churned out boilerplate code with it knows that.

...but yeah, most organizational processes and people aren't set up to leverage it, and rollout will be slow (same with learning where it does and doesn't work).

echelon•17m ago
It makes small teams without organizational overhead go lightning fast.

It might be the ultimate tool of disruption.

michaelbuckbee•9m ago
Exactly. The larger the organization, the smaller the percentage of time devs actually spend doing dev work, and the less direct benefit there is from AI-assisted coding tools.
tgv•4m ago
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, which makes cooperation practically impossible. LLMs are not team players.
kj4211cash•15m ago
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.

On the other hand, it feels like we've been over this dozens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend that AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects, and hope they learn something.

cmrdporcupine•3m ago
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.

So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.

So I know what these tools are capable of in a single person's hands. They're amazing.

But I hear the stories from my friends employed at companies setting minimum token quotas, keeping leaderboards of "star AI coders", telling people "not to do code reviews" and to "stop doing any coding by hand", and I shake my head.

I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.

These tools suck for team work or any real team software engineering work.

I'll just let this shake out and sit out until the industry figures it out.

In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.

utopiah•1m ago
I disagree; I think visuals like Gantt charts are precisely the kind of "PM speak" that can be understood. Sure, it won't solve anything as long as the C-suite and investors keep doing innovation signaling, but that itself can only last so long.
p2detar•11m ago
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.

No, the code is actually almost always correct. The way it’s added is probably not what you’re going to like, if you know your code base well enough. There’s some ceremony about where things are added, how they are named, how many comments you’d like and where exactly. Stuff like that irritates people like me when the agent doesn’t get it right, and it seems to fail even when it’s in the AGENTS.md.

> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.

Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare it’s not even worth talking about.

adam_patarino•9m ago
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.

So well said.

AI is unveiling how the bureaucracy is the slow part.

jagged-chisel•3m ago
> AI is unveiling how the bureaucracy is the slow part.

Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.

It’s just that now, we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!

They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.

CharlieDigital•8m ago

> ...but that doesn’t mean it’s generating the correct code.

Something I'm observing is that now a lot of the pressure moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes now, iterating, finding out they built and shipped the wrong thing, and then unwinding.

Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.

This has shortened the cycle to first working prototype, but I'd guess that on a longer timescale it extends the time to final product, because more time is wasted shifting the deliverable and the experience out from under users during this process of discovery, versus nailing most of the product experience up front in big, stable chunks through design.

At the end of the day, there is a hidden cost to fast iterative shifts in the fundamental design of software intended for humans to use and for which humans are responsible for operating. First is the cost to the end users, who have to stop, provide feedback, and then retrain on each cycle. Second, the compounding complexity in the underlying implementation, as product learns requirements and vibe-codes the solution, creates a system that becomes very challenging for humans to operationalize and maintain.

Ultimately, I think the bookends of the software development process are being neglected (as the author points out), to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.

phyzix5761•5m ago
I think when LLMs first came out, people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.

When I was working, we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
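The difference between "get data" and a real spec can be made concrete. A minimal sketch in Python of what pinning down that requirement might look like — every name, field, and value here is hypothetical, invented for illustration, not taken from the story above:

```python
from dataclasses import dataclass

@dataclass
class OrderSummary:
    """The 'data' made explicit: which fields, in which format."""
    order_id: str
    total_cents: int  # integer cents, not floats
    status: str       # one of: "pending", "shipped", "delivered"

def get_recent_orders(user_id: str, limit: int = 10) -> list[OrderSummary]:
    """Return the user's most recent orders, newest first.

    Resolves the vague 'get data and give it to the user' into: what
    data (order summaries), for whom (user_id), how much (limit), and
    in what shape (OrderSummary records).
    """
    # Stubbed in-memory store so the sketch is self-contained.
    store = {
        "u1": [
            OrderSummary("o2", 1999, "shipped"),
            OrderSummary("o1", 500, "delivered"),
        ]
    }
    return store.get(user_id, [])[:limit]
```

The same precision that lets a human implement this without a meeting is what makes an LLM prompt produce a usable result: `get_recent_orders("u1")` has one defensible answer, "get data" has many.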