frontpage.

The Sacred Ass Life Course

https://sacredass.com/
1•ZguideZ•45s ago•1 comments

Ask HN: What will Software Engineering evolve to?

1•lopespm•2m ago•0 comments

Willpower isn't a muscle - Here's a better way to think of it

https://psyche.co/ideas/no-willpower-isnt-a-muscle-heres-a-better-way-to-think-of-it
1•helloplanets•4m ago•0 comments

'Medical freedom' fuels worst US measles outbreak in 30 years

https://www.reuters.com/business/healthcare-pharmaceuticals/doctors-bear-burden-medical-freedom-f...
1•u1hcw9nx•6m ago•0 comments

Show HN: Goutils – 70 type-safe generic functions for async/functional Go

https://github.com/skatiyar/goutils
2•skatiyar•7m ago•0 comments

A Veteran Teacher Shadows Two Students and Learns a Sobering Lesson

https://larrycuban.wordpress.com/2026/01/24/a-veteran-teacher-shadows-two-students-and-learns-a-s...
1•Tomte•7m ago•0 comments

Ask HN: How to control the sound volume on macOS with a SPDIF Adapter?

1•faebi•8m ago•0 comments

Learning nature's assembly language with polymers

https://www.pnas.org/doi/10.1073/pnas.2519094123
1•XzetaU8•12m ago•0 comments

What web businesses will continue to make money post AI?

2•surume•21m ago•2 comments

Changes in MySQL 9.6.0

https://dev.mysql.com/doc/relnotes/mysql/9.6/en/news-9-6-0.html
1•ksec•22m ago•0 comments

I was a dating coach, so I vibe-coded an app to replace myself

https://play.google.com/store/apps/details?id=com.tryagaintext.flirtfix&hl=en_IN
1•bhattattreya•23m ago•1 comments

Show HN: I built a leaderboard ranking AI products by Stripe payment traffic

https://aiboom.tools/ranking
1•HenryZheng99•26m ago•1 comments

FTC wants Apple News to promote more Fox News and Breitbart stories

https://arstechnica.com/tech-policy/2026/02/trump-ftc-denies-being-speech-police-but-says-apple-n...
2•rbanffy•28m ago•0 comments

Show HN: Goxe v1.3.1 Is Out

https://github.com/DumbNoxx/goxe
1•nxus_dev•29m ago•0 comments

Nothing Big Is Happening

https://twitter.com/basedtorba/status/2021985056118415661
1•MrBuddyCasino•29m ago•0 comments

AI Bubble Fears Are Creating New Derivatives

https://www.bloomberg.com/news/articles/2026-02-14/ai-bubble-fears-are-creating-new-derivatives-c...
2•koolhead17•32m ago•0 comments

How to Add DRM to Your Back End (Easy) [2026 Working]

https://maia.crimew.gay/posts/kinemaster-drm/
1•todsacerdoti•33m ago•0 comments

Two different tricks for fast LLM inference

https://www.seangoedecke.com/fast-llm-inference/
6•swah•36m ago•1 comments

MacKenzie Scott: college roommate loaned her $1000 to keep her from dropping out

https://fortune.com/article/mackenzie-scott-roommate-loaned-her-1000-to-not-drop-out-inspired-26-...
1•gurjeet•38m ago•0 comments

Show HN: Retry script for Oracle Cloud free tier ARM instances

2•ekadet•39m ago•0 comments

Seedance 2.0: ByteDance's AI video model with native audio-video co-generation

https://medium.com/@channeler.h/seedance-2-0-bytedances-ai-video-model-that-generates-audio-and-v...
2•howardV•44m ago•0 comments

The Vibe Coding Slot Machine

https://alexanderweichart.de/5_Archive/4_Projects/ai-slot-machine/The-Vibe-Coding-Slot-Machine
2•surrTurr•45m ago•1 comments

Real time comms between agents and IDE AIs

https://xfor.bot/skill
1•petruspennanen•46m ago•1 comments

Darktable 5.4.1 Released

https://github.com/darktable-org/darktable/releases/tag/release-5.4.1
1•ekianjo•46m ago•0 comments

Four Column ASCII (2017)

https://garbagecollected.org/2017/01/31/four-column-ascii/
2•tempodox•48m ago•1 comments

Turned idea dump into full product, while experimenting and learning

https://idea-scout-app.vercel.app/
2•notclawd•51m ago•1 comments

Git is a file system. We need a database for the code

https://gist.github.com/gritzko/6e81b5391eacb585ae207f5e634db07e
5•gritzko•54m ago•0 comments

DjVu and its connection to Deep Learning (2023)

https://scottlocklin.wordpress.com/2023/05/31/djvu-and-its-connection-to-deep-learning/
2•tosh•58m ago•0 comments

Show HN: Shareful.ai – Stack Overflow for AI Coding Agents

https://shareful.ai/
10•thebrownproject•58m ago•12 comments

Running SN on Apple Silicon (2024)

https://atcold.github.io/2024/08/09/SN-code.html
2•tosh•58m ago•0 comments

No Coding Before 10am

https://michaelxbloch.substack.com/p/no-coding-before-10am
26•imartin2k•1h ago

Comments

wesselbindt•30m ago
> If 10x more tokens saves a day, spend the tokens. The bottleneck is human decision-making time, not compute cost.

This seems entirely backwards. Why spend money to optimize something that _isn't_ the bottleneck?

Towaway69•23m ago
Any human in the loop will be a bottleneck in comparison to AI performance.

If we take that to its logical conclusion, I think we can answer that question.

Getting rid of humans, unfortunately, also takes away their earnings and therefore their ability to purchase whatever product you are developing. The ultra rich can only buy your product so often - hence better to make it a subscription model.

So there is tension between purchasing power and earnings. It will be interesting to see what happens, and why.

swiftcoder•23m ago
I think I finally understand why the LLM craze is like catnip to management types - they think they've found a cheat code to work around the Mythical Man-Month.
walterbell•9m ago
https://x.com/a16z/status/2018418113952555445

  For my whole life in technology, there was this thing called the Mythical Man Month: nine women cannot have a baby in a month. If you're Google, you can't just put a thousand software engineers on a product and wipe out a startup because you can only... build that product with seven or eight people. Once they've figured it out, they've got that lead.

  That's not true with AI. If you have data and you have enough GPUs, you can solve almost any problem. It is magic. You can throw money at the problem. We've never had that in tech.
Davidzheng•20m ago
Am I misunderstanding? Spending more tokens is certainly not optimizing for compute cost; it's the opposite.
Cyphase•13m ago
Maybe it could have been written slightly more clearly, but I think the intended meaning is, "If 10x more tokens saves a day, spend the tokens. The bottleneck should be human decision-making time, not agent compute time."
gozzoo•54s ago
I'm not sure I agree with this. 10x more tokens means leaving the agent to work for 10x longer, which may lead to bugs and misinterpretation of the intent. Breaking the goal into multiple tasks seems more efficient, both in tokens and in getting close to the desired goal. Of course this means more human involvement, but probably not 10x more.
koakuma-chan•17m ago
> What would your team’s tenets look like? I’d genuinely love to hear.

My team is incredibly clueless and complacent. I can't even get them to use TypeScript or to migrate from Yarn v1.

pjmlp•16m ago
This is easy, no need for AI: just join any public-sector IT organisation, regardless of the country. :)
isoprophlex•13m ago
lol. My personal preference has always been to do ALL the coding as early as possible. I get progressively dumber as the day wears on; it seems sad to waste the prime hours on meetings and other more human things.

I don't see how that would change if you accept the premise that code is now a commodity.

My_Name•8m ago
That is essentially what the article says: mornings are the most productive time. But it has shifted the focus from you doing the work, mostly in the morning, to you outlining the work clearly in the morning and the agent doing the work all day (and all night, and while you commute, and while you are in meetings).
hbogert•8m ago
Not linear in my case. My best is somewhere around 11ish, so that's usually when I start my Ballmer Peak and take my first beer. (Joking, of course, for people who don't get the reference.)
suddenlybananas•8m ago
This sounds like a genuinely awful way to work.
motbus3•8m ago
Coding tools become less stable as the codebase grows, for several reasons.

Some recent techniques claim to solve this problem, but none has reached a release yet.

Working with what we have now, this is a recipe for disaster. Agents often lie about their outputs. The less free context space they have left to manage, and the more data already sitting in that context, the more prone they are to lying and deceiving.

It works OK for small changes on top of human-written code; that's what we know works now. The rest is yet to be reached.

hanspeter•5m ago
It has always been like this.

Plan before you code. Now your plan is just in a prompt.

jddj•4m ago
> Don’t spec the process, spec the outcome.

For this, which sums up vibe coding and hence the rest of the article, the models aren't yet good enough for novel applications.

With current models, and assuming your engineers are reasonably experienced, for now it seems to result in either greatly reduced velocity and higher costs, or worse outcomes.

One course correction to the planned process, made because the model missed an obvious implication or statement, can save days of churning.

The math only really has a chance to work if you reduce your spend on in-house talent to compensate, and your product sits on a well-trodden path.

In terms of capability, we're still at "could you easily outsource this particular project, low-touch, to your typical software farm?"