Ask HN: Maintaining code quality with widespread AI coding tools?

3•raydenvm•9mo ago
I've noticed a trend: as more devs at my company (and in projects I contribute to) adopt AI coding assistants, code quality seems to be slipping. It's a subtle change, but it's there.

The issues I keep noticing:

- More "almost correct" code that causes subtle bugs
- Less consistent architecture across the codebase
- More copy-pasted boilerplate that should be refactored

I know the counterargument: maybe we shouldn't care about overall quality, since eventually only AI will be reading the code anyway. But that future is still some way off. For now, we have to manage the speed/quality balance ourselves, with AI agents helping.

So I'm curious: for teams that are making AI tools work without sacrificing quality, what's your approach? Is there anything new you're doing, like special review processes, new metrics, training, or team guidelines?
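For the "new metrics" part, here's one crude but concrete sketch of a repository gate. The threshold and the two signals (over-long functions, structurally identical function bodies as a proxy for copy-pasted boilerplate) are my own illustrative choices, not anything the thread prescribes; it uses only Python's stdlib ast module.

```python
import ast
import hashlib

MAX_LINES = 50  # arbitrary threshold; tune per team

def audit(source, filename="<src>"):
    """Return (long_functions, duplicate_pairs) for one Python module."""
    tree = ast.parse(source, filename)
    long_funcs, seen, dups = [], {}, []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_LINES:
                long_funcs.append((node.name, length))
            # Hash the parsed body: this flags functions whose bodies are
            # structurally identical, i.e. exact copy-pasted boilerplate.
            body = ast.Module(body=node.body, type_ignores=[])
            digest = hashlib.sha1(ast.dump(body).encode()).hexdigest()
            if digest in seen:
                dups.append((seen[digest], node.name))
            else:
                seen[digest] = node.name
    return long_funcs, dups
```

A CI job could run audit() over changed files and fail the build, or just leave a PR comment, whenever either list is non-empty.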

Comments

mentalgear•9mo ago
I also share this experience/concern.

Yet it could be as easy as having a specialised model that acts as a code quality checker, refactorer, or QA tester.

Also, Claimify (from MS Research) could be interesting for isolating claims about what the code should do, and then writing granular unit tests to cover them.

raydenvm•9mo ago
Thanks for sharing! Never heard of claimify, already looking into it...
furrball010•9mo ago
I share your concern, but perhaps for a different reason. I think the more code is added, the more problems/bugs emerge, whether a human or AI codes it.

However, with AI coding tools it's becoming a lot easier to write A LOT of code. And all this code (similar to when a human would write it) adds complexity and bugs. So it's not just the quality, it's also the quantity of code that damages existing code bases (in my view).

raydenvm•9mo ago
Yeah, more code in the same amount of time. And then it's tough to find the extra time for code review.
sargstuff•9mo ago
Code quality? It's more a question of management quality. AI provides the ability to spot possible issues/conflicts sooner.

You really need to adhere to a set of defined specifications (functional / non-functional / domain-specific) at the work or project level, and/or keep checking at which level(s) the specifications are still relevant after they're defined (historically done via different management levels). Note: that doesn't necessarily mean rigid specs first, code next, documentation last.

A significant amount of coding is "DFA"-like once the pre/post environment is set and defined: repository check-in/out can be set up to do specification checking/diffing for auto-documentation, and language/project feature requirements (use, do not use, only use when, never use) can be checked and filtered there. Above a certain size, spotting "re-inventions" becomes an AI statistical-inference problem, scaling with the amount of information available.

For non-DFA, i.e. "context sensitive" stuff: AI would only make sense if there were a way to compare specifications with "intentions", i.e. to generate confidence in how well a newer coder has been on-boarded, relative to their coding attempts and the project/work specifications. Perhaps it could also give workplace management insight into how relevant things are (vs. "the worker is the issue"), i.e. non-adherence to a spec because the spec doesn't cover the issue(s); time to review the spec. You still need human(s) in the loop to figure out the relevant tangibles/intangibles, but AI can certainly help identify ambiguities in specifications and in how specifications are implemented/used, i.e. code debt and code drift.
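The check-in idea above (repository hooks doing specification checking) could be sketched as a minimal pre-commit gate. The specs/ directory, the file patterns, and the rule "code changes must touch a spec" are my own illustrative assumptions, not a convention anyone in the thread defines:

```python
from fnmatch import fnmatch

# Both of these are illustrative assumptions, not a standard layout.
CODE_PATTERNS = ("*.py", "*.go", "*.ts")  # files we treat as "code"
SPEC_DIR = "specs/"                       # where written specs are assumed to live

def needs_spec_update(staged_paths):
    """True when code files are staged but nothing under SPEC_DIR is.

    `staged_paths` is the output of `git diff --cached --name-only`,
    split into lines.
    """
    code_touched = [p for p in staged_paths
                    if any(fnmatch(p, pat) for pat in CODE_PATTERNS)]
    spec_touched = [p for p in staged_paths if p.startswith(SPEC_DIR)]
    return bool(code_touched) and not spec_touched
```

Wired into .git/hooks/pre-commit, the hook would feed it the staged file list and exit non-zero (with a pointer to the relevant spec) when the check fires; `git commit --no-verify` remains the escape hatch.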