frontpage.

What Killed Flash Player

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•9s ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•1m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•7m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•8m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•8m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
6•bookofjoe•8m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•9m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•10m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•11m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•11m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•11m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•11m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•12m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•13m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•13m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•14m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•18m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•18m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•19m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•19m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•21m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•21m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•22m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•22m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•22m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•23m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•23m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•24m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•25m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•26m ago•1 comments

AI and Home-Cooked Software

https://mrkaran.dev/posts/ai-home-cooked-software/
61•todsacerdoti•4mo ago

Comments

_aavaa_•3mo ago
“Every line of AI-generated code is a plausible-looking liability. It may pass basic tests, only to fail spectacularly in production with an edge case you never considered.”

Every time I read something along these lines, I have to wonder whose code these people review during code reviews. It’s not like the alternative is bulletproof code.

adocomplete•3mo ago
I was thinking the same thing. Humans push terrible code to production all the time that slips through code reviews. You spot it, you fix it, and move on.
kanwisher•3mo ago
Also a lot of the AI code reviewer tools catch bugs that you wouldn't catch otherwise
resize2996•3mo ago
I do not know the future, every line of code is a plausible-looking liability.
moomoo11•3mo ago
They set up a GitHub action that has AI do an immediate first pass (hallucinates high on drugs and not the good kind) and leave a review.

Considering 80% of teammates are usually dead weight or mid at best (every team is carried by the 1 or 2 people who do 2-3x the work), they will do the bare-minimum review. Let’s be real: PIPs are real. Job-hopping when things go bad is real.

It’s a problem. I have dealt with this and had to fire.

peacebeard•3mo ago
A lot of people seem to equate using AI tools with deploying code that you don’t understand. All code should be fully understood by the person using the AI tool, then again by the reviewer. The productivity benefit of these tools is still massive, and there is benefit to doing the research and investigation to understand what the LLM is doing if it was not clear up front.
ruszki•3mo ago
> All code should be fully understood by the person using the AI tool, then again by the reviewer.

Should, yeah. But that was not true even before LLMs.

peacebeard•3mo ago
Correct. The problem of poor code review is not new and it is not unique to LLMs.
nakamoto_damacy•3mo ago
The G in AGI is a big deal, and it’s missing from LLMs.

Anything coded by an LLM risks being under-generalised.

Asking an LLM to think in a generalised way does not make it an AGI. The critical ability to generalise beyond learned patterns, and not merely to come up with arbitrary patterns but to use correct logic to derive them, is missing from LLMs, because LLMs do not have a logical layer, only a probabilistic one with learned constraints. The defect is the lack of internal logical constraints. It’s a big subject.

I say more about it here:

https://www.forbes.com/sites/hessiejones/2025/09/30/llms-are...

Aka

“layered system”

Balinares•3mo ago
Good code is explicit about its assumptions and enforces them; good companies set hiring bars so as to filter out developers that can't write good code.

There's no such thing as bulletproof, but there is definitely such a thing as knowing where your vital organs are and how to tell when they've been hit.

_aavaa_•3mo ago
> good companies set hiring bars so as to filter out developers that can't write good code.

And the others are going to be replaced by one engineer and an AI of equivalent caliber.

hitarpetar•3mo ago
that's right, all your coworkers are incompetent but YOU have the secret
_aavaa_•3mo ago
Aside from what you think I think of myself, do you have an actual disagreement or counter-argument to what I said?

1. Many currently employed developers objectively produce code of equal or lower quality than AI as it currently stands.

2. It is cheaper and more productive to hire fewer, more competent people and replace the less productive ones with AI (possible due to 1).

3. Short of regulations preventing it, companies will follow through on 2.

hitarpetar•3mo ago
> Many developers are currently employees who objectively produce code of equal or lower quality than AI as it currently is.

what can be asserted without evidence can also be dismissed without evidence

MostlyStable•3mo ago
I've made this point before, and in the short to medium term, I really do think it's one of the biggest and most underrated uses of AI. If I am making a tool for myself, and only myself, and if I deeply understand both the inputs and the expected outputs, and if it's a one off script or tool designed to solve a specific problem I have, then a huge swath of the issues with AI coding go away.

It's still not perfect, but it is dramatically easier to be fast and productive, and it is a huge leap in capabilities for people who previously couldn't code anything at all, but had deep enough domain knowledge to know what tools they wanted, and approximately how they should work, what kind of information they should ingress, and what kind of information they should egress.

bitwize•3mo ago
Ah, another "Now that we have AI, people can do [thing people could do for decades]" article. If there was something you wanted a computer to do, that it did not yet do, you programmed it. And if you didn't know how, you learned. BASIC was always there.

But the industry as a whole moved away from the idea that end users are to program computers sometime in the 80s or 90s (the glorious point and click future was not evenly distributed). So now the only tools for writing software out there are either outdated, or require considerable ceremony to get started with (even npm install). So what, we're gonna paper over the gap with acres of datacenter stealing our energy and fresh water to play token numberwang? Fuck me!

This article, and generative AI in general, is appealing to the people on Golgafrinchian Ark Fleet Ship B (aka "the managerial class") because it helps them convince themselves that they can now do all the things the folks on Golgafrinchian Ark Ship A can do (so who needs them, anyway) without having to learn anything. Now you can program without having to program! You're an Idea Person, and that's what's really important; so just idea-person into ChatGPT and all the rest will be taken care of for you. I think these folks are in for a rude awakening.

djmips•3mo ago
I feel like you've never actually tried to make a tool with Claude Code or similar, because BASIC is not it; that's viewing the past with rose-coloured glasses. However, I understand your central thesis: we could have put real effort into making something that average folks could use to effectively leverage computers in a way that requires 'code'. But you know we have tried: we have Scratch, and we have all the node-graph spaghetti in Unreal Engine and others.

I am a programmer, but I finally sat down and went through the process of making a working, finished tool in a language I'm unfamiliar with using Claude Code, and it went really well. And if folks like Ben Krasnow of the Applied Science channel are using AI coding tools for things that would formerly have taken them 3 to 5x longer to struggle through unfamiliarity, then practically speaking it's working, though I take your point about 'at what cost'. But the idea that we could have been living in some utopian BASIC-derived alternate universe seems a little optimistic to me. I like AI coding (if I don't have to think about the costs).
bitwize•3mo ago
I'm not saying BASIC is it. But it was good enough in its day—my father used it to write engine simulations. It was the first language to attack the problem "getting computing nonprofessionals to write their own programs for their own needs" and it achieved that very well by the standards of the 01960s-01980s. But the fact that every computer shipped with a language that allowed users to get started with programming right away, was a noble thing we should have sought to preserve even in the present day. Scratch is for kids, and node spaghetti presents the usual no-code issues. HolyC comes close, but you know, Terry Davis. An acquired taste.

I was kinda hoping that language would be Python, but even that requires ceremony these days.

pickledonions49•3mo ago
From what I can tell, some professionals seem to perceive it as a way to skip writing the easy stuff and only deal with the harder, more specific stuff that LLMs don't get (because they are incapable of new ideas). I don't know all the facts, but it seems as though this "home-cooked software" boom will be really dull because of LLM limitations. It always seems like I'm actually learning something when copying code from books; maybe that's one reason the 80s were interesting in terms of software. But what do I know, I wasn't alive then.
seabombs•3mo ago
In my experience, the people using AI to make programs are still programmers. As in, they were trained in programming pre-AI. Managers are using AI to output manager stuff - documents, spreadsheets, etc. Similarly, marketers are using AI to write ad copy, not the marketing manager.

This may change as tools become better known or more widely adopted. But for now, the same people who did the job before are now using AI in that job.

FWIW, in my IRL experience, all the work output of those using AI for whatever task has been of poorer quality than without.

heeton•3mo ago
Member of Golgafrinchian Ark Fleet Ship C here.

I like to make stuff, hack on projects. I code, I woodwork, I solder, I build. I love AI in the same way I love a router and dovetail jig in the workshop.

My son and I were playing minecraft, and we wanted to build a massive egg, just for fun. (My son is 5).

We try, we fail, we try again. We start learning about spheres and how to draw circles, this is peak project-based-learning. But we still can't build an egg that looks good, and now it's becoming less fun and there's no drive to keep trying.

So I spin up claude after hours, and in ~30 mins I have a parametric egg-generator in 3d space, mapped to voxels. There is no universe in which my child would be interested in the many years of training to get to the point of building this himself. I also don't have 20 hours free to learn and implement my own 3d voxel rendering system, just to build an egg in minecraft as a silly teaching exercise.

That weekend we try to use this tool, and we see it's really hard to just see an egg model and build it, so 15 minutes later it can now show us a sliced layer-by-layer view.

That weekend, my son built a massive fucking egg in Minecraft, and he's been talking about circles and radiuses and eggs and coding software ever since. He was SO excited to see that we could take a running program, something "real" in the world, and then directly change it. (And now he's trying to learn about code and graphics and stuff. Again, he's 5; this is the passing curiosity of a child, not someone studying 50-hour courses to learn low-level skills.)

Are you saying that's not a massive win for everyone involved?

SpecialistK•3mo ago
I feel personally targeted :D

Programming classes didn't work out for me in college, so I went into sysadmin with a dash of Devops.

Now I can make small tools for things like syncing my living room PC to a big LED panel above the TV (it was app-only, but there's a reverse-engineered Python library, which I vibe-coded a frontend for) or an orchestration script which generates a MAC address, assigns a DHCP reservation in OPNsense, and creates the VM or CT using that MAC and a password which gets stored in my password manager.
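The MAC-generation piece of a script like that is small enough to sketch. A hypothetical version (the function name is mine) that stays in the locally administered, unicast range, so it can't collide with vendor-assigned hardware addresses:

```python
import secrets

def random_unicast_laa_mac():
    """Generate a random MAC in the locally administered, unicast range.

    Setting bit 1 of the first octet marks the address as locally
    administered; clearing bit 0 keeps it unicast, which is what you
    want for handing out to VMs and containers.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{o:02x}" for o in octets)
```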

I could have done either of these projects myself with a few weekends and tutorials. Now it's barely an evening for proof of concept and another evening to patch it up to an acceptable level. Knowing most of the stuff around coding itself definitely helped a lot.

jjmarr•3mo ago
Long-term SWEs at non-tech companies will spend much of their time reviewing vibe coded features/prototypes/scripts from non-technical employees and scaling them once they become critical infrastructure.

This'll eliminate jobs in the "develop CRUD app" industry but will create better jobs in security/scalability/quality expertise. But it'll take a few years as all these vibe coded business process scripts start to fail.

Programmers miss the human element, which is that many managers look at a software project as too risky, even if automating a business process could trivially save money. There are millions of people in the USA who spend most of their day manually transferring data from incompatible systems.

AI allows anyone to create a hacky automation that demonstrates immediate value but has obvious flaws that can be fixed by a skilled SWE. That will make it easier to justify more spending on devs.

bitwize•3mo ago
That is literally the exact promise of CASE tools in the 80s and early 90s, of UML code-generation tools in the 2000s, and of "low-code/no-code" platforms in the 2010s. It turned out to be a disaster every time, especially when the Idea Persons chucked their creations over the wall to SWEs to bash them into actual products, because the Idea Persons had Far More Important Things To Do than maintain their coalesced brain farts.

We're repeating history but with more energy consumption.

ndileas•3mo ago
I wasn't around for the second millennium versions. At some point, doesn't there exist a kind of activation energy threshold where enough money/promise etc is gained from the prototype that this pattern works for good ideas and not for bad ones?
bonsai_spool•3mo ago
I do think that there’s a difference in kind here - we’re not producing UML graphs that require programmer time to implement (or sending the diagrams to SE Asia and then code reviewing).

The code ‘works’ - and the folks who are improving the prototype can also benefit from the tools that the Idea Person used.

sarchertech•3mo ago
The code worked for the examples the OP gave as well. They weren’t talking just about UML graphs, but about automated tools to turn those graphs into code.

And in the case of low code/no code, those produced working prototypes as well. And you could in most cases export them to raw code.

quxbar•3mo ago
LLMs unlock a fundamentally different paradigm of interaction: in my experience, a non-technical person with a good humanities background can describe what they want adequately, without needing to master the arbitrary grammar of a no-code system. Does it often, perhaps inevitably, turn out to be a 'toy' version of what a real business needs? Yes, but it's still strictly better than previous ways of working.
jjmarr•3mo ago
It was a disaster because the tools were too difficult to use by their end-users, not because the software quality sucked.

Meanwhile my mom can vibe code actual Python scripts to do parts of her job now.

ares623•3mo ago
There had better be new bootcamps on how to maintain these systems, because without CRUD jobs how would someone new get the experience?
macNchz•3mo ago
I've been thinking in a similar way over the past year or so—we're seeing the emergence of more widespread access to custom software for use cases that previously never would have justified the investment.

There are so many situations where a little program can save one or a few people hours of work on tedious tasks but wouldn't make sense to build or support given traditional software-development overhead; that becomes realistic with AI-assisted development. It's the idea of the sysadmin who has automated 80% of his job with a variety of shell scripts, borne out into many other roles.

We've seen the early phases of it with Replit and Lovable et al, but I think there's a big playing field for secure/constrained runtime environments for non-developers to be able to build things with AI. Low/no-code tooling increasingly seems anachronistic to me: I prefer code, just let an AI write and maintain it.

There's also a whole world of opportunity around the fact that many people who could benefit greatly from AI-built programs are simply not particularly suited to building them themselves, barring a dramatically more capable AI, where I think enterprising software engineers can likely spin out tons of useful stuff and address a client base they might never have previously been able to address.

chasd00•3mo ago
I'm not an LLM superfan, but I do find them useful for coming up with one-off scripts to process data files or other small tasks that would have taken me a couple of hours to get right. The LLM produces something 85% of the way there, including command-line argument processing and all that stuff I hate to type out. I just fix the broken bits, make some adjustments, and I have what I need in about 20 minutes. For that kind of task they are very useful, IMO.
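A sketch of the kind of one-off this describes: a minimal data-file processor with the argument plumbing included. Everything here (file layout, column name, defaults) is illustrative, not from the comment.

```python
import argparse
import csv

def main(argv=None):
    # The boring-but-necessary argument plumbing an LLM typically drafts.
    parser = argparse.ArgumentParser(
        description="Sum a numeric column in a CSV file."
    )
    parser.add_argument("path", help="input CSV file")
    parser.add_argument("--column", default="value", help="column name to sum")
    args = parser.parse_args(argv)

    # Read the file and total the requested column.
    with open(args.path, newline="") as fh:
        total = sum(float(row[args.column]) for row in csv.DictReader(fh))
    print(total)
    return total

if __name__ == "__main__":
    main()
```

The 15% left to fix by hand is usually in the data handling (bad rows, encodings), not in this scaffolding.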
ivanech•3mo ago
AI tools have been so good for me for making home-cooked software. As a new-ish parent, it’s so much easier to do stuff. I don’t need to go into extra-deep focus mode to learn how to center a div for the hundredth time, I can spend that precious focus time on the problems that matter / the core motivation.
infinitezest•3mo ago
I find LLMs really useful on a daily basis, but I keep wondering: what's going to happen when the VC money dries up and the real cost of inference kicks in? It's relatively cheap now, but it's also being heavily subsidized. The usual answer is to jam ads into your product and slowly increase the price over time (see: Netflix), but I don't know how that would work for LLMs.
evolighting•3mo ago
You could self-host Ollama, vLLM, or something like that; open models are good enough for simple tasks, and with a bit of extra effort and learning this usually just works for most cases.

But in that situation there may be no further updates; the future remains uncertain.
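For the curious, talking to a locally hosted Ollama server is just an HTTP POST to its `/api/generate` endpoint. This sketch assumes Ollama is running on its default port with some model already pulled; the model name is only an example:

```python
import json
import urllib.request

def build_generate_request(prompt, model="llama3.2",
                           host="http://localhost:11434"):
    """Build the POST request for Ollama's /api/generate endpoint.

    `stream: False` asks for one complete JSON response instead of
    a stream of partial chunks.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def ask_local_model(prompt, **kw):
    """Send the prompt to the local server and return the generated text."""
    with urllib.request.urlopen(build_generate_request(prompt, **kw)) as resp:
        return json.loads(resp.read())["response"]
```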

Gigachad•3mo ago
Local llms are good for language based tasks where no specific knowledge is needed, but certainly not programming.
pickledonions49•3mo ago
I heard that photonic chip stuff might make running this stuff cheaper in data center environments.
dzink•3mo ago
AI use on corporate code means exponentially compounding complexity. Even if carefully planned for, AI enables more features to be added, and more will be added à la carte, until the tower inevitably becomes bigger than humans can manage, especially if they are allowed ever less time to maintain a fast-growing pile of code. That means eventually large enough codebases will be manageable only by AI, or not at all. Talk about lock-in.
horizonVelox999•3mo ago
AI tools aren't perfect, but they're great for quick personal projects where you understand exactly what you need. It's like having a helpful assistant who writes the boring parts while you focus on solving the actual problem.
bronlund•3mo ago
This is all temporary. In a not too distant future, people will just get the AI to simulate whatever application they want directly - skipping that annoying programming stage altogether :D https://www.youtube.com/watch?v=dGiqrsv530Y
bayindirh•3mo ago
I can't wait to see all remote exploits embedded into the code by these AI servants. No need to phish targets. The tools send everything automatically.