frontpage.

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•41s ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•2m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•4m ago•0 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•7m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•12m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•13m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•17m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•29m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•30m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•31m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•44m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•47m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•50m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•58m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•59m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•1h ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

The Leverage Paradox in AI

https://www.indiehackers.com/post/lifestyle/the-leverage-paradox-ksRiX6y6W7NzfBE57dzt
81•ChanningAllen•5mo ago

Comments

kazinator•5mo ago
If we give runners motorcycles, they reach finish lines faster. But the motor sport is still competitive and takes effort; everyone else has a bike, too. And since the bike parameters are tightly controlled (basically everyone is on the same bike), the competition is intense.
ares623•5mo ago
The cost of losing the race is losing your home and starving. Very intense.
interstice•5mo ago
The analogy holds because it's way more expensive and stressful, and the stakes are higher. Also, it's harder to get into without already having an advantage (like rich parents).
satisfice•5mo ago
This article says that the stairs have been turned into an escalator. But I think it’s an escalator to slop.

Therefore, it doesn’t affect my work at all. The only thing that affects my prospects is the hype about AI.

Be a purple cow, the guy says. Seems to me that not using AI makes me a purple cow.

sdesol•5mo ago
> Therefore, it doesn’t affect my work at all.

But that isn't what the author is talking about. The issue is that your good code can end up being valued the same as slop that works. What the author says needs to happen is that you need to find a better way to stand out. I suspect that for many businesses where software superiority is not a core requirement, slop that works will be treated the same as non-slop code.

bobnamob•5mo ago
> slop that works

Until that slop that works leads to therac-26 or PostOfficeScandal2 electric boogaloo. Neither of those applications required software superior to their competitors, just working software

The average quality of software can only trend down so far before real world problems start manifesting, even outside of businesses with a hard requirement on "software superiority"

satisfice•5mo ago
Anyone can say that something works. Lots of things look like they work even though they harbor severe and elusive bugs.
oceanplexian•5mo ago
It's so bizarre to me seeing these comments as a professional software engineer. Like, you do realize that at least 80% of the code written in large companies like Microsoft, Amazon, etc. was slop long before AI was ever invented, right?

The stuff you get to see in open source, papers, and academia is a small, curated 1%; the rest is glue code written by an overworked engineer at 1am that holds literally everything together.

satisfice•5mo ago
Why is it bizarre? I’m a tester with 38 years in the business. I’ve seen pretty much every kind of project and technology.

I was testing at Microsoft the week that Windows 2000 shipped, showing them that Photoshop could completely freeze Windows (which is bad, and something they needed to know about).

The creed of a tester begins with faith in the existence of trouble. This does not mean we believe anything is perfectible; it means we think it is necessary to be vigilant.

AI commits errors in a way and to a degree that should alarm any reasonable engineer. But more to the point: it tends to alienate engineers from their work so that they are less able to behave responsibly. I think testing is more important than ever, because AI is a Gatling gun of risk.

satisfice•5mo ago
You are focusing on code. That is the wrong focus. Creating code was never the job. The job was being trustworthy about what I deliver and how.

AI is not worthy of trust, and the sort of reasonable people I want to deal with won’t trust it and don’t. They deal with me because I am not a simulation of someone who cares— I am the real thing. I am a purple cow in terms of personal credibility and responsibility.

To the degree that the application of AI is useful to me without putting my credibility at risk, I will use it. It does have its uses.

(BTW, although I write code as part of my work, I stopped being a full-time coder in my teens. I am a tester, testing consultant, expert witness, and trainer now.)

personjerry•5mo ago
This seems like an insubstantial article; ironically, it might have been written by AI. Here's the entire summary:

AI makes slop

Therefore, spend more time to make the slop "better" or "different"

[No, they do not define what counts as "better" or "different"]

lawlessone•5mo ago
I've been thinking something similar about any company that has AI do all of its software dev.

Where's your moat? If you can create the software with prompts so can your competitors.

Attackers who know which model(s) you use could also run similar prompts and inspect the output code to speculate about what kinds of exploits your software might have.

A lawyer knowing what model his opposition uses could speculate on their likely strategies.

davidhunter•5mo ago
I’d suggest reading about competitive moats and where they come from. The ability to replicate another’s software does not destroy their moat.
hamdingers•5mo ago
The set of commercially successful software that could not be reimplemented by a determined team of caffeinated undergrads was already very small before LLM assistance.

Turns out being able to write the software is not the only, or even the most important factor in success.

bonoboTP•5mo ago
TL;DR relative status is zero sum
cortesoft•5mo ago
Do people really try to one-shot their AI tasks? I have just started using AI to code, and I found the process very similar to regular coding… you give a detailed task, then you iterate by finding specific issues and giving the AI detailed instructions on how to fix the issues.

It works great, but I can’t imagine skipping the refinement process.

ssharp•5mo ago
Every tool I've tinkered with that hints at one-shotting (or one-shot and then refine) ends up with a messy app that might be 60-70% of what you're looking for, but since the foundation is not solid, you're never going to get the remaining 30-40% of your initial prompt, let alone the multiples of work needed to bolt on future functionality.

Compare that to the approach you're using (which is what I'm also doing): you're able to have AI stay much closer to what you're looking for, be less prone to damaging hallucinations, and also guide it to a foundation that's stable. The downside is that it's a lot more work. You might multiply your productivity by some single digit.

To me, that second approach is much more reasonable than trying to 100x your productivity but actually getting less done, because you end up stuck in a rabbit hole you don't know you're in and will never refine your way out of.

braaileb•5mo ago
I got stuck in that rabbit hole you mention. I ended up ditching AI and just picked up a no/low-code web app builder, because I don't handle large project contexts in my own head well enough to chunk the design into tasks that AI can handle. But the builder I use can separate the backend from the front end, which allows a custom front-end template's source code to be consumed by an AI agent if you want. I'm hoping I can manage this context better, but I still have to design and deploy a module that consumes user-submitted photos and processes them with an AI model for instant quote generation.
sdesol•5mo ago
> Do people really try to one-shot their AI tasks?

Yes. I almost always end with "Do not generate any code unless it can help in our discussions, as this is the design stage." I would say 95% of my code for https://github.com/gitsense/chat in the last 6 months was AI generated, and I would say 80% of it was one-shots.

It is important to note that I can easily get into 30+ messages of back and forth before any code is generated. For complex tasks, I will literally spend an hour or two (which can span days) chatting and thinking about a problem with the LLM, and I do expect the LLM to one-shot them.

jplusequalt•5mo ago
Do you feel as if your ability to code is atrophying?
sdesol•5mo ago
Not even remotely, since the 5% that I need to write is usually quite complex. I do think my writing proficiency will decrease, though. However, my debugging and problem-solving skills should increase.

Having said all of that, I do believe AI will have a very negative effect on developers where the challenge is skill and not time. AI is implementing things that I could do if given enough time. I am literally implementing things in months that would have taken me a year or more.

My AI search is nontrivial, but it only took two months to write. I should also note that the 5% I needed to implement was the difference between throwaway code and a usable search engine.

jplusequalt•5mo ago
>Not even remotely since the 5% that I need to write is usually quite complex.

Not sure I believe this. If you suddenly automate away 95% of any task, how could it be the case you retain 100% of your prior abilities?

>However my debugging and problem solving skills should increase

By "my", I assume you mean "my LLM"?

>I do think my writing proficiency will decrease though.

This alone is cause for concern. The ability for a human being to communicate without assistance is extremely important in an age where AI is outputting a significant fraction of all new content.

sdesol•5mo ago
> Not sure I believe this. If you suddenly automate away 95% of any task, how could it be the case you retain 100% of your prior abilities?

I need to review like crazy now, so it is not like I am handing off my understanding of the problem. If anything, I learn new things from time to time, as the LLM will generate code in ways that I haven't thought of before.

The AI genie is out of the bottle now, and I do believe that in a year or two companies are going to start asking for the conversations along with the LLM-generated code, which I guess is how you can determine whether people are losing their skills. When my code is fully published, I will include the conversations for every feature/bug fix that is introduced.

> The ability for a human being to communicate without assistance is extremely important

I agree with this, but once again, it isn't like I don't have to review everything. When LLMs get much better, I think my writing skills may decline, but as it currently stands, I do find myself having to revise what the LLM writes to make it sound more natural.

Everything is speculation at this point, but I am sure I will lose some skills, and I also think I will gain new ones by being exposed to things that I haven't thought of before.

I wrote my chat app because I needed a more comfortable way to read and write *long* messages. For the foreseeable future, I don't see my writing proficiency decreasing in any significant manner. I can see myself being slower to write in the future, though, as I find myself being very comfortable speaking to the LLM in a manner that I would not use with a human. LLMs are extremely good at inferring context, so I do a lot of lazy typing now to speed things up, which may turn into a bad habit.

pcfwik•5mo ago
> This is the leverage paradox. New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.

Off-topic, but in biology circles I've heard this type of situation (where "it takes all the running you can do, to keep in the same place" because your competitors are constantly improving as well) called a "Red Queen's race," and I really like the picture that analogy paints.

https://en.wikipedia.org/wiki/Red_Queen%27s_race

EGreg•5mo ago
Also known as induced demand, which is why adding a lane on the highway doesn't help for long.

https://en.wikipedia.org/wiki/Induced_demand

sefrost•5mo ago
I feel that I understand the leverage paradox concept, and the induced demand concept, but I don't understand how they are the same concept. Can you explain the connection a little more?
EGreg•5mo ago
More leverage = more productivity = more supply of goods and services.

The induced demand for more goods and services therefore fills the gap and causes people to work just as hard as before -- similarly to how a highway remains full after adding a lane.
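
A toy numeric sketch of that connection (purely illustrative numbers and functional forms, not a real traffic or economic model): faster travel induces more trips, so part of any capacity increase is absorbed at the new equilibrium.

    # Toy induced-demand model in Python; every constant here is an assumption for illustration.
    def travel_time(volume: float, capacity: float, free_flow: float = 10.0) -> float:
        """Minutes per trip; congestion grows with the volume/capacity ratio."""
        return free_flow * (1.0 + 0.15 * (volume / capacity) ** 4)

    def demand(minutes: float, base_volume: float = 1000.0, elasticity: float = 1.0) -> float:
        """Trips people choose to make; more trips when travel is faster."""
        return base_volume * (10.0 / minutes) ** elasticity

    def equilibrium(capacity: float, iters: int = 200) -> tuple[float, float]:
        """Damped fixed-point iteration between congestion and demand."""
        volume = 1000.0
        for _ in range(iters):
            volume = 0.5 * volume + 0.5 * demand(travel_time(volume, capacity))
        return volume, travel_time(volume, capacity)

    for capacity in (1000.0, 2000.0):  # before and after "adding a lane"
        trips, minutes = equilibrium(capacity)
        print(f"capacity={capacity:.0f}  trips={trips:.0f}  minutes/trip={minutes:.1f}")

In this sketch, doubling capacity does not halve travel time, because the extra trips it induces absorb part of the gain; the same shape of argument applies to productivity tooling in a competitive market.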

Dracophoenix•5mo ago
This circumstance is more commonly known as the Jevons Paradox

https://en.wikipedia.org/wiki/Jevons_paradox

fxtentacle•5mo ago
My prediction is that the next differentiator will be response time.

First we got transparent UIs, now everyone has them. Then we got custom icons, then Font Awesome commoditized them. Then flat UI until everyone copied it. Then those weird hand-painted Lottie illustrations, and now thanks to Gen-AI everyone has them. (Then Apple launched their 2nd gen transparent UI.)

But the one thing that neither caffeinated undergrads nor LLMs can pull off is making software efficient. That's why software that responds quickly to user input will feel magical and stand out in a sea of slow and bloated AI slop.

patrickhogan1•5mo ago
This is Allan Schnaiberg's concept of the treadmill of production, in which actors are perpetually driven to accumulate capital and expand the market in an effort to maintain their relative economic and social position.

It's interesting that radical abundance may create radical competition to use the more abundant materials, again in an effort to maintain relative economic and social position.

patrickhogan1•5mo ago
People dislike the word slop because it sounds harsh.

But what’s unique today becomes slop tomorrow, AI or not.

Art has meaning. Old buildings feel special because they're rare. If there were a thousand Golden Gate Bridges, the first wouldn't stand out as much.

Online, reproduction is trivial. With AI, reproducing items in the physical world will get cheaper.

1718627440•5mo ago
> Old buildings feel special because they’re rare.

No. When you have a city full of old houses, all from the same era, maybe even by the same architect, the new building still looks ugly. The old house looks beautiful, even when you have hundreds of copies next to it.

furyofantares•5mo ago
With my current project (a game project), I full-vibed as hard as I could to test out the concept, as well as get some of the data files in place and write a tool for managing the data. This went great, and I have made technology choices for AI-coding and have gained enough skill with AI-coding that I can get prettttty far this way. But it does produce a ball-of-mud pattern and a lot of cruft that will cause it to hit a brick wall.

Then I copied the tool and data to a new directory and fully started over, with a more concrete description of the product I wanted in place and a better view of what components I would want, and began with a plan to implement one small component at a time, each with its own test screen, reviewing every change and not allowing any slop through (including any features that look fine from a code standpoint but are not needed for the product).

So far I'm quite happy with this.

braaileb•5mo ago
Where does the product description sit in your project so the AI can reference it? Is it like a summary that describes what the project should basically do or be used for? Asking for a friend.
furyofantares•5mo ago
It's right in CLAUDE.md

For take #1 I said what tech to use and gave a high-level description of the game and its features. I guess I failed to mention this part, but when I threw take #1 away, I first used Claude plus hand editing to update it with a detailed description of each screen and feature in the game. So take #2 had a much more detailed description of exactly what was going to be built, but still right in CLAUDE.md.

I also created a DEVELOPMENT-PLAN.md first with Claude, and I have been having it update that with what's been done before every commit. I don't yet have a good idea of how impactful that part has been.

becomevocal•5mo ago
> Generative AI gives us incredible first drafts to work with, but few people want to put in the additional effort it requires to make work that people love

and

> So make your stuff stand out. It doesn't have to be "better." It just has to be different.

equals... craft?

Isn't that what has always mattered a great deal?

interstice•5mo ago
I wouldn't say everything that gets hugely popular has a ton of craft behind it. To me, craft is about skill, but a badly drawn webcomic (random example) can still be very popular if it has some other point of difference.
esafak•5mo ago
To win big financially you have to be able to use AI better than others. Even if you use it merely as well as the next person, your productivity has increased, reducing costs, which is a good thing. The bad news for some is that they are not enjoying the parts of the work left over from automation.
conartist6•5mo ago
I don't see how that can be. There is no exponential return on "investing" in using AI real good.

Investing in your understanding and skill, on the other hand, has nearly limitless returns.

esafak•5mo ago
I did not speak of "exponential" returns, but it is now feasible for one person to compete with a team, or a small team with a big one, due to co-ordination costs and the difficulty of assembling the right people.
conartist6•5mo ago
What?? That isn't a complete idea. It has always been possible for a small team to compete with a big one.

As someone on a very small team competing with a very big one I don't have time for anything that can't bring exponential returns. I have no time for LLMs.

esafak•5mo ago
What even is an exponential return? You need to be more precise with your terms.
conartist6•5mo ago
An investment in a person has superlinear returns: with time the human student becomes the teacher. Each person you teach might teach two more people, with the overall trend following exponential growth deriving from the value of the initial investment in a single person.

LLMs promise to speed you up right now, in direct proportion to the amount you pay for tokens, while sacrificing your own growth potential. You'd have to be a cynic to do it -- you'd have to believe that your own ideas aren't even worth investing in over the long term.

esafak•5mo ago
Returns for who, the company or the student? Juniors are often a net negative for companies. Some stay that way because they just won't learn. You would get further by hiring seniors and learning from them.
conartist6•5mo ago
Again that's the cynical view which assumes management but no leadership with any ability or conviction. Sadly that's commonly the reality now.
flashgordon•5mo ago
Out of curiosity, isn't this very similar to the Jevons paradox? Or is JP talking about supply/demand, whereas this is about competitiveness/skill?
pluto_modadic•5mo ago
Ghibli images are not "cows"; they're /an artist's style/, from a particular shop that has expressly asked that you *not copy their work*, because it cheapens what humans do.
interstice•5mo ago
The article is defining cows as something we see too much of. Copying Ghibli's work turns the images into cows, regardless of how the artist feels about it. Obviously it would be ideal if that wasn't happening.
furyofantares•5mo ago
Maybe you already don't find cows beautiful and so didn't appreciate the metaphor. Here's another take: Driving the road to Hana on Maui, I think you drive by like 50 waterfalls. We were in awe for the first dozen, but by the 50th, it was just another waterfall. Or seeing nonstop bald eagles in Alaska, by the time you leave, they're like pigeons.

The point being made is exactly that something beautiful has been cheapened.

chii•5mo ago
Why should this argument work, but not the same argument against using a combine harvester, which also cheapens the work of a farmer?
Avshalom•5mo ago

    "Then, within twenty minutes, we started ignoring the cows. … Cows, after you’ve seen them for a while, are boring"
Skill issue. I've been looking at cows for 40 years and am still enchanted by them. Maybe it helps that I think of cows as animals instead of story book illustrations; you'd get lynched if you claimed you got bored of your pet cat after 20 minutes.
advael•5mo ago
It's crazy to me that people will reference Keynes' prediction of leisure without acknowledging that we chose not to do that. The dystopian way in which work has become more competitive, intensive, and ill-compensated, even as economies have supposedly continued to become more productive, is the result of policy choices, not some inevitable fact of the universe.
BrenBarn•5mo ago
> New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.

More flour more water. More water more flour.