frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Facebook seemingly randomly bans tons of users

https://old.reddit.com/r/facebookdisabledme/
1•dirteater_•11s ago•1 comments

Global Bird Count

https://www.birdcount.org/
1•downboots•36s ago•0 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
2•soheilpro•2m ago•0 comments

Jon Stewart – One of My Favorite People – What Now? With Trevor Noah Podcast [video]

https://www.youtube.com/watch?v=44uC12g9ZVk
1•consumer451•4m ago•0 comments

P2P crypto exchange development company

1•sonniya•18m ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•23m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•23m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•24m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•31m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•38m ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
4•keepamovin•39m ago•1 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•42m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•44m ago•1 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•44m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•49m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•50m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•51m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•54m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
3•breve•55m ago•1 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•57m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•59m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•1h ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•1h ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
7•tempodox•1h ago•4 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•1h ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•1h ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
9•petethomas•1h ago•3 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
2•thunderbong•1h ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•1h ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comments

Spending on AI Is at Epic Levels. Will It Ever Pay Off?

https://www.wsj.com/tech/ai/ai-bubble-building-spree-55ee6128
50•RyanShook•4mo ago

Comments

techblueberry•4mo ago
No
scrubs•4mo ago
(Violins playing) and now what?
alexnewman•4mo ago
Agreed
kingo55•4mo ago
I look forward to the cheap compute flooding the market when the music stops.
lomase•4mo ago
People still waiting for GPUs to be cheap after the blockchain bubble.
chasd00•4mo ago
Touche. I was just about to comment on snapping up the cheap gpus
SoftTalker•4mo ago
We created the LLM bubble to prop up those investments.
0000000000100•4mo ago
Not egregious API spending, but ChatGPT Pro has been one of the best investments our company has paid for.

It is fantastic at reasonable-scale ports / refactors, even with complicated subject matter like insurance. We have a project at work where Pro has saved us hours of time just trying to understand the overcomplicated codebase that is currently in place.

For context, it’s a salvage project with a wonderful mix of Razor pages and a partial migration to Vue 2 / Vuetify.

It’s best with logic, but it doesn’t do great with understanding the particulars of UI.

neuronic•4mo ago
How are you getting these results? Even with grounding in sources, careful context engineering and whatever technique comes to your mind we are just getting sloppy junk out of all models we have tried.

The sketchy part is that LLMs are super good at faking confidence and expertise, all while randomly injecting subtle but critical hallucinations. This ruins basically all significant output. Double-checking and babysitting the results is a huge time and energy sink. Human post-processing negates nearly all benefits.

It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".

cjbarber•4mo ago
What are you trying to use LLMs for and what model are you using?
0000000000100•4mo ago
Depends a lot. Use it for one-off scripts, particularly for anything Microsoft 365 related (expanding SharePoint drives, analyzing AWS usage, general IT stuff). Where there is a lot of heavy, context-based business logic it will fail, since there’s too much context for it to be successful.

I work in custom software where the gap in non-LLM users and those who at least roughly know how to use it is huge.

It largely depends on the prompt though. Our ChatGPT account is shared, so I get to take a gander at the other usages and it’s pretty easy to see: “okay, this person is asking the wrong thing”. The prompt and the context have a major impact on the quality of the response.

In my particular line of work, it’s much more useful than not. But I’ve been focusing on helping build the right prompts with the right context, which makes many tasks actually feasible where before it would be way out of scope for our clients budgets.

kace91•4mo ago
Could you give an example of a prompt?
yeasku•4mo ago
You are a top stackoverflow contributor with 20 years of experience in...
kace91•4mo ago
I meant an example of the prompts he was attempting, in case it helped provide advice.
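[Editor's note: the "right prompt with the right context" idea above can be sketched as a simple template. Everything below — the section names, the example values — is invented for illustration, not taken from the commenter's actual workflow.]

```python
# Illustrative prompt template: grounding context comes before the
# request itself. All names and values here are hypothetical.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a prompt with explicit sections so the model sees
    who it should act as and what it knows before what it must do."""
    lines = [
        f"Role: {role}",
        "Context:",
        context,
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="Senior .NET developer familiar with Razor Pages and Vue 2",
    context="Legacy insurance quoting app, partial Vue 2/Vuetify migration.",
    task="Port the premium-calculation page from Razor to Vue 2.",
    constraints=["Keep existing API contracts", "No new dependencies"],
)
print(prompt)
```

The point is only that structure forces the asker to supply context at all; the exact section names don't matter.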
oblio•4mo ago
> It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".

Most likely by trying to get a promotion or bonus now and getting the hell out of Dodge before anyone notices those subtle landmines left behind :-)

fn-mote•4mo ago
Cynical, but maybe not wrong. We are plenty familiar with ignoring technical debt and letting it pile up. Dodgy LLM code seems like more of that.

Just like tech debt, there's a time for rushing. And if you're really getting good results from LLMs, that's fabulous.

I don't have a final position on LLMs, but it has only been two days since I worked with a colleague who definitely had no idea how to proceed when they were off the "happy path" of LLM use, so I'm sure there are plenty of people getting left behind.

0000000000100•4mo ago
Wow, the bad faith is quite strong here. As it turns out, small to mid-sized insurance companies have some ridiculously poorly architected front ends.

Not everyone is the biggest cat in town with infinite money and expertise. I have no intention of leaving anytime soon, so I have confidence that the code that was generated by the AI (after confirming with our guy who is the insurance OG) is a solid improvement over what was there before.

oblio•4mo ago
The bad faith is super strong when it's being swamped by a lot more bad faith driven by greed. I'm not talking about you, but about all these companies with overnight valuations in the billions and their PR machines.

To your example, frankly, I would have started with that very important caveat of an initial situation defined by very poor quality. It's a very valid angle, as a lot of code that's available today is of very low quality, and if AI can take 1/10 or 2/10 code and make it 5/10 or 6/10, yes, everyone benefits.

bdangubic•4mo ago
I genuinely think that the biggest issue with LLM tools is that most people expect magic, because first attempts at some simple things feel magical. however, the tools take an insane amount of time to get expertise in. what is confusing is that SWEs spend immense amounts of time in general learning the tools of the trade, but this seems to escape a lot of people when it comes to LLMs. on my team, every developer is using LLMs all day, every day. on average, based on sprint retros, each developer spends no less than an hour each day experimenting/learning/reading… how to make them work. the realization we made early is that when it comes to LLMs there are two large groups:

- group that see them as invaluable tools capable of being an immense productivity multiplier

- group that tried things here and there and gave up

we collectively decided that we want to be in the first group and were willing to put time to be in that group.

lomase•4mo ago
I have been in teams that do this and in teams that don't.

I have not seen any tangible difference in the output of either.

bdangubic•4mo ago
year-over-year we are at around a 45% increase in productivity, and this trajectory is on an upward slope
danpalmer•4mo ago
How are you measuring increased productivity? Honest question, because I've seen teams claim more code, but I've also seen teams say they're seeing more unnecessary churn (which is more code).

I'm interested in business outcomes, is more code or perceived velocity translating into benefits to the business? This is really hard to measure though because in pretty much any startup or growing company you'll see better business outcomes, but it's hard to find evidence for the counterfactual.

bdangubic•4mo ago
same as we did for the decade before LLMs: story points. we move faster now, and we have automated stuff we could never automate before. same project, largely the same team since 2016, we just get a lot more shit done, a lot more
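[Editor's note: a "45% increase" claim measured in story points reduces to comparing mean points per sprint before and after. A toy sketch with invented numbers, not the team's actual data:]

```python
# Percent change in mean story points per sprint; the sprint totals
# below are invented for illustration.

def velocity_change(before: list[int], after: list[int]) -> float:
    """Return the percent change in mean story points per sprint."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return (mean_after - mean_before) / mean_before * 100

# e.g. four sprints before adopting LLM tooling, four after
print(round(velocity_change([40, 42, 38, 40], [58, 60, 55, 59]), 1))  # 45.0
```

Note this only measures what the commenters below debate: story points are subjective, so keeping the estimation baseline fixed (as described in the replies) is what makes the comparison meaningful at all.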
gedy•4mo ago
So something like: automate unit tests, where the tests are X points where you'd not have done these before?

Not snarking, but if they are automated away, then isn't this like 0 story points for effort/complexity?

bdangubic•4mo ago
hehe not snarky at all - great question. this was heavily discussed, but in order to measure productivity gains we kept the estimations the same as before. as my colleague put it, you don’t estimate based on a “10x developer”, so we applied the same concept. now that everyone is “on board” we are phasing this out
gedy•4mo ago
Thanks, I'm probably a kook, but I've never wanted to put any non-product tasks (tests, code cleanup, etc., i.e. not related to user-visible features) on the board with story points, and just folded that into the related user work (mainly to avoid some product person thinking they "own" that and can make technical decisions).

So the product velocity didn't exactly go up, but you are now producing less technical debt (hopefully) with a similar velocity, sounds reasonable.

danpalmer•4mo ago
I'm glad you're more productive, although I would question this result both in terms of objectivity (story points are typically very subjective), and in terms of capturing all externalities of the LLM workflow. It's easy to have "build the thing", "fix the thing", "remove tech debt in the thing", "replace the thing" be 4 separate projects, each with story points, where "build the better thing" would have been one, and churn is something that is evidenced with LLM development.
lomase•4mo ago
This reads like the bullshit bullet points people write on their CVs.
bdangubic•4mo ago
comments like this give me a warm and fuzzy feeling that theoretically we compete for the same jobs - no worries about job security for the foreseeable future :)
lomase•4mo ago
Someone's ego got hurt.
bdangubic•4mo ago
talking to yourself in third person? :)
lomase•4mo ago
You keep coming back to these fights online because it is the only real interaction you can have with people outside of work.

You will live the rest of your life like that. Because nobody likes you. Enjoy.

bdangubic•4mo ago
ouch that is not very nice :)
danpalmer•4mo ago
I'm persisting, have been using LLMs quite a bit for the last year, they're now where I start with any new project. Throughout that time I've been doing constant experimentation and have made significant workflow improvements throughout.

I've found that they're a moderate productivity increase, i.e. on a par with, say, using a different language, using a faster CI system, or breaking down some bureaucracy. Noticeable, worth it, but not entirely transformational.

I only really get useful output from them when I'm holding _most_ of the context that I'd be holding if writing the code, and that's a limiting factor on how useful they can be. I can delegate things that are easy, but I'm hand-holding enough that I can't realistically parallelise my work that much more than I already do (I'm fairly good at context switching already).

caseyf7•4mo ago
Where are you finding the best material for reading/learning?
bdangubic•4mo ago
- everything that simon writes (https://simonwillison.net/)

- anything that goes deep into issues (I seldom read “I love LLMs” type posts); something like this is great: https://blog.nilenso.com/blog/2025/09/15/ai-unit-of-work/

- lots of experimentation - specifically, I have spent hours and hours building the exact same feature over and over (my record is 23 times).

- if something “doesn’t work” I create a task immediately to investigate and understand it. even for the smallest thing that bothers me, I will spend hours figuring out why it might have happened (this is sometimes frustrating) and how to prevent it from happening again (this is fun)

My colleague describes the process as a JavaScript developer trying to learn Rust while tripping on mushrooms :)

vivzkestrel•4mo ago
Don't you think it would be better to get that expertise in actual system design, software engineering, and all the programming-related fields? By having ChatGPT make the code, we'll eventually lose the skill to sit and craft code like we have all these years. After all, the brain's neural pathways only remember what you put to work daily.
gamblor956•4mo ago
A lot of programmers who say that LLMs are awesome tend to be inexperienced, not good programmers, or just gloss over the significant amount of extra work that using LLMs requires.

Programmers tend to overestimate their knowledge of non-programming domains, so the OP is probably just not understanding that there are serious issues with the LLM's output for complicated subject matters like insurance.

mwkaufma•4mo ago
It's almost tiresome to keep citing Betteridge's law of headlines, but editors at legacy publications keep it relevant. If there were any compelling evidence, they wouldn't have to phrase it as a hypothetical.
profsummergig•4mo ago
(1999) - "Spending on Amazon warehouses Is at Epic Levels. Will It Ever Pay Off?"
simonw•4mo ago
Amazon weren't spending a single digit percentage of GDP on GPUs with a shelf life measured in just a few years though.
kanwisher•4mo ago
but collectively there was single-digit-percentage spending on things like fiber that ended up paying off for the public later
layoric•4mo ago
The ongoing costs via power consumption are on a completely different scale.
SaberTail•4mo ago
I'd suggest a better analogy would be telecommunications fiber[1].

[1] https://internethistory.org/wp-content/uploads/2020/01/OSA_B...

lomase•4mo ago
It's not similar at all.

Even the smallest and poorest countries in the world invested in their fiber networks.

Only China and the US have the money to create models.

ACCount37•4mo ago
In that, it's closest to the semiconductor situation.

Few companies and very few countries have the bleeding edge frontier capabilities. A few more have "good enough to be useful in some niches" capabilities. The rest of the world has to get and use what they make - or do without, which isn't a real option.

zdragnar•4mo ago
Fiber is a decades-long investment in hardware - one that I would argue we hardly needed. Google Fiber started with the question: what would people do with super high speed? The answer was stream higher-quality videos, and that's about it. In fact, by the time fiber became widespread, many had moved off PCs to do the majority of their internet use via cell phones.

With that said, the fiber will be good for many years. None of the LLM models or hardware will be useful in more than a few years, with everything being replaced by newer and better on a continual basis. They're stepping stones, not infrastructure.

yeasku•4mo ago
We replaced one tech that was used by literally the whole world, paired copper wires, with something orders of magnitude better and future-proof. My PC literally can't handle the bandwidth of my fiber connection.

We did not need it? Did you ever use DSL?

What is AI replacing? People?

majewsky•4mo ago
> Did you ever use DSL?

Where I live (Germany), lots of people have VDSL at advertised speeds of 100 Mbit/s, using paired copper wires. Not saying that fiber is not better, it obviously is, and hence the government is subsidizing large-scale fiber buildouts. But as it stands right now, I'm confident that for 99% of consumers, VDSL is indeed enough.

In the 90s and 2000s, I remember our (as in: tech nerds') argument to policy-makers being "just give people more bandwidth and they will find a way to use it", and in that period, that was absolutely true. In the 2000s, lots of people got access to broadband internet, and approximately five milliseconds later, YouTube launched.

But the same argument now falls apart, because we have the hindsight of seeing lots of people with hundreds of megabits or even gigabit connections... and yet the most bandwidth-demanding thing most of them do is video streaming. I looked at the specs for GeForce Now, and it says that to stream the highest quality (a 5K video feed at 120 Hz), you should have 65 Mbit/s downstream. You can literally do that with a VDSL line. [1] Sure, there are always people with special use cases, but I don't recall any tech trend in the last 10 years that was stunted because not enough consumers had the bandwidth required to adopt it.

[1] Arguably, a 100 Mbit/s line might end up delivering less than that, but I believe Nvidia have already factored this into their advertised requirements. They say that you need 25 Mbit/s to sustain a 1080p 60fps stream, but my own stream recordings in the same format are only about 5 Mbit/s. They might encode with higher quality than I do, but I doubt it's five times the bitrate.
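[Editor's note: the bitrate arithmetic in this comment reduces to a one-line check: does the stream bitrate fit within the line rate after some headroom for overhead and other traffic? A small sketch; the 30% headroom figure is an assumption, not from the comment.]

```python
# Check whether a line can sustain a stream at a given bitrate,
# leaving headroom for protocol overhead and other household traffic.
# The 30% headroom factor is an illustrative assumption.

def sustains(line_mbit: float, stream_mbit: float, headroom: float = 0.3) -> bool:
    """True if the stream fits within the line rate minus headroom."""
    return stream_mbit <= line_mbit * (1 - headroom)

# Figures from the comment: a 100 Mbit/s VDSL line vs GeForce Now's
# advertised needs, plus an older 16 Mbit/s ADSL line for contrast.
print(sustains(100, 65))  # 5K/120 Hz tier: True
print(sustains(100, 25))  # 1080p/60fps tier: True
print(sustains(16, 25))   # 16 Mbit/s ADSL: False
```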

yeasku•4mo ago
Personally, I could not work from home with ADSL.

It's not only the bandwidth but the latency: I get a 5 ms ping to servers in the EU.

The quality of life I get from that can't be paid for with money.

Also, with DSL, because of how it works, they can give you only 10% of the bandwidth and that's considered fine.

afavour•4mo ago
(2000) - “Spending on Kozmo warehouses is at epic levels. Will it ever pay off?”

I believe the relevant term here is “survivorship bias”.

option•4mo ago
Yes
lioeters•4mo ago
And..?
rubyfan•4mo ago
Is it possible all this capital would be better deployed creating value through jobs that leverage human creativity?
colkassad•4mo ago
Meat-based LLMs trained for billions of years are underrated! Too bad they need healthcare (and sleep).
surgical_fire•4mo ago
No. The wet dream of the elites is to get rid of the pesky underclass that provides labor.

They have a visceral hatred of workers. The sooner people accept that as reality, the better.

tim333•4mo ago
Indeed. I'm optimistic on AI, but I think it would be better if they spent less on data centers and more on research.

If the AI companies just charged users enough to cover their costs, then demand would drop 10x and we wouldn't need most of the data centers.

hettygreen•4mo ago
Pay off for who? After learning about "The Gospel" [0], does anyone else wonder if spending on AI is actually just an arms race?

[0] https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...

naveen99•4mo ago
It’s not like anyone is going into debt to pay for GPUs though. So it’s probably OK. Now if banks start selling 30-year mortgages for GPUs, I might get a little worried.
noosphr•4mo ago
People act like big tech didn't have a mountain of cash they didn't know what to do with. Each of the big players has around $100 billion that's just sitting there doing nothing.
prewett•4mo ago
Well, Apple spends some of that pre-paying TSMC for their next node in exchange for exclusivity...
fprog•4mo ago
Oracle is going to use debt to finance the buildout of AI cloud infrastructure to meet their obligations to customers. They’re the first hyperscaler to do so. Made the news two weeks ago.
naveen99•4mo ago
Yikes. Oracle just issued 40-year mortgages, I mean bonds. OK, you can be a little worried now. Their balance sheet looks a lot like CoreWeave or MSTR. I guess the market will allow it for some more time. Nvidia GPU treasury companies can sell a dollar's worth of future GPUs for $2 for some time, I guess. Of course I won’t be buying Oracle or MSTR… but bitcoin and GPUs are still fine.
jemmyw•4mo ago
I don't know if that was epic sarcasm, but companies are doing exactly that. CoreWeave has taken on something like $30b of debt against the value of their GPUs. https://www.forbes.com/sites/rashishrivastava/2025/09/22/cor...

They aren't the only company doing this.

scuff3d•4mo ago
While it has its uses, I have yet to see a single use case, or combination of use cases, that warrants the insane spending. Not to mention the environmental damage and widespread theft and copyright infringement required to make it work.
duped•4mo ago
The people funding this seem to believe: firstly, that text inference and gradient descent can synthesize a program that can operate on information tasks as well as or better than humans; secondly, that the only way of generating the configuration data for those programs to work is by powering vast farms of processors doing matrix arithmetic, requiring the world's most complex supply chain, tethered to a handful of geopolitically volatile places; thirdly, that those farms have power demands comparable to our biggest metropolises; and finally, that if they succeed, they'll have unlocked economic power amplification that hasn't been seen since James Watt figured out how to move water out of coal mines a bit quicker.

Oh and the really fucky part is half of them just want to get a bit richer, but the other half seem to be in a cult that thinks AI's gross disruption of human economies and our environment is actually the most logically ethical thing to do.

yeasku•4mo ago
I thought what you wrote was crazy wasteful, then I remembered the world runs on JS.
pizlonator•4mo ago
Someone should create a tracker that measures the number of bearish AI takes that make the front page of HN each day.
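[Editor's note: such a tracker's core would be a keyword match over front-page titles. A minimal sketch; the keyword list is invented, and a real version might pull titles from the public Algolia HN search API rather than a hardcoded list.]

```python
# Count "bearish AI take" titles by keyword. The keyword list is an
# illustrative assumption; tune it before trusting any counts.

BEARISH = ("bubble", "pay off", "slush fund", "delusion", "crash")

def count_bearish(titles: list[str]) -> int:
    """Number of titles containing at least one bearish keyword."""
    return sum(any(k in t.lower() for k in BEARISH) for t in titles)

titles = [
    "Spending on AI Is at Epic Levels. Will It Ever Pay Off?",
    "Cost of AGI Delusion",
    "AI Investment Is Starting to Look Like a Slush Fund",
    "Maple Mono: Smooth your coding flow",
]
print(count_bearish(titles))  # 3
```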
g42gregory•4mo ago
Are we still getting AGI in 2026, per OpenAI?

Based on AGI 2026, they convinced the US government to block high-end GPU sales to China. They said they only needed 1-2 more years to hold China off. Then AGI arrives and OpenAI/US rules the world. Is this still the plan? /s

If AGI does not materialize in 2026, I think there might be trouble, as China develops alternative GPUs and NVIDIA loses that market.

lomase•4mo ago
Altman says in a few years ChatGPT 8 will solve quantum physics
RajT88•4mo ago
I mean, I'll take it if it comes true.

insert Rick & Morty "Show me what you got" gif here

prewett•4mo ago
"Solve quantum physics" meaning generating closed-form solutions to the Schrodinger equation for atoms of any composition? Of arbitrary molecules? Good luck with that... Even for the hydrogen atom, the textbook said "so it happens that <some polynomial function I'd never heard of before> just so happens to solve this equation", instead of the derivations one would normally expect. I doubt we have even invented the math to solve the equations much above the hydrogen atom, assuming that a closed-form solution is even theoretically possible.

I think Altman has been getting mentored by Musk. I think we'll get full self-driving Teslas before quantum mechanics is "solved", though, and I am not expecting that in the foreseeable future.

lomase•4mo ago
My mistake, he did not say solve quantum physics.

He did say that if ChatGPT 8 creates a theory of quantum gravity... I can't... that will mean we have reached AGI.

l1ng0•4mo ago
https://m.youtube.com/watch?v=TMoz3gSXBcY
ChrisArchitect•4mo ago
Related:

Cost of AGI Delusion

https://news.ycombinator.com/item?id=45395661

AI Investment Is Starting to Look Like a Slush Fund

https://news.ycombinator.com/item?id=45393649

windex•4mo ago
A lot of us use it in a very structured manner for code and other areas. It absolutely is value for money. I don't really get what people keep complaining about. I think the complaints mostly come from people trying to embed LLMs and expecting human-like output.

For decision support, coding, and structured outputs, I love it. I know it's not human, and I write instructions that are specific to the way it reasons.

surgical_fire•4mo ago
This is sort of useful, and it is how I have been using LLMs.

I don't think the companies betting on AI are burning mountains of cash because they think it will be a moderately useful tool for decision support, coding, and such. They are betting this will be "The Future™" in their search for perpetual growth.

maxglute•4mo ago
After seeing kids use LLMs for companionship, I think many of the new generation will grow up shelling out for a subscription if companies decide to wholly gate-lock them behind a paywall. Where this world is heading, it's cheaper than therapy. It's going to be indispensable while old men yell at clouds; think cell phone plan, not Spotify or Netflix, once companies start squeezing.