frontpage.

We will all work for AGI

https://indiansinai.com/stories/we-will-work-for-agi
13•ajax33•3h ago

Comments

mkdelta221•2h ago
Fascinating article. Everyone knows they will be replaced by AI but nobody wants to talk about it.
pjmlp•1h ago
Worse are the folks claiming how much more productive they are with AI tools, without understanding that this means companies will need fewer of us to do the same job.

Like in many scenarios, they always think the victims will be the other ones.

Traubenfuchs•1h ago
I really hope it won't take my job and I am very afraid, but:

Why hasn't it happened yet? Why hasn't the job market imploded? What's missing? Why do my colleagues, my friends, and I still have our bullshit jobs? Why didn't my company's output explode through our unlimited Claude access? What about all the other companies?

Atomic_Torrfisk•1h ago
Based on what? Do you have data for that, or is it just a feeling, or what you want to be true?
maplethorpe•1h ago
> What Moravec was describing was a difference in how skills are stored, not how complex they are. Physical skills are encoded in the body, almost impossible to put into words. But knowledge work, the analysis, the diagnosis, the strategy, the legal argument, is stored in text. Humans wrote it all down. Every framework, every protocol, every insight accumulated across every profession for centuries, captured in documents, papers, books, case files, and reports.

I don't think this is true. Text is a lossy form of communication. There's no way to get the sum of my knowledge from my brain over to your brain purely through text.

Also, anyone who has ever had to deal with incomplete documentation knows that humans did not, in fact, write it all down.

strogonoff•1h ago
All communication is inherently lossy, and text is extremely so. Knowledge, insight, etc., is never captured in its entirety in communication. Indeed, there is no direct contact between human minds, not in the models we currently have.

Communication builds on simplified shared maps over ineffable territory of human experience. It always presents a particular model—a necessarily wrong one (as all models are), good for one purpose but neutral or harmful for another.

However, models and maps are not the only way in which humans attend to reality. Even though it is compelling to talk as if they were the only way—talking is communication, and naturally it favors communicable things—we also have impossible-to-convey direct experience. Over the past thousand or two years, as humanity has become more of an interconnected anthill, this experiencing has arguably taken a backseat to the map-driven, communication-driven frame of attention, but it still exists and is part of what makes us human.

LLMs, as correctly noted, build only on our communication. What I don't think is noted is that this means they build on those (inevitably faulty) models and maps; LLMs fundamentally have no access to the experiencing aspect, and the territory-to-map workflow is inaccessible to them. What happens when wrong maps overstay their welcome?

bamboozled•1h ago
There is a jarring assumption in this article, which is that LLMs are performing much, much better than they are. They are awesome tools, but they aren't so good that I'd be replacing my accountant with anything like an LLM. Personally, as a software engineer, the more I use these tools, the more I realize I need to understand software better than I ever have before to actually be proficient with them. Maybe we're agreeing to some degree, because the author seems to think there will still be a need for certain skill sets even with AGI, but I think we're still in the figuring-shit-out phase.

If anything, they've made my job much, much more stressful: I'm dealing with 10x the amount of code to reason about, the expectation to deliver faster is growing, and people are smashing out code without properly understanding the business problems because the cost of implementing a feature is so low.

effable•1h ago
The core idea of ASI arriving before AGI seems to be true: we have already seen that through chess programs, LLMs, etc.

However, what caught my eye, and what to me does reflect the lens through which the author sees the world (unless I am completely misunderstanding their point), is this:

"Most of the world's important problems have never been modelled at the precision AI requires to act on them. Pollution, traffic, healthcare, taxation, public infrastructure, water distribution."

Pollution, traffic, healthcare, and public infrastructure, however, are not really problems that require "clever" solutions. Rather, they are problems of political will, of regulating industry, and of moving to cleaner energy sources. For example, we have known about human-caused climate change for decades, and carbon emissions are only just hitting their peak now.

roysting•49m ago
The irony is that I think the author may have meant granularity, not precision. You could have the highest-precision model (not the AI type) of any given topic or domain and it could not only be totally inaccurate but also categorically flawed, i.e., you're not even shooting at the right target.

From his statement, it seems what he is really saying is that the granularity of the data is insufficient for an AI model to accurately or precisely evaluate a problem and then presumably solve it, assuming there is a solution at all, let alone a human-acceptable one.

As I mentioned, you can have the most precisely modeled problem in the world and it won't make a difference if it's not accurate. And there is a very uncomfortable reality starting to face us, at least in the West: all the little lies we were told and perpetuated, because we have been trained on them from birth, across generations now, are simply wrong, and they have polluted our minds to such a degree that many people could never accept it if AI told them they're wrong, and that everything they believe they have known all their life is wrong.

On top of that, it shatters people’s narcissistic self-image of having been the good guy, because accepting what AI tells them is actually the truth means accepting that they were abusive to those who were right all along, meaning they are actually the bad guy.

And if we definitely know anything as good guys, it’s that the majority is always right, because that is what we were taught is the democratic way. The majority is always right and you always have to trust the minority that are experts unless it’s a majority of experts, then you have to trust them too, especially if they are beholden to the minority ruling class! Right? Right!

guillego•1h ago
There might be a really good conclusion in this article, but I had to give up halfway through. The LLM writing, chapter after chapter, is unbearable: full of short sentences leading into paragraphs that read like LinkedIn posts.

> AlphaFold solved protein structure prediction, a fifty-year problem, not in decades but in a fraction of the time traditional research would have required. Not by thinking like a biologist. By finding patterns at a scale no human could reach. That is a domain detonation. Not progress. A before-and-after. The same logic is now moving through radiology, legal research, financial analysis, drug discovery, software engineering.

If you have good ideas, good insights, and good stories, they deserve your own words. If you can't respect your own ideas enough to spend time writing them down and forming them into sentences and paragraphs, why should I respect them any more than you do?

rembal•32m ago
I love the water/ice metaphor, but the author tends to completely ignore the physical world. Take the cardiologist example: we all know what happened to the radiologist prediction. Or the example of defence (or war) becoming mostly a case of having the better AI model: well, try to win without solid, distributed production capabilities, energy access, and safe supply chains, from a geographical disadvantage. Embodiment is coming, but it will require moving a lot of atoms. Also, even in text-heavy domains, a lot of knowledge is not written down, often on purpose (especially in legal), and that's the juicy part...

IBM Announces Strategic Collaboration with Arm

https://newsroom.ibm.com/2026-04-02-ibm-announces-strategic-collaboration-with-arm-to-shape-the-f...
62•bonzini•1h ago•22 comments

Bringing Clojure programming to Enterprise (2021)

https://blogit.michelin.io/clojure-programming/
50•smartmic•2h ago•8 comments

Artemis II Launch Day Updates

https://www.nasa.gov/blogs/missions/2026/04/01/live-artemis-ii-launch-day-updates/
934•apitman•17h ago•797 comments

Gone (Almost) Phishin'

https://ma.tt/2026/03/gone-almost-phishin/
34•luu•2d ago•14 comments

Email obfuscation: What works in 2026?

https://spencermortensen.com/articles/email-obfuscation/
142•jaden•7h ago•40 comments

Mercor says it was hit by cyberattack tied to compromise LiteLLM

https://techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-ope...
48•jackson-mcd•1d ago•15 comments

Steam on Linux Use Skyrocketed Above 5% in March

https://www.phoronix.com/news/Steam-On-Linux-Tops-5p
404•hkmaxpro•7h ago•185 comments

Quantum computing bombshells that are not April Fools

https://scottaaronson.blog/?p=9665
180•Strilanc•10h ago•60 comments

EmDash – A spiritual successor to WordPress that solves plugin security

https://blog.cloudflare.com/emdash-wordpress/
577•elithrar•18h ago•425 comments

A new C++ back end for ocamlc

https://github.com/ocaml/ocaml/pull/14701
184•glittershark•11h ago•15 comments

New laws to make it easier to cancel subscriptions and get refunds

https://www.bbc.co.uk/news/articles/cvg0v36ek2go
35•chrisjj•1h ago•6 comments

DRAM pricing is killing the hobbyist SBC market

https://www.jeffgeerling.com/blog/2026/dram-pricing-is-killing-the-hobbyist-sbc-market/
481•ingve•13h ago•411 comments

Telli (YC F24) is hiring engineers, designers, and more [on-site, Berlin]

http://hi.telli.com/join-us
1•sebselassie•3h ago

Show HN: NASA Artemis II Mission Timeline Tracker

https://www.sunnywingsvirtual.com/artemis2/timeline.html
64•AustinDev•7h ago•14 comments

Fast and Gorgeous Erosion Filter

https://blog.runevision.com/2026/03/fast-and-gorgeous-erosion-filter.html
168•runevision•2d ago•16 comments

Built a cheap DIY fan controller because my motherboard never had working PWM

https://www.himthe.dev/blog/msi-forgot-my-fans
26•bobsterlobster•2d ago•8 comments

Subscription bombing and how to mitigate it

https://bytemash.net/posts/subscription-bombing-your-signup-form-is-a-weapon/
163•homelessdino•6h ago•109 comments

Show HN: Git bayesect – Bayesian Git bisection for non-deterministic bugs

https://github.com/hauntsaninja/git_bayesect
284•hauntsaninja•4d ago•41 comments

The story of Britain's oldest sweet, the Pontefract Cake (2019)

https://www.bbc.com/travel/article/20190710-the-strange-story-of-britains-oldest-sweet
5•thomassmith65•1d ago•0 comments

What Gödel Discovered (2020)

https://stopa.io/post/269
59•qnleigh•2d ago•9 comments

Significant Raise of Reports

https://lwn.net/Articles/1065620/
3•stratos123•1h ago•1 comments

AI for American-produced cement and concrete

https://engineering.fb.com/2026/03/30/data-center-engineering/ai-for-american-produced-cement-and...
194•latchkey•17h ago•113 comments

Reverse Engineering Crazy Taxi, Part 2

https://wretched.computer/post/crazytaxi2
46•wgreenberg•2d ago•3 comments

Ask HN: Who is hiring? (April 2026)

242•whoishiring•19h ago•208 comments

Show HN: Dull – Instagram Without Reels, YouTube Without Shorts (iOS)

https://getdull.app
93•kasparnoor•13h ago•72 comments

Signing data structures the wrong way

https://blog.foks.pub/posts/domain-separation-in-idl/
106•malgorithms•14h ago•46 comments

Weather.com/Retro

https://weather.com/retro/
219•typeofhuman•9h ago•39 comments

The revenge of the data scientist

https://hamel.dev/blog/posts/revenge/
144•hamelsmu•4d ago•28 comments

SpaceX files to go public

https://www.nytimes.com/2026/04/01/technology/spacex-ipo-elon-musk.html
321•nutjob2•16h ago•440 comments

StepFun 3.5 Flash is #1 cost-effective model for OpenClaw tasks (300 battles)

https://app.uniclaw.ai/arena?tab=costEffectiveness&via=hn
159•skysniper•18h ago•75 comments