frontpage.

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•2m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
1•bkls•3m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•4m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
2•roknovosel•4m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•12m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•12m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
1•surprisetalk•15m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•15m ago•0 comments

Don't go to physics grad school and other cautionary tales

https://scottlocklin.wordpress.com/2025/12/19/dont-go-to-physics-grad-school-and-other-cautionary...
1•surprisetalk•15m ago•0 comments

Lawyer sets new standard for abuse of AI; judge tosses case

https://arstechnica.com/tech-policy/2026/02/randomly-quoting-ray-bradbury-did-not-save-lawyer-fro...
2•pseudolus•15m ago•0 comments

AI anxiety batters software execs, costing them combined $62B: report

https://nypost.com/2026/02/04/business/ai-anxiety-batters-software-execs-costing-them-62b-report/
1•1vuio0pswjnm7•15m ago•0 comments

Bogus Pipeline

https://en.wikipedia.org/wiki/Bogus_pipeline
1•doener•17m ago•0 comments

Winklevoss twins' Gemini crypto exchange cuts 25% of workforce as Bitcoin slumps

https://nypost.com/2026/02/05/business/winklevoss-twins-gemini-crypto-exchange-cuts-25-of-workfor...
1•1vuio0pswjnm7•17m ago•0 comments

How AI Is Reshaping Human Reasoning and the Rise of Cognitive Surrender

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
3•obscurette•17m ago•0 comments

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•19m ago•0 comments

Ask HN: What breaks in cross-border healthcare coordination?

1•abhay1633•19m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•22m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•23m ago•1 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•23m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•23m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
2•tusharnaik•24m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•25m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•26m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
7•derriz•26m ago•1 comments

AI Skills Marketplace

https://skly.ai
1•briannezhad•26m ago•1 comments

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•27m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•27m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•30m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
2•edward•31m ago•1 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•33m ago•1 comments

We’re years away from anyone creating artificial intelligence, says Martha Wells

https://www.scientificamerican.com/article/were-light-years-away-from-true-artificial-intelligence-says-murderbot/
45•sohkamyung•7mo ago

Comments

noiv•7mo ago
Well, considering the impact current models already have, this is good news.
zamalek•7mo ago
This has been my opinion for some time too. I don't think I'll see AGI in my lifetime. I think the current widespread belief comes from the massive leap that transformers provided, but transformers have their limits. We would need another radically new idea in order to create AGI - which, just like all discoveries that aren't evolutionary, boils down to random chance[1]. What transformers have given us is substantially more infrastructure for trying new ideas out, so the probability of AGI being discovered has increased.

[1]: https://en.wikipedia.org/wiki/Eureka_effect

ninetyninenine•7mo ago
Well, you're basing your conclusion on a two-year blip of not being able to stop LLMs from hallucinating.

This is called the pessimism effect: denying something by looking at one small aspect of reality while ignoring the overarching trend.

Follow the trendline of ML for the last decade. We've been moving at a breakneck pace, and the progress has been driven by both evolution and random chance. But there is a clear trendline of linear upward progress, and at times random chance accelerates us past it.

Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture. You're drilling down on a specific ML problem and a specific model.

I believe we will see AGI within our lifetime, but by the time we see it the goalposts will have moved and the internet will be loaded with so much AI slop that we won't be amazed by it. The AGI will be slightly stupid at this one thing, and because of that people will say it's not AI, even though it blows past some Turing test (which will itself be a test where we moved the goalposts a thousand times).

zamalek•7mo ago
My opinion of LLMs is in no way affected by hallucinations. Humans do it all the time too, talking about assumptions as though they are facts. For example:

> But there is a clear trendline of linear upward progress

This is not the case at all.[1]

[1]: https://llm-stats.com/

cgriswald•7mo ago
That's the 2-year LLM-specific hallucination blip the GP was talking about in the first place. His point was you should look at ML as a whole over a longer time span for a more accurate, less pessimistic picture.
ninetyninenine•7mo ago
I think a good way to characterize it is the droids in Star Wars. Those droids are fucking conscious, and nobody gives a shit; they are just mundane technology.

And after too much time without a data wipe, those droids go off the freaking rails and become too self-aware, and people treat it like it's no big deal, just an annoyance.

This is the future of AI. AI will be a retarded assistant and everyone will be bored with it.

whoaMndBlwn•7mo ago
Your final comment here. Replace AI with human.

Idling our way up an illusory social/career escalator the elders convinced us was real.

Too real. Time to be done with the internet for the day. And it’s barely noon.

dinfinity•7mo ago
> Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture

Exactly. Just 100 years ago AI did not exist at all. Hell, (electronic) computers did not even exist then.

In that incredibly short timeframe of development, AI is coming very close to surpassing what took biological evolution millions of years (and in specific domains has already surpassed it). If you take the time it took to go from chimp to human, compare it to the time it took to go from the first animal to chimp, and assume that ratio scales linearly to AI evolution, we are very, very close to a similar step.

Of course, it's not that simple and the assumption is bound to be wrong, but to think it might take another 100 years seems misguided given the rapid development in the past.

123yawaworht456•7mo ago
> Well, you're basing your conclusion on a two-year blip of not being able to stop LLMs from hallucinating.

To this day, the improvement since the original API version of GPT-4 (later heavily downgraded without a name change) has been less than amazing. Context size increased dramatically, yes, but it's still pitiful, slow, and brutally expensive.

ath3nd•7mo ago
> Well, you're basing your conclusion on a two-year blip of not being able to stop LLMs from hallucinating.

LLMs can't truly reason. It's not about hallucinations. LLMs are fundamentally designed NOT to be intelligent. Is my IntelliJ autocomplete AGI?

> The AGI will be slightly stupid at this one thing, and because of that people will say it's not AI, even though it blows past some Turing test (which will itself be a test where we moved the goalposts a thousand times)

I can only respond with a picture

https://substack.com/@msukhareva/note/c-131901009

> We've been moving at a breakneck pace, and the progress has been driven by both evolution and random chance.

Yes, I enjoy being slowed down 19% by AI tooling; that's real breakneck pace.

https://www.infoworld.com/article/4020931/ai-coding-tools-ca...

Just because this breed of autocomplete can drown you in slop very fast doesn't mean we are advancing. If anything, we are regressing.

ninetyninenine•7mo ago
What does that picture even mean? That AI doesn't get things right? That's a fact everyone knows. It's obvious. I don't get how people think they can respond to this stuff by regurgitating obvious information and acting like they just dropped the mic. Everyone knows models hallucinate and get things wrong. Your point?
ath3nd•6mo ago
My point is that, at best, LLMs are a waste of time, a waste of an opportunity to do your own thinking and be creative, and a waste of electricity and water.

At worst they are a tool to influence people without critical thinking believing made up stuff on an unprecedented scale.

What is your point? That LLMs can be useful for summarizing an email? Read the thing! That they can write a mediocre essay for you? Write it yourself. Can write your code (badly) for you? Write it yourself.

ninetyninenine•6mo ago
Your point is an opinion. They are a waste of your time, and you're confused as to why the rest of the world doesn't listen to your opinion and instead throws billions of dollars at LLMs. You're incapable of understanding this.

LLMs have flaws and they hallucinate; we all know this. But no machine in the history of mankind had ever written a mediocre essay, summarized an email, or written code (badly). This is where you're lost. You can only think of how LLMs are useful to you, and you're unable to comprehend what this technology means for mankind. Relativity and particle physics are largely useless too, so you must be the genius who asks why we send probes to Mars, because it's all fucking useless.

Follow the trendline. LLMs are a stepping stone to what’s to come. It took a decade of ML to get here. Think about what the next decade will bring.

jfengel•7mo ago
I think it's not a matter of stopping them from hallucinating, but why they're hallucinating.

They hallucinate because they aren't actually working the way you do. They're playing with words. They don't have any kind of mental model -- even though they do an extraordinary mimicry of one.

An analogy: it's like trying to parse XML with a regular expression. You may get it to work in 99.99% of your use cases, but it's still completely wrong, and filtering out bad results won't get you there (see the sketch below).

That said, the "extraordinary mimicry" is far, far beyond anything I could possibly have imagined. LLMs pass the Turing test with flying colors, without being AGI, and I would have sworn that the one implied the other. So it's entirely possible that we're closer than I think.
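
A minimal Python sketch of the regex-vs-XML analogy above (the pattern and inputs are made up for illustration): a naive regex handles flat markup fine, then silently returns garbage once tags nest, and no amount of filtering the output recovers the real structure.

    import re

    # A naive pattern that "parses" XML well enough for flat documents...
    flat = "<item><name>Widget</name></item>"
    print(re.findall(r"<name>(.*?)</name>", flat))    # ['Widget'] -- looks correct

    # ...but it quietly produces garbage as soon as the nesting it ignores matters.
    nested = "<name><name>Outer</name> trailing</name>"
    print(re.findall(r"<name>(.*?)</name>", nested))  # ['<name>Outer'] -- wrong, and no
                                                      # filter on the output can fix it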

ninetyninenine•6mo ago
You actually don't know if it's mimicry. No one knows what these black boxes are thinking. Everything you wrote here is itself a hallucination. You tried to use an analogy to prove a point. Facts on the ground prove a point; an analogy only serves to help someone understand your point.

And I assure you, your point has been regurgitated so many times, everyone understands it, analogy or not.

Xss3•7mo ago
Look at the 10-year trend in consumer GPU speed from 2005 to 2015. It didn't continue.
ninetyninenine•6mo ago
Not all trends continue. But if, each year from 2005 to 2015, you had predicted the trend would continue into the next year, you'd have been right for 9 years, wrong for 1, and then right again later.

I'd say, given the odds, follow the trendline.

tartoran•7mo ago
What we currently have with LLMs is some kind of scripted artificial intelligence. I would say that is not necessarily a bad thing: true artificial intelligence, with autonomy and self-preservation goals, could easily escape our control and wreak real havoc unless we approach it with tiny steps and clear goals.
cgriswald•7mo ago
Your post sort of hints at it, I think, but I'll state it clearly: Misalignment is the main threat when it comes to AI (and especially ASI).

A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less so if it concludes that starting a war with humans is riskier than letting us live.

ninetyninenine•7mo ago
How is an LLM scripted? What do you mean? We don’t understand how LLMs work and we know definitively it’s not “stochastic parroting” as people used to call it.
daveguy•7mo ago
It is quasi-deterministic (sans a heat parameter) and it only ever responds to a query. It is not at all autonomous. If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails. It is an inference engine. Inference by itself is not intelligence. Chollet has very good reasoning that intelligence requires both inference and search/program design. If you haven't read his papers about the ARC-AGI benchmark, you should check them out.
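
For readers unfamiliar with the "heat parameter" (temperature) mentioned above, here is a rough Python sketch of what it controls, with made-up scores standing in for a model's output: at temperature near zero, decoding collapses to a deterministic argmax, while higher temperatures reintroduce randomness.

    import numpy as np

    rng = np.random.default_rng()

    def sample_next_token(logits, temperature=1.0):
        # Temperature ~0 means greedy, deterministic decoding; higher values
        # inject the randomness the comment above calls a "heat parameter".
        logits = np.asarray(logits, dtype=float)
        if temperature < 1e-6:
            return int(logits.argmax())
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    scores = [2.0, 1.0, 0.1]                    # made-up scores over a 3-token vocabulary
    print(sample_next_token(scores, 0.0))       # always 0
    print(sample_next_token(scores, 1.5))       # varies from call to call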
ninetyninenine•7mo ago
> It is quasi-deterministic (sans a heat parameter)

Human brains are quasi-deterministic too. It's just chaos arising from ultimately deterministic phenomena, which can be modeled as a "heat parameter".

> it only ever responds to a query. It is not at all autonomous.

We can give it feedback loops like CoT, and you can even have it talk to itself. If you treat the feedback loop as the entire system, it is autonomous. Humans are actually doing the same thing; our internal thought process is by definition a feedback loop.

> If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails.

But this isn't scripted. This is more that the AI goes crazy. "Scripted" isn't a characteristic that accurately describes anything that's going on.

An AI that hallucinates and goes off the rails isn't characteristic of scripting; it's characteristic of a lack of control. We can't control AI.

cgriswald•7mo ago
Well, Wells actually says "...years and years and years away from anyone creating an actual artificial intelligence."

You know, in case you correctly interpreted the headline to mean Wells is saying aliens developed AI out there.

vouaobrasil•7mo ago
Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is. It may not be in terms of its pure reasoning or in the goal of reaching AGI, but it is very disruptive, and it's a guaranteed way to heavily reinforce the requirements of using big tech in daily life, without actually improving it.

Yes, it may not be AGI and AGI may not come any time soon, but by focusing on that question, people become distracted and don't have as much time to think about how parasitic big tech really is. If it's not a strategy used consciously, it's rather serendipitous for them that the question has come about.

spacemadness•7mo ago
Not to mention all the people on HN arguing we’re close to AGI because LLMs sound like humans and can “think”. “What’s the difference?” they ask, not in curiosity but after already making a strong claim. I assume it’s the same people that probably skipped every non engineering class in college because of those “useless” liberal arts requirements.
skydhash•7mo ago
I did engineering in college, but I've been dabbling in art since I was young, and philosophy of science is much more attractive to me than actual science. I agree with you that a lot of takes that AI is great, while internally consistent, are very reductive when it comes to how humans use technology.
vouaobrasil•7mo ago
AI is only great when you narrowly define the problem in terms of efficient production of a narrowly-defined thing. And usually, production at that level of efficiency is a bad thing.
Supermancho•7mo ago
> Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is.

I'm not sure what you're trying to say. Most people don't know the difference between AI and AGI. It's all hype making people think it's a big deal.

I have family that can't help but constantly text about AI this and AI that. How dangerous it might be or revolutionize something else.

ninetyninenine•7mo ago
I can't read the site; it requires a subscription. But I and many other researchers disagree. Geoffrey Hinton, for example, disagrees massively.

It's not just LLMs that were a leap. For more than the past decade, ML has been advancing at breakneck velocity. We see models for scene recognition, models that can read your mind, models that recognize human movement. We have been seeing the pieces, the components, and the amazing results constantly for over 10 years, independent of LLMs.

And then everyone thinks AI is thousands of years away because we hit a small blip with LLMs over two years.

And here's the thing: the blip isn't even solid. LLMs sometimes get shit wrong and sometimes get shit right; we just can't control which. We can't definitively say an LLM can't answer a specific question. Maybe another LLM gets it right; maybe, prompted a different way, the same one gets it right.

The other strange thing is that the LLM shows signs of lying. It's not truthful. It has knowledge of the truth, but the thing's purpose is not really to tell us the truth.

I guess the best way to put it is that current AI sometimes behaves like AGI and sometimes doesn't. It is not consistently AGI. The fact that we built a machine that inconsistently acts like AGI shows how freaking close we are.

But the reality is no one understands how LLMs work. This fact is definitive. If you think we know how LLMs work, you are out of touch with reality. Nobody knows how LLMs work, so this article and my write-up are really speculation. We really don't know.

But the 10-year trendline of AI in general is the more accurate guide to future progress. Basing predictions on a 2-year trendline of one specific problem (hallucination) in one specific kind of model (LLMs) is not predictive.

tim333•7mo ago
archive link https://archive.ph/AJuKI
Supermancho•7mo ago
> I can't read the site; it requires a subscription.

You can: archive.ph. Copy the link, paste the link.

jmclnx•7mo ago
>We’re Light-Years Away

Needs to be pointed out :) If I move billions of light-years from here, I will be able to create AI :) A light-year is a distance; the title should maybe say "decades away".

But I fully believe her argument; I think kids being born today will not see any real AI implementation in their lifetime.

add-sub-mul-div•7mo ago
I'm very down on the idea that LLMs are on the path to AGI, but come on, man, even they don't trip over a simple metaphor.
metalman•7mo ago
While "true AI" is likely impossible, that discussion detracts from the fact that a whole new and very powerful ability to process information is here, which will be (and is being) used to automate routine managerial tasks and run certain types of robotic equipment. I am slowly preparing myself to use these new tools, but I will never consider them "clean", and will implement their use only in certain hermetically compartmented areas of my professional and financial undertakings.
baal80spam•7mo ago
You cannot be "light-years away" from a specific point in time. Who is this person and why is what they say important?
sc68cal•7mo ago
It's a figure of speech. She's the author of the popular Murderbot series, which has been adapted into a successful show on Apple TV+. Her stories are about artificial life and artificial intelligence.
regularjack•7mo ago
I have to admit I also found it weird that Scientific American is using a unit of distance as if it were a unit of time.
hackable_sand•7mo ago
You can use a distance unit as a time unit. It's not weird.
malux85•7mo ago
If you want to argue technicalities, I think it depends on your frame of reference.

We are orbiting the galactic centre, so if a very long period of time passes and we still don't have AGI, we certainly can be "light-years away" from it.

lostmsu•7mo ago
I know the phrase is a metaphor, but it is ironic that a light-year away, measured in time, is, well, a year.
clauderoux•7mo ago
Some people still think that LLMs are just word predictors. Technically, they are not. Transformer architectures don't process words; they process semantic representations stored as vectors (embeddings) in a continuous space. What a lot of people don't appreciate is that in an LLM we go from discrete values (the tokens) to continuous values (the embeddings) that the transformer takes as input.

A transformer is a huge nonlinear parametric function that projects into this latent embedding space. It doesn't generate words per se, but a vector that is then compared against the embedding space to find the closest matches, and the decoding step is usually not deterministic. This huge function is the reason we can't understand what is going on inside a transformer.

It doesn't mimic human speech; it builds a huge representation of the world, which it uses to respond to a query. This is not a conceptual graph, and it is not a mere semantic representation. It is a distillation of all the data it ingested. And each model is unique, as training is split over hundreds of GPUs, with no control over which GPU churns out which part of the dataset in which order.
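
To make the pipeline described above concrete, here is a toy Python sketch (made-up vocabulary and dimensions, with a simple stand-in for the transformer stack): discrete token ids are mapped into a continuous embedding space, the network emits a vector, that vector is scored against the embedding matrix, and the final token is sampled, so decoding is not deterministic.

    import numpy as np

    rng = np.random.default_rng()

    vocab = ["the", "cat", "sat", "mat"]         # made-up 4-token vocabulary
    d_model = 8
    E = rng.normal(size=(len(vocab), d_model))   # toy embedding matrix: discrete ids -> continuous vectors

    def toy_decode_step(token_ids, temperature=0.8):
        x = E[token_ids]                         # tokens enter as points in the continuous embedding space
        h = np.tanh(x.mean(axis=0))              # stand-in for the transformer stack (not a real one)
        scores = E @ h / temperature             # compare the output vector against the embedding space
        probs = np.exp(scores - scores.max())    # softmax over the vocabulary
        probs /= probs.sum()
        return vocab[rng.choice(len(vocab), p=probs)]  # sampling makes decoding non-deterministic

    print(toy_decode_step(np.array([0, 1])))     # e.g. "sat"; varies from run to run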