
Flux Kontext Image editing tests

https://www.flickspeed.ai/canvas/public/6871319e239a5c68830ee64f
1•taherchhabra•50s ago•1 comments

How to Interview AI Engineers

https://blog.promptlayer.com/the-agentic-system-design-interview-how-to-evaluate-ai-engineers/
1•jzone3•2m ago•1 comments

Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs

https://arxiv.org/abs/2504.06219
1•layer8•3m ago•0 comments

Creating a Website from Obsidian

https://lwgrs.bearblog.dev/creating-a-website-from-obsidian/
1•speckx•3m ago•0 comments

Talking Postgres with Shireesh Thota, Microsoft CVP

https://talkingpostgres.com/episodes/how-i-got-started-leading-database-teams-with-shireesh-thota/transcript
1•clairegiordano•5m ago•0 comments

Pasilalinic-Sympathetic Compass

https://en.wikipedia.org/wiki/Pasilalinic-sympathetic_compass
1•frabert•5m ago•0 comments

Ask HN: Advice for someone choosing a college path

2•spacebuffer•7m ago•1 comments

Chinese TV uses AI to translate broadcasts to sign language. It's not going well

https://www.theregister.com/2025/07/10/china_ai_sign_language_translation/
1•xbmcuser•7m ago•0 comments

Do Longevity Drugs Work?

https://www.economist.com/science-and-technology/2025/06/20/do-longevity-drugs-work
1•bookofjoe•10m ago•1 comments

I created an open source AI first Kanban tool

https://vibecodementor.net/kanban
1•wavh•13m ago•1 comments

Bela Gem Brings Ultra-Low Latency Audio to PocketBeagle 2

https://www.beagleboard.org/blog/2025-07-10-bela-gem-brings-ultra-low-latency-audio-to-pocketbeagle-2
1•ofalkaed•13m ago•0 comments

Hunting Russian Spies in Norway's 'Spy Town' [video]

https://www.youtube.com/watch?v=KcVxl08XYzQ
1•mgl•14m ago•0 comments

I'm more proud of these 128 kilobytes than anything I've built since

https://medium.com/@mikehall314/im-more-proud-of-these-128-kilobytes-than-anything-i-ve-built-since-53706cfbdc18
2•mikehall314•15m ago•0 comments

Once-in-a-Generation Copper Trade Upends a $250B Market

https://www.bloomberg.com/news/features/2025-07-11/trump-s-copper-tariffs-deadline-marks-end-of-once-in-a-generation-trade
1•mgl•16m ago•1 comments

SSPL is BAD

https://ssplisbad.com/
2•lr0•18m ago•1 comments

Krafton slams ex-Subnautica 2 execs – who now say they're suing

https://www.theverge.com/news/704606/subnautica-2-delay-krafton-unknown-worlds-bonus
2•mrkeen•20m ago•0 comments

Show HN: Prepin just launched 15 interview categories for mock interviews

1•OlehSavchuk•22m ago•0 comments

Stages of Adoption

https://www.robertotonino.com/adoption
1•RobTonino•22m ago•0 comments

A New Kind of AI Model Lets Data Owners Take Control

https://www.wired.com/story/flexolmo-ai-model-lets-data-owners-take-control/
1•CharlesW•22m ago•0 comments

xAI seeks up to $200B valuation in next fundraising

https://www.ft.com/content/25aab987-c2a1-4fca-8883-38a617269b68
2•mfiguiere•31m ago•0 comments

Synthetic renewable methane production via reactive CO2 capture and conversion

https://www.sciencedirect.com/science/article/pii/S2949790625001041
1•PaulHoule•36m ago•0 comments

Solar became EU's largest source of electricity in June 2025

https://ember-energy.org/latest-insights/solar-is-eus-biggest-power-source-for-the-first-time-ever/
1•dotcoma•36m ago•0 comments

New AWS Free Tier Launching July 15

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/free-tier.html
1•firstSpeaker•37m ago•0 comments

Bujo.nvim – bullet journal accessible from anywhere

https://github.com/timhugh/bujo.nvim
1•timhugh•40m ago•1 comments

Placing Functions

https://blog.yoshuawuyts.com/placing-functions/
2•todsacerdoti•40m ago•0 comments

Moonshotai/Kimi-K2-Instruct

https://simonwillison.net/2025/Jul/11/kimi-k2/
2•nickthegreek•42m ago•0 comments

A Match Made in the Heavens: The Surveillance State and the "New Space" Economy

https://www.techpolicy.press/a-match-made-in-the-heavens-the-surveillance-state-and-the-new-space-economy/
1•gnabgib•43m ago•0 comments

Rsyslog Goes AI First – A New Chapter Begins

https://www.rsyslog.com/rsyslog-goes-ai-first-a-new-chapter-begins/
3•lhoff•43m ago•1 comments

I asked AI how to lose weight

https://healthpalai.netlify.app
1•GainTrains•46m ago•1 comments

Introduction to Digital Filters

https://ccrma.stanford.edu/~jos/filters/
2•ofalkaed•46m ago•0 comments

We're light-years away from true artificial intelligence, says Martha Wells

https://www.scientificamerican.com/article/were-light-years-away-from-true-artificial-intelligence-says-murderbot/
37•sohkamyung•6h ago

Comments

noiv•5h ago
Well, considering the impact current models already have, this is good news.
zamalek•5h ago
This has been my opinion for some time too. I don't think I'll see AGI in my lifetime. I think the current widespread belief comes from the massive leap that transformers provided, but transformers have their limits. We would need another radically new idea in order to create AGI - which, just like all discoveries that aren't evolutionary, boils down to random chance[1]. What transformers have given us is substantially more infrastructure for trying new ideas out, so the probability of AGI being discovered has increased.

[1]: https://en.wikipedia.org/wiki/Eureka_effect

ninetyninenine•4h ago
Well, you're basing your conclusion on a 2-year blip of not being able to stop LLMs from hallucinating.

This is called the pessimism effect: denying things by looking at only one small aspect of reality while ignoring the overarching trend.

Follow the trendline of ML for the last decade. We've been moving at a breakneck pace, and the progress has come from both evolutionary improvement and random chance. There is a clear trendline of linear upward progress, and at times random chance accelerates us past it.

Stop looking at LLMs; look at the 10-year trendline of ML as a holistic picture. You're drilling down on a specific ML problem and a specific model.

I believe we will see AGI within our lifetime, but by the time we see it the goalposts will have moved, and the internet will be loaded with so much AI slop that we won't be amazed by it. The AGI will be slightly stupid at some one thing, and because of that it won't count as AI, even though it blows past some Turing test (which itself will be a test where we have moved the goalposts a thousand times).

zamalek•4h ago
My opinion of LLMs is in no way affected by hallucinations. Humans do it all the time too, talking about assumptions as though they are facts. For example:

> But there is a clear trendline of linear upwards progress

This is not the case at all.[1]

[1]: https://llm-stats.com/

ninetyninenine•4h ago
I think a good way to characterize it will be the droids in Star Wars. Those droids are fucking conscious, and nobody gives a shit; they are just mundane technology.

And after too much time without a data wipe, those droids go off the freaking rails, becoming too self-aware, and people just treat it like it's no big deal and an annoyance.

This is the future of AI. AI will be a retarded assistant and everyone will be bored with it.

whoaMndBlwn•3h ago
Your final comment here. Replace AI with human.

Idling our way up an illusory social/career escalator the elders convinced us was real.

Too real. Time to be done with the internet for the day. And it’s barely noon.

dinfinity•3h ago
> Stop looking at LLMs look at the 10 year trendline of ML as a wholistic picture

Exactly. Just 100 years ago AI did not exist at all. Hell, (electronic) computers did not even exist then.

In that incredibly short timeframe of development, AI is coming very close to surpassing what took biological evolution millions of years (or has even surpassed it in specific domains). If you take the time it took to go from chimp to human, compare it to the time it took from the first animal to chimp, and assume that scales linearly to AI evolution, we are very, very close to a similar step.

Of course, it's not that simple and the assumption is bound to be wrong, but to think it might take another 100 years seems misguided given the rapid development in the past.

123yawaworht456•2h ago
>Well you’re basing your conclusion on a 2 year blip of not being able to stop LLMs from hallucinating.

To this day, the improvement since the original API version of GPT-4 (later heavily downgraded without a name change) has been less than amazing. Context size increased dramatically, yes, but it's still pitiful, slow, and brutally expensive.

ath3nd•1h ago
> Well you’re basing your conclusion on a 2 year blip of not being able to stop LLMs from hallucinating.

LLMs can't truly reason. It's not about hallucinations. LLMs are fundamentally designed not to be intelligent. Is my IntelliJ autocomplete AGI?

> Like the agi will be slightly mentally stupid at this one thing and because of that it’s not AI even though it blows past some Turing test (which in itself will be a test where we moved the goal post a thousand times)

I can only respond with a picture

https://substack.com/@msukhareva/note/c-131901009

> We’ve been moving at a breakneck pace and the progress has been both evolutionary in nature and random chance.

Yes, I enjoy being 19% slowed down by AI tooling, that's real breakneck pace.

https://www.infoworld.com/article/4020931/ai-coding-tools-ca...

Just because this breed of autocomplete can drown you in slop very fast doesn't mean we are advancing. If anything, we are regressing.

jfengel•27m ago
I think it's not a matter of stopping them from hallucinating, but why they're hallucinating.

They hallucinate because they aren't actually working the way you do. They're playing with words. They don't have any kind of mental model, even though they do an extraordinary mimicry of one.

An analogy: it's like trying to parse XML with a regular expression. You may get it to work in 99.99% of your use cases, but it's still completely wrong. Filtering out bad results won't get you there.
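That analogy can be made concrete. A minimal sketch (toy documents and Python's stdlib `re` and `xml.etree.ElementTree`, not taken from any real codebase): the regex handles the flat case, silently returns a fragment on the nested one, while a real parser tracks nesting.

```python
import re
import xml.etree.ElementTree as ET

# Naive regex approach: grab the text of every <item> element.
ITEM_RE = re.compile(r"<item>(.*?)</item>")

flat = "<root><item>a</item><item>b</item></root>"
nested = "<root><item>outer <item>inner</item></item></root>"

# Works on the flat document.
assert ITEM_RE.findall(flat) == ["a", "b"]

# On nested elements the non-greedy match stops at the first closing
# tag, silently returning a fragment instead of the element's content.
assert ITEM_RE.findall(nested) == ["outer <item>inner"]

# A real parser tracks nesting and recovers the full text.
text = "".join(ET.fromstring(nested).itertext())
assert text == "outer inner"
```

The regex never errors out; it just quietly gives a wrong answer, which is the point of the analogy.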

That said, the "extraordinary mimicry" is far, far beyond anything I could possibly have imagined. LLMs pass the Turing test with flying colors without being AGI, and I would have sworn that one implied the other. So it's entirely possible that we're closer than I think.

tartoran•5h ago
What we currently have with LLMs is a kind of scripted artificial intelligence. I would say that is not necessarily a bad thing, considering that a true artificial intelligence with autonomy and goals of self-preservation could easily escape our control and wreak real havoc unless we approach it with tiny steps and clear goals.
cgriswald•4h ago
Your post sort of hints at it, I think, but I'll state it clearly: Misalignment is the main threat when it comes to AI (and especially ASI).

A self-preserving AI isn't meaningfully more dangerous than an AI that solves world hunger by killing us all. In fact, it may be less so if it concludes that starting a war with humans is riskier than letting us live.

ninetyninenine•4h ago
How is an LLM scripted? What do you mean? We don't understand how LLMs work, and we know definitively it's not "stochastic parroting," as people used to call it.
daveguy•4h ago
It is quasi-deterministic (sans a temperature parameter) and it only ever responds to a query. It is not at all autonomous. If you let it run chain-of-thought for too long, or in any sort of continuous feedback loop, it always goes off the rails. It is an inference engine, and inference by itself is not intelligence. Chollet argues convincingly that intelligence requires both inference and search/program synthesis. If you haven't read his papers on the ARC-AGI benchmark, you should check them out.
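The temperature (or "heat") parameter in question is just a knob on the output distribution. A toy sketch (made-up logits, not any real model's API) of why temperature 0 is deterministic while anything above it samples:

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding (deterministic argmax);
    higher temperatures flatten the softmax and add randomness.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.0, 0.1]
# Greedy decoding always picks the same token for the same logits.
assert all(sample(logits, temperature=0) == 0 for _ in range(100))
```

With temperature above 0 the same call can return different indices on different runs, which is the "quasi" in quasi-deterministic.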
ninetyninenine•4h ago
> It is quasi-deterministic (sans a heat parameter)

Human brains are quasi-deterministic. It's just chaos from ultimately deterministic phenomena, which can be modeled as a "heat parameter".

> it only ever responds to a query. It is not at all autonomous.

We can give it feedback loops like CoT, and you can even have it talk to itself. If you think of the feedback loop as the entire system, it is autonomous. Humans are actually doing the same thing: our internal thought process is by definition a feedback loop.

> If you let it do chain-of-thought for too long or any sort of continuous feedback loop it always goes off the rails.

But this isn't scripted. This is more that the AI goes crazy. "Scripted" isn't a characteristic that accurately describes anything that's going on.

AI hallucinating and going off the rails isn't a characteristic of scripting; it's a characteristic of lack of control. We can't control AI.
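The "talk to itself" loop is easy to picture. A stub sketch (the model here is a hypothetical placeholder function, not a real LLM call) of feeding a model's output back in as its next input:

```python
def stub_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would hit a model API here."""
    return "thought about: " + prompt[:40]

def self_talk(seed: str, steps: int = 3) -> list[str]:
    """Run a closed feedback loop: each output becomes the next input."""
    transcript = [seed]
    for _ in range(steps):
        transcript.append(stub_model(transcript[-1]))
    return transcript

trace = self_talk("is inference alone intelligence?")
```

Once started, the loop runs with no further outside queries; whether that counts as autonomy is exactly the disagreement in this thread.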

cgriswald•5h ago
Well, Wells actually says "...years and years and years away from anyone creating an actual artificial intelligence."

You know, in case you correctly interpreted the headline to mean Wells is saying aliens developed AI out there.

vouaobrasil•5h ago
Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is. It may not be in terms of its pure reasoning or in the goal of reaching AGI, but it is very disruptive, and it's a guaranteed way to heavily reinforce the requirements of using big tech in daily life, without actually improving it.

Yes, it may not be AGI, and AGI may not come any time soon, but by focusing on that question, people become distracted and don't have as much time to think about how parasitic big tech really is. If it's not a strategy used consciously, it's rather serendipitous for them that the question has come about.

spacemadness•4h ago
Not to mention all the people on HN arguing we’re close to AGI because LLMs sound like humans and can “think”. “What’s the difference?” they ask, not in curiosity but after already making a strong claim. I assume it’s the same people that probably skipped every non engineering class in college because of those “useless” liberal arts requirements.
skydhash•4h ago
I did engineering in college, but I've been dabbling in art since I was young, and the philosophy of science is much more attractive to me than actual science. I agree with you that a lot of takes that AI is great, while internally consistent, are very reductive when it comes to technology usage by humans.
vouaobrasil•4h ago
AI is only great when you narrowly define the problem in terms of efficient production of a narrowly-defined thing. And usually, production at that level of efficiency is a bad thing.
Supermancho•4h ago
> Hyping up AGI is a good way for tech companies to distract people into thinking AI is actually not that big a deal, when it is.

I'm not sure what you're trying to say. Most people don't know the difference between AI and AGI. It's all hype making people think it's a big deal.

I have family that can't help but constantly text about AI this and AI that: how dangerous it might be, or how it will revolutionize something else.

ninetyninenine•4h ago
I can't read the site; it requires a subscription. But I and many other researchers disagree. Geoffrey Hinton, for example, disagrees massively.

It's not just LLMs that were a leap. For more than the past decade, ML has been advancing at breakneck velocity. We've seen models for scene recognition, models that can read your mind, models that recognize human movement. We were seeing the pieces, the components, and amazing results constantly for over 10 years, independent of LLMs.

And then everyone thinks AI is thousands of years away because we hit a small blip with LLMs in 2 years.

And here's the thing: the blip isn't even solid. LLMs sometimes get shit wrong and sometimes get shit right; we just can't control it. We can't definitively say LLMs can't answer a specific question. Maybe another LLM can get it right; maybe if prompted a different way it will get it right.

The other strange thing is that the LLM shows signs of lying. It's not truthful: it has knowledge of the truth, but the thing's purpose is not really to tell us the truth.

I guess the best way to put it is that current AI sometimes behaves like AGI and sometimes doesn't. It is not consistently AGI. The fact that we built a machine that inconsistently acts like AGI shows how freaking close we are.

But the reality is no one understands how LLMs work. This fact is definitive. If you think we know how LLMs work, then you are out of touch with reality. Nobody knows how LLMs work, so this article and my write-up are really speculation. We really don't know.

But the 10-year trendline of AI in general is a more accurate guide to future progress. Basing predictions on a 2-year trendline of one specific problem (hallucination) in one specific class of model (LLMs) is not predictive.

tim333•4h ago
archive link https://archive.ph/AJuKI
Supermancho•4h ago
> I can’t read the site it requires subscription.

You can: archive.ph. Copy the link, paste the link.

jmclnx•4h ago
>We’re Light-Years Away

Needs to be pointed out :) If I move billions of light-years from here, I will be able to create AI :) A light-year is a distance; the title should maybe say "decades away".

But I fully believe her argument; I think kids being born today will not see any real AI implementation in their lifetime.

add-sub-mul-div•3h ago
I'm very down on the idea that LLMs are on the path to AGI, but come on, man, even they don't trip over a simple metaphor.
metalman•4h ago
While "true AI" is likely impossible, that discussion detracts from the fact that a whole new and very powerfull ability to process information is here , which will/is bieng used to automate routine managerial tasks and run certain types of robotic equipment. I am slowly preparing myself to use these new tools, but will never consider them "clean", and will impliment there use in certain hermeticaly compartmented areas of my professional and financial undertakings.
baal80spam•3h ago
You cannot be "light-years away" from a specific point in time. Who is this person and why is what they say important?
sc68cal•3h ago
It's a figure of speech. She's the author of the popular Murderbot series, which has been a successful show on Apple TV+. Her stories are about artificial life and artificial intelligence.
regularjack•3h ago
I have to admit I also found it weird that Scientific American is using a unit of distance as if it were a unit of time.