
The average chess players of Bletchley Park and AI research in Britain

https://blogs.bl.uk/science/2025/06/the-average-chess-players-of-bletchley-park-and-ai-research-in-britain.html
30•salonium_•4h ago

Comments

PaulRobinson•2h ago
It's strange today to remember that playing chess well was once seen as a great marker of AI; now we consider it much less so.

I thought the Turing test would be a good barometer of AI, but in today's world of AI slop fooling more and more people, and, ironically, of software that is better than humans at solving CAPTCHAs, I'm not so sure.

Add to the mix the reports of people developing psychological disorders when deeply exposed to LLMs, and I'm not sure they're good replacements for therapists either (ELIZA, ah, what a thought). Even with heavy investment in agentic workflows, getting lots of context into GraphRAG, and wiring up MCP, they seem to be good at helping experts get a bit faster, not at replacing experts. That isn't specific to software development; it seems to hold across all domains of expertise.

So what are we chasing now? What's the test for AGI?

It's evidently not playing games well, as we once thought, nor pretending to be human, nor even being useful to a human. What is it, then?

pvg•2h ago
> playing chess well was seen as a great marker of AI

Was it? Alpha-beta pruning dates from 1957; they had a decent idea of what human-beating computer chess would look like, and that it probably wasn't a pathway to Turing-test-beating AI.
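For context, the pruning idea mentioned here fits in a few lines. This is a toy Python sketch over a hand-built game tree; the tree shape and leaf scores are made-up illustrations, not chess evaluations:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping (pruning) branches
    that provably cannot affect the final decision."""
    if isinstance(node, int):            # leaf: static evaluation score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:            # alpha cutoff
                break
        return value

# A tiny 2-ply tree: each inner list is a node, ints are leaf scores.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

Note how the last subtree is abandoned after its first leaf (1), since the maximizer already has a guaranteed 6; the point of the 1957 insight is that this cut never changes the answer.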

dandellion•2h ago
How about: the ability to independently implement ways to manipulate the local environment for their own benefit or self-preservation?
zmgsabst•2h ago
I’d argue we have AGI, at the level of a child; now we’re debating further steps, such as adult AGI and super intelligence.

But because AI is not like us, we see different results at different stages: machines have been better at arithmetic for a hundred years, better at games for twenty, and are slowly climbing up other domains.

nyrikki•32m ago
Any discussion about AGI requires a written definition of the term before a reasonable discussion can be had.

What we have now matches what many popular texts would call "narrow AI": systems limited to specific tasks like speech recognition or playing chess, or mixtures of those.

Traditionally, AGI represents a more aspirational goal: machines that could theoretically perform any intellectual task a human can do.

Under that definition we aren't close, and we would actually need new math to even hope to reach that goal.

Obviously, individuals' concepts of 'AGI' differ, as do their motivations for choosing one.

But the traditional, hopeful concept of AGI is known to be unreachable without discoveries that upend what we think are hard limits today.

As for machines being better at arithmetic: that tie to algorithms is exactly where the limits come from.

The work of Turing, Gödel, Tarski, Markov, Rice, etc. is where that claim comes from, IMHO.

Fortunately there is a lot of practical utility to be had without AGI, but our industry's use of aspirational naming is almost guaranteed to disappoint the rest of the world.

Scarblac•2h ago
That can only be decided in hindsight. By the time everybody agrees that the system is clearly generally intelligent, it will have been for ages already. It will already be far more intelligent than even very smart humans.

But I think general problem solving is a part of it. Coming up with its own ideas for possible solutions rather than what it generalized from a training set, and being able to try them out and iterate. In an environment it wasn't specifically designed for by humans.

(not claiming most humans can do that)

zmgsabst•2h ago
Are you saying most humans aren’t generally intelligent, by your definition?
Scarblac•1h ago
Humans are very different from computers. In particular there are some things that computers are vastly better at (computation, memory, etc), and humans are optimized for surviving in their biological environment, not necessarily for general intelligence.

I think asking an AGI to do what humans do is like asking a submarine to swim. It's not very useful.

So I think that when we have useful computer AGI, it will be much better at it than humans.

You already see that even with say ChatGPT -- it's not expert level, but the knowledge it does have is way way wider than any human's. If we get something that's as smart as humans, it will probably still be as widely applicable.

And why even try, otherwise? We already have human intelligence.

pyman•2h ago
I have a similar philosophical question:

My dog doesn't know what I do for a living, and he has no concept of how intelligent I am. So if we're limited by our own intelligence, how would we ever recognise or measure the intelligence of an AI that's more advanced than us?

If an AI surpasses us, not just in memory or calculation but in reasoning, self-reflection, and abstraction, how would we even know?

officehero•1h ago
Wittgenstein's lion
dale_glass•1h ago
We could test it. We know with certainty that computers play far better chess than we do.

How do we know? Play a game with the computer, and see who wins.

There's no reason why we can't apply the same logic elsewhere. Set up a testable scenario, see who wins.

card_zero•57m ago
Either the alleged super-intelligence affects us in some way, directly or indirectly by altering things we can detect about the world/universe, in which case we can ultimately detect it, or else it doesn't, in which case it might as well belong to a separate universe, not only in terms of our perception but objectively too.

The error here is thinking that dogs understand anything.

Retric•46m ago
Some dogs can respond to "bring me my slippers" and go fetch them from another room, a concrete task that's still difficult for robots today.

With dogs it's less a question of intelligence than of communication, something a more intelligent AI is unlikely to have a problem with.

card_zero•17m ago
OK, it might be a cultural thing. Do dogs probe the secrets of the world around them, with all that barking, even a little? Is it that they're in an early phase and will eventually advance to do more with stones than lick them sometimes?

What would our being baffled by a super-intelligence look like? Maybe some effect like dark matter. It would make less sense the more we found out about it, and because it's on a level beyond our comprehension, it would never add up. And the lack of apparent relevance to a super-intelligence's doings would be expected, because it's beyond our comprehension.

But this is silly and resembles apologies for God based on his being ineffable. So there's a way to avoid difficult questions like "what is his motivation" and "does he feel like he needs praise" because you can't eff him, not even a little. Then anything incomprehensible becomes evidence for God, or super-intelligence. We'd have to be pretty damn frustrated with things we don't understand before this looks true.

But that still doesn't work, because we're not supposed to be able to even suspect it exists. So even that much interaction with us is too much. In fact this "what if" question undermines itself from the start, because it represents the start of comprehension of the incomprehensible thing it posits.

TheOtherHobbes•32m ago
Dogs certainly do understand things. Dogs and cats have a theory of mind and can successfully manipulate and trick their owners - and each other.

Our perceptions are shaped by our cognitive limitations. A dog doesn't know what the Internet is, and completely lacks the cognitive capacity to understand it.

An ASI would almost certainly develop some analogous technology or ability, and it would be completely beyond us.

That does NOT mean we would notice we were being affected by that technology.

Advertising and manufactured addictions make people believe external manipulations are personal choices. An ASI would probably find similar manipulations trivial.

But it might well be capable of more complex covert manipulations we literally can't imagine.

card_zero•5m ago
Dogs certainly do not understand things. Do they enquire? What are some good dog theories? They have genetic theories. We breed theories into them.
gadders•31m ago
https://fluent.pet/pages/getting-started-with-talking-button...
gadders•33m ago
It can have a fight with Nagel's Bat.
captainbland•2h ago
I think with the Turing test, it's turned out to be a fuzzier line than expected. People are, to various degrees, learning LLM tells even as the models improve. So what might have passed the Turing test in 2020 might not today. Similarly, it seems to be the case that conversations with LLMs often start better than they end, even now, so an LLM might pass a short Turing test but fail a very long one that runs to hundreds of rounds.
kranke155•1h ago
We’ve clearly passed the Turing test, I think. I can’t think of many ways I’d be able to reliably detect an LLM if it were set up simply to act as a person talking to me on Discord.
iamflimflam1•2h ago
> It's strange today to remember that playing chess well was seen as a great marker of AI, but today we consider it much less so.

It was seen as so difficult to do that research on it should be abandoned:

> Projects in category B were held to be failures. One important project, that of "programming and building a robot that would mimic human ability in a combination of eye-hand co-ordination and common-sense problem solving", was considered entirely disappointing. Similarly, chess playing programs were no better than human amateurs. Due to the combinatorial explosion, the run-time of general algorithms quickly grew impractical, requiring detailed problem-specific heuristics.

> The report stated that it was expected that within the next 25 years, category A would simply become applied technologies engineering, C would integrate with psychology and neurobiology, while category B would be abandoned.

https://en.wikipedia.org/wiki/Lighthill_report
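The combinatorial explosion the report refers to is easy to make concrete. A rough back-of-envelope sketch in Python; the branching factor of ~35 is a common textbook estimate for chess, not a figure from the report:

```python
# Rough size of an exhaustive chess search to a given depth,
# assuming ~35 legal moves per position (common textbook estimate).
BRANCHING = 35

def tree_size(plies: int) -> int:
    """Number of leaf positions in a full-width search `plies` deep."""
    return BRANCHING ** plies

for plies in (2, 4, 8):
    print(plies, tree_size(plies))
# 8 plies (4 moves per side) is already over 2 trillion positions,
# which is why general algorithms needed problem-specific heuristics.
```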

rjsw•1h ago
The linked post points out that it is a low-cost area of research and you don't need to explain the context to a reviewer.
jltsiren•2h ago
Tests can only show that something is not AGI. If you want to show that a system is AGI, you must wait for expert consensus. That means adding new tests and dropping old ones, as our understanding of intelligence improves. If something is truly AGI, people will eventually run out of plausible objections.
nemomarx•1h ago
I suppose doing useful research becomes the next target?

That's what the exponential lift-off people want, right?

codeulike•1h ago
If you look at the stuff Turing was writing in the 1950s, it's fascinating, because he really saw the potential of what computation was going to be able to do. There was a paradigm shift in thinking about possibilities here that he grasped in the very early days.

https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_a...

It would be amazing to go and fetch Turing with a time machine and bring him to our time. Show him an iPhone, his face on the UK £50 note, and Wikipedia's list of https://en.wikipedia.org/wiki/List_of_openly_LGBTQ_heads_of_...