What does the Turing Test test?

https://philipball86.substack.com/p/what-does-the-turing-test-test
9•FromTheArchives•3mo ago

Comments

bell-cot•3mo ago
Rather than some fundamental law of AI, one might better view the Turing Test as a then-obvious product of its social backstory and circumstances. Turing was a son and grandson of civil servants, engineers, army officers, and gentry. He grew up in the inter-war (WWI-WWII) British Empire. He originally called his test "the imitation game", and introduced it in a paper published in a philosophy journal.

In that context - being able to present yourself as an intelligent human, in strictly written communication with other humans, is a "d'oh, table stakes" human skill. The Empire was based on ink-on-paper communications. If you couldn't keep the people who read your correspondence, paperwork, and reports convinced that you were an intelligent (and dutiful, honorable, etc.) person - well, your career wasn't going far.

(Yes, that was only the ideal, and the British Empire frequently fell rather short. But what is an "imitation game", described in a philosophy journal? An ideal.)

roxolotl•3mo ago
The major problem with most tellings of the test is that we don't actually run it. The game is to be played with three participants: two competitors and a questioner. Of course, today the assumption is that it'll be just a human and a machine, with no questioner. The goal was not for the machine to trick a human, but for the machine to appear more human to the questioner than a human being questioned at the same time.
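The three-party protocol described above can be sketched as a toy harness. (The `questioner`, `human`, and `machine` objects and their method names are hypothetical interfaces for illustration, not anything from Turing's paper.)

```python
import random

def imitation_game(questioner, human, machine, rounds=5):
    """One session of the three-party game: the questioner interrogates
    two hidden witnesses (one human, one machine) over several rounds,
    then must say which label hides the machine."""
    # Randomly assign the two witnesses to the anonymous labels "A" and "B".
    witnesses = {"A": human, "B": machine}
    if random.random() < 0.5:
        witnesses = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = questioner.ask(transcript)
        # Both witnesses answer the same question each round.
        answers = {label: w.answer(question) for label, w in witnesses.items()}
        transcript.append((question, answers))

    guess = questioner.guess_machine(transcript)  # returns "A" or "B"
    return witnesses[guess] is machine            # True iff the questioner wins
```

Note that the machine's target is the questioner's verdict relative to the human witness, not merely fooling a lone chat partner - which is exactly the distinction the comment draws.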

Does any of that matter? I have no idea. I suspect Turing would say no as flippantly as he predicted in the paper that “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

I’d strongly recommend anyone interested in having genuine discussions about LLMs read the paper. It’s genuinely a quick and easy read that’s still relevant. It reads as though it could have been a blog post linked here yesterday.

derbOac•3mo ago
It's been decades (?) since I read the paper, but I think the questioner is key for multiple reasons, especially if you consider a generalized iterated version of the Turing test as it develops into the future.

I think the general idea is about being able to detect a difference between a machine and human, not whether the human alone can guess, as you're pointing to. In a general case, you can think of the questioner as some kind of detector, a classification system, an algorithm or method.

Let's say the classification system, the questioner, is able to be improved, and in this sense, there develops a kind of adversarial or challenge relationship between the AI developer and the questioner developer. Both improve, such that the AI becomes more humanlike, the questioner is improved and then can tell the difference again, and so forth and so on. Whether or not the AI "passes" the test isn't a static outcome; it likely passes, then fails, then passes again and so forth as the AI and questioner improve.

What's key is that as the AI becomes more humanlike, the questioner at the same time develops a more detailed model or representation of what it means to be humanlike. In this case, you could argue that the questioner must develop a descriptive representation of "human-likeness" that's just as sophisticated as the one the AI instantiates, and what would likely occur is that the AI becomes more humanlike in response to the improved representations and classifications of the questioner. The questioner, in some sense, is a mirror-image instantiation of humanness to that represented by the AI, and vice versa.

It's the questioner in this iterated Turing test that ensures the AI becomes more humanlike, maybe to an extent the humans themselves aren't able to understand or recognize during the test. The AI wouldn't necessarily be imitating the human, it would be imitating what the questioner thinks is human.
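The pass/fail oscillation described in this thread can be made concrete with a toy numeric model. (The integer skill "levels" and the `improve_past`/`detects` rules below are my own illustrative assumptions, not anything from the comment or from Turing.)

```python
class Agent:
    """A player in the adversarial loop, reduced to a single skill number."""
    def __init__(self):
        self.level = 0

    def improve_past(self, other):
        # Toy "training" step: gain just enough skill to beat the opponent.
        self.level = max(self.level, other.level) + 1

class Questioner(Agent):
    def detects(self, ai):
        # Ties go to the detector: an equally skilled questioner can
        # still tell human from machine.
        return self.level >= ai.level

def iterated_test(generations=3):
    """Record whether the AI passes immediately after each training step."""
    ai, questioner = Agent(), Questioner()
    outcomes = []
    for _ in range(generations):
        ai.improve_past(questioner)                  # AI becomes more humanlike
        outcomes.append(not questioner.detects(ai))  # True: AI passes right now
        questioner.improve_past(ai)                  # detector's model catches up
        outcomes.append(not questioner.detects(ai))  # False: AI fails again
    return outcomes
```

Running `iterated_test` yields an alternating pass/fail sequence, matching the comment's point that "passing" is not a static outcome but a moving equilibrium between the AI and the questioner.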

netdevphoenix•3mo ago
It tests imitation skills. What makes the test interesting is the point of view that, for some kinds of skills, as the imitation gets good enough it becomes indistinguishable from the thing it seeks to imitate. The simplest example is a purely abstract thing like a song: any imitation that gets ever closer to the imitated song will eventually become indistinguishable from it. People like Hofstadter touched on this in the timeless G.E.B.

That's what makes the imitation game so interesting. Any ontological debates about what imitation means, implementation details, or limitations are orthogonal to this, yet that is what most people everywhere, even in here, obsess about. Missing the forest for the trees. The point is not to ask whether x is intelligent for any x under consideration, but to use the game as a reference when thinking about what intelligence is.

Super imitators or super predictors, the name of the game is helping each other get a sense of what intelligence (the one we have) is, in humans, other mammals, insects, etc.

justonceokay•3mo ago
In philosophy of mind, there is the concept of a “zombie”. This is a person who acts just like a real person would in all circumstances, except that they do not have an internal experience of their senses. No “qualia”.

My little engineering brain has always recoiled at any use of these zombies in an argument. In my reckoning the only way a machine could act human in all circumstances would be if it had a rich internal representation of the world, including sensory data, goals, opinions, fears, weaknesses…

The LLMs are getting better at the Turing test, and as they get better I wonder how correct my intuition about zombies is.

netdevphoenix•3mo ago
If you pretend they have the intelligence of an infant, they can pass the test. For some reason, people always try to use adult human intelligence as a point of reference. Infants are intelligent too.

My take is that we are still making too many assumptions about "intelligence", conflating human intelligence with adult human intelligence, with non-human animal intelligence, etc.

smallmouth•3mo ago
How many tests could the Turing Test test if the Turing Test could test tests?

DrNosferatu•3mo ago
I would say LLMs do pass the Turing test, at least in meaningful and useful contexts - hence all the hype.

But has a rigorous experiment, with proper statistics, been conducted to test if a frontier LLM can consistently pass as a human interlocutor?
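Such an experiment reduces to a standard hypothesis test: pool the judges' verdicts and ask whether they identify the machine more often than chance. A minimal stdlib-only sketch (the session numbers below are made up for illustration):

```python
from math import comb

def binom_pvalue(correct, trials, p=0.5):
    """One-sided exact binomial test: the probability of seeing at least
    `correct` right identifications out of `trials` verdicts if every
    judge were simply guessing at chance rate p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical session: 300 judge verdicts, machine correctly identified
# 160 times. A large p-value means the judges did no better than
# coin-flipping (the machine "passes"); a p-value below 0.05 means the
# judges reliably spotted it.
p_value = binom_pvalue(160, 300)
```

A rigorous version would also need preregistered conversation lengths, judge selection criteria, and a matched human baseline, since "consistently pass" depends heavily on who is asking and for how long.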