frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•10m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
3•o8vm•12m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•13m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•26m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•29m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•31m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•39m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•41m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•42m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•42m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•45m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•46m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•50m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•52m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•52m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•53m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•55m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•58m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments
Open in hackernews

Reasons to Not Use ChatGPT

https://stallman.org/chatgpt.html
67•sonderotis•4mo ago

Comments

sonderotis•4mo ago
Interesting take from Richard.
baggy_trough•4mo ago
All these people that go "it just predicts words" seem to be very certain that the brain does something else.
sonderotis•4mo ago
Actually, it does. We do not predict words lol.
baggy_trough•4mo ago
I find your certainty to be unwarranted.
rmwaite•4mo ago
Then what do we do? lol.
BizarroLand•4mo ago
We understand the meaning that we wish to convey and then intelligently choose the best method that we have at our disposal to communicate that.

LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.
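
A toy sketch of the loop described above, for illustration only: a bigram model in Python that counts which word follows which in a made-up corpus and samples a continuation. Real LLMs use a neural network over tokens, but the generate-next-word loop has the same shape.

    import random
    from collections import Counter, defaultdict

    # Toy bigram model: count which word follows which in a tiny
    # made-up corpus, then repeatedly sample a likely next word.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        options = counts[prev]
        if not options:               # word never appeared mid-corpus
            return None
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    word = "the"
    out = [word]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        out.append(word)
    print(" ".join(out))              # e.g. "the cat sat on the mat and the"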

satvikpendem•4mo ago
How do you know we understand and LLMs don't? To an outsider they look the same. Indeed, that is the point of solipsism.
BizarroLand•4mo ago
Because unlike a human brain, we can actually read the whitepaper on how the process works.

They do not "think", they "language", i.e. large language model.

satvikpendem•4mo ago
What is thinking, and why do you think that an LLM ingesting content is not also reading? Clearly they're absorbing some sort of information from text content, aka reading.
BizarroLand•4mo ago
I think you don't understand how LLMs work. They run on math; the only parallel between an LLM and a human is the output.
satvikpendem•4mo ago
Are you saying we don't run on math? How much do you know of how the brain functions?

This sort of Socratic questioning shows that no one truly can answer them because no one actually knows about the human mind, or how to distinguish or even define intelligence.

baggy_trough•4mo ago
So do neurons.
9x39•4mo ago
How does this intelligence work? Can you explain how 'meaning' is expressed in neurons, or whatever it is that makes up consciousness?

I don't think we know. Or if we have theories, the error bars are massive.

>LLMs find the most likely next word based on their billions of previously scanned word combinations and contexts. It's an entirely different process.

How is that different than using one's learned vocabulary?

falcor84•4mo ago
Well, I failed to predict the "lol" at the end of your comment; touché.
theo1996•4mo ago
It does something else yes.
throw-the-towel•4mo ago
Even if it does, that's not very relevant. The airplane does not imitate the bird, yet it very much does fly.
simianwords•4mo ago
What a horrible take from someone who used to be competent. I find that it's usually the hardcore OSS and hardware-adjacent types who are ideological about AI.
simianparrot•4mo ago
And people will keep ignoring Stallman at their peril. But if you understand how the technology works, you also know he's right. If you think he isn't, you either don't understand or you don't _want_ to understand because your job depends on it.
9x39•4mo ago
He's sort of right but in an unspecific way.

"people should not trust systems that mindlessly play with words to be correct in what those words mean"

Yes, but this applies to any media channel or just other human minds. It's an admonition to think critically about all incoming signals.

"users cannot get a copy of it"

Can't get a copy of my interlocutor's mind, either, for careful verification. Shall I retreat to my offline cave and ruminate deeply with only my own thoughts and perhaps a parrot?

>you also know he's right. If you think he isn't, you either don't understand or you don't _want_ to understand because your job depends on it.

He can't keep getting away with this!

simianparrot•4mo ago
> Yes, but this applies to any media channel or just other human minds.

You can hold a person responsible, first and foremost. But I am so tired of this strawman argument; it's unfalsifiable but also stupid, because if you interact with real people, you immediately know the difference between people and these language models. And if you can't, I feel sorry for you, because that's more than likely a mental illness.

So no I can't "prove" that people aren't also just statistical probability machines and that every time you ask someone to explain their thought process they're not just bullshitting, because no, I can't know what goes on in their brain nor measure it. And some people do bullshit. But I operate in the real world with real people every day and if they _are_ just biological statistical probability machines, then they're a _heck_ of a lot more advanced than the synthetic variety. So much so that I consider them wholly different, akin to the difference between a simple circuit with a single switch vs. the SoC of a modern smartphone.

9x39•4mo ago
I actually agree with you that LLMs are so rigid and shallow that even a typical person appears as an ocean to them in a conversation.

I just think Stallman is this broken-clock purist who offered no specific practical advice in this case. I'd be more interested in what he thinks of LLMs one-shotting humans with their tokens (LLM psychopathy?) as they come on the scene worldwide.

ryanjshaw•4mo ago
I don’t have the luxury of listening to him. I would be much less effective at my job compared to my competitors in the job market if I didn’t use ChatGPT, regardless of whether it’s open source software or meets his definition of intelligence.
theo1996•4mo ago
Extremely based and to the point. It's ridiculous how all these comments somehow disagree with him; they are not intelligent systems, it's just a regression function run on words or pixel data.
baobun•4mo ago
> all these comments

All 2 of them! Way to gauge the crowd sentiment.

falcor84•4mo ago
Can you please offer a measurable definition of intelligence that you would put good money on not being cracked by AI in a decade?
alganet•4mo ago
What if I said that the ability to move the goalpost is the real trick?

Machines started to hold up casual conversation well, so we came up with more clever examples of how to make it hallucinate, which made it look dumb again. We're surprisingly good and fast at it.

You're trying to cap that to a decade, or a specific measure. It serves no other purpose than to force one to make a prediction mistake, which is irrelevant to the intelligence discussion.

falcor84•4mo ago
I think I understand what you're saying, but disagree with the implication. If anything, I'm actually impressed by how the development of AI seems to me to be making it more and more difficult for us to move the goalposts.

There obviously still are many opportunities for us to make fun of the capability of GenAI, but it's getting harder to come up with the "clever" (as you said) prompt. They mostly don't add supernumerary fingers any more, and generally don't make silly arithmetic mistakes on a single prompt. We need to look for more complex and longer-time-horizon tasks to make them fail, and in many situations, the tasks are as likely to trip up a human as they would an AI.

Indeed your comment reminded me of Plato's Dialogues, which mostly involve Socrates intentionally trying to trip up his conversation partner in a contradiction. Reading these didn't ever make me feel that Socrates's partner is not intelligent or really has a deep underlying issue in their mental model, but rather that Socrates (at least as written up by Plato) is very clever and good at rhetoric. Same in regards to AI - I don't see our ability to make them fail as illustrating a lack of intelligence, just that in some ways we are more intelligent or have more relevant experience.

And if you're concerned about making a prediction and all you can fall back on is an "I know it when I see it" argument, then to me that is as strong a signal as can be that there's no hard line separating artificial intelligence from human intelligence.

alganet•4mo ago
I can move the goalpost without relying on hallucinations or the ability to make fancy rhetoric. Just make it about energy consumption, and the whole thing looks dumb again.

Humans can do these amazing things (like learning multiple languages) on a very tight energy budget. LLMs need millions of hours of training to deliver subpar results. If you consider the amount of resources poured into it, it's not that impressive.

If someone needs a measure and a prediction, let's make one then: LLMs will not surpass humans, given both are provided with the same energy budget, within a century. That means I am confident that, given the same energy budget as a human, it will take more than 100 years of development (I think it's more, but I'm being safe) to come up with something that can be trained to fool someone in a conversation.

Can you understand the energy argument from the intelligence perspective? This thing is big, dumb and wasteful. It just has more time (by cheating) to cover its bases. It can do some tricks and fool some people, but it's a whole different thing, and it is reasonable to not call it intelligent.
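
A back-of-envelope version of the energy comparison above, for illustration. The ~20 W brain draw is well established; the ~50 GWh training figure is a rough third-party estimate for a frontier model, assumed here rather than a disclosed number.

    # Back-of-envelope energy comparison. The ~20 W brain figure is
    # well established; the ~50 GWh training figure is a rough
    # third-party estimate (assumption), not a disclosed number.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    brain_watts = 20
    years = 20                                    # growing a fluent adult
    brain_kwh = brain_watts * years * SECONDS_PER_YEAR / 3.6e6

    train_kwh = 50e6                              # ~50 GWh, assumed estimate

    print(f"human, 20 years of thinking: {brain_kwh:,.0f} kWh")   # ~3,500 kWh
    print(f"one frontier training run:   {train_kwh:,.0f} kWh")
    print(f"ratio: roughly {train_kwh / brain_kwh:,.0f}x")        # ~14,000x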

niam•4mo ago
Whether LLMs are "intelligent" seems a wholly uninteresting distinction, resembling the internet ceremony surrounding whether a hotdog is a sandwich.

There's probably very interesting discussion to be had about hotdogs and LLMs, but whether they're sandwiches or intelligent isn't a useful proxy to them.

saulpw•4mo ago
I disagree completely. Many people take for granted that the expression of intelligence/competence is the same as actual intelligence/competence, and many people are acting accordingly. But a simulacrum is definitively NOT the thing itself. When you trust fake intelligence, especially as a way to indulge mental laziness, your own faculties atrophy, and then in short order you can't even tell the difference between a real intelligence bomb and a dumb empty shell that has the word "intelligent" written on it.
marcellus23•4mo ago
What are your thoughts on the Chinese room thought experiment?
saulpw•4mo ago
See my other comment above. Language manipulation is not sufficient for intelligence and understanding. There is no one in the Chinese Room who understands the questions and answers; there is no understanding in the system; there is no understanding at all.
niam•4mo ago
I'm not even taking for granted what it means. Can you define it in a way that your neighbor will independently arrive at? It's an incredibly lossy container for whatever meaning people will want to pack it with, more so than for other words.

Is a hotdog a simulacrum of a sandwich? Or a fake sandwich? I have no clue and don't care because it doesn't meaningfully inform me of the utility of the thing.

An LLM might be "unintelligent" but I can't model what you think the consequences of that are. I'd skip the formalities and just talk about those instead.

saulpw•4mo ago
It sounds like you are [dis]interested in a philosophical discussion about epistemology. So it seems that you've skipped the inquiry yourself and have short-circuited to "don't care". Which is kind of "utilitarian". For other perspectives[0]:

> The school of skepticism questions the human ability to attain knowledge, while fallibilism says that knowledge is never certain. Empiricists hold that all knowledge comes from sense experience, whereas rationalists believe that some knowledge does not depend on it. Coherentists argue that a belief is justified if it coheres with other beliefs. Foundationalists, by contrast, maintain that the justification of basic beliefs does not depend on other beliefs. Internalism and externalism debate whether justification is determined solely by mental states or also by external circumstances.

For my part, I do believe that there is non-propositional knowledge. That a person can look at a set of facts/experiences/inputs and apply their mind towards discerning knowledge (or "truth"), or at least the relative probability of knowledge being true. That while this discernment and knowledge might be explained or justified verbally and logically, the actual discernment is non-verbal. And, for sure, correctness is not even essential--a person may discern that the truth is unknowable from the information at their disposal, and they may even discern incorrectly! But there is some mental process that can actually look behind the words to their "meaning" and then apply its own discernment to that meaning. (Notably, this is not merely aggregating everyone else's discernment!) This is "intelligence", and it is something that humans can do, even if many of us often don't even apply this faculty ourselves.

From discussions on HN and otherwise I gather this is what people refer to by "world-modeling". So my discernment is that language manipulation is neither necessary nor sufficient for intelligence--though it may be necessary to communicate more abstract intelligence. What LLM/AGI proponents are arguing is that language manipulation is sufficient for intelligence. This is a profound misunderstanding of intelligence, and one that should not be written off with a blithe and unexamined "but who knows what intelligence is anyway".

[0] https://en.wikipedia.org/wiki/Epistemology

niam•4mo ago
I'm not discounting the philosophy, just the language.

I don't mean to sound blithe. If I do, it's not out of indifference but out of active determination that these kinds of terminological boundary disputes quickly veer into pointlessness. They seldom inform us of anything other than how we choose to use words.

p0w3n3d•4mo ago
ChatGPT has a few self-awareness modules; it can even behave based on its certainty. Please see Andrej Karpathy's video on it.

This is the breakthrough we have already passed; there's no going back now. There is also reasoning in LLMs now.

sanbor•4mo ago
Interesting points! Maybe a better term is LLMs (BTW smartphones are not smart and people don't seem to be confused). I agree about the problem of being dependent on those servers and sending them so much data. I would mention there is a version of ChatGPT you can run locally[1].

[1] https://openai.com/index/introducing-gpt-oss/

esbranson•4mo ago
Consciousness, in Zoltan Torey's[1] model, is the brain's layered, language-enabled off-line mechanism that reflects on its own sensory endogram, generating self-aware, internally guided behavior.[2] The off-line mechanism generates mental alternatives, which are then "run past the brainstem, which then makes the selection." Nice little accessible book.[3]

> Taking “computer” first, we find that this alleged source of machine-generated consciousness is not what it is cracked up to be. It is a mere effigy, an entity in name only. It is no more than a cleverly crafted artifact, one essentially indistinguishable from the raw material out of which it is manufactured.[2]

[1] https://en.wikipedia.org/wiki/Zoltan_Torey

[2] https://mitpress.mit.edu/9780262527101/the-conscious-mind/

[3] https://search.worldcat.org/title/887744728

dudeinjapan•4mo ago
By this logic, most human brains are bullshit generators too. Some humans even have a complete and utter disregard for the truth. (One such human happens to own Truth Social.)
rpjt•4mo ago
It's true that lots of people don't seem to recognize objective truth, or just don't want to admit it. Perception is reality.
visarga•4mo ago
Richard makes a distinction between human understanding and AI indifference to truth. But isn't that what half the country is doing at the moment? And more philosophically, we can't know the Truth because we rely on leaky abstractions all the way down.

AI models are subject to user satisfaction and sustained usage; the models also have a need to justify their existence, not just us. They are not that "indifferent": after multiple iterations the external requirement becomes an internalized goal. Cost is the key: it costs to live, and it costs to execute AI. Cost becomes valence.

I see it like a river - water carves the banks, and banks channel the water, you can't explain one without the other, in isolation. So are external constraints and internal goals.

satvikpendem•4mo ago
Another day, another example of the AI Effect in action:

> "The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements.[4][2][1] Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.[4] The earliest known expression of this notion (as identified by Quote Investigator) is a statement from 1971, "AI is a collective name for problems which we do not yet know how to solve properly by computer", attributed to computer scientist Bertram Raphael.[5]

> McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[6] It is an example of moving the goalposts.[7]

I wonder how many more times I'll have to link this page until people stop repeating it.

[0] https://en.wikipedia.org/wiki/AI_effect

falcor84•4mo ago
Leaving aside Stallman's extreme take, present-day LLMs and other generative systems are absolutely still being referred to by society as AI, and I don't see this changing any time soon, so what does this say about the AI effect?
mandown2308•4mo ago
From my understanding, what Stallman says is that LLMs don't "understand" what they're saying. They do a probabilistic search for the most appropriate letter (say) that has come after another letter in the text (or any media) they have been trained on, and they place it similarly in the text that they produce. This is largely (no pun) dependent on the data that already exists in the world today, and the more data LLMs can work through, the better they get at predicting. (Hence the big data center shops today.)

But the limitation is that it cannot "imagine" (as in "imagination is more important than knowledge" by Einstein, who worked on a knowledge problem using imagination, but with the same knowledge resources as his peers). In this video [1], Stallman talks about his machine trying to understand the "phenomenon" of a physical mechanism, which enables it to "deduce" next steps. I suppose he means it was not doing a probabilistic search on a large dataset to know what should have come next (which makes it human-knowledge dependent), essentially rendering it an advanced search engine but not AI.

[1] https://youtu.be/V6c7GtVtiGc?si=fhkG2ZA-nsQgrVwm
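
A minimal sketch of the "probabilistic search" described above: a model assigns a score (logit) to every token in its vocabulary, softmax turns the scores into probabilities, and one token is sampled. The vocabulary and logits here are invented for illustration; real models work over tens of thousands of tokens, not four words.

    import math, random

    # Softmax sampling over a next token: logits in, one token out.
    # The vocabulary and logit values below are made up.
    vocab  = ["cat", "dog", "mat", "the"]
    logits = [2.1, 0.3, 1.7, -0.5]

    def softmax(xs, temperature=1.0):
        # Higher temperature flattens the distribution; lower sharpens it.
        exps = [math.exp(x / temperature) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)
    token = random.choices(vocab, weights=probs)[0]
    print([round(p, 3) for p in probs], "->", token)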

MathMonkeyMan•4mo ago
It doesn't understand anything. Yet if you prompt it with a question about what it understands, its output is consistent with something that understands.

Text in, text out. The question is how much a sequence of tokens captures what we think a mind is. "It" ceases to exist when we stop giving it a prompt, if "it" even exists. Whether you consider something "AI" says more about what you think a mind is than anything about the software.

AstroBen•4mo ago
His argument misses the point. I don't particularly care if it's intelligent or understands anything. My question is: does it help with what I'm trying to do?

As for it being closed source and kept at arm's length? Sure, and if it's taken away or the value proposition changes, I stop using it.

My freedom comes from having the ability to switch if needed, not from intentionally making myself less effective. There is no lock-in.

alganet•4mo ago
> I don't particularly care if it's intelligent or understands anything. My question is does it help with what I'm trying to do

So, he's right? All you care about is that it helps you, so it doesn't matter if it's called "artificial intelligence" or not. It doesn't matter for you, and it matters for him (and lots of other people), so let's change the name to "artificial helper", what do you think? Looks like a win-win scenario.

If that's really the point (that it helps you, and intelligence doesn't matter), let's remove the intelligence from the name.

AstroBen•4mo ago
Well, I don't agree with him saying these are reasons not to use it.
alganet•4mo ago
That's fine. You must understand that some people will not agree with you either, right? That's how it works. We don't even have to explain why, but it's a common courtesy.

Think of it this way: it's still a win-win no matter what. What Stallman is saying is that there would be no reason not to use ChatGPT if it were free (you are able to get a copy of the source and build it yourself) and not called AI. If you change those two things, then it's Stallman-compliant.

That's totally doable. It would still be the exact same program that you use today and that helps you, and it would now also be immune to those two criticisms (whether it is intelligent or not, and what's under the hood).

AstroBen•4mo ago
How would it be doable to make them open? I think this is a fundamentally different thing than LibreOffice vs Excel. These things are incredibly expensive to train and run, and doing it as a FOSS project for anyone to clone and run locally means they'd never make their investment back

Open models exist, but they're not very useful compared to the latest. Hopefully that'll change, but who knows.

alganet•4mo ago
That's not my problem to solve.

Maybe by the time they break even, it will be obvious how to earn money as an AI company. Today, it isn't, and it has nothing to do with being open or not.

benrapscallion•4mo ago
Someone should start a StallmanGPT that writes regular blogposts on “Don’t use <popular software or website>”. See if readers can tell those apart from the real website.