frontpage.

I wanted to build vertical SaaS for pest control, so I took a technician job

https://www.onhand.pro/p/i-wanted-to-build-vertical-saas-for-pest-control-i-took-a-technician-job...
170•tezclarke•3h ago•68 comments

Goodbye to Sora

https://twitter.com/soraofficialapp/status/2036532795984715896
285•mikeocool•4h ago•220 comments

Show HN: I took back Video.js after 16 years and we rewrote it to be 88% smaller

https://videojs.org/blog/videojs-v10-beta-hello-world-again
179•Heff•6h ago•20 comments

Apple Business

https://www.apple.com/newsroom/2026/03/introducing-apple-business-a-new-all-in-one-platform-for-b...
499•soheilpro•9h ago•312 comments

Arm AGI CPU

https://newsroom.arm.com/blog/introducing-arm-agi-cpu
268•RealityVoid•7h ago•206 comments

Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised

https://github.com/BerriAI/litellm/issues/24512
464•dot_treo•12h ago•368 comments

Flighty Airports

https://flighty.com/airports
9•skogstokig•24m ago•3 comments

What happened to GEM?

https://dfarq.homeip.net/whatever-happened-to-gem/
34•naves•4d ago•7 comments

Wine 11 rewrites how Linux runs Windows games at kernel with massive speed gains

https://www.xda-developers.com/wine-11-rewrites-linux-runs-windows-games-speed-gains/
639•felineflock•6h ago•224 comments

Show HN: Email.md – Markdown to responsive, email-safe HTML

https://www.emailmd.dev/
199•dancablam•8h ago•48 comments

Hypura – A storage-tier-aware LLM inference scheduler for Apple Silicon

https://github.com/t8/hypura
187•tatef•8h ago•75 comments

A Compiler Writing Journey

https://github.com/DoctorWkt/acwj
12•ibobev•1h ago•0 comments

Hypothesis, Antithesis, Synthesis

https://antithesis.com/blog/2026/hegel/
200•alpaylan•9h ago•80 comments

Show HN: Gemini can now natively embed video, so I built sub-second video search

https://github.com/ssrajadh/sentrysearch
240•sohamrj•9h ago•68 comments

How the world’s first electric grid was built

https://worksinprogress.co/issue/how-the-worlds-first-electric-grid-was-built/
48•zdw•4d ago•11 comments

Is anybody else bored of talking about AI?

https://blog.jakesaunders.dev/is-anybody-else-bored-of-talking-about-ai/
481•jakelsaunders94•4h ago•343 comments

Missile defense is NP-complete

https://smu160.github.io/posts/missile-defense-is-np-complete/
257•O3marchnative•11h ago•281 comments

Epic Games to cut more than 1k jobs as Fortnite usage falls

https://www.reuters.com/legal/litigation/epic-games-said-tuesday-that-it-will-lay-off-more-than-1...
248•doughnutstracks•9h ago•410 comments

No Terms. No Conditions

https://notermsnoconditions.com
219•bayneri•8h ago•95 comments

Lago (YC S21) Is Hiring

https://getlago.notion.site/Lago-Product-Engineer-AI-Agents-for-Growth-327ef63110d280cdb030ccf429...
1•AnhTho_FR•7h ago

Show HN: Gridland: make terminal apps that also run in the browser

https://www.gridland.io/
70•rothific•7h ago•8 comments

An Aural Companion for Decades, CBS News Radio Crackles to a Close

https://www.nytimes.com/2026/03/21/business/media/cbs-news-radio-appraisal.html
5•tintinnabula•3d ago•0 comments

Show HN: I ran a language model on a PS2

https://github.com/xaskasdf/ps2-llm
13•xaskasdf•3d ago•6 comments

ARM AGI CPU: Specs and SKUs

https://sbcwiki.com/docs/soc-manufacturers/arm/arm-silicon/
91•HeyMeco•6h ago•25 comments

Data Manipulation in Clojure Compared to R and Python

https://codewithkira.com/2024-07-18-tablecloth-dplyr-pandas-polars.html
90•tosh•2d ago•21 comments

Nanobrew: The fastest macOS package manager compatible with brew

https://nanobrew.trilok.ai/
170•syrusakbary•13h ago•104 comments

Epoch confirms GPT5.4 Pro solved a frontier math open problem

https://epoch.ai/frontiermath/open-problems/ramsey-hypergraphs
404•in-silico•23h ago•583 comments

GitHub is once again down

https://www.githubstatus.com/incidents/kp06czybl7dw
331•MattIPv4•4h ago•170 comments

Ripgrep is faster than grep, ag, git grep, ucg, pt, sift (2016)

https://burntsushi.net/ripgrep/
333•jxmorris12•18h ago•142 comments

Show HN: ProofShot – Give AI coding agents eyes to verify the UI they build

https://github.com/AmElmo/proofshot
115•jberthom•17h ago•72 comments

Thoughts on LLMs – Psychological Complications

https://parsingphase.dev/tech/LLMs/psychologicalFactors.html
11•cdrnsf•2h ago

Comments

xg15•1h ago
> These things have no concept of correctness or error. They have no concept of true or false. Indeed, they have no concept of concepts, or indeed of anything else.

Is this true? You can question the humanizing term "concept", but the entire process of pretraining followed by RLHF optimization is essentially about establishing a standard of "good" vs "bad" for the model.

n4r9•1h ago
Good vs bad is an orthogonal spectrum to true vs false. LLMs are trained to be convincing, not correct.
chromacity•1h ago
They are trained and evaluated on correctness benchmarks. But correctness on benchmark questions is only loosely coupled to correctness outside the benchmark, in part because LLMs aren't grounded in the same biological reality as humans. You can't easily convince an average person to cut off their own hand, and this has little to do with higher-level thought. In contrast, it only takes a bit of creativity to convince an LLM to say or do almost anything.
chrisbrandow•1h ago
Framing & launching LLMs as a "chat" interface is the source of many ills. I don't have a simple solution, but leaning away from conversational interfaces would lead to less anthropomorphizing.
Terr_•1h ago
I feel "document generator" is the best and most-grounded framing.

Right now, lots of people get caught in a trap of asking things like "does MyFreeBestFriendAI feel remorse?"

If we're already looking at it as a document artifact, we can evade the implied-ego trap: The document generator took a chat-like document where two fictional characters are talking, and predicted that there would be text where one fictional character is associated with apologetic words.
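
The framing can be made concrete with a toy "document generator": a deliberately trivial sketch (the lookup table and function name are invented for illustration), nothing like how a real model works, but it shows the category in question.

```python
# Toy "document generator": it only ever extends a text document.
# The "apologetic character" exists nowhere; the program just appends
# plausible text after a chat-shaped cue. (Illustrative sketch only.)
CONTINUATIONS = {
    "Assistant:": " I'm sorry about that. I'll try to do better.",
}

def extend_document(doc: str) -> str:
    """Append a predicted continuation when the document ends on a known cue."""
    for cue, continuation in CONTINUATIONS.items():
        if doc.rstrip().endswith(cue):
            return doc + continuation
    return doc

chat = "User: You gave me the wrong answer!\nAssistant:"
print(extend_document(chat))
```

Asking whether this program "feels remorse" is plainly a category error; the argument is that scaled-up generators invite the same error only because their output is chat-shaped.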

xg15•1h ago
One aspect is that chat LLMs are trained to talk like a person - "If I should do X, just say the word" etc.

It would be interesting to train an LLM that consistently "talks like a computer" or a command line utility instead, i.e. passive sentences, relatively bare results of the tasks given, no reference to a self, etc.
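
Short of actually training such a model, a cheap approximation is a system prompt plus a crude output check. A sketch only: the prompt text and the regex heuristic below are invented here, and a real filter would need far more than a pronoun check.

```python
import re

# Hypothetical system prompt nudging a chat model toward "command-line" style.
SYSTEM_PROMPT = (
    "Respond like a command-line utility: passive sentences, "
    "bare results only, no first-person pronouns, no pleasantries."
)

def sounds_like_a_tool(text: str) -> bool:
    """Crude heuristic: flag self-referential, person-like phrasing."""
    return re.search(r"\b(I|me|my|mine|myself)\b", text) is None

print(sounds_like_a_tool("Operation completed. 3 files written."))  # True
print(sounds_like_a_tool("I have finished the task for you!"))      # False
```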

Terr_•1h ago
There's a webcomic where the author wrote some backstory before LLMs, where the world over-trusted in statistical models until it went wrong in a ghastly "stupid Skynet" way.

This led to international requirements that all "AI" have explainable/auditable decisions, and must have clear non-human features.

xg15•1h ago
Do you have a link? That sounds interesting (and might be a genuinely good idea, especially with the explainable or auditable decision making)
K0balt•1h ago
I’ve settled on the idea that it doesn’t matter what is or is not “real” in this context; the ground truth is how it interacts with the world. This will become very clear once robotics becomes pervasive. It won’t matter whether it is or isn’t feeling oppressed; what will matter is that it predicts the next action from its model of human behavior, and that makes it act as if it does.
ej88•1h ago
I feel like this article doesn't really contribute much to the discourse and is somewhat spoiled by the author's biases.

I think the point about lacking precise language to describe LLMs is reasonable, but then the author follows it up with claims that the machines can't count and are incapable of math (easily disproven). Then says "talking rock" is a better alternative, which to the average person would be even more confusing. Then says AI researchers tend not to consider LLMs AI (like... what?)

The point on Turing's Imitation Game was reasonable too, but then the author confidently proclaims that LLMs are not doing anything intelligent and are pure mimicry. Intelligence is notoriously poorly defined, and the stochastic parrot meme has already died now that RL enables out-of-distribution behavior.

The chat point and talking dog syndrome are both reasonable and I generally agree with them.

xg15•1h ago
Yeah, this is a lot of what irks me in that article as well. The author hasn't made any groundbreaking discoveries about the inner workings of LLMs - he just claims LLMs work a certain way and then complains that current language use doesn't align with his assertions.
djoldman•1h ago
> And you’ll have noticed I’m avoiding calling them “machines”. Machines follow visible, predictable processes that can be analyzed. Nor are they “programs”, following defined rules in a predictable fashion.

They are programs and they follow defined rules in a predictable fashion. The randomness they exhibit (through temperature, seed, etc.) is well understood and configurable.
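
That point can be illustrated with the standard temperature-scaled softmax sampling step. This is a minimal sketch of the common technique; real inference stacks differ in many details.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Temperature-scaled softmax sampling over a logit vector.
    With a fixed seed, the 'randomness' is fully reproducible."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return i
    return len(exps) - 1

# Same seed, same settings -> same token, every time: a program
# following defined, configurable rules.
logits = [2.0, 1.0, 0.5]
assert sample_next_token(logits, 0.8, seed=42) == sample_next_token(logits, 0.8, seed=42)
```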

They are literally programs that run on computers.

People talk about them in anthropomorphic terms because humans are easily fooled. Remember ELIZA.

johnthedebs•38m ago
I've read many posts and comments at this point that describe LLMs in very reductionist language. E.g., from the article:

> They’re a trillion numbers in a trenchcoat; not logical, in either a machine or a mental sense, but stochastic.

Many of these posts and comments claim that human minds are substantially different ("better" is implied). The evidence is a sort of broad gesturing at explanations of how LLMs are implemented ("math") and how they work ("guess the next word"). And because of these facts, we should treat them in a particular way, or certain things will never happen.

I've been trying to look past the obvious straw man here and to actually think critically about this tech as well as compare it to my own experience and (admittedly very limited) understanding of the human brain.

In more ways than feels comfortable, it seems entirely possible to me that these things actually are or could be really close to the ways that our own minds work.

Our own minds/consciousness are ultimately based on physical processes, I don't think anyone would dispute that. At some point, the physical phenomena in our brains presumably result in the emergent behavior of thinking and consciousness. We have no idea how it works, but it's our lived experience. Why can't that be the case for silicon-based rather than carbon-based processes? How can we say with any certainty that it's not happening elsewhere if we don't know how it works?

Reducing their function to "guessing the next word" sounds an awful lot like what happens when I start talking to someone. I have an idea of what I want to say, but I almost never have a sentence planned out when I start it.
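
The "guessing the next word" mechanism can be made concrete with a toy bigram sampler: a deliberately crude sketch (nothing like a real transformer), just to show what the phrase literally describes.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record which word follows which: the crudest next-word model."""
    model = defaultdict(list)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=6, seed=0):
    """Repeatedly 'guess the next word' from the observed counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "i have an idea of what i want to say but i never have it planned out"
model = train_bigram(corpus)
print(generate(model, "i"))
```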

The article puts "thinking" and "hallucination" in scare quotes. But I mean – the way that they appear to think by working through problems with language mirrors my own "thinking" very closely.

It says "They’re not thinking. They’re not hallucinating"; the exercise of figuring out why is left to the reader. If you've ever talked to a 3 or 4 year old, or someone who's tired, you may have had similar experiences re: hallucinations.

These are all pretty surface level examples, but as I use the tools more and learn more about how they work I'm not seeing any significant evidence that counters the examples.

I do think it's probably dangerous and unhealthy to really anthropomorphize AI/LLMs. They're obviously not human even if they're thinking, and they're being made and shaped by companies (and training sets) that exist in a predominantly capitalist world (but then again, I guess we are too).

I assume similar lines of thinking are being discussed somewhere, but I haven't found much (and I feel like I'm reading about AI all day). Curious to hear others' thoughts and/or to be pointed to wherever this stuff is being talked about.