
Semantic drift: when AI gets the facts right but loses the meaning

1•realitydrift•2h ago
Most LLM benchmarks measure accuracy and coherence, but not whether the intended meaning survives. I’ve been calling this gap fidelity: the preservation of purpose and nuance. Has anyone else seen drift-like effects in recursive generations or eval setups?
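
To make "drift-like effects in recursive generations" concrete, here is a toy sketch: run a text through repeated lossy rewrites and track how much of the original content survives. The `lossy_rewrite` function below is just a stand-in for an LLM paraphrase step, and word-type overlap is a deliberately crude proxy for fidelity:

```python
import random

def lossy_rewrite(text: str, drop_rate: float = 0.15, seed: int = 0) -> str:
    """Stand-in for an LLM paraphrase step: randomly drops words.
    A real experiment would call a model here instead."""
    rng = random.Random(seed)
    kept = [w for w in text.split() if rng.random() > drop_rate]
    return " ".join(kept) if kept else text

def content_overlap(original: str, rewritten: str) -> float:
    """Crude fidelity proxy: fraction of original word types that survive."""
    orig, new = set(original.lower().split()), set(rewritten.lower().split())
    return len(orig & new) / len(orig) if orig else 1.0

text = "the meeting is at 3pm in room 204 bring the budget figures"
current, overlaps = text, []
for step in range(1, 6):
    current = lossy_rewrite(current, seed=step)
    overlaps.append(content_overlap(text, current))
    print(f"step {step}: overlap={overlaps[-1]:.2f}  {current!r}")
```

Each pass here can only lose information, so overlap is monotonically non-increasing; with a real model the interesting cases are the ones where overlap stays high while the function of the text (who should do what, when) quietly disappears.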

Comments

Mallowram•2h ago
What is intended meaning when words are arbitrary and meaning is always relative to arbitrariness?
realitydrift•2h ago
That’s a fair point. Words themselves are arbitrary symbols, but meaning isn’t only in the symbols. It’s in the intent behind them and the use they’re put to.

For example, if I say “the meeting is at 3pm” and a model rewrites it as “planning sessions are important,” the words are fine, the grammar is fine, but the purpose (to coordinate time) has been lost. That’s the gap I’m calling fidelity: whether the output still serves the same function, even if the surface form changes.
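
That "3pm" example suggests a cheap functional check that surface-similarity metrics miss: did the actionable slot (here, the time) survive the rewrite? A minimal sketch; the regex slot extractor and the pass/fail rule are illustrative assumptions, not a proposed benchmark:

```python
import re

# Toy "slot" extractor: in this example the function-carrying content is a time.
TIME_PATTERN = re.compile(r"\b\d{1,2}(?::\d{2})?\s*(?:am|pm)\b", re.IGNORECASE)

def coordination_slots(text: str) -> set:
    return {m.group(0).lower() for m in TIME_PATTERN.finditer(text)}

def preserves_function(source: str, output: str) -> bool:
    """Fidelity check: every coordination slot in the source must survive."""
    return coordination_slots(source) <= coordination_slots(output)

src = "the meeting is at 3pm"
print(preserves_function(src, "see you at 3pm"))                   # True
print(preserves_function(src, "planning sessions are important"))  # False
```

A rewrite can change every surface word and still pass, or keep most of the words and fail; that asymmetry is the point.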

Mallowram•1h ago
There is no intent in words in and of themselves. Intent always comes from something specific tied to neural syntax, which is lost in words. That's an illusion. There is intensionality, which is different, and which is what you're actually talking about. Intensionality is vague; it's not meaningful without context. The problem with automating words is that AI can't solve the conduit metaphor, which is the idea that words alone encode meaning. They can't. This is the Achilles' heel of AI.
realitydrift•1h ago
I agree. Words don’t carry intent by themselves. Intention is always embedded in use. That’s why I frame fidelity as about whether a system’s continuation still serves the same human purpose. The “conduit metaphor” you mention is exactly the trap: treating words as if they inherently encode meaning. Models fall into this because they optimize surface probabilities rather than checking whether the function of the exchange was preserved.
Mallowram•1h ago
In 1973 Basil Bernstein studied how UK scores in math defied class boundaries while scores in reading comprehension/essays stayed tied to them. He developed a theory with Halliday that language embeds so much we cannot easily decipher: dominance, status, control, land-centering, gender/mate-selection, and much more. They came to treat language far more as a social system of primate-simian signaling than as what Shannon and classical linguists took to be "communication". My guess is that LLMs are really unresolvable revelations of these hidden nuances. In a way, LLMs demand a specific language, one which doesn't exist, in order to function.
realitydrift•1h ago
That’s a really interesting reference. Bernstein and Halliday were basically pointing out that language is never just propositional, it’s always smuggling social structure with it. That’s exactly why drift matters: when an LLM compresses or rewrites, it isn’t just shifting words, it’s rebalancing those embedded cues of power, context, and purpose. Humans keep the “extra baggage” because it carries meaning beyond the literal. Models optimize it away. That gap between statistical surface and lived function is what I’ve been calling semantic drift.
docsorb•2h ago
You're touching on a very nuanced point: identifying the true "intent" behind these words. Do you think these models should be trained differently to correctly map the potential intent versus the true meaning?

Like your example: "the meeting is at 3pm" _we got enough time_ intends one thing, while "the meeting is at 3pm" _where the hell are you?_ intends something else. It is not so obvious to get that intent without a lot of context (time, environment, emotion, etc.).

realitydrift•1h ago
Exactly. That’s the hard part. Meaning is often carried less by the literal words and more by context (time, environment, emotion, shared knowledge). My point with fidelity is that current benchmarks don’t check whether outputs preserve that function in context. An AI can echo surface words but miss the intended role: coordination, reassurance, accountability. And that’s where drift shows up.
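
For the benchmark gap specifically: a fidelity-aware eval would need to report a surface metric and a function check separately, since they can disagree in both directions. A toy harness, with the purpose of each exchange encoded as a hand-written predicate (obviously a brittle assumption; a real eval would need something better):

```python
def surface_similarity(a: str, b: str) -> float:
    """Jaccard word overlap as a stand-in for typical similarity metrics."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# Each case: source, model output, and a predicate encoding the human purpose.
cases = [
    ("the meeting is at 3pm",
     "a meeting is scheduled and meetings matter",  # fluent, on-topic, useless
     lambda out: "3pm" in out),
    ("the meeting is at 3pm",
     "see you at 3pm",                              # different words, same function
     lambda out: "3pm" in out),
]

for source, output, serves_purpose in cases:
    print(f"surface={surface_similarity(source, output):.2f} "
          f"function={'pass' if serves_purpose(output) else 'fail'}  {output!r}")
```

The first output scores roughly as well on surface overlap as the second yet fails the function check; the second passes despite sharing few words with the source. A benchmark that only reports the first number cannot see the difference.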