

Longform text has become iconic — almost like an emoji

https://www.monarchwadia.com/pages/LlmsAreTelepathy.html
3•monarchwadia•7mo ago

Comments

monarchwadia•7mo ago
Full text:

I've noticed a fundamental shift in how I engage with longform text — both in how I use it and how I perceive its purpose.

Longform content used to be something you navigated linearly, even when skimming. It was rich with meaning and nuance — each piece a territory to be explored and inhabited. Reading was a slow burn, a cognitive journey. It required attention, presence, patience.

But now, longform has become iconic — almost like an emoji. I treat it less as a continuous thread to follow, and more as a symbolic object. I copy and paste it across contexts, often without reading it deeply. When I do read, it's only to confirm that it’s the right kind of text — then I hand it off to an LLM-powered app like ChatGPT.

Longform is interactive now. The LLM is a responsive medium, giving tactile feedback with every tweak. I no longer treat text as a finished work, but as raw material — tone, structure, rhythm, vibes — that I shape and reshape until it feels right. Longform is clay, and LLMs are the wheel that lets me mold it.

This shift marks a new cultural paradigm. Why read the book when the LLM can summarize it? Why write a letter when the model can draft it for you? Why manually build a coherent thought when the system can scaffold it in seconds?

The LLM collapses the boundary between form and meaning. Text, as a medium, becomes secondary — even optional. Whether it’s a paragraph, a bullet list, a table, or a poem, the surface format is interchangeable. What matters now is the semantic payload — the idea behind the words. In that sense, the psychology and capability of the LLM become part of the medium itself. Text is no longer the sole conduit for thought — it’s just one of many containers.
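
The interchangeability of surface forms can be sketched in a few lines of Python — a purely illustrative toy, not any real LLM tooling. A plain dict stands in for the "semantic payload," and three renderers give it three different surfaces:

```python
# Illustrative sketch: one "semantic payload" (a plain dict standing in for
# the idea behind the words), rendered into interchangeable surface formats.
# Everything here is hypothetical — no real LLM API is involved.

payload = {
    "claim": "Longform text is becoming a symbolic object",
    "evidence": ["copied across contexts", "skimmed, then handed to an LLM"],
}

def as_paragraph(p):
    # Continuous prose: claim followed by evidence in a single sentence.
    return f"{p['claim']}: " + "; ".join(p["evidence"]) + "."

def as_bullets(p):
    # Bullet list: claim as a header line, one dash per piece of evidence.
    return "\n".join([p["claim"] + ":"] + [f"- {e}" for e in p["evidence"]])

def as_table(p):
    # Two-column table: field name on the left, content on the right.
    rows = [("claim", p["claim"])] + [("evidence", e) for e in p["evidence"]]
    return "\n".join(f"{k:<10}| {v}" for k, v in rows)

# Three surfaces, one payload — the form varies, the meaning does not.
for render in (as_paragraph, as_bullets, as_table):
    print(render(payload), end="\n\n")
```

Three renderers, one payload: only the surface changes; the meaning stays put.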

And in this way, we begin to inch toward something that feels more telepathic. Writing becomes less about precisely articulating your ideas, and more about transmitting a series of semantic impulses. The model does the rendering. The wheel spins. You mold. The sentence is no longer the unit of meaning — the semantic gesture is.

It’s neither good nor bad. Just different. The ground is unmistakably shifting. I almost titled this page "Writing Longform Is Now Hot. Reading Longform Is Now Cool." because, in McLuhanesque terms, the poles have reversed. Writing now requires less immersion — it’s high-definition, low-participation. Meanwhile, reading longform, in a world of endless summaries and context-pivoting, asks for more. It’s become a cool medium.

There’s a joke: “My boss used ChatGPT to write an email to me. I summarized it and wrote a response using ChatGPT. He summarized my reply and read that.” People say: "See? Humans are now just intermediaries for LLMs to talk to themselves."

But that’s not quite right.

It’s not that we’re conduits for the machines. It’s that the machines let us bypass the noise of language — and get closer to pure semantic truth. What we’re really doing is offloading the form of communication so we can focus on the content of it.

And that, I suspect, is only the beginning.

Soon, OpenAI, Anthropic, and others will lean into this realization — if they haven’t already — and build tools that let us pivot, summarize, and remix content while preserving its semantic core. We'll get closer and closer to an interface for meaning itself. Language will become translucent. Interpretation will become seamless.

It’s a common trope to say humans are becoming telepathic. But transformer models are perhaps the first real step in that direction. As they evolve, converting raw impulses — even internal thoughtforms — into structured communication will become less of a challenge and more of a given.

Eventually, we’ll realize that text, audio, and video are just skins — just surfaces — wrapped around the same thing: semantic meaning. And once we can capture and convey that directly, we’ll look back and see that this shift wasn’t about losing language, but about transcending it.