frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•1m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•1m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•5m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•8m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
1•rcarmo•9m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•10m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•10m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•11m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•13m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•13m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•16m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•16m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•17m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•17m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•17m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•18m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•18m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•19m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•20m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•26m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•27m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•27m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
38•bookofjoe•27m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•28m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•29m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•30m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•30m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•31m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•31m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•31m ago•0 comments

Some People Can't See Mental Images. The Consequences Are Profound

https://www.newyorker.com/magazine/2025/11/03/some-people-cant-see-mental-images-the-consequences-are-profound
6•fortran77•3mo ago

Comments

fortran77•3mo ago
https://archive.ph/u6nMp
Marshferm•3mo ago
A great summation of aphantasia. Next stop will be quantifying imagination.

A hint: language may not be the gateway to intelligence; the path is imagery and glyphs as action references.

spacedcowboy•3mo ago
It’s funny, this could be me. Also a PhD in physics, also thought “the mind’s eye” was a euphemism until pretty late in life.

I’ve written about this before …

It's funny because I have no mind's eye, and I definitely consider it an advantage. I genuinely thought it was a euphemism until I was about 20, drunk, and surrounded by friends at college, playing a game in the student bar and the "mind's eye" thing came up. They couldn't believe I was serious. I couldn't believe they were serious... For a while at least.

My mind works on rules, not imagery. If I am asked to "not think of an elephant in a room", I (of course) immediately think of an elephant in a room, but it's not a visual picture - it's relationships between room and elephant (does it touch the walls, the space around it, does it press the light-switch on, can the door open if it opens inwards, ...) It's the concept of an elephant in a room. There's no visual.

Similarly, I don't know my right from my left - instead I have a rule in my head that I run through virtually instantaneously "I write with my right". That then distinguishes for me which is which. If someone gives me directions "first right, second left, right by the pub and next right" I run through that rule for the first instance, and then I have the concept of "not-right" for the "second left" bit. It gets "cached" for a while, and then drops out.

So where's the advantage? I can consciously build these rules up into complicated (well, more complicated than people expect) structures of relationships and "work them". It's not like running an orrery backwards and forwards, but it's the best analogy I can give. I can see boundary conditions and faults well before others do - and often several complex states away from the starting conditions. I'm often called into meetings just to "run this by you" because I can see issues further down the line than most. I'm still subject to garbage-in-garbage-out, but it's still something of a super-power.

I'm told I sort of gaze into the middle distance, and then I blink, come back, and say something like "the fromble will interact with the gizmo if the grabbet conflicts with the womble during second-stage init when the moon is waning". Someone goes off and writes a test and almost all the time (hey, I'm human) I'm correct.

Mental modelling is what I gain from a lack of visualisation. I think of it as literally building castles in the sky, except the sky isn't spatial, it's relational.

ghm2180•3mo ago
Fascinating. I do wonder if people with this condition are self-selective in some way in how they do things. I also wonder whether these conditions predict the level of success they have in a specific field vs a control group.

E.g., as described in the article, the example of PhDs/researchers who lean more heavily towards abstractions and rules.