
Enough AI copilots, we need AI HUDs

https://www.geoffreylitt.com/2025/07/27/enough-ai-copilots-we-need-ai-huds
167•walterbell•5h ago

Comments

keyle•3h ago
> anyone serious about designing for AI should consider non-copilot form factors that more directly extend the human mind.

Aren't auto-completes doing exactly this? It's not a co-pilot in the sense of a virtual human, but already more in the direction of a HUD.

Sure you can converse with LLMs but you can also clearly just send orders and they eagerly follow and auto-complete.

I think what the author might be trying to express, in a quirky fashion, is that AI should work alongside us, looking in the same direction we are, not sitting opposite us at the table, staring at each other and arguing. We'll have true AI when it does our bidding without any interaction from us.

gklitt•3h ago
Author here. Yes, I think the original GitHub Copilot autocomplete UI is (ironically) a good example of a HUD! Tab autocomplete just becomes part of your mental flow.

Recent coding interfaces are all trending towards chat agents though.

It’s interesting to consider what a “tab autocomplete” UI for coding might look like at a higher level of abstraction, letting you mold code in a direct-feeling way without being bogged down in details.

samfriedman•3h ago
On this topic, can anyone find a document I saw on HN but can no longer locate? A historical computing essay, presented as a plaintext (monospaced) page, it outlined a computer assistant and how it should feel to use. The author believed it should be unobtrusive, something that pops into awareness when needed and then gets back out of the way. I don't believe any of the references in TFA are what it was.
thehappypm•2h ago
Designing Calm Technology?
cadamsdotcom•3h ago
Love the idea & spitballing ways to generalize to coding..

Thought experiment: as you write code, an LLM generates tests for it & the IDE runs those tests as you type, showing which ones are passing & failing, updating in real time. Imagine 10-100 tests that take <1ms to run, being rerun with every keystroke, and the result being shown in a non-intrusive way.

The tests could appear in a separate panel next to your code, with pass/fail status in the gutter of that panel. As simple as red and green dots for tests that passed or failed in the last run.

The presence or absence (and content) of certain tests, plus their pass/fail state, tells you what the code you’re writing does from an outside perspective. Not seeing the LLM write a test you think you’ll need? Either your test generator prompt is wrong, or the code you’re writing doesn’t do what you think it does!

Making it realtime helps you shape the code.

Or if you want to do traditional TDD, the tooling could be reversed so you write the tests and the LLM makes them pass as soon as you stop typing by writing the code.

hnthrowaway121•2h ago
Yes, the reverse makes much more sense to me. AI helps spec out the software, and then the code has an accepted definition of correctness. I think people focus on this far less than they should.
callc•1h ago
Humans writing the test first and LLM writing the code is much better than the reverse. And that is because tests are simply the “truth” and “intention” of the code as a contract.

When you give up the work of deciding what the expected inputs and outputs of the code/program are, you are no longer in the driver's seat.

JimDabell•1h ago
> When you give up the work of deciding what the expected inputs and outputs of the code/program are, you are no longer in the driver's seat.

You don’t need to write tests for that, you need to write acceptance criteria.

ThunderSizzle•1h ago
As in, a developer would write something in e.g. gherkin, and AI would automatically create the matching unit tests and the production code?

That would be interesting. Of course, gherkin tends to just be transpiled into generated code that is customized for the particular test, so I'm not sure how much AI can really abstract away.
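As a rough sketch of the first half of that pipeline, here is a toy parser (names hypothetical) that reduces a Gherkin scenario to the Given/When/Then clauses an LLM would then be asked to turn into unit tests and production code:

```python
import re

def parse_scenario(text):
    """Group a Gherkin scenario's steps under given/when/then,
    folding 'And' steps into the preceding keyword."""
    steps = {"given": [], "when": [], "then": []}
    current = None
    for line in text.strip().splitlines():
        match = re.match(r"\s*(Given|When|Then|And)\s+(.+)", line)
        if not match:
            continue  # skip Scenario:/Feature: headers and blank lines
        keyword, step = match.groups()
        key = current if keyword == "And" and current else keyword.lower()
        steps[key].append(step)
        current = key
    return steps

scenario = """
Scenario: Successful login
  Given a registered user
  When they submit valid credentials
  Then they see their dashboard
  And a session cookie is set
"""
criteria = parse_scenario(scenario)
```

The output is just structured acceptance criteria; the hard part the commenter points at — turning each clause into executable, non-trivially-customized test code — is exactly the step being delegated to the model.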

kamaal•55m ago
All of this reduces to a simple fact at the end of the discussion.

You need some way of precisely telling AI what to do. As it turns out, there is only so much you can do with text. Come to think of it, you can write a whole book about a landscape, and 100 people will imagine it quite differently. And the actual photograph would be different again from what all 100 of them imagined.

As it turns out, if you wish to describe something accurately enough, you have to write mathematical statements, in other words statements that reduce to true/false answers. We could skip to the end of the discussion here and say you are better off either writing code directly or writing test cases.

This is just people revisiting logic programming all over again.

JimDabell•46m ago
I’m talking higher level than that. Think about the acceptance criteria you would put in a user story. I’m specifically responding to this:

> When you give up the work of deciding what the expected inputs and outputs of the code/program are, you are no longer in the driver's seat.

You don’t need to personally write code that mechanically iterates over every possible state to remain in the driver’s seat. You need to describe the acceptance criteria.

kamaal•1h ago
>>Humans writing the test first and LLM writing the code is much better than the reverse.

Isn't that logic programming/Prolog?

You basically write the sequence of conditions (i.e. tests in our lingo) that have to be true, and the compiler (now AI) generates code for you.

Perhaps logic programming deserves a fresh look at how it could be done in the modern era to make this more seamless.
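As a toy illustration of the tests-as-logic-program idea, here is a sketch where the "compiler" is a naive enumerative search over candidate implementations. A real AI code generator would search a vastly larger space, but the contract is the same: conditions in, satisfying program out.

```python
# The "logic program": input/output conditions that must all hold.
spec = [((2, 3), 5), ((0, 0), 0), ((10, -4), 6)]

# A tiny search space standing in for the code generator.
candidates = {
    "sum": lambda a, b: a + b,
    "diff": lambda a, b: a - b,
    "product": lambda a, b: a * b,
    "maximum": lambda a, b: max(a, b),
}

def synthesize(spec, candidates):
    """Return the first candidate satisfying every condition."""
    for name, fn in candidates.items():
        if all(fn(*args) == out for args, out in spec):
            return name, fn
    return None, None

name, fn = synthesize(spec, candidates)
```

Note the classic caveat, raised elsewhere in this thread: any candidate that merely memorizes the three pairs would also "pass", which is why the quality of the conditions matters as much as the search.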

cjonas•36m ago
Then do you need tests to validate that your tests are correct? Otherwise the LLM might just generate passing code even if the test is bad, or write code that games the system, because it's easier to hardcode an output value than to do the actual work.

There probably is a setup where this works well, but the LLM and humans need to be able to move across the respective boundaries fluidly...

Writing clear requirements and letting the AI take care of the bulk of both sides seems more streamlined and productive.

jawns•3h ago
The author gives an example of a HUD: an AI that writes a debugger program to enable the human developer to more easily debug code.

But why should it be only the human developer who benefits? What if that debugger program becomes a tool that AI agents can use to more accurately resolve bugs?

Indeed, why can't any programming HUD be used by AI tools? If they benefit humans, wouldn't they benefit AI as well?

I think we'll be pretty quickly at the point where AI agents are more often than not autonomously taking care of business, and humans only need to know about that work at critical points (like when approvals are needed). Once we're there, the idea that this HUD concept should be only human-oriented breaks down.

hi_hi•2h ago
Doesn't it all come down to "what is the ideal interface for humans to deal with digital information"?

We're getting more and more information thrown at us each day, and the AIs are adding to that, not reducing it. The ability to summarise dense and specialist information (I'm thinking error logs, but it could be anything really) just means more people can access and view information who previously wouldn't have.

How do we, as individuals, best deal with all this information efficiently? Currently we have a variety of interfaces: websites, dashboards, emails, chat. Are all of these necessary anymore? They might be now, but what about the next 10 years? Do I even need to visit a company's website if I can get the same information from a single chat interface?

The fact that we have AIs building us websites, apps, and web UIs just seems so... redundant.

sipjca•2h ago
yep I think this is the fundamental question as well, everything else is intermediate
guardiang•2h ago
Every human is different, don't generalize the interface. Dynamically customize it on the fly.
moomoo11•2h ago
I like the smartphone. It’s honestly perfect and underutilized.
AlotOfReading•1h ago
Websites were a way to get authoritative information about a company, from that company (or another trusted source like Wikipedia). That trust is powerful, which is why we collectively spent so much time trying to educate users about the "line of death" in browsers, drawing padlock icons, chasing down impersonator sites, mitigating homoglyph attacks, etc. This all rested on the assumption that certain sites were authoritative sources of information worth seeking out.

I'm not really sure what trust means in a world where everyone relies uncritically on LLM output. Even if the information from the LLM is usually accurate, can I rely on that in some particularly important instance?

hi_hi•7m ago
You raise a good point, and one I rarely see discussed.

I still believe it fundamentally comes down to an interface issue, but how trust gets decoupled from the interface (as you said, the padlock shown in the browser and certs to validate a website source), thats an interesting one to think about :-)

roywiggins•2h ago
> The agentic option is a “copilot” — a virtual human who you talk with to get help flying the plane. If you’re about to run into another plane it might yell at you “collision, go right and down!”

Planes do actually have this now. It seems to work okay:

https://en.m.wikipedia.org/wiki/Traffic_collision_avoidance_...

gklitt•1h ago
Author here. I thought about including TCAS in the article but it felt too tangential…

You’re right that there’s a voice alert. But TCAS also has a map of nearby planes which is much more “HUD”! So it’s a combo of both approaches.

(Interestingly it seems that TCAS may predate Weiser’s 1992 talk)

droideqa•2h ago
I want a link to the GitHub for this[0] which he linked to. Makes Prolog quite interesting.

[0]: https://www.geoffreylitt.com/2024/12/22/making-programming-m...

thepuglor•2h ago
I'd rather flip this around and be in a fully immersive environment watching agents do things in such a way that I am able to interject and guide in realtime. How do we build that, and build it in such a way that the content and delivery of my guidance becomes critical to what they learn? The best teacher gets the best AI students.
jpm_sd•2h ago
This is how ship's AI is depicted in The Expanse (TV series) and I think it's really compelling. Quiet and unobtrusive, but Alex can ask the Rocinante to plot a new course or display the tactical situation and it's fast, effective and effortlessly superhuman with no back-talk or unnecessary personality.

Compare another sci-fi depiction taken to the opposite extreme: Sirius Cybernetics products in the Hitchhikers Guide books. "Thank you for making a simple door very happy!"

kova12•2h ago
Aren't we missing the point that the co-pilot is there in case the pilot gets incapacitated?
melagonster•43m ago
The pilot is the copilot today. These computers can handle most of the tasks automatically.
benjaminwootton•2h ago
I think there is a third and distinct model, which is AI that runs in the background autonomously, over a long period, and pushes things to you.

It can detect situations intelligently, do the filtering, summarisation of what’s happening and possibly a recommendation.

This feels a lot more natural to me, especially in a business context when you want to monitor for 100 situations about thousands of customers.

ares623•2h ago
It should have a paperclip mascot
stan_kirdey•1h ago
needed to find a kids’ orthodontist. made a tiny voice agent: feed it numbers, it calls, asks about price/availability/insurance, logs the gist.

it kind of worked. the magic was the smallest UI around it:

- timeline of dials + retries

- "call me back" flags

- when it tried, who picked up

- short summaries with links to the raw transcript

once i could see the behavior, it stopped feeling spooky and started feeling useful.

so yeah, copilots are cool, but i want HUDs: quiet most of the time, glanceable, easy to interrupt, receipts for every action.
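That "receipts for every action" layer is mostly a small data model plus a glanceable render. A sketch in Python, with all field names and sample data hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallAttempt:
    office: str
    started: datetime
    outcome: str              # e.g. "answered", "voicemail", "no-answer"
    call_back: bool = False   # the "call me back" flag
    summary: str = ""         # short gist of the call
    transcript_url: str = ""  # link back to the raw transcript

def timeline(attempts):
    """One glanceable line per dial, oldest first."""
    lines = []
    for a in sorted(attempts, key=lambda a: a.started):
        flag = " [call me back]" if a.call_back else ""
        lines.append(f"{a.started:%H:%M} {a.office}: {a.outcome}{flag}")
    return lines

log = timeline([
    CallAttempt("Smile Ortho", datetime(2025, 7, 28, 9, 40), "no-answer"),
    CallAttempt("Bay Kids Dental", datetime(2025, 7, 28, 9, 15),
                "answered", call_back=True,
                summary="Takes Delta insurance, consult $120"),
])
```

The HUD quality comes less from the agent and more from this log: every action leaves a timestamped, linkable trace you can scan in seconds.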

perching_aix•1h ago
Been thinking about something similar, from fairly grounded ideas like letting a model autogenerate new features with their own name, keybind and icon, all the way to silly ideas, like letting a model synthesize arbitrary shader code and just letting it do whatever within the viewport. Think the entire UI being created on the fly specifically for the task you're working on, constantly evolving in mesh with your workflow habits. Now if only I went beyond being an idea man...
caleblloyd•1h ago
The reason we are not seeing this in mainstream software may also be due to cost. Paying for tokens on every interaction means paying to use the app. Upfront development may actually be cheaper, but the incremental cost per interaction could cost much more in the long term, especially if the software is used frequently and has a long lifetime.

As the cost of tokens goes down, or commodity hardware can handle running models capable of driving these interactions, we may start to see these UIs emerge.

perching_aix•1h ago
Oh yeah, I was 100% thinking in terms of local models.
ag2s•1h ago
A very relevant article from lesswrong, titled Cyborgism https://www.lesswrong.com/s/f2YA4eGskeztcJsqT/p/bxt7uCiHam4Q...
sothatsit•1h ago
AI building complex visualisations for you on-the-fly seems like a great use-case.

For example, if you are debugging memory leaks in a specific code path, you could get AI to write a visualisation of all the memory allocations and frees under that code path to help you identify the problem. This opens up an interesting new direction where building visualisations to debug specific problems is probably becoming viable.

This idea reminds me of Jonathan Blow's recent talk at LambdaConf. In it, he shows a tool he made to visualise his programs in different ways to help with identifying potential problems. I could imagine AI being good at building these. The talk: https://youtu.be/IdpD5QIVOKQ?si=roTcCcHHMqCPzqSh&t=1108
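As a sketch of the raw data such a generated visualisation would sit on top of: Python's standard-library tracemalloc can already attribute live memory to call sites. The leak below is simulated, and the reporting function is hypothetical:

```python
import tracemalloc

leaked = []  # simulated leak: allocations that are never freed

def workload():
    for _ in range(1000):
        leaked.append(bytearray(1024))

def top_allocation_sites(workload, top=5):
    """Run a workload under tracemalloc and return the call sites
    holding the most live memory, as (file:line, bytes, count)."""
    tracemalloc.start()
    workload()
    snapshot = tracemalloc.take_snapshot()
    tracemalloc.stop()
    stats = snapshot.statistics("lineno")[:top]
    return [(f"{s.traceback[0].filename}:{s.traceback[0].lineno}",
             s.size, s.count) for s in stats]

report = top_allocation_sites(workload)
```

An AI-built debugging HUD would presumably generate the step after this: turning tuples like these into a flame graph or timeline tailored to the specific code path under suspicion.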

wewewedxfgdf•1h ago
Kind of a weird article, because the computer system that is "invisible", i.e. an integrated part of the flight control systems, is exactly what we have now. He's sort of arguing for .... computer software.

Like, we have HUDs - that's what a HUD is - it's a computer program.

CGamesPlay•24m ago
A HUD is typically non-interactive, which is the core distinction he’s advocating for. The “copilot” responds to your requests, the “HUD” surfaces relevant information passively.
nioj•1h ago
Concurrent posting from 5 hours ago (currently no comments): https://news.ycombinator.com/item?id=44705018
satisfice•1h ago
Yes, this is a non-creepy way of applying AI.
SilverElfin•45m ago
Isn’t that what all the AI browsers like Comet, or things like Cluey, are trying to do?
eboynyc32•39m ago
Excited for the next wave of AI innovation.
clbrmbr•38m ago
A thought-provoking analogy!

What comes immediately to mind for me is using embeddings to show closest matches to current cursor position on the right tab for fast jumping to related files.

henriquegodoy•35m ago
Great post! I've been thinking along similar lines about human-AI interfaces beyond the copilot paradigm. I see two major patterns emerging:

Orchestration platforms - Evolution of tools like n8n/Make into cybernetic process design systems where each node is an intelligent agent with its own optimization criteria. The key insight: treat processes as processes, and don't anthropomorphize LLMs as humans. Build walls around probabilistic systems to ensure deterministic outcomes where needed. This solves massive "communication problems".

Oracle systems - AI that holds entire organizations in working memory, understanding temporal context and extracting implicit knowledge from all communications. Not just storage but active synthesis. Imagine AI digesting every email/doc/meeting to build a living organizational consciousness that identifies patterns humans miss and generates strategic insights.

I explored this more on my personal blog: https://henriquegodoy.com/blog/stream-of-consciousness

ankit219•19m ago
The current paradigm is driven by two factors: one is the reliability of the models, which constrains how much autonomy you can give to an agent. The second is chat as a medium, which everyone went to because ChatGPT became a thing.

I see the value in HUDs, but only when you can be sure the output is correct. If that number is only 80% or so, copilots work better, so that humans in the loop can review and course-correct - the pair programmer/worker. This is not to say AI inherently needs to reach higher levels of correctness, just that deployed systems need to do so before they display information on a HUD.

psychoslave•13m ago
This is missing the addictive/engaging part of a conversational interface for most people out there, which is in line with the critiques highlighted in the fine article.

Just because most people are fond of it doesn't actually mean it improves their life, goals and productivity.
