
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
163•theblazehen•2d ago•48 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
674•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
950•xnx•20h ago•552 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
123•matheusalmeida•2d ago•33 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
22•kaonwarb•3d ago•20 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
58•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
225•dmpetrov•15h ago•118 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•16h ago•145 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
495•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
383•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•182 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
289•eljojo•17h ago•175 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
413•lstoll•21h ago•279 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
32•jesperordrup•4h ago•16 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
20•bikenaga•3d ago•8 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
18•speckx•3d ago•7 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
64•kmm•5d ago•8 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
91•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
258•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
60•gfortaine•12h ago•26 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1070•cdrnsf•1d ago•446 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
36•gmays•9h ago•12 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•70 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
150•SerCe•10h ago•142 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
186•limoce•3d ago•100 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Beyond Text: On-Demand UI Generation for Better Conversational Experiences

https://blog.fka.dev/blog/2025-05-16-beyond-text-only-ai-on-demand-ui-generation-for-better-conversational-experiences/
77•fka•8mo ago

Comments

exe34•8mo ago
I was hoping to do this over IRC but never got around to implementing it. I hate the idea of implementing a whole website/chat system when they already exist. I'd like to use it for my (currently non-existent) home automation communication.
fka•8mo ago
Perfect home automation never exists.
exe34•8mo ago
We live in an imperfect world.
maxcan•8mo ago
Video isn't loading.
fka•8mo ago
I think it’s because of the video format.

https://x.com/fkadev/status/1923102445799927818?s=46

casey2•8mo ago
If it could have been done, it would have been by now.
fka•8mo ago
You can say this about all kinds of inventions and new ideas.
revskill•8mo ago
Startups do not put enough effort into improving UX; that is why we have Jira.
utku1337•8mo ago
looks very useful
joshstrange•8mo ago
Related, it’s crazy to me that OpenAI hasn’t already done something like this for Deep Research.

After your initial question, it always follows up asking some clarifying questions, but it's completely up to the user to format their responses, and I always wonder whether the LLM gets confused when people are sloppy. It would make much more sense for OpenAI to break out each question and give it a dedicated answer box. That way the user's responses stay consistent and there's less of a chance they make a mistake or forget to answer a question.

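A minimal sketch of what that structured follow-up could look like (hypothetical types, not OpenAI's actual API): each clarifying question gets its own typed field, and the answers are re-serialized into one unambiguous reply.

```typescript
// Hypothetical schema: each clarifying question becomes a typed field
// with its own answer box, instead of one free-form reply.
interface ClarifyingQuestion {
  id: string;                       // stable key, e.g. "time_range"
  question: string;                 // text shown to the user
  kind: "text" | "select" | "date"; // which input widget to render
  options?: string[];               // only for kind === "select"
}

type ClarifyingAnswers = Record<string, string>; // one answer per question id

const questions: ClarifyingQuestion[] = [
  { id: "time_range", question: "What time range should the research cover?", kind: "text" },
  { id: "depth", question: "How deep should the report go?", kind: "select", options: ["overview", "detailed"] },
];

// Re-serialize the structured answers into one unambiguous follow-up message,
// so sloppy free-text formatting never reaches the model.
function toFollowUp(qs: ClarifyingQuestion[], answers: ClarifyingAnswers): string {
  return qs
    .map((q) => `${q.question}\n> ${answers[q.id] ?? "(unanswered)"}`)
    .join("\n\n");
}
```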
fka•8mo ago
OpenAI would implement this within a minute or something, I guess.
wddlz•8mo ago
Sorry for the shameless plug, but we recently published this research on 'Dynamic Prompt Middleware' (https://www.iandrosos.me/images/chiwork25-27.pdf) as a potential approach to this. Basically, based on the user's prompt (and some other bits of context), we generate UX containing prompt refinements that users can quickly select answers from, which does the prompting for the user.
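A rough sketch of that middleware idea (hypothetical shapes, not the paper's actual implementation): derive selectable refinements from the user's prompt, then fold the selections back into the final prompt.

```typescript
// Hypothetical shapes for prompt-refinement middleware; in the real system
// an LLM would generate the refinements from the prompt plus context.
interface Refinement {
  label: string;    // shown as a chip or checkbox in the generated UX
  fragment: string; // text folded into the prompt when selected
}

// Stub standing in for the generation step, keyed on a detected topic.
function refinementsFor(prompt: string): Refinement[] {
  if (/summar/i.test(prompt)) {
    return [
      { label: "Bullet points", fragment: "Format the summary as bullet points." },
      { label: "Under 100 words", fragment: "Keep it under 100 words." },
    ];
  }
  return [];
}

// The user's selections do the prompting for them.
function refinedPrompt(prompt: string, selected: Refinement[]): string {
  return [prompt, ...selected.map((r) => r.fragment)].join("\n");
}
```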
fka•8mo ago
Didn't read the paper but sounds like a similar idea.
ics•8mo ago
Very neat paper, thanks for sharing. Being able to interact with a model this way through, say, a Jupyter notebook would be especially amazing.
aatd86•8mo ago
That's not a very innovative idea, or even better UX. I think the future will have to do with voice commands, with MCPs as the backend, exposing capabilities.
ActionHank•8mo ago
Because we are all going to be in our open-plan offices shouting into the void, hoping it poops out the app we want?
aatd86•8mo ago
Because you really think the AI can predict the perfect UX for human consumption out of the blue, instead of simply using human-made components?

AI or not, these sorts of UI won't change much.

fka•8mo ago
We don't do most of our jobs with our voice. "Click" interaction is still an important one.
aatd86•8mo ago
There is no benefit in it being AI-generated, though. There is a closed set of interaction behaviors.

When you want to order a pizza, you won't have to click. Just browse and ask the AI assistant to place an order as you would in a restaurant. Better UX.

fka•8mo ago
Yep, that's why it's "on-demand". With LLMs you won't need to fill in the form; it's an optional interaction that makes your UX better. Please read the post and then comment :) You're possibly commenting on the title alone.
aatd86•8mo ago
No, I read the post. I think I had actually even read it before. But I am not convinced by the on-demand part.

Isn't on-demand what chat LLMs already do nowadays, btw?

Point being that generating visual UI components is easy. ChatGPT does it. Server-driven UI does it.

But multimodal interaction is something else that goes further.

fka•8mo ago
Well, AI might ask you to choose a color. Now, is it better to show a color picker UI or just ask for the name?

You might say naming the color is enough, but in reality, a color picker is the more natural way to interact.

As humans, we don’t communicate only through words. Other forms of interaction matter too.

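A sketch of how that choice could be delegated, assuming a hypothetical request_ui step (none of these names come from the post): the model names a widget, and the client renders a real component for it.

```typescript
// Hypothetical "request_ui" step: instead of asking "what color?" in text,
// the model names a widget and the client renders a real component for it.
type UIRequest =
  | { widget: "color_picker"; label: string }
  | { widget: "free_text"; label: string };

// Model side: pick the widget that fits the question being asked.
function widgetFor(topic: string): UIRequest {
  if (topic === "color") {
    // A picker returns an exact value like "#ff7a00" instead of a fuzzy name.
    return { widget: "color_picker", label: "Pick a color" };
  }
  return { widget: "free_text", label: topic };
}

// Client side: map the request onto a concrete input element.
function render(req: UIRequest): string {
  switch (req.widget) {
    case "color_picker":
      return `<input type="color" aria-label="${req.label}">`;
    case "free_text":
      return `<input type="text" aria-label="${req.label}">`;
  }
}
```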
aatd86•8mo ago
Yes, but the AI is not creating these components from zero, is it (the on-demand part)?

It will probably have access to a list of components with their specifications, especially the type of data each component can represent, mutably or not.

Or it will respond to a database query by presenting a graph automatically.

But in my opinion the hard part is turning natural language into a SQL query. It's not really the choice of data representation, which is heavily informed by the data itself (type and value) and doesn't require much inference.

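A sketch of such a component catalog (hypothetical names throughout): selection reduces to a lookup on the data's type and mutability, which supports the point that the representation choice needs little inference.

```typescript
// Hypothetical component catalog: the model picks from existing components,
// keyed by the type of data being represented and whether it is editable.
interface ComponentSpec {
  name: string;
  accepts: "string" | "number" | "date" | "enum" | "timeseries";
  mutable: boolean; // can the user change the value, or is it display-only?
}

const registry: ComponentSpec[] = [
  { name: "TextField",  accepts: "string",     mutable: true },
  { name: "Slider",     accepts: "number",     mutable: true },
  { name: "DatePicker", accepts: "date",       mutable: true },
  { name: "Dropdown",   accepts: "enum",       mutable: true },
  { name: "LineChart",  accepts: "timeseries", mutable: false },
];

// Selection is mostly a lookup on the data's type, not open-ended inference.
function pick(dataType: ComponentSpec["accepts"], editable: boolean): ComponentSpec | undefined {
  return registry.find((c) => c.accepts === dataType && c.mutable === editable);
}

pick("timeseries", false); // -> LineChart: a query result renders as a graph
```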
fka•8mo ago
I still do think you haven’t read the post :D
ActionHank•8mo ago
I really believe this is the future.

Conversations are error prone and noisy.

UI distills down the mode of interaction into something defined and well understood by both parties.

Humans have been able to speak to each other for a long time, but we fill out forms for anything formal.

fka•8mo ago
Exactly! LLMs can generate UIs according to user needs. E.g. they can generate simplified or translated ones, on demand. No need for preset forms or long ones. Just the required ones.
visarga•8mo ago
> Conversations are error prone and noisy.

I thought you'd say not being able to reload the form at a later time from the same URL is bad. This would be a "quantum UI" slightly different every time you load it.

ActionHank•8mo ago
I think that there will be ways to achieve this.

If you look at many of the current innovations around working with LLMs and agents, they are largely about constraining and tracking context in a structured way. There will likely be emergent patterns for these sorts of things over time; I am implementing my own approach for now, with hopefully good abstractions to allow future portability.

aziaziazi•8mo ago
> this is the future

For sure! UIs are also most of the past and present way we interact with a computer, offline or online. Even Hacker News - which is mostly text - has some UI to vote, navigate, flag…

Imagine the mess of a text-field-only interface where you had to type "upvote the upper ActionHank message" or "open the third article's comments on the front page, the one that talks about on-demand UI generation…" and then press enter.

Don't get me wrong: LLMs are great and it's fascinating to see experiments with them. Kudos to the author.

banga•8mo ago
Semantic clarity in written prose is hard, but this approach seems like it makes things easier for the machines rather than the other way around.
jFriedensreich•8mo ago
I was working on exactly this back in the GPT-3 days, and I still believe ad hoc generation of super-specific, contextually relevant UIs will solve a lot of the problems and friction that purely textual or speech-based conversational interfaces pose, especially if UI elements like sliders provide some form of live feedback of their effect and can be scrolled back to, or pinned, and changed at any time.
WillAdams•8mo ago
This always felt like something which the LCARS interface addressed, at least conceptually (though I've never seen an implementation which was more than just a skin).

I'd love to see folks finding the same sort of energy and innovation which was driving early projects such as Momenta and PenPoint and so forth.

bhj•8mo ago
Yes, there’s a video where Michael Okuda (with Adam Savage, I think?) recalls the TNG cast being worried about where to tap, and his response was essentially “you can’t press a wrong button“.
jFriedensreich•8mo ago
Thanks for bringing this up; I totally forgot the connection, even though I looked at it before and also remember the Adam Savage interview.
wddlz•8mo ago
Related to this: here is some recently published research we did at Microsoft Research on generating UX for prompt refinements based on the user's prompt and other context (case study: https://www.iandrosos.me/promptly.html, paper link also in intro).

We found it lowered barriers to providing context to AI, improved user perception of control over AI, and provided users guidance for steering AI interactions.

sheo•8mo ago
I think the example in the article is not a good use case for this technology. It would be better, cheaper, and less error-prone to have prebuilt forms that the LLM can call like tools, at least for things like changing a shipping address.

Shipping forms usually need address verification; sometimes they even include a map.

Especially if, on the other end, the data entered in this form is stored in a traditional DB.

A much better use case would be something that is dynamic by nature. For example, an advanced prompt generator for image-generation models (sliders for the size of objects in a scene; dropdown menus with background or style variants, instead of the usual lists).

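A minimal sketch of that forms-as-tools idea, written in the style of LLM function calling (the tool name and the openForm helper are made up for illustration):

```typescript
// Hypothetical tool definition in the style of LLM function calling:
// the model never generates the form, it opens the prebuilt one.
const tools = [
  {
    type: "function",
    function: {
      name: "open_shipping_address_form",
      description: "Open the prebuilt, validated shipping address form",
      parameters: {
        type: "object",
        properties: {
          orderId: { type: "string", description: "Order to update" },
        },
        required: ["orderId"],
      },
    },
  },
];

// openForm is assumed here; it would mount the existing, well-tested
// component (address verification, maybe a map) in the chat surface.
declare function openForm(component: string, props: object): void;

function handleToolCall(name: string, args: { orderId: string }): void {
  if (name === "open_shipping_address_form") {
    openForm("ShippingAddressForm", { orderId: args.orderId });
  }
}
```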
cjcenizal•8mo ago
You make a good point! There are many common input configurations that will come up again and again, as forms and other types of input (like maps, as you mentioned). How can we solve for that?

Maybe a solution would look like the server expressing a more general intent -- "shipping address" -- and leaving it to the client to determine the best UI component for capturing that information. Then the server would need to do its own validation of the user's input, perhaps asking for confirmation that it understood correctly.

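A rough sketch of that split, with hypothetical names throughout: the server only names the intent, the client owns the widget choice, and the server validates whatever comes back.

```typescript
// Hypothetical intent protocol: the server names the intent, the client
// owns the widget choice, and the server validates whatever comes back.
type Intent = "shipping_address" | "date_range" | "color";

// Client side: each platform maps intents onto its own components.
const componentFor: Record<Intent, string> = {
  shipping_address: "AddressFormWithMap", // a plain form on a simpler client
  date_range: "DateRangePicker",
  color: "ColorPicker",
};

// Server side: never trust the client's rendering; validate the captured
// value and echo it back for confirmation when it passes.
function validateShippingAddress(raw: { street: string; city: string; zip: string }) {
  const ok = raw.street.trim() !== "" && /^\d{5}(-\d{4})?$/.test(raw.zip);
  return ok
    ? { status: "confirm" as const, normalized: raw }
    : { status: "reject" as const, reason: "address failed validation" };
}
```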
jmull•8mo ago
This seems much worse than the typical pre-AI mechanism of navigating to and clicking on a "Change Delivery Address" button.

I don't know why you wouldn't develop whatever forms you wanted to support upfront and make them available to the agent (and hopefully provide old-fashioned search too). You can still use AI to develop and maintain the forms. Since the output can be reused as many times as you want, you can probably use more expensive/capable models to develop the forms, rather than the cheaper/faster but less capable models you're probably limited to for customer service.