frontpage.

Linux kernel framework for PCIe device emulation, in userspace

https://github.com/cakehonolulu/pciem
77•71bw•4h ago•20 comments

Level S4 solar radiation event

https://www.swpc.noaa.gov/news/g4-severe-geomagnetic-storm-levels-reached-19-jan-2026
482•WorldPeas•16h ago•159 comments

King – man + woman is queen; but why? (2017)

https://p.migdal.pl/blog/2017/01/king-man-woman-queen-why/
39•CGMthrowaway•4d ago•36 comments

The Overcomplexity of the Shadcn Radio Button

https://paulmakeswebsites.com/writing/shadcn-radio-button/
334•dbushell•4h ago•162 comments

Increasing the performance of WebAssembly Text Format parser by 350%

https://blog.gplane.win/posts/improve-wat-parser-perf.html
41•gplane•5d ago•19 comments

I'm Addicted to Being Useful

https://www.seangoedecke.com/addicted-to-being-useful/
7•swah•1h ago•4 comments

Channel3 (YC S25) Is Hiring

https://www.ycombinator.com/companies/channel3/jobs/3DIAYYY-backend-engineer
1•aschiff1•29m ago

Reticulum, a secure and anonymous mesh networking stack

https://github.com/markqvist/Reticulum
235•brogu•12h ago•49 comments

String theory can now describe a universe that has dark energy?

https://www.quantamagazine.org/string-theory-can-now-describe-a-universe-that-has-dark-energy-202...
36•nsoonhui•1h ago•17 comments

Apple testing new App Store design that blurs the line between ads and results

https://9to5mac.com/2026/01/16/iphone-apple-app-store-search-results-ads-new-design/
420•ksec•19h ago•331 comments

x86 prefixes and escape opcodes flowchart

https://soc.me/interfaces/x86-prefixes-and-escape-opcodes-flowchart.html
71•gaul•8h ago•20 comments

What came first: the CNAME or the A record?

https://blog.cloudflare.com/cname-a-record-order-dns-standards/
382•linolevan•19h ago•136 comments

Show HN: IP over Avian Carriers with Quality of Service

https://www.rfc-editor.org/rfc/rfc2549.html
3•mig4ng•1h ago•2 comments

Nanolang: A tiny experimental language designed to be targeted by coding LLMs

https://github.com/jordanhubbard/nanolang
169•Scramblejams•14h ago•127 comments

Scaling long-running autonomous coding

https://simonwillison.net/2026/Jan/19/scaling-long-running-autonomous-coding/
113•srameshc•12h ago•44 comments

The coming industrialisation of exploit generation with LLMs

https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/
168•long•1d ago•116 comments

Notes on Apple's Nano Texture (2025)

https://jon.bo/posts/nano-texture/
198•dsr12•18h ago•107 comments

Giving university exams in the age of chatbots

https://ploum.net/2026-01-19-exam-with-chatbots.html
123•ploum•4h ago•80 comments

3D printing my laptop ergonomic setup

https://www.ntietz.com/blog/3d-printing-my-laptop-ergonomic-setup/
82•kurinikku•12h ago•19 comments

Nova Launcher added Facebook and Google Ads tracking

https://lemdro.id/post/lemdro.id/35049920
283•celsoazevedo•11h ago•126 comments

Kahan on the 8087 and designing Intel's floating point (2016) [video]

https://www.youtube.com/watch?v=L-QVgbdt_qg
31•bananaboy•5d ago•0 comments

British redcoat's lost memoir reveals realities of life as a disabled veteran

https://phys.org/news/2026-01-british-redcoat-lost-memoir-reveals.html
91•wglb•4d ago•87 comments

Prediction markets are ushering in a world in which news becomes about gambling

https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/
310•krustyburger•1d ago•324 comments

Porsche sold more electrified cars in Europe in 2025 than pure gas-powered cars

https://newsroom.porsche.com/en/2026/company/porsche-deliveries-2025-41516.html
347•m463•11h ago•435 comments

How to be a good conference talk audience member (2022)

https://www.mooreds.com/wordpress/archives/3522
6•mooreds•2d ago•0 comments

The assistant axis: situating and stabilizing the character of LLMs

https://www.anthropic.com/research/assistant-axis
101•mfiguiere•15h ago•15 comments

Face as a QR Code

https://bookofjoe2.blogspot.com/2025/12/your-face-as-qr-code.html
30•surprisetalk•3d ago•6 comments

Understanding ZFS Scrubs and Data Integrity

https://klarasystems.com/articles/understanding-zfs-scrubs-and-data-integrity/
58•zdw•5d ago•29 comments

Targeted Bets: An alternative approach to the job hunt

https://www.seanmuirhead.com/blog/targeted-bets
74•seany62•14h ago•72 comments

The microstructure of wealth transfer in prediction markets

https://www.jbecker.dev/research/prediction-market-microstructure
169•jonbecker•20h ago•154 comments

Infinite Tool Use

https://snimu.github.io/2025/05/23/infinite-tool-use.html
83•tosh•8mo ago

Comments

anko•8mo ago
I have been thinking along these lines myself. Most of the time, if we need to calculate things, we'd use a calculator or some code. We wouldn't do it in our head, unless it's rough or small enough. But that's what we ask LLMs to do!

I believe we juggle 7 (plus or minus 2) things in our short term memory. Maybe short term memory could be a tool!

We also don't have the knowledge of the entire internet in our heads, yet we can still be more effective at strategy/reasoning/planning. Maybe a much smaller model could be used if the only things it had to do were use tools and have a basic grasp of the language.
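
A minimal sketch of that idea, assuming a hypothetical tool-calling loop (the names are made up, not any particular framework's API):

    # Purely illustrative: a bounded "short-term memory" tool and a calculator
    # tool, so a small model can offload both working state and arithmetic.
    from collections import deque

    class ShortTermMemory:
        def __init__(self, capacity: int = 7):      # 7 plus or minus 2 slots
            self.slots = deque(maxlen=capacity)     # oldest note falls out automatically

        def remember(self, note: str) -> str:
            self.slots.append(note)
            return f"stored ({len(self.slots)}/{self.slots.maxlen} slots used)"

        def recall(self) -> list[str]:
            return list(self.slots)

    def calculator(expression: str) -> float:
        # Stand-in for a real, properly sandboxed evaluator.
        return float(eval(expression, {"__builtins__": {}}, {}))

    memory = ShortTermMemory()
    TOOLS = {"remember": memory.remember, "recall": memory.recall, "calc": calculator}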

dijit•8mo ago
I was once told that we can only hold 7 things in our heads at once; especially smart people might manage 9. This was by a psychologist I respect; whether it's true or not, I am not certain. He was using it as an argument to either condense the array of things I was thinking about into smaller decisions, or to make decisions and move on instead of letting them rot my brain.

It was good advice for me.

blixt•8mo ago
Let’s not forget that every round trip with the LLM costs latency (and extra input tokens). We now have parallel tool calls, which sometimes work in some models [1]. But it’s great, because now a model can say “write these 3 files then read these 2 files” before the time-to-first-token latency (not to mention the input token cost) is incurred once more.

I think LLMs will indirectly move towards being fuzzy VMs that output tokens much like VM instructions so they can prepare multiple conditional branches of tool calling, load/unload useful subprograms, etc. It might not be expressed exactly like that, but I think given how LLMs today are very poor at reusing things in their context window, we will naturally add features that take us in this direction. Also see frameworks like CodeAct[2] etc.

[1] This can be converted to a single tool call with many arguments instead, which you’ll see providers do in their internal tools, but it’s just messier.

[2] https://machinelearning.apple.com/research/codeact
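
A rough sketch of the single-call-with-array-arguments pattern from [1]; the file_batch tool name and its shape are hypothetical:

    # Hypothetical batched tool call: several file operations in one round trip,
    # so time-to-first-token latency and repeated input tokens are paid only once.
    batched_call = {
        "name": "file_batch",
        "arguments": {
            "writes": [
                {"path": "a.py", "content": "print('a')\n"},
                {"path": "b.py", "content": "print('b')\n"},
                {"path": "c.py", "content": "print('c')\n"},
            ],
            "reads": ["a.py", "b.py"],
        },
    }

    def run_file_batch(args: dict) -> dict:
        # Apply all writes, then return the requested reads as the single tool
        # result appended to the conversation for the model's next turn.
        for w in args["writes"]:
            with open(w["path"], "w") as f:
                f.write(w["content"])
        return {path: open(path).read() for path in args["reads"]}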

brador•8mo ago
Your only useful purpose is to assign the goal. Everything else is an uppity human getting in the way of a more efficient (and more creative) production system.

rahimnathwani•7mo ago
I'm wondering how we might apply this to the task of writing a novel.

There's an open source tool being developed that is sort of along these lines: https://github.com/raestrada/storycraftr

But:

- it expects the user to be the orchestrator, rather than running fully unattended in a loop, and

- it expects the LLM to output a whole chapter at a time, rather than doing surgical edits: https://github.com/raestrada/storycraftr/blob/b0d80204c93ff1...

(It does use a vector store to help the model get context from the rest of the book, so it doesn't assume everything is in context.)
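
For contrast, a hedged sketch of a loop that runs unattended and asks for surgical edits rather than whole chapters; llm() and retrieve() are hypothetical stand-ins, not storycraftr's API:

    # Hypothetical unattended revision loop using small find/replace edits.
    import json
    import pathlib

    def llm(prompt: str) -> str:
        raise NotImplementedError   # replace with a real model call

    def retrieve(query: str) -> str:
        return ""                   # replace with a vector-store lookup over the rest of the book

    def revise_book(chapters_dir: str, passes: int = 3) -> None:
        chapters = sorted(pathlib.Path(chapters_dir).glob("*.md"))
        for _ in range(passes):
            for chapter in chapters:
                text = chapter.read_text()
                context = retrieve(text)
                reply = llm(
                    "Return a JSON list of edits, each "
                    '{"find": "<exact snippet>", "replace": "<revision>"}.\n'
                    f"Relevant context from the rest of the book:\n{context}\n\n"
                    f"Chapter:\n{text}"
                )
                for edit in json.loads(reply):      # surgical edits, not a rewrite
                    text = text.replace(edit["find"], edit["replace"], 1)
                chapter.write_text(text)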

ksilobman•7mo ago
> Give it access to a full text-editor that is controllable through special text-commands, and see many benefits

I’d like to apply what is being suggested in this post, but it doesn’t make sense to me to have to give an LLM access to a text editor just to write a novel. Isn’t there a better way?

dazzaji•7mo ago
I’m still stuck on the first sentence, “An LLM should never output anything but tool calls and their arguments”, because it just doesn’t make sense to me.

Tool calling is great, but LLMs are - and should be used as - more than just tool callers. I mean, some tools will have to be other LLMs doing what they’re good at, like writing a novel, summarizing, brainstorming ideas, or explaining complex topics. Tools are useful, but the stuff LLMs actually do is also useful. The basic premise that LLMs should never output anything beyond tools and arguments is leaving most of the value of LLMs on the table.

bsenftner•7mo ago
I think the blog simply does not explain it well. Consider the example of a text editor: the "tool calls" are text fragments generated by the LLM, embedded into text-editor tool calls that place each generated fragment into the editor, perform cuts, pastes, and so on.

FWIW, I've done this and it works incredibly well. It's essentially integrating the LLM into the text editor, and requests of the LLM become more like requests of the text editor directly. The mental model I use is that the editor has become an AI agent itself. I've also done this with spreadsheets, web page editors, and various tools in project management software. It's an incredible perspective that works.
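
A minimal sketch of that shape (the command names are hypothetical): the generated text rides along as an argument of an editor command, and the host applies it to the buffer:

    # Hypothetical editor-as-agent: the model only emits commands; its generated
    # prose travels inside the "fragment" argument and is applied to the buffer.
    class EditorAgent:
        def __init__(self, text: str = ""):
            self.buffer = text

        def insert(self, position: int, fragment: str) -> None:
            self.buffer = self.buffer[:position] + fragment + self.buffer[position:]

        def cut(self, start: int, end: int) -> str:
            removed = self.buffer[start:end]
            self.buffer = self.buffer[:start] + self.buffer[end:]
            return removed

    # One model turn might be a short sequence of such calls:
    editor = EditorAgent()
    calls = [
        {"tool": "insert", "args": {"position": 0, "fragment": "It was a dark and stormy night."}},
        {"tool": "cut", "args": {"start": 9, "end": 18}},   # removes "dark and "
    ]
    for call in calls:
        getattr(editor, call["tool"])(**call["args"])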

dazzaji•7mo ago
Got it, thanks for clarifying! So if I’m understanding you right, you’re saying that all the generative stuff the LLM does, like creating text, basically becomes part of the ‘arguments’ the original post talks about, and that then gets paired with a tool call (like inserting into a text editor, doing edits, etc.). I was focused on the tool-call aspect of the post, not the argument content.

And it sounds like you’ve had a lot of success with this approach in an impressive variety of application types. May I ask what tooling you usually use for this (e.g. custom Python for each hack? MCP? an agent framework like LangGraph/ADK/etc.? something else?)

bsenftner•7mo ago
I noticed fairly early that the foundation LLMs have the source code to most FOSS, as well as the developer conversations, the user discussions trying to understand how to use that software, and the documentation too. The foundational models have a good amount of training data for each popular FOSS app, and by examining the code and the developer comments, and then adopting their language style, the LLM practically takes on the persona of the developer. So I spent some time understanding the internal communications of each app; my 'tool calls' are structured JSON of the internal structures these applications use, my own code receives these structured outputs, and I just replace them in the application's running memory. It's not quite as blind as I describe; some of the insertion of these data structures is complicated.

In the end, each app is both what it was before and can also be driven by prompts. I've also specialized each to have 4 agents as I describe, but they each have a different representation of the app's internal data; for example, a word processor has the "content, the document" in HTML/CSS as well as raw text. When one wants to manipulate the text, requests use the HTML/CSS representation, and selections go through slightly different logic than a request applied to the entire document. When one wants to critically analyze the text, it is ASCII text, with no need for the HTML/CSS at all. When one wants to use the document as a knowledge base, outside the editor, that's yet another variant that uses the editor to output a RAG-ready representation.
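
A hedged sketch of that multi-representation idea; the view names and the JSON shape are invented for illustration:

    # Hypothetical: one document, different views handed to the model depending
    # on the request; structured JSON replies get swapped into live app state.
    import json
    import re

    class DocumentViews:
        def __init__(self, html: str):
            self.html = html              # the editable, styled representation

        def for_editing(self) -> str:     # manipulating text and selections
            return self.html

        def for_analysis(self) -> str:    # critique wants plain text only
            return re.sub(r"<[^>]+>", "", self.html)

        def for_rag(self) -> list[dict]:  # knowledge-base export, chunked
            return [{"chunk": p} for p in self.for_analysis().split("\n\n") if p.strip()]

    def apply_model_reply(doc: DocumentViews, reply: str) -> None:
        # The model returns the app's own internal structure as JSON; the host
        # replaces the matching field in running memory instead of parsing prose.
        update = json.loads(reply)
        if "html" in update:
            doc.html = update["html"]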

dazzaji•7mo ago
That system would make a tidy startup, especially if tightly integrated with an open-source office suite behind the scenes (LibreOffice, OpenOffice, etc.) and a generative-AI-native UX.

dazzaji•7mo ago
* I'd call it "VibeOffice".

ayolisup•7mo ago
A naive approach could be to create an outline, then have an LLM randomly sample a section, supply the surrounding context, rewrite that part, then repeat, ideally alongside human writing. Some sort of continuous revision cycle.
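
A tiny sketch of that cycle, with a hypothetical llm() standing in for the model call:

    # Hypothetical continuous-revision cycle over an outlined draft.
    import random

    def revision_cycle(sections: list[str], rounds: int, llm) -> list[str]:
        for _ in range(rounds):
            i = random.randrange(len(sections))                    # sample a section
            before = sections[i - 1] if i > 0 else ""
            after = sections[i + 1] if i + 1 < len(sections) else ""
            sections[i] = llm(                                     # rewrite it in context
                f"Previous section:\n{before}\n\nNext section:\n{after}\n\n"
                f"Rewrite this section so it fits better:\n{sections[i]}"
            )
        return sections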

yencabulator•7mo ago
The underlying problem might get solved differently with diffusion.

https://news.ycombinator.com/item?id=44057820

PeterStuer•7mo ago
In theory, not being 'locked in' on the early generation track is a potential advantage of diffusion LLMs. In practice, it remains to be seen whether they can truly outperform the current standard LLMs with heuristics.