Mathematics for Computer Science (2018) [pdf]

https://courses.csail.mit.edu/6.042/spring18/mcs.pdf
95•vismit2000•3h ago•10 comments

Surveillance Watch – a map that shows connections between surveillance companies

https://www.surveillancewatch.io
14•kekqqq•48m ago•2 comments

What Happened to WebAssembly

https://emnudge.dev/blog/what-happened-to-webassembly/
109•enz•2h ago•93 comments

How to Code Claude Code in 200 Lines of Code

https://www.mihaileric.com/The-Emperor-Has-No-Clothes/
515•nutellalover•14h ago•179 comments

European Commission issues call for evidence on open source

https://lwn.net/Articles/1053107/
102•pabs3•3h ago•39 comments

Why I left iNaturalist

https://kueda.net/blog/2026/01/06/why-i-left-inat/
188•erutuon•9h ago•96 comments

Sopro TTS: A 169M model with zero-shot voice cloning that runs on the CPU

https://github.com/samuel-vitorino/sopro
236•sammyyyyyyy•13h ago•88 comments

Embassy: Modern embedded framework, using Rust and async

https://github.com/embassy-rs/embassy
210•birdculture•11h ago•86 comments

Hacking a Casio F-91W digital watch (2023)

https://medium.com/infosec-watchtower/how-i-hacked-casio-f-91w-digital-watch-892bd519bd15
91•jollyjerry•4d ago•26 comments

Bose has released API docs and opened the API for its EoL SoundTouch speakers

https://arstechnica.com/gadgets/2026/01/bose-open-sources-its-soundtouch-home-theater-smart-speak...
2317•rayrey•19h ago•345 comments

Richard D. James aka Aphex Twin speaks to Tatsuya Takahashi (2017)

https://web.archive.org/web/20180719052026/http://item.warp.net/interview/aphex-twin-speaks-to-ta...
173•lelandfe•13h ago•54 comments

Do not mistake a resilient global economy for populist success

https://www.economist.com/leaders/2026/01/08/do-not-mistake-a-resilient-global-economy-for-populi...
152•andsoitis•3h ago•153 comments

1ML for Non-Specialists: Introduction

https://pithlessly.github.io/1ml-intro
5•birdculture•6d ago•2 comments

The Jeff Dean Facts

https://github.com/LRitzdorf/TheJeffDeanFacts
466•ravenical•21h ago•166 comments

The Unreasonable Effectiveness of the Fourier Transform

https://joshuawise.com/resources/ofdm/
227•voxadam•15h ago•93 comments

Anthropic blocks third-party use of Claude Code subscriptions

https://github.com/anomalyco/opencode/issues/7410
344•sergiotapia•6h ago•275 comments

Photographing the hidden world of slime mould

https://www.bbc.com/news/articles/c9d9409p76qo
27•1659447091•1w ago•5 comments

Mysterious Victorian-era shoes are washing up on a beach in Wales

https://www.smithsonianmag.com/smart-news/hundreds-of-mysterious-victorian-era-shoes-are-washing-...
31•Brajeshwar•3d ago•12 comments

AI coding assistants are getting worse?

https://spectrum.ieee.org/ai-coding-degrades
314•voxadam•19h ago•501 comments

How Samba Was Written (2003)

https://download.samba.org/pub/tridge/misc/french_cafe.txt
22•tosh•5d ago•15 comments

He was called a 'terrorist sympathizer.' Now his AI company is valued at $3B

https://sfstandard.com/2026/01/07/called-terrorist-sympathizer-now-ai-company-valued-3b/
179•newusertoday•16h ago•218 comments

The No Fakes Act has a “fingerprinting” trap that kills open source?

https://old.reddit.com/r/LocalLLaMA/comments/1q7qcux/the_no_fakes_act_has_a_fingerprinting_trap_t...
127•guerrilla•5h ago•55 comments

Google AI Studio is now sponsoring Tailwind CSS

https://twitter.com/OfficialLoganK/status/2009339263251566902
645•qwertyforce•15h ago•211 comments

Ushikuvirus: Newly discovered virus may offer clues to the origin of eukaryotes

https://www.tus.ac.jp/en/mediarelations/archive/20251219_9539.html
98•rustoo•1d ago•22 comments

Fixing a Buffer Overflow in Unix v4 Like It's 1973

https://sigma-star.at/blog/2025/12/unix-v4-buffer-overflow/
124•vzaliva•15h ago•33 comments

Show HN: macOS menu bar app to track Claude usage in real time

https://github.com/richhickson/claudecodeusage
123•RichHickson•15h ago•44 comments

Logistics Is Dying; Or – Dude, Where's My Mail?

https://lagomor.ph/2026/01/logistics-is-dying-or-dude-wheres-my-mail/
49•ChilledTonic•8h ago•37 comments

Systematically Improving Espresso: Mathematical Modeling and Experiment (2020)

https://www.cell.com/matter/fulltext/S2590-2385(19)30410-2
31•austinallegro•6d ago•8 comments

Making the Magic Leap past Nvidia's secure bootchain and breaking Tesla Autopilots

https://fahrplan.events.ccc.de/congress/2025/fahrplan/event/making-the-magic-leap-past-nvidia-s-s...
66•rguiscard•1w ago•16 comments

Pole of Inaccessibility

https://en.wikipedia.org/wiki/Pole_of_inaccessibility
52•benbreen•5d ago•11 comments

Smartfunc: Turn Docstrings into LLM-Functions

https://github.com/koaning/smartfunc
70•alexmolas•9mo ago

Comments

shaism•9mo ago
Very cool. I implemented something similar for personal use before.

At that time, LLMs weren't as proficient in coding as they are today. Nowadays, the decorator approach might even go further and not just wrap LLM calls but also write Python code based on the description in the docstring.

This would incentivize writing unambiguous docstrings and guarantee (assuming the LLM doesn't hallucinate) consistency between code and documentation.

It would bring us closer to the world that Jensen Huang described, i.e., natural language becoming a programming language.
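
For concreteness, here is a minimal sketch of the runtime pattern the thread is discussing: a decorator that turns a docstring into a prompt on every call. This is illustrative, not smartfunc's actual API; the model call uses simonw's llm library, but any client would do.

    import llm  # simonw's LLM library; any model client would work here

    def llm_func(model_id="gpt-4o-mini"):
        # Hypothetical decorator: the docstring, filled in with the call's
        # keyword arguments, becomes the prompt sent to the model on every call.
        def decorator(fn):
            def wrapper(**kwargs):
                prompt = fn.__doc__.format(**kwargs)
                return llm.get_model(model_id).prompt(prompt).text()
            return wrapper
        return decorator

    @llm_func()
    def summarize(text=""):
        """Summarize the following text in one sentence: {text}"""

    # summarize(text=open("article.txt").read())  -> one-sentence summary string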

psunavy03•9mo ago
People have been talking about natural language becoming a programming language for way longer than even Jensen Huang has been talking about it. Once upon a time, they tried to adapt natural language into a programming language, and they came up with this thing called COBOL. Same idea: "then the managers can code, and we won't need to hire so many expensive devs!"

And now the COBOL devs are retiring after a whole career...

pizza•9mo ago
But isn't it actually more like: COBOL lets you talk in COBOL-ese (which is kinda stilted), whereas LLMs let you talk in LLM-ese (which gets a lot closer to actual language)? And since the skill cap on language is basically infinite, this becomes a question of how good you are at saying what you want, to the extent it intersects with what the LLM can do.
psunavy03•9mo ago
COBOL was the best attempt they could manage in the 1960s. It's the entire reason COBOL has things like paragraphs, why statements end with periods, etc. They wanted as much of an "English-like syntax" as possible.

The reason it looks so odd today is that so much of modern software is instead the intellectual heir of C.

And yeah, the "skill cap" of describing things is theoretically infinite. My point was that this has been tried before, and we don't yet know how close the actual capabilities of an LLM come to that ideal. People have been trying for decades to describe things in English that still ultimately need to be described in code for them to work; that's why the software industry exists in the first place.

lukev•9mo ago
This is the way LLM-enhanced coding should (and I believe will) go.

Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.

simonw•9mo ago
smartfunc doesn't really treat the LLM as a compiler - it's not generating Python code to fill out the function; it's converting that function into one that calls the LLM every time you call it, passing in its docstring as a prompt.

A version that DID work like a compiler would be super interesting - it could replace the function body with generated Python code on your first call and then reuse that in the future, maybe even caching state on disk rather than in-memory.
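
A rough sketch of what that compiler-style variant could look like, again assuming simonw's llm library for the model call; the cache key, prompt wording, and use of exec are illustrative choices, not an existing tool:

    import hashlib, inspect, pathlib
    import llm

    CACHE = pathlib.Path(".llm_codegen_cache")
    CACHE.mkdir(exist_ok=True)

    def compile_with_llm(fn, model_id="gpt-4o-mini"):
        # On the first call, ask the model to write the function body from the
        # docstring, cache the generated source on disk, and reuse it afterwards.
        sig = inspect.signature(fn)
        key = hashlib.sha256(f"{fn.__name__}{sig}{fn.__doc__}".encode()).hexdigest()
        path = CACHE / f"{key}.py"

        def wrapper(*args, **kwargs):
            if not path.exists():
                prompt = (
                    f"Write a Python function `{fn.__name__}{sig}` that does the "
                    f"following:\n{fn.__doc__}\nReturn only the code, no fences."
                )
                path.write_text(llm.get_model(model_id).prompt(prompt).text())
            namespace = {}
            exec(path.read_text(), namespace)  # review cached code before trusting it
            return namespace[fn.__name__](*args, **kwargs)

        return wrapper

    @compile_with_llm
    def slugify(title: str) -> str:
        """Lowercase the title and replace runs of non-alphanumerics with hyphens."""

Since generation only happens on a cache miss, the non-determinism is confined to the first call; every call after that runs the same cached artifact.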

hedgehog•9mo ago
I use something similar to this decorator (more or less a thin wrapper around instructor) and have looked a little bit at the codegen + cache route. It gets more interesting with the addition of tool calls, but I've found JSON outputs create quality degradation and reliability issues. My next experiment on that thread is to either use guidance (https://github.com/guidance-ai/guidance) or reimplement some of their heuristics to try to get tool calling without 100% reliance on JSON.
toxik•9mo ago
Isn’t that basically just Copilot but way more cumbersome to use?
nate_nowack•9mo ago
no https://bsky.app/profile/alternatebuild.dev/post/3lg5a5fq4dc...
photonthug•9mo ago
Treating it as a compiler is obviously the way, right? Setting aside overhead if you're using local models... either the codegen is not deterministic, in which case you risk random breakage, or it is deterministic and you've decided to delete the code anyway and punt on ever changing/optimizing it except in natural language. Why would anyone prefer either case? Code folding works fine if you just don't want to look at it ever.

I can see this eventually going in the direction of "bidirectional synchronization" of the NL representation and the code representation (similar to how jupytext lets you work with notebooks in the browser or markdown in an editor). But a single representation that's completely NL, deliberately throwing away the code representation, sounds like the opposite of productivity.

huevosabio•9mo ago
Yes, that would indeed be very interesting.

I would like to try something like this in Rust:

- you use a macro to stub out the body of functions (so you just write the signature)
- the build step fills in the code and caches it
- on failures, the build step is allowed to change the LLM-generated function bodies until they satisfy the test / compile steps
- you can then convert the satisfying LLM-generated function bodies into hard code (or leave them within the domain of "changeable by the LLM")

It sandboxes what the LLM can actually alter, and makes the generation happen in an environment where you can check right away whether it was done correctly. Being Rust, you get a lot more verification. And, crucially, it keeps you in the driver's seat.

lukev•9mo ago
Ah, cool, I didn't read closely enough.

Yeah, I do think that LLMs acting as compilers for super-high-level specs (the new "code") is a much better approach than chatting with a bot to try to get the right code written. LLM-derived code should not be a "peer" to human-written code IMO; it should exist at some subordinate level.

The fact that they're non-deterministic makes it a bit different from a traditional compiler, but as you say, caching a "known good" artifact could work.

hombre_fatal•9mo ago
https://github.com/eeue56/neuro-lingo

You can even pin the last result:

    pinned function main() {
      // Print "Hello World" to the console
    }
vrighter•9mo ago
A compiler has one requirement that LLMs cannot provide: it has to be robust.
simonw•9mo ago
I really like how this integrates with the schema feature I added to the underlying LLM Python library a few weeks ago: https://simonwillison.net/2025/Feb/28/llm-schemas/#using-sch...
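
Roughly, per that post, the schema feature lets you pass a Pydantic model to prompt() and get validated JSON back. A small sketch, assuming the API as described there:

    import json
    import llm
    from pydantic import BaseModel

    class Summary(BaseModel):
        headline: str
        key_points: list[str]

    model = llm.get_model("gpt-4o-mini")
    # schema= constrains the model's output to match the Pydantic model
    response = model.prompt("Summarize the linked article", schema=Summary)
    print(json.loads(response.text()))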
noddybear•9mo ago
Cool! Looks a lot like Tanuki: https://github.com/Tanuki/tanuki.py
nate_nowack•9mo ago
yeah, it's a popular DX at this point: https://blog.alternatebuild.dev/marvin-3x/
miki123211•9mo ago
There's also promptic, which wraps litellm, which supports many, many, many more model providers, and it doesn't even need plugins.

LLM is a cool CLI tool, but IMO LiteLLM is a better Python library.

simonw•9mo ago
I think LLM's plugin architecture is a better bet for supporting model providers than the way LiteLLM does it.

The problem with LiteLLM's approach is that every model provider needs to be added to the core library - in https://github.com/BerriAI/litellm/tree/main/litellm/llms - and then shipped as a new release.

LLM uses plugins because then there's no need to sync new providers with the core tool. When a new Gemini feature comes out I ship a new release of https://github.com/simonw/llm-gemini - no need for a release of core.

I can wake up one morning and LLM grew support for a bunch of new models overnight because someone else released a plugin.

I'm not saying "LLM is better than LiteLLM" here - LiteLLM is a great library with a whole lot more contributors than LLM, and it's been fully focused on being a great Python library, while LLM has had more effort invested in the CLI aspect than the Python library aspect so far.

I am confident that a plugin system is a better way to solve this problem generally though.

asadm•9mo ago
I was working on a similar thing but for JS.

Imagine this: it would be cool if these functions essentially boiled down to a distilled tiny model just for that functionality, instead of an API call to a foundation one.

dheera•9mo ago
I often do the reverse -- have LLMs insert docstrings into large, poorly commented codebases that are hard to understand.

Pasting a piece of code into an LLM with the prompt "comment the shit out of this" works quite well.

simonw•9mo ago
Matheus Pedroni released a really clever plugin for doing that with LLM the other day: https://mathpn.com/posts/llm-docsmith/

You run it like this:

  llm install llm-docsmith
  llm docsmith ./scripts/main.py
And it uses a Python concrete syntax tree (with https://pypi.org/project/libcst/) to apply changes to just the docstrings without risk of editing any other code.
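
llm-docsmith's internals aside, here is a small illustration of the underlying libcst idea: a transformer that touches only docstrings (in this case inserting a placeholder into functions that lack one) while every other token round-trips untouched.

    import libcst as cst

    class DocstringAdder(cst.CSTTransformer):
        # Insert a placeholder docstring into any function that lacks one;
        # all other code and formatting is preserved exactly as written.
        def leave_FunctionDef(self, original_node, updated_node):
            if updated_node.get_docstring() is not None:
                return updated_node
            doc = cst.SimpleStatementLine(
                body=[cst.Expr(value=cst.SimpleString('"""TODO: describe this function."""'))]
            )
            # Assumes a normal indented body (not a one-line `def f(): ...`)
            new_body = updated_node.body.with_changes(
                body=[doc, *updated_node.body.body]
            )
            return updated_node.with_changes(body=new_body)

    source = "def add(a, b):\n    return a + b\n"
    print(cst.parse_module(source).visit(DocstringAdder()).code)

A real docsmith-style tool would generate the docstring text with an LLM rather than a placeholder, but the round-tripping guarantee comes from libcst either way.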
nonethewiser•9mo ago
Funny. I frequently give the LLM the function and ask it to write the docstring.

TBH I find docstrings very tedious to write. I can see how this would be a great specification for an LLM, but I don't know that it's actually better than a plain-text description of the function, since LLMs can handle those just fine and they are easier to write.

senko•9mo ago
Many libraries with the same approach suffer the same flaw: you can't easily use the same function with different LLMs at runtime (i.e., after importing the module where it is defined).

I initially used the same approach in my library, but changed it to explicitly pass the llm object around; in actual production code that's easier and more flexible to use.

Examples (2nd one also with docstring-based llm query and structured answer): https://github.com/senko/think?tab=readme-ov-file#examples
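
The pattern in miniature (hypothetical code, not the think library's API): the model becomes a call-time argument rather than something baked in at import time.

    from typing import Protocol

    class LLMClient(Protocol):
        def ask(self, prompt: str) -> str: ...

    def llm_query(fn):
        # The decorated function takes the client as its first argument,
        # so callers pick the model per call instead of per module import.
        def wrapper(client: LLMClient, **kwargs):
            return client.ask(fn.__doc__.format(**kwargs))
        return wrapper

    @llm_query
    def summarize(text=""):
        """Summarize in one sentence: {text}"""

    # summarize(openai_client, text=article)  # same function,
    # summarize(local_client, text=article)   # different backend at runtime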

_1tan•9mo ago
Is there something like this but for Java?