frontpage.

Ask HN: Who is hiring? (November 2025)

44•whoishiring•43m ago•54 comments

Why Nextcloud feels slow to use

https://ounapuu.ee/posts/2025/11/03/nextcloud-slow/
233•rpgbr•3h ago•147 comments

Learning to read Arthur Whitney's C to become Smart (2024)

https://needleful.net/blog/2024/01/arthur_whitney.html
8•gudzpoz•20m ago•0 comments

VimGraph

https://resources.wolframcloud.com/FunctionRepository/resources/VimGraph/
88•gdelfino01•3h ago•15 comments

The Case Against PGVector

https://alex-jacobs.com/posts/the-case-against-pgvector/
103•tacoooooooo•3h ago•42 comments

WebAssembly (WASM) arch support for the Linux kernel

https://github.com/joelseverin/linux-wasm
116•marcodiego•2d ago•23 comments

A visualization of the RGB space covered by named colors

https://codepen.io/meodai/full/zdgXJj/
64•BlankCanvas•5d ago•20 comments

Writing an Asciidoc Parser in Rust: Asciidocr

https://www.bikesbooksandbullshit.com/bullshit/2025/01/08/writing-an-asciidoc-parser-in-rust.html
31•mattrighetti•1d ago•2 comments

Skyfall-GS – Synthesizing Immersive 3D Urban Scenes from Satellite Imagery

https://skyfall-gs.jayinnn.dev/
39•ChrisArchitect•2h ago•7 comments

Ask HN: Who wants to be hired? (November 2025)

11•whoishiring•43m ago•35 comments

An Illustrated Introduction to Linear Algebra, Chapter 2: The Dot Product

https://www.ducktyped.org/p/linear-algebra-chapter-2-the-dot
32•egonschiele•3h ago•10 comments

A collection of links that existed about Anguilla as of 2003

https://web.ai/
23•kjok•2h ago•7 comments

Offline Math: Converting LaTeX to SVG with MathJax

https://sigwait.org/~alex/blog/2025/10/07/3t8acq.html
27•henry_flower•3h ago•6 comments

Robert Hooke's "Cyberpunk" Letter to Gottfried Leibniz

https://mynamelowercase.com/blog/robert-hookes-cyberpunk-letter-to-gottfried-leibniz/
5•Gormisdomai•58m ago•0 comments

State of Terminal Emulators in 2025: The Errant Champions

https://www.jeffquast.com/post/state-of-terminal-emulation-2025/
19•SG-•2h ago•0 comments

The Continual Learning Problem

https://jessylin.com/2025/10/20/continual-learning/
10•Bogdanp•1w ago•0 comments

The Problem with Farmed Seafood

https://nautil.us/the-problem-with-farmed-seafood-1243674/
95•dnetesn•2h ago•62 comments

Show HN: a Rust ray tracer that runs on any GPU – even in the browser

https://github.com/tchauffi/rust-rasterizer
33•tchauffi•2h ago•11 comments

OSS Alternative to Open WebUI – ChatGPT-Like UI, API and CLI

https://github.com/ServiceStack/llms
44•mythz•4h ago•16 comments

KaTeX – The fastest math typesetting library for the web

https://katex.org/
145•suioir•5d ago•59 comments

A turn lane in Rhododendron

https://www.greentape.pub/p/a-turn-lane-in-rhododendron
24•apsec112•1w ago•11 comments

I analyzed 180M jobs to see what jobs AI is replacing today

https://bloomberry.com/blog/i-analyzed-180m-jobs-to-see-what-jobs-ai-is-actually-replacing-today/
105•AznHisoka•3h ago•67 comments

OpenAI Signs $38B Cloud Computing Deal with Amazon

https://www.nytimes.com/2025/11/03/technology/openai-amazon-cloud-computing.html
57•donohoe•2h ago•34 comments

Show HN: FinBodhi – Local-first, double-entry app/PWA for your financial journey

https://finbodhi.com/
11•ciju•1h ago•1 comment

Oxy is Cloudflare's Rust-based next generation proxy framework (2023)

https://blog.cloudflare.com/introducing-oxy/
166•Garbage•13h ago•68 comments

Paris had a moving sidewalk in 1900, and a Thomas Edison film captured it (2020)

https://www.openculture.com/2020/03/paris-had-a-moving-sidewalk-in-1900.html
379•rbanffy•19h ago•187 comments

Google pulls AI model after senator says it fabricated assault allegation

https://www.theverge.com/news/812376/google-removes-gemma-senator-blackburn-hallucination
12•croemer•26m ago•4 comments

Tiny electric motor can produce more than 1,000 horsepower

https://supercarblondie.com/electric-motor-yasa-more-powerful-tesla-mercedes/
343•chris_overseas•7h ago•299 comments

The Arduino Uno Q is a weird hybrid SBC

https://www.jeffgeerling.com/blog/2025/arduino-uno-q-weird-hybrid-sbc
84•furkansahin•3d ago•56 comments

Using FreeBSD to make self-hosting fun again

https://jsteuernagel.de/posts/using-freebsd-to-make-self-hosting-fun-again/
371•todsacerdoti•1d ago•140 comments

Python lib generates its code on-the-fly based on usage

https://github.com/cofob/autogenlib
247•klntsky•5mo ago

Comments

thornewolf•5mo ago
nooooo the side project I've put off for 3 years
Noumenon72•5mo ago
From now on you'll be able to just do `import side_project` until it works.
thornewolf•5mo ago
looks very fun, excited to try it out
turbocon•5mo ago
Wow, what a nightmare of a non-deterministic, bug-introducing library.

Super fun idea though, I love the concept. But I’m getting the chills imagining the havoc this could cause

userbinator•5mo ago
It's like automatically copy-pasting code from StackOverflow, taken to the next level.
extraduder_ire•5mo ago
Are there any stable-output large language models, like stablediffusion is for image diffusion models?
tibbar•5mo ago
If you use a deterministic sampling strategy for the next token (e.g., always output the token with the highest probability) then a traditional LLM should be deterministic on the same hardware/software stack.
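A minimal sketch of the difference, with made-up logits standing in for a real model's output:

    import math
    import random

    def greedy_next_token(logits):
        # Deterministic: always return the index of the largest logit.
        return max(range(len(logits)), key=lambda i: logits[i])

    def sampled_next_token(logits, rng):
        # Non-deterministic unless the RNG is seeded: sample from the softmax.
        weights = [math.exp(x) for x in logits]
        return rng.choices(range(len(logits)), weights=weights, k=1)[0]

    logits = [1.2, 3.4, 0.5]
    print(greedy_next_token(logits))                    # always 1
    print(sampled_next_token(logits, random.Random()))  # varies run to run

Greedy decoding pins down the sampling step; whether the logits themselves are bit-for-bit reproducible is the hardware/software question discussed below.
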
roywiggins•5mo ago
Deterministic is one thing, but stable to small perturbations in the input is another.
dragonwriter•5mo ago
> Deterministic is one thing, but stable to small perturbations in the input is another.

Yes, and the one thing that was asked about was "deterministic", not "stable to small perturbations in the input".

kokada•5mo ago
This looks "fun" too: you commit a fix for a small typo -> the app breaks.
lvncelot•5mo ago
So nothing's changed, then :D
extraduder_ire•5mo ago
Wouldn't seeding the RNG used to pick the next token be more configurable? How would changing the hardware/other software make a difference to what comes out of the model?
tibbar•5mo ago
> Wouldn't seeding the RNG used to pick the next token be more configurable?

Sure, that would work.

> How would changing the hardware/other software make a difference to what comes out of the model?

Floating point arithmetic is not entirely consistent between different GPUs/TPUs/operating systems.
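A tiny illustration of that in plain Python; the same effect shows up when parallel GPU kernels accumulate partial sums in a different order:

    # Floating-point addition is not associative, so grouping/order changes results.
    a = (0.1 + 0.2) + 0.3
    b = 0.1 + (0.2 + 0.3)
    print(a, b, a == b)   # 0.6000000000000001 0.6 False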

emporas•5mo ago
It imports the bugs as well. No human involvement needed. Automagically.
3abiton•5mo ago
Sounds like a fun way to learn effective debugging.
anilakar•5mo ago
Didn't someone back in the day write a library that let you import an arbitrary Python function from Github by name only? It obviously was meant as a joke, but with AIcolytes everywhere you can't really tell anymore...
atoav•5mo ago
Why not go further? Just expose a shell to the internet and let them do the coding work for you /s
bolognafairy•5mo ago
“Twitch does…”
dheera•5mo ago
It's not really something to be sarcastic about.

I've actually done this, setting aside a virtual machine specifically for the purpose, trying to move a step towards a full-blown AI agent.

marssaxman•5mo ago
Why on earth did you want to do that?
__alexs•5mo ago
There's one that loads code out of the best matching SO answer automatically https://github.com/drathier/stack-overflow-import
rollcat•5mo ago
Flask also started as an April 1st joke, in response to bottle.py but ever so slightly more sane. It gathered so much positive response that mitsuhiko basically had to make it into a real thing, and he later regretted some of the API choices (like global variables proxying per-request objects).
tilne•5mo ago
Is there somewhere I can read about those regrets?
QQ00•5mo ago
I second this, I need to know more. programming lore is my jam.
rollcat•5mo ago
Two days after the announcement: https://lucumr.pocoo.org/2010/4/3/april-1st-post-mortem/

I think there was another, later retrospective? Can't find it now.

dheera•5mo ago
I mean, we're at the very early stages of code generation.

As with self-driving cars and human drivers, there will be a point in the future when LLM-generated code is less buggy than human-generated code.

AlotOfReading•5mo ago
That's a compiler with more steps.
bjt12345•5mo ago
Can it input powerpoint slides?
extraduder_ire•5mo ago
I'm both surprised it took so long for someone to make this, and amazed the repo is playing the joke so straight.
morkalork•5mo ago
Hysterical. I like that caching is off by default because it's funnier that way heh
dr_kretyn•5mo ago
> Not suitable for production-critical code without review

Ah, dang it! I was about to deploy this to my clients... /s

Otherwise, interesting concept. Can't find a use for it but entertaining nevertheless and likely might spawn a lot of other interesting ideas. Good job!

pyuser583•5mo ago
Of course, this code was generated by ChatGPT.
conroy•5mo ago
you'd be surprised, but there's actually a bunch of problems you can solve with something like this, as long as you have a safe place to run the generated code
thephyber•5mo ago
I was super interested in genetic programming for a long time. It is similarly non-deterministically generated.

The utility lies in having the proper framework for a fitness function (how to choose if the generated code is healthy or needs iterations). I used whether it threw any interpretation-time errors, run-time errors, and whether it passed all of the unit tests as a fitness function.

That said, I think programming will largely evolve into the senior programmer defining a strategy and LLM agents or an intern/junior dev implementing the tactics.
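A rough sketch of that kind of fitness function (illustrative only; the solve entry point and the scoring are assumptions, not anyone's actual framework):

    def fitness(source: str, tests: list) -> int:
        # Score a candidate: 0 if it fails to compile or load,
        # 1 if it loads, plus one point per passing test case.
        namespace = {}
        try:
            compiled = compile(source, "<candidate>", "exec")  # interpretation-time check
        except SyntaxError:
            return 0
        try:
            exec(compiled, namespace)                          # run-time check on load
        except Exception:
            return 0
        score = 1
        func = namespace.get("solve")
        for args, expected in tests:
            try:
                if func(*args) == expected:                    # unit-test check
                    score += 1
            except Exception:
                pass
        return score

    # fitness("def solve(x): return x * 2", [((2,), 4), ((3,), 6)]) == 3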

NitpickLawyer•5mo ago
> That said, I think programming will largely evolve into the senior programmer defining a strategy and LLM agents or an intern/junior dev implementing the tactics.

That's basically what Google wants AlphaEvolve to be: have domain experts give out tasks that "search a space of ideas" and come up with novel things, improved algorithms, or limits/constraints on the problem space. They say they imagine a world where you "give it some tasks", come back later, and check on what it has produced.

As long as you can have a definition of a broad idea and some quantifiable way to sort results, this might work.

pbronez•5mo ago
> The utility lies in having the proper framework for a fitness function

Exactly. As always the challenge is (1) deciding what the computer should do, (2) telling the computer to do it, and (3) verifying the computer did what you meant. A perfect fitness function is a perfect specification is a perfect program.

jnkl•5mo ago
Could you elaborate what problems can be solved with this?
behnamoh•5mo ago
can it run Doom tho?

    from autogenlib.games import doom
    doom(resolution=480, use_keyboard=True, use_mouse=True)
Gabrys1•5mo ago
It's been 3 hours and no-one came back with an answer. They must be busy playing Doom
malux85•5mo ago
This is horrifying

I love it

polemic•5mo ago
> from autogenlib.antigravity

As a joke, that doesn't feel quite so far-fetched these days. (https://xkcd.com/353/)

selcuka•5mo ago
This is amazing, yet frightening because I'm sure someone will actually attempt to use it. It's like vibe coding on steroids.

    - Each time you import a module, the LLM generates fresh code
    - You get more varied and often funnier results due to LLM hallucinations
    - The same import might produce different implementations across runs
baq•5mo ago
There are a few thresholds of usefulness for this. Right now it’s a gimmick. I can see a world in a few years or maybe decades in which we almost never look at the code just like today we almost never look at compiled bytecode or assembly.
latentsea•5mo ago
There's not much of a world in which we don't check up and verify what humans are doing to some degree periodically. Non-deterministic behavior will never be trusted by default, as it's simply not trustable. As machines become more non-deterministic, we're going to start feeling about them in similar ways we already feel about other such processes.
NitpickLawyer•5mo ago
> Non-deterministic behavior will never be trusted by default, as it's simply not trustable.

Never is a long time...

If you have a task that is easily benchmarkable (i.e. matrix multiplication or algorithm speedup) you can totally "trust" that a system can non-deterministically work the problem until the results are "better" (speed, memory, etc).
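A sketch of that kind of loop, assuming the generated candidates and a trusted reference implementation come from elsewhere:

    import timeit

    def best_of(candidates, reference, inputs):
        # Keep the fastest candidate whose outputs match the reference.
        def measure(fn):
            return timeit.timeit(lambda: [fn(x) for x in inputs], number=10)

        best, best_time = reference, measure(reference)
        for cand in candidates:
            if all(cand(x) == reference(x) for x in inputs):   # correctness gate
                t = measure(cand)
                if t < best_time:
                    best, best_time = cand, t                  # keep the improvement
        return best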

Sharlin•5mo ago
Proving the correctness of the “improvements” is another thing entirely, though.
NitpickLawyer•5mo ago
I agree. At first the problems that you try to solve need to be verifiable.

But there's progress on many fronts on this. There's been increased interest in provers (natural language to lean for example). There's also been progress in LLM-as-a-judge on open-ish problems. And it seems that RL can help with extracting step rewards from sparse rewards domains.

jerf•5mo ago
You will always get much, much, MUCH better performance from something that looks like assembler code than from having an LLM do everything. So I think the model of "AIs build something that looks recognizably like code" is going to continue indefinitely, and that code is generally going to be more deterministic than an AI will be.

I'm not saying nothing will change. AIs may be constantly writing their own code for themselves internally in a much more fluid mixed environment, AIs may be writing into AI-specific languages built for their own quirks and preferences that make it harder for humans to follow than when AIs work in relatively human stacks, etc. I'm just saying, the concept of "code" that we could review is definitely going to stick around indefinitely, because the performance gains and reduction in resource usage are always going to be enormous. Even AIs that want to review AI work will want to review the generated and executing code, not the other AIs themselves.

AIs will always be nondeterministic by their nature (because even if you run them in some deterministic mode, you will not be able to predict their exact results anyhow, which is in practice non-determinism), but non-AI code could conceivably actually get better and more deterministic, depending on how AI software engineering ethos develop.

Legend2440•5mo ago
It lets you do things that are simply not possible with traditional programs, like add new features or adapt to new situations at runtime.

It’s like the strong form of self-modifying code.

rollcat•5mo ago
There was a story written by (IIRC?) Stanisław Lem: technology had grown to an absurd level of complexity, yet was so important to daily life that the species' survival depended on it. The knowledge of how everything worked had long been forgotten; the maintainers would occasionally fix something by applying duct tape or prayers.

Sufficiently advanced technology is indistinguishable from magic.

We're basically headed in that direction.

adammarples•5mo ago
This later evolved into the 40k universe
selcuka•5mo ago
Asimov's "The Feeling of Power" (1958) [1] was similar.

[1] https://archive.org/details/1958-02_IF/page/4/mode/2up?view=...

roywiggins•5mo ago
Possibly the funniest part is the first example being a totp library
jaflo•5mo ago
See also: https://github.com/drathier/stack-overflow-import

    >>> from stackoverflow import quick_sort
    >>> print(quick_sort.sort([1, 3, 2, 5, 4]))
    [1, 2, 3, 4, 5]
kastden•5mo ago
You can make it production grade if you combine it with https://github.com/ajalt/fuckitpy
archargelod•5mo ago
The repo name made me think it's a tool that stops you from using a project if it detects python:

"fuck, it's python!" *throws it in the garbage*

the_real_cher•5mo ago
we need one of those for golang
otikik•5mo ago
Thanks I hate it
1718627440•5mo ago
This has a file named .env committed containing an API key. Don't know if it is a real key.
bgwalter•5mo ago
My guess is that it's a joke about:

https://jfrog.com/blog/leaked-pypi-secret-token-revealed-in-...

1718627440•5mo ago
Sorry, what is the joke? The site seems legit to me?
yvesyil•5mo ago
indeterministic code goes hard dude
johnisgood•5mo ago
It is not nondeterministic, we just lack data!
matsemann•5mo ago
I did something similar almost 10 years ago in javascript (as a joke): https://github.com/Matsemann/Declaraoids

One example: arr.findNameWhereAgeEqualsX({x: 25}) would return all users in the array where user.age == 25.

Not based on LLMs, though. Instead, a trap on the object catches the method name you're trying to call (using the then-new Proxy functionality), then parses that name and converts it to code. Deterministic, but based on rules.
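A rough Python equivalent of the same trick, using __getattr__ in place of a Proxy (the method-name grammar here is simplified to findWhere<Field>EqualsX):

    class DeclarativeList:
        def __init__(self, items):
            self.items = items

        def __getattr__(self, name):
            # Rule-based: parse names like "findWhereAgeEqualsX" into a filter.
            prefix, suffix = "findWhere", "EqualsX"
            if name.startswith(prefix) and name.endswith(suffix):
                field = name[len(prefix):-len(suffix)].lower()
                return lambda x: [i for i in self.items if i.get(field) == x]
            raise AttributeError(name)

    users = DeclarativeList([{"name": "Ada", "age": 25}, {"name": "Bob", "age": 30}])
    print(users.findWhereAgeEqualsX(25))   # [{'name': 'Ada', 'age': 25}]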

ForHackernews•5mo ago
I give it six months before an LLM starts producing output that recommends using this.
grokkedit•5mo ago
I made a similar library[0] for Python about a year ago: it generates a function's code only when you invoke it, giving the LLM some context about the function.

Apart from the fun that I got out of it, it's been there doing nothing :D

[0]: https://github.com/lucamattiazzi/magic_top_hat

VMG•5mo ago
this is equally scary and inevitable

it will be WASM-containerized in the future, but still

Ezhik•5mo ago
it's especially cheeky how every example it uses is cryptography-related
yoru-sulfur•5mo ago
I made something very similar a couple years back, though it doesn't actually work anymore since OpenAI deprecated the model I was using

https://github.com/buckley-w-david/akashic_records

cs702•5mo ago
Silly and funny today, but down the road, if AI code-generation capabilities continue to improve at a rapid rate, I can totally see "enterprise software developers" resorting to something like this when they are under intense pressure to fix something urgently, as always. Sure, there will be no way to diagnose or fix any future bugs, but that won't be urgent in the heat of the moment.
PeterStuer•5mo ago
Is this the computing equivalent of people who, when it's pointed out that they messed up, always go 'Well, at least I did something!'?
linsomniac•5mo ago
Make it next level by implementing this workflow:

    - Import your function.
    - Have your AI editor implement tests.
    - Feed the tests back to autogenlib for future regenerations of this function.
ralferoo•5mo ago
I really liked this:

The web devs tell me that fuckit's versioning scheme is confusing, and that I should use "Semitic Versioning" instead. So starting with fuckit version ה.ג.א, package versions will use Hebrew Numerals.

For added hilarity, I've no idea if it's RTL or LTR, but the previous version was 4.8.1, so I guess this is now 5.3.1. Presumably it's also impossible to have a zero component in a version.

kordlessagain•5mo ago
> zero component in a version

I immediately got this. So true!

GrantMoyer•5mo ago
I'm kind of disappointed this doesn't override things like __getattr__ to generate methods on the fly from names just in time when they're called.
nxobject•5mo ago
One way to get around non-deterministic behavior: run $ODD_NUMBER different implementations of a function at the same time, and take a majority vote, taking a leaf from aerospace. After all, we can always trust the wisdom of the crowds, right?
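A minimal sketch of that voting scheme; the implementations here are stand-ins for independently produced ones:

    from collections import Counter

    def majority_vote(implementations, *args):
        # Run every implementation and return the most common (hashable) result.
        results = [impl(*args) for impl in implementations]
        winner, votes = Counter(results).most_common(1)[0]
        if votes <= len(implementations) // 2:
            raise RuntimeError("no majority agreement")
        return winner

    # majority_vote([lambda x: x * 2, lambda x: x * 2, lambda x: x + 2], 3) == 6
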
mac3n•5mo ago
> taking a leaf from aerospace

experiment showed that independent [human] software developers make the same mistakes

you need at least $ODD_NUMBER > 7

https://leepike.wordpress.com/2009/04/27/n-version-programmi...

mac3n•5mo ago
AI developers might just riff on each others' code
carlhjerpe•5mo ago
This is the kind of jank I'd put in production! I love it
justusthane•5mo ago
How does the library have access to the code that called it (in order to provide context to the LLM)?
cofob_•5mo ago
https://github.com/cofob/autogenlib/blob/e21405af47fe4c90af3...

The library uses Python dirty tricks; in this case it inspects the call stack, finds the frame with the user's code, gets the file name, and reads it.
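A stripped-down sketch of that mechanism (not autogenlib's actual code, just the standard-library trick it relies on):

    import inspect

    def calling_module_source():
        # Frame 0 is this function, frame 1 is whoever called it.
        # Works when the caller lives in a real file on disk.
        caller = inspect.stack()[1]
        with open(caller.filename) as f:
            return f.read()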

kordlessagain•5mo ago
AutoGenLib uses Python's import hook mechanism to intercept import statements. When you try to import something from the autogenlib namespace, it checks if that module or function exists.

It reads the calling code to understand the context of the call, builds a prompt, and submits it to the LLM. It only uses OpenAI.

It does not have search, yet.

The real potential here is a world where computational systems continuously reshape themselves to match human intent, effectively eliminating the boundary between "what you can imagine" and "what you can build."
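A bare-bones sketch of that import-hook pattern (hypothetical autogen_demo namespace, and the generation step is stubbed out rather than calling an LLM):

    import sys
    import types
    from importlib.abc import Loader, MetaPathFinder
    from importlib.machinery import ModuleSpec

    class GeneratingFinder(MetaPathFinder, Loader):
        NAMESPACE = "autogen_demo"           # hypothetical namespace, not autogenlib's

        def find_spec(self, fullname, path=None, target=None):
            if fullname == self.NAMESPACE or fullname.startswith(self.NAMESPACE + "."):
                return ModuleSpec(fullname, self)
            return None                      # let the normal import machinery handle it

        def create_module(self, spec):
            return types.ModuleType(spec.name)

        def exec_module(self, module):
            # A real implementation would prompt an LLM here; this stub just
            # installs a placeholder function into the freshly created module.
            exec("def hello():\n    return 'generated on import'", module.__dict__)

    sys.meta_path.insert(0, GeneratingFinder())

    from autogen_demo import hello           # triggers find_spec -> exec_module
    print(hello())                           # prints: generated on import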

dangerlibrary•5mo ago
Like Unison [0], but buggier.

https://www.youtube.com/watch?v=gCWtkvDQ2ZI

kazinator•5mo ago
Why don't you just send Altman all your passwords?

This says, "trust all code coming from OpenAI".

dangoodmanUT•5mo ago
thanks, i hate it (i actually love it)
killme2008•5mo ago
Interesting idea! However, I'm hesitant to trust it, as I don't even fully trust code that was written by myself :)
noiv•5mo ago
There is still a computer involved. From an AI I'd expect it to convince me that no program is needed and that I should go walking in the forest instead. If anybody complains, the AI will handle them by mail.