
Working on a Programming Language in the Age of LLMs

https://ryelang.org/blog/posts/programming-language-in-age-of-llms/
53•todsacerdoti•6mo ago

Comments

dustingetz•6mo ago
see also “Notational Intelligence” https://thesephist.com/posts/notation/

“The value of notation lies in how it enables us to work with new abstractions. With more powerful notation, we can work with ideas that would have been too complex or unwieldy without it. Equipped with better notation, we might think of solutions or hypotheses that would have been previously unthinkable. Without Arabic numerals, we don’t have long division. Without chess notation, the best strategies and openings may not have been played. Without a notation for juggling patterns called Siteswap, many new juggling patterns wouldn’t have been invented. I think notation should be judged by its ability to contribute to and represent previously unthinkable, un-expressible thoughts.”

This is pretty much the whole point of programming languages imo

middayc•6mo ago
Cool, this explains the idea of code as a tool for thinking really well. Haven't read the whole post yet.
AllegedAlec•6mo ago
It'd be interesting to look at some of the stuff Alan Kay talked about. With STEPS he was working on some interesting notions that might actually help here. The entire work they were doing with that was effectively based around creating DSLs for whatever problem area they were working on at the time: GUI, networking; hell, IIRC they implemented TCP by writing a DSL that was able to read the ASCII diagrams and tables in the TCP RFC and use that to implement packets.
middayc•6mo ago
I googled Alan Kay STEPS and got to what seems a very interesting PDF.

I will read it, but just to be certain about "might actually help here" - what is "here"? Do you mean Rye language design generally, LLM-s relating to new languages, or something else? :)

AllegedAlec•6mo ago
I was specifically thinking about integration with LLMs. I feel like if we're able to really get the small problem domain related DSL stuff right we can divide the difficulty of a problem into multiple smaller issues. In my experience with LLMs so far, the major issue by far is it keeping enough in a small context that it reliably 'knows' what it needs to. If you can task it to first create a DSL for a problem domain and then express the solution in that DSL it made before, it might really simplify the problem.

In general I feel like there's some great applicability here in this specific language. The language docs imply a certain degree of homoiconicity, which I think would be really helpful for DSLs like this...
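The DSL-first workflow described above can be sketched in Python (a hypothetical toy, not anything from STEPS or Rye): the "language" is just a table of verbs, so its entire design fits in a prompt, and a solution expressed in it is plain data.

```python
# Hypothetical sketch of the DSL-first idea: define a tiny interpreter
# for a problem domain, then have the model emit programs in that DSL
# instead of general-purpose code. An LLM solving a task in this DSL
# only needs the four verbs below in context.

OPS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

def run(program, text):
    """Interpret a DSL program: a list of verb names applied in order."""
    for op in program:
        text = OPS[op](text)
    return text

# A "solution" in the DSL is just data the model has to produce:
solution = ["strip", "lower", "reverse"]
print(run(solution, "  Hello World  "))  # -> "dlrow olleh"
```

Because the whole language is a small dispatch table, the model never needs to recall syntax from training data; the spec travels with every request.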

middayc•6mo ago
@AllegedAlec Rye is fully runtime homoiconic. Rebol had great emphasis on DSL-s (dialects), and Rye has them too (validation, math, ...), but tries to be more conservative with them, because the main Rye DSL should be quite flexible.

Instead of DSL-s, Rye focuses much more on constructing specialized, limited and isolated contexts (scopes) that have just the functions you need, or just the custom functions you need, while the evaluation mechanism doesn't change (it's the one you, or the LLM, already know).

I haven't thought about contexts + LLM-s yet. I will read the PDF you referenced with interest! Here is a little more info about contexts: https://ryelang.org/meet_rye/specifics/context/
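These contexts can be loosely imitated in Python (a rough analogy only, not actual Rye semantics): a restricted namespace that exposes just the functions a task needs, while evaluation itself stays unchanged.

```python
# Rough Python analogy (not actual Rye): an isolated context is a
# namespace exposing only the functions a task needs; the evaluation
# mechanism itself is the ordinary one.

def make_context(allowed):
    """Build a restricted eval namespace containing only 'allowed' names."""
    return {"__builtins__": {}, **allowed}

math_ctx = make_context({"add": lambda a, b: a + b,
                         "mul": lambda a, b: a * b})

print(eval("add(2, mul(3, 4))", math_ctx))  # -> 14

try:
    # Anything outside the context simply doesn't exist here.
    eval("open('/etc/passwd')", math_ctx)
except NameError:
    print("blocked: this context has no 'open'")
```

(Note: stripping `__builtins__` like this is illustrative, not a real security sandbox.)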

AllegedAlec•6mo ago
I only saw contexts mentioned briefly, but I'll look into them. Sounds interesting!
amccollum•6mo ago
If everyone is using LLMs to write new code, and LLMs are trained on existing code from the internet, that creates an enormous barrier to the adoption of new programming languages, because no new code will be written in them, therefore LLMs will never learn to write the code. It is a self-reinforcing cycle.

I've experienced this to some degree already in using LLMs to write Zig code (ironically, for my own pet programming language). Because Zig is still evolving so quickly, often the code the LLM produces is wrong because it's based on examples targeting incompatible prior versions of the language. Alternatively, if you ask an LLM to try to write code for a more esoteric language (e.g., Faust), the results are generally pretty terrible.

gnulinux•6mo ago
Fine-tuning existing base models on your programming language is pretty practical. [1] You might need a very good and large dataset, but that's hardly a problem for a programming language you're creating, because you'd better have the ability to generate programs for fuzzing your compiler anyway.

[1] There are a lot of models that achieve this. E.g. Goedel-Prover-V2-32B [2] is a model based off of Qwen3-32B and fine tuned on Lean proofs. It works extremely well. I personally tried further fine tuning this model on Agda and although my dataset was pretty sloppy and small, it was pretty successful. If you actually sit down and generate a large dataset with variety it's pretty reachable to fine tune it for any similar prog lang.

[2] https://huggingface.co/Goedel-LM/Goedel-Prover-V2-32B
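The fuzzing-to-dataset idea above might look like this sketch in Python (the file name, prompt format, and toy arithmetic "language" are all illustrative; no specific trainer's API is assumed):

```python
# Sketch: if you can generate random programs in your language and run
# them, you can emit (prompt, completion) pairs as JSONL for fine-tuning.
import json
import random

def random_program():
    """Generate a trivial arithmetic 'program' and compute its result."""
    a, b = random.randint(0, 9), random.randint(0, 9)
    op = random.choice(["+", "*"])
    src = f"{a} {op} {b}"
    return src, eval(src)  # stand-in for running your real interpreter

def make_dataset(n, path="finetune.jsonl"):
    """Write n generated (prompt, completion) pairs, one JSON per line."""
    with open(path, "w") as f:
        for _ in range(n):
            src, result = random_program()
            pair = {"prompt": f"Evaluate: {src}",
                    "completion": str(result)}
            f.write(json.dumps(pair) + "\n")

make_dataset(100)
```

A real dataset would generate programs in your own language and run them through your interpreter; the point is that variety and volume come nearly for free once you have a generator.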

sshine•6mo ago
> enormous barrier to the adoption of new programming languages, because no new code will be written in them, therefore LLMs will never learn to write the code

Let’s see.

I’ve vibe-coded some apps with TypeScript and React, not knowing React at all, because I thought it was the most exemplified framework online.

But I came to a point where my app was too buggy and diverged, and being unable to debug it, I refactored it to Vue, since I personally know it better.

My point is that just because there’s more training data, the quality is not necessarily excellent; I ended up with a mixture of conflicting idioms that seasoned React developers would have frowned upon.

Picking a less exemplified language and supplementing with more of your knowledge of the language might yield better results. E.g. while the AI can’t write better Rust on its own, I don’t mind contributing with Rust code myself more often.

roygbiv2•6mo ago
> But I came to a point where my app was too buggy and diverged, and being unable to debug it, I refactored it to Vue, since I personally know it better.

One of the many pitfalls of using an LLM to write code. It's very easy to find yourself with a codebase you know nothing about and can't progress any further with, because it keeps breaking.

sshine•6mo ago
It was an interesting experiment working with very little clue of the generated code.

I could learn about react and understand the large-scale incongruences / mismatching choices the LLM made for me.

But I already have one reactive framework in my wetware that I can have an educated opinion on.

wolttam•6mo ago
Let's not underestimate LLMs' ability to do in-context learning. Perhaps one can just read the new lang's docs and apply what it already knows from other languages.
middayc•6mo ago
But didn't LLMs read all the math books and still can't really do arithmetic (they need special modes / hacks / Python to do it, I think)?

So why would they be able to "read" the docs and use that knowledge beyond the pattern-matching level? That's why I also assume that tons of examples with results would do better than lang docs, but I haven't tested it yet.
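The "tons of examples with results" approach is cheap to try: assemble a few-shot prompt from (program, output) pairs instead of prose docs. A sketch in Python, with invented example pairs for a hypothetical language:

```python
# Sketch: build a few-shot prompt from (code, result) pairs rather than
# documentation prose. The example programs below are invented for
# illustration; they are not any real language's syntax.

EXAMPLES = [
    ("print 1 + 2", "3"),
    ('print "a" ++ "b"', "ab"),
]

def few_shot_prompt(task):
    """Turn example pairs plus a new task into one completion prompt."""
    shots = "\n".join(f"Input: {code}\nOutput: {out}"
                      for code, out in EXAMPLES)
    return f"{shots}\nInput: {task}\nOutput:"

print(few_shot_prompt("print 2 * 3"))
```

Whether this beats pasting the docs is exactly the untested question above; the sketch just shows how little machinery the experiment needs.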

vukgr•6mo ago
While I don't like to argue for LLM competency, you have to remember that at the end of the day LLMs are word generators. They will always be bad at math unless there is a major structural change.

So while they can't learn arithmetic, they should be able to learn programming languages, given that those are much closer to what they were designed and trained for.

melagonster•6mo ago
What if we require an LLM to write everything in Brainf**? If the language design is small enough to insert into our message every time, maybe it can work well.
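For scale: the whole instruction set is eight characters, so both the spec and an interpreter fit in a few lines, small enough to paste into every message as suggested above. A minimal sketch in Python:

```python
# The entire language "design" as a string you could prepend to every
# prompt, plus a minimal interpreter to check the model's output.
BF_SPEC = ("> right, < left, + inc, - dec, . output, , input, "
           "[ jump past ] if cell is 0, ] jump back to [ if nonzero")

def bf(code, tape_len=100):
    """Run a program (no input support) and return its output string."""
    tape, ptr, out = [0] * tape_len, 0, []
    jumps, stack = {}, []
    for j, c in enumerate(code):          # pre-match bracket pairs
        if c == "[":
            stack.append(j)
        elif c == "]":
            k = stack.pop()
            jumps[k], jumps[j] = j, k
    i = 0
    while i < len(code):
        c = code[i]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: i = jumps[i]
        elif c == "]" and tape[ptr] != 0: i = jumps[i]
        i += 1
    return "".join(out)

print(bf("++++++++[>++++++++<-]>+."))  # -> "A" (8*8+1 = 65)
```

The open question, of course, is whether a spec this small helps when the language itself makes every nontrivial program enormous.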
sshine•6mo ago
There are interesting ideas out there in the landscape of PLD and LLMs, mostly centred around query languages.

https://lmql.ai/

https://github.com/paralleldrive/sudolang-llm-support

https://ben.terhech.de/posts/2025-01-31-llms-vs-programming-... — take-away: output languages like Python and TypeScript fare better, as I’d expect.

Maybe the blog post implies: why make a language the LLMs have zero examples of, and thus can’t synthesize?

I’d still make a language for the heck of it, because programming as a recreational human activity is great.

blamestross•6mo ago
The biggest implication of LLMs for programming is this:

LLMs are autoencoders for sequences. If an LLM can write the code, the entropy of that code is low. We know that already: most human communication is low entropy. But LLMs being good at it implies there is a more efficient structure we could be using. All the embeddings are artifacts of structure, but the ANN model as a whole obfuscates the structures it encodes.

Clearly there are better programming languages, closer fit to our actual intents, than the existing ones. The LLM will never show them to us, we need to go make/find them ourselves.
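One crude way to make "low entropy" concrete is compression ratio: repetitive, predictable code compresses far better than noise. A Python sketch, using compression as a rough proxy for what a sequence model finds predictable:

```python
# Sketch: compression ratio as a crude entropy proxy. Boilerplate-heavy
# code compresses to a small fraction of its size; random text stays
# close to 1.0. That gap is one reading of "an LLM can predict it".
import random
import string
import zlib

def ratio(s):
    """Compressed size divided by original size (lower = more redundant)."""
    b = s.encode()
    return len(zlib.compress(b, 9)) / len(b)

boilerplate = "def get_x(self):\n    return self._x\n" * 40
noise = "".join(random.choice(string.printable)
                for _ in range(len(boilerplate)))

print(f"boilerplate: {ratio(boilerplate):.2f}")  # small fraction
print(f"noise:       {ratio(noise):.2f}")        # much closer to 1.0
```

The argument above is that a better-fitting language would shrink that redundancy at the source, instead of leaving it for the model to soak up.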

thbb123•6mo ago
Thinking of something like APL or J?
dmytrish•6mo ago
Yes, in some sense JavaScript is the pinnacle of programming language design: it's so resilient to chaos that even stochastic parrots can write it with some success.

It's like the absolute minimal threshold of demands for sloppy code to work without immediately falling apart.

seunosewa•6mo ago
The LLMs can easily be trained to teach people how to use new programming languages.
middayc•6mo ago
But will people have interest in learning a new language? :)