frontpage.

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•3m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•5m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•8m ago•1 comment

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
2•pabs3•10m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
1•pabs3•10m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•12m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•12m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•16m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•26m ago•1 comment

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•29m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•33m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•35m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•45m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•49m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•50m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•56m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•56m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•56m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•58m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•1h ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comment

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
4•akagusu•1h ago•1 comment

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
9•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comment

Ask HN: Do we need a language designed specifically for AI code generation?

4•baijum•8mo ago
Let's run a thought experiment. If we were to design a new programming language today with the primary goal of it being written by an AI (like Copilot) and reviewed by a human, what would its core features be?

My initial thoughts are that we would trade many of the conveniences we currently value for absolute, unambiguous clarity. For example:

- Would we get rid of most syntactic sugar? If there's only one, explicit way to write a `for` loop, the AI's output becomes more predictable and easier to review.

- Would we enforce extreme explicitness? Imagine a language where you must write `fn foo(none)` if there are no parameters, just to remove the ambiguity of `()`.

- How would we handle safety? Would features like mandatory visibility (`pub`/`priv`) and explicit ownership annotations for FFI calls become central to the language itself, providing guarantees the reviewer can see instantly?

- Would such a language even be usable by humans for day-to-day work, or would it purely be a compilation target for AI prompts?

What trade-offs would you be willing to make for a language that gave you higher confidence in the code an AI generates?
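The "one explicit way" idea from the bullets above can be sketched in a few lines of Python: a validator for a hypothetical mini-language in which an empty parameter list must be spelled `none` (the `fn foo(none)` rule; the grammar here is invented purely for illustration):

```python
import re

# Hypothetical rule: a declaration is either `fn name(none)` or
# `fn name(a, b, ...)` -- a bare `fn name()` is a syntax error.
FN_DECL = re.compile(r"^fn\s+\w+\((none|\w+(?:,\s*\w+)*)\)$")

def check_decl(line: str) -> bool:
    """True iff the declaration uses the one explicit accepted form."""
    return FN_DECL.match(line.strip()) is not None

print(check_decl("fn foo(none)"))   # explicit empty list: accepted
print(check_decl("fn foo()"))       # ambiguous bare (): rejected
print(check_decl("fn add(a, b)"))   # ordinary parameters: accepted
```

Because there is exactly one accepted spelling, a reviewer (or a trivial linter) can verify the form mechanically instead of guessing what the model meant.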

Comments

dtagames•8mo ago
LLMs don't work the way you think. In order to be useful, a model would have to be trained on large quantities of code written in your new language, which don't exist.

Even after that, it would exhibit all the same problems as existing models do with other languages. The unreliability of LLMs comes from the way they make predictions rather than "retrieve" real answers the way a database would. Changing the content and context (your new language) won't change that.

baijum•8mo ago
That's a very fair and critical point. You're right that we can't change the fundamental, probabilistic nature of LLMs themselves.

But that makes me wonder if the goal should be reframed. Instead of trying to eliminate errors, what if we could change their nature?

The interesting hypothesis to explore, then, is whether a language's grammar can be designed to make an LLM's probabilistic errors fail loudly as obvious syntactic errors, rather than failing silently as subtle, hard-to-spot semantic bugs.

For instance, if a language demands extreme explicitness and has no default behaviors, an LLM's failure to generate the required explicit token becomes a simple compile-time error, not a runtime surprise.

So while we can't "fix" the LLM's core, maybe we can design a grammar that acts as a much safer "harness" for its output.
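The fail-loud-versus-fail-silent contrast can be sketched in Python. The "declaration" fields here are invented; the only point is the difference between a lenient reader that fills in defaults and a strict one that refuses to:

```python
REQUIRED = ("visibility", "name", "params")

def parse_lenient(decl: dict) -> dict:
    # Missing fields silently default: if the model forgets
    # `visibility`, the bug surfaces later as wrong behavior.
    return {"visibility": decl.get("visibility", "pub"),
            "name": decl["name"],
            "params": decl.get("params", [])}

def parse_strict(decl: dict) -> dict:
    # Missing fields fail loudly, like a compile-time error.
    missing = [k for k in REQUIRED if k not in decl]
    if missing:
        raise SyntaxError(f"missing explicit fields: {missing}")
    return dict(decl)

llm_output = {"name": "foo", "params": []}   # model dropped `visibility`

print(parse_lenient(llm_output)["visibility"])  # quietly becomes "pub"
try:
    parse_strict(llm_output)
except SyntaxError as e:
    print("rejected:", e)                        # loud, immediate failure
```

Same probabilistic error in both cases; only the strict grammar turns it into something a reviewer cannot miss.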

dtagames•8mo ago
I would say we already have this language: machine code, or its cousin, assembler. Processor instructions (machine code), which all software ultimately reduces to, are completely explicit and have no default values.

The problem is that people don't like writing assembler, which is how we got Fortran in the first place.

The fundamental issue, then, is with the human-language side of things, not the programming-language side. The LLM is useful because it understands regular English, like "What is the difference between 'let' and 'const' in JS?", which is not something that can be expressed in a programming language.

To get the useful feature we want, natural language understanding, we have to accept the unreliable and predictive nature of the entire technique.

FloatArtifact•8mo ago
What I've always been confused about: why can't we train LLMs to code without ever seeing source code?

If a model understands human language well enough, it should be able to follow the logic laid out in the documentation and map it to symbols to construct code.

dtagames•8mo ago
We have this already. You can ask Cursor to go read the doc on syntax it may not have ever seen and write something that conforms. I used this recently to support a new feature in Lit which I'd never seen before and I doubt is in the training set much, if at all.

You can also describe your own app's syntax, architecture, function signatures, etc. in markdown files or just in chat and Cursor will write code that conforms to your desired syntax, which definitely doesn't exist in the training set.

FloatArtifact•8mo ago
Yes, but that's not how they're primarily trained.

baijum•7mo ago
This project could be one option for new languages: https://genlm.org/genlm-control/

muzani•8mo ago
Generally they work better with words that are more easily readable by humans. They have a lot of trouble with JSON and do YAML much better, for example. Running through more tokens doesn't just increase cost, it lowers quality.

So they'd likely go the other way. It's like how spoken languages have more redundancies built in.
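A crude way to see the overhead difference: render the same record as JSON and as YAML-style text and compare sizes. Character count stands in for token count here, and the sample record is made up:

```python
import json

record = {"name": "widget", "tags": ["a", "b"], "count": 3}

json_text = json.dumps(record, indent=2)
yaml_text = "name: widget\ntags:\n  - a\n  - b\ncount: 3\n"

# JSON spends characters on braces, quotes, and commas that
# YAML's indentation-based layout mostly avoids.
print(len(json_text), len(yaml_text))
```

Every brace and quote is another token the model must emit exactly right, which is one reading of why the leaner format fares better.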

theGeatZhopa•8mo ago
What's needed is a formalization, and for models to be trained on that formalization. I'm not sure a system prompt alone is powerful enough to check and enforce input as definite, exactly formalized expression(s).

I don't think it will work out as easily as "a programming language for LLMs" - but you can always have a discussion with ol' llama