
Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•1m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•1m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•2m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•9m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•16m ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
3•keepamovin•17m ago•1 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•20m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•22m ago•1 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•22m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•27m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•28m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•29m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•31m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
2•breve•32m ago•1 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•35m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•37m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•40m ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•41m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
6•tempodox•41m ago•2 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•46m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•49m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
8•petethomas•52m ago•3 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•56m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•1h ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

The Future of Programming

2•victor_js•9mo ago
I've been mulling over an idea about the long-term future of AI in programming that I believe is both inevitable and transformative (in a "brutal" way, to be honest).

I wanted to share it to get your thoughts and see if you foresee the same implications.

Beyond Co-Pilots and Snippet Generation

We're all seeing how AI can generate code, help debug, or explain snippets. But what if we take this much, much further?

Powerful, Multilingual Base Models: We already have models like Qwen, Llama, Gemini, etc., which are proficient in programming and understand multiple languages. These are our starting point.

The Real Leap: Deep Training on Our Specific Code: This is the game-changer. It's not just about using a generic pre-trained model with limited context. I'm talking about the ability to train (or perform advanced fine-tuning on) one of these models with our entire proprietary codebase: hundreds of megabytes or even gigabytes of our software, our patterns, our internal APIs, our business logic.
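As a purely illustrative sketch of what "training on our entire proprietary codebase" could involve mechanically, here is a minimal Python example that walks a source tree and emits a JSONL corpus for fine-tuning. The repo path, file extensions, and record layout are assumptions made for the example, not anything the post specifies.

    # Sketch only: turn a source tree into a fine-tuning corpus (JSONL).
    # The paths, extensions, and record layout are illustrative assumptions.
    import json
    from pathlib import Path

    SOURCE_ROOT = Path("our-proprietary-codebase")  # hypothetical repo location
    EXTENSIONS = {".py", ".java", ".ts", ".sql"}    # whatever the codebase uses

    def build_corpus(root: Path, out_path: Path) -> int:
        """Write one JSON record per source file; return the record count."""
        count = 0
        with out_path.open("w", encoding="utf-8") as out:
            for path in sorted(root.rglob("*")):
                if not path.is_file() or path.suffix not in EXTENSIONS:
                    continue
                record = {
                    "path": str(path.relative_to(root)),
                    "text": path.read_text(encoding="utf-8", errors="ignore"),
                }
                out.write(json.dumps(record) + "\n")
                count += 1
        return count

    if __name__ == "__main__":
        n = build_corpus(SOURCE_ROOT, Path("corpus.jsonl"))
        print(f"wrote {n} source files to corpus.jsonl")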

The 'Program' Evolves into a Specification: Instead of writing thousands or millions of lines of imperative code as we do today, our primary "programming work" would involve creating and maintaining a high-level specification. This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose. This file would describe what the software should do, its modules, interactions, and objectives.

'Compiling' Becomes 'Training': The "compilation process" would take our specification (let's call it "program.json"). It would use the base model (which might already be pre-trained with our code or would be trained at that moment using our code as the primary corpus). The result of this "compilation" wouldn't be a traditional executable binary, but a highly specialized and optimized AI model that is the functional application.
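To make the "program.json gets compiled into a model" idea concrete, here is a minimal Python sketch of what such a pipeline might look like. The spec fields, the default base-model name, and the fine_tune() helper are hypothetical stand-ins invented for illustration; no such tool is described in the post.

    # Hypothetical "compile = train" pipeline. The spec schema and the
    # fine_tune() step are invented for illustration only.
    import json
    from pathlib import Path

    def load_spec(path: str) -> dict:
        """Load the high-level spec that plays the role of source code."""
        return json.loads(Path(path).read_text(encoding="utf-8"))

    def fine_tune(base_model: str, corpus: str, spec: dict) -> str:
        """Placeholder for the expensive step: specialize the base model on
        the company corpus, guided by the spec; returns a model artifact path."""
        raise NotImplementedError("stand-in for whatever training stack is used")

    def compile_program(spec_path: str) -> str:
        spec = load_spec(spec_path)  # e.g. {"modules": [...], "objectives": [...]}
        return fine_tune(
            base_model=spec.get("base_model", "some-open-weights-model"),
            corpus=spec.get("corpus", "corpus.jsonl"),
            spec=spec,
        )

    # "Recompiling" after a feature change would just be editing program.json
    # and rerunning: artifact = compile_program("program.json")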

Hardware Will Make It Viable: I know that right now, training large models is expensive and slow. But let's think long-term: GPUs 100x more powerful than today's, with terabytes of VRAM, would make this "training-compilation" process for an entire project feasible in a weekend, or even hours. The current "horror of training" would become a manageable process, similar to a large compilation today.

Why Would This Be Absolutely Revolutionary?

Exponential Development and Evolution Speed: Need a new feature or a major change? Modify the high-level specification and "recompile" (retrain the model).

Automatic and Continuous Refactoring: The hell of massive manual refactoring could disappear. If you change the specification or update the base model with new best practices, the "code" (the resulting model) is automatically "refactored" during retraining to align.

The 'Language' is the Model, the 'Program' is the Training Data: The paradigm shifts completely. The true "programming language" lies in the capabilities of the base model and how it can interpret our specifications and learn from our code. The "software" we directly write becomes those specifications and the preparation of data (our existing code) for training.

The Programmer's Role: Evolution or Extinction (Towards AI Analyst/Architect): Line-by-line coding would drastically decrease. The programmer would evolve into an AI systems analyst, an architect of these specifications, a "trainer" guiding the model's learning, and a validator of the generated models. We define the what and how at a much higher level of abstraction.

Custom-Tailored, Ultra-Optimized Software: Each application would be an AI model specifically fine-tuned for its purpose, potentially far more efficient and adapted than modular software assembled piece by piece today.

I know this is years away, and there are many challenges (interpretability of the final model, debugging, security, etc.), but the direction seems clear. We're already seeing the early signs with models like Qwen and the increasing capabilities of fine-tuning.

Comments

jbellis•9mo ago
I've heard this idea from multiple smart people

but spec-to-code with an LLM takes something like six orders of magnitude more work than a traditional compiler; solving two of those OOMs with faster GPUs just doesn't get you there
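A back-of-envelope reading of the numbers in this comment, purely illustrative (the six-orders-of-magnitude figure is the commenter's rough estimate, not a measurement):

    # If spec-to-code via an LLM costs ~10^6x the work of a traditional
    # compile, a 100x (10^2) GPU speedup still leaves a ~10^4x gap.
    llm_overhead_ooms = 6   # commenter's rough estimate
    gpu_speedup_ooms = 2    # "GPUs 100x more powerful"
    remaining_gap = 10 ** (llm_overhead_ooms - gpu_speedup_ooms)
    print(f"remaining cost factor vs. a traditional compiler: ~{remaining_gap:,}x")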

proc0•9mo ago
> This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose.

That shouldn't be needed. The current "promise" is that AI should reason like a human, so in theory (or at least in the original definition of AGI) it should be literally the same as giving instructions to a human engineer.

The problem right now is that the models display higher-than-average expertise, but only in specific and narrow ways. In my opinion we still have narrow AI with LLMs; it's just that it's narrow in language and context processing, which makes it seem like it's doing actual reasoning. If it's doing any reasoning, it is only indirect, by some coincidence of transformers capturing some higher-order structure of the world. What we need is an AI that thinks and reasons like a human, so that it can easily take a task from beginning to end without needing any assistance at all.