
The Future of Programming

1•victor_js•3h ago
I've been mulling over an idea about the long-term future of AI in programming that I believe is both inevitable and transformative (in a "brutal" way, to be honest).

I wanted to share it to get your thoughts and see if you foresee the same implications.

Beyond Co-Pilots and Snippet Generation

We're all seeing how AI can generate code, help debug, or explain snippets. But what if we take this much, much further?

Powerful, Multilingual Base Models: We already have models like Qwen, Llama, Gemini, etc., which are proficient in programming and understand multiple languages. These are our starting point.

The Real Leap: Deep Training on Our Specific Code: This is the game-changer. It's not just about using a generic pre-trained model with limited context. I'm talking about the ability to train (or perform advanced fine-tuning on) one of these models with our entire proprietary codebase: hundreds of megabytes or even gigabytes of our software, our patterns, our internal APIs, our business logic.

The 'Program' Evolves into a Specification: Instead of writing thousands or millions of lines of imperative code as we do today, our primary "programming work" would involve creating and maintaining a high-level specification. This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose. This file would describe what the software should do, its modules, interactions, and objectives.

'Compiling' Becomes 'Training': The "compilation process" would take our specification (let's call it "program.json"). It would use the base model (which might already be pre-trained with our code or would be trained at that moment using our code as the primary corpus). The result of this "compilation" wouldn't be a traditional executable binary, but a highly specialized and optimized AI model that is the functional application.
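To make the idea concrete, a fragment of such a "program.json" might look like the sketch below. Every field name and value here is invented purely for illustration; nothing like this format exists today.

```json
{
  "name": "inventory-service",
  "base_model": "qwen-coder-large",
  "objectives": [
    "expose a REST API for stock levels",
    "reconcile warehouse counts nightly"
  ],
  "modules": [
    { "name": "api", "interacts_with": ["reconciler"] },
    { "name": "reconciler", "interacts_with": ["warehouse-db"] }
  ],
  "constraints": { "max_latency_ms": 100, "pii": "never-log" }
}
```

The point of the format is that it states the what (objectives, modules, constraints), leaving the how to the training-compilation step.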

Hardware Will Make It Viable: I know that right now, training large models is expensive and slow. But let's think long-term: GPUs 100x more powerful than today's, with terabytes of VRAM, would make this "training-compilation" process for an entire project feasible in a weekend, or even hours. The current "horror of training" would become a manageable process, similar to a large compilation today.

Why Would This Be Absolutely Revolutionary?

- Exponential Development and Evolution Speed: Need a new feature or a major change? Modify the high-level specification and "recompile" (retrain the model).

- Automatic and Continuous Refactoring: The hell of massive manual refactoring could disappear. If you change the specification or update the base model with new best practices, the "code" (the resulting model) is automatically "refactored" during retraining to align.

- The 'Language' is the Model, the 'Program' is the Training Data: The paradigm shifts completely. The true "programming language" lies in the capabilities of the base model and how it can interpret our specifications and learn from our code. The "software" we directly write becomes those specifications and the preparation of data (our existing code) for training.

- The Programmer's Role: Evolution or Extinction (Towards AI Analyst/Architect): Line-by-line coding would drastically decrease. The programmer would evolve into an AI systems analyst, an architect of these specifications, a "trainer" guiding the model's learning, and a validator of the generated models. We define the what and the how at a much higher level of abstraction.

- Custom-Tailored, Ultra-Optimized Software: Each application would be an AI model specifically fine-tuned for its purpose, potentially far more efficient and adapted than modular software assembled piece by piece today.

I know this is years away, and there are many challenges (interpretability of the final model, debugging, security, etc.), but the direction seems clear. We're already seeing the early signs with models like Qwen and the increasing capabilities of fine-tuning.
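The edit-spec-then-recompile loop described above can be sketched as a toy driver. To be clear about assumptions: the spec format is invented, and the training step is a stub that just records its inputs, since the actual fine-tuning run is the speculative part.

```python
import json

def compile_spec(spec_path: str) -> dict:
    """'Compile' a spec. In the post's vision this would launch a full
    fine-tuning run over the proprietary codebase; here the training
    step is stubbed out so only the control flow is shown."""
    with open(spec_path) as f:
        spec = json.load(f)
    # Stand-in for "training as compilation": the real output would be a
    # specialized model checkpoint, not this dictionary.
    return {
        "base_model": spec["base_model"],
        "modules": [m["name"] for m in spec["modules"]],
        "status": "trained",
    }

# Adding a feature means editing the spec and "recompiling" (retraining):
spec = {
    "base_model": "some-base-model",
    "modules": [{"name": "api"}, {"name": "reconciler"}],
}
with open("program.json", "w") as f:
    json.dump(spec, f)

artifact = compile_spec("program.json")
print(artifact["status"])  # trained
```

The interesting design question the post raises is exactly what this stub hides: whether a retrain from a changed spec can be made as predictable and repeatable as rerunning a compiler.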

Comments

jbellis•2h ago
I've heard this idea from multiple smart people

but spec-to-code with an LLM takes something like six orders of magnitude more work than a traditional compiler; solving two of those OOMs with faster GPUs just doesn't get you there
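A back-of-envelope version of that claim, with both cost constants being rough assumptions chosen only to show the scale, not measurements:

```python
import math

# Assumed per-line costs (illustrative only):
compiler_ops_per_line = 1e6   # CPU ops for a compiler to process one source line
llm_flops_per_line = 1e12     # ~10 tokens/line at ~1e11 forward-pass FLOPs/token

gap_ooms = math.log10(llm_flops_per_line / compiler_ops_per_line)
print(f"gap: ~{gap_ooms:.0f} orders of magnitude")

# 100x faster GPUs buy back only 2 of those OOMs:
print(f"left after 100x hardware: ~{gap_ooms - 2:.0f} OOMs")
```

Under these assumptions, four orders of magnitude remain even after the hoped-for hardware leap.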

proc0•2h ago
> This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose.

That shouldn't be needed. The current "promise" is that AI should reason like a human, so in theory (or at least under the original definition of AGI) it should be literally the same as giving instructions to a human engineer.

The problem right now is that the models display higher-than-average expertise, but only in specific and narrow ways. In my opinion we still have narrow AI with LLMs; it's just narrow in language and context processing, which makes it seem like it's doing actual reasoning. If it's doing any reasoning at all, it's only indirect, a byproduct of transformers happening to capture some higher-order structure of the world. What we need is an AI that thinks and reasons like a human, so that it can take a task from beginning to end without needing any assistance at all.

A Formal Analysis of Apple's iMessage PQ3 Protocol [pdf]

https://www.usenix.org/system/files/conference/usenixsecurity25/sec25cycle1-prepub-595-linker.pdf
2•luu•5m ago•0 comments

Tesla confirms it has given up on its Cybertruck range extender

https://electrek.co/2025/05/07/tesla-never-make-cybertruck-range-extender-achieve-promised-range/
2•addaon•9m ago•0 comments

Show HN: Destiny Matrix – Free, instant, no-signup numerology analysis

https://destiny-matrix.cc/
2•yxchen1994•11m ago•0 comments

Show HN: Build a Figma plugin that converts vectors to clip-path: shape()

https://www.figma.com/community/plugin/1501841537217920533/shape-converter
1•haxfenx•11m ago•0 comments

Tracker Boot – Be Agile with Tracker Boot

https://trackerboot.com
2•euske•13m ago•0 comments

The Fraying of the US Global Currency Reserve System (2020)

https://www.lynalden.com/fraying-petrodollar-system/
1•felineflock•14m ago•0 comments

Show HN: Raw Binary Program Analysis Tool

https://github.com/nstarke/BaseAddressDiscoverererer
2•bootbloopers•18m ago•0 comments

AI Avatar of Slain Murder Victim Addresses Killer in Court

https://www.washingtonpost.com/nation/2025/05/08/ai-victim-court-sentencing/
2•andygcook•19m ago•0 comments

Institutionalizing Politicized Science

https://www.science.org/doi/10.1126/science.ady6128
4•anigbrowl•26m ago•0 comments

UnitedHealthcare sued by shareholders over reaction to CEO's killing

https://www.nbcnews.com/business/business-news/unitedhealthcare-sued-shareholders-reaction-ceos-killing-rcna205550
3•donohoe•28m ago•1 comments

90s Cable Simulator – Recreating Retro Cable TV with a Raspberry Pi

https://www.youtube.com/watch?v=CDW1wokbRiQ
2•bane•29m ago•0 comments

AI is already eating its own: Prompt engineering is quickly going extinct

https://www.fastcompany.com/91327911/prompt-engineering-going-extinct
3•ajdude•32m ago•0 comments

Aetna reimburses 25% less than what they claim (case study)

https://johnsonkevin.com/2025/05/08/aetna-underpays-by-25-percent.html
7•kevin499•40m ago•0 comments

ChatGPT's deep research tool gets a GitHub connector to answer questions about code

https://techcrunch.com/2025/05/08/chatgpts-deep-research-tool-gets-a-github-connector-to-answer-questions-about-code/
1•badmonster•41m ago•0 comments

Audiobookshelf: Self-hosted audiobook and podcast server

https://www.audiobookshelf.org/
2•fjk•42m ago•0 comments

Spacetop AR desktop is now available as a Windows app

https://liliputing.com/spacetop-ar-desktop-is-now-available-as-a-windows-app/
1•PaulHoule•44m ago•0 comments

Research Topics in Knowledge Graph

https://github.com/heathersherry/Knowledge-Graph-Tutorials-and-Papers
2•badmonster•46m ago•0 comments

BoquilaHUB 0.2 – AI for Biodiversity

https://github.com/boquila/boquilahub/releases/tag/v0.2.0
1•jdiaz97•47m ago•0 comments

Show HN: The weekly active users of my fake plugin TabTab have exceeded 500

https://chromewebstore.google.com/detail/tabtab-tab-management-too/bplfdojoimpegfcgepljdbfjdalmcffa
1•jackiefeng•1h ago•2 comments

NIH Bans New Funding from U.S. Scientists to Partners Abroad

https://www.nytimes.com/2025/05/06/health/nih-us-scientist-funding-foreign-research.html
8•insane_dreamer•1h ago•1 comments

The Most Valuable Commodity in the World Is Friction

https://kyla.substack.com/p/the-most-valuable-commodity-in-the
1•walterbell•1h ago•0 comments

Ask HN: Privacy concerns when using AI assistants for coding?

4•Kholin•1h ago•4 comments

Show HN: Boost Reader – An online reader I built to stop context switching

https://boost-reader.vercel.app/
1•beaniez•1h ago•1 comments

Commercial Solutions for Classified CSfC for NSA to deliver secure cybersecurity

https://www.nsa.gov/Resources/Commercial-Solutions-for-Classified-Program/
2•Bluestein•1h ago•0 comments

The Centuries-Long Struggle to Make English Words Behave

https://www.nytimes.com/2025/04/15/books/review/enuf-is-enough-gabe-henry-pronoun-trouble-john-mcwhorter.html
2•pseudolus•1h ago•2 comments

Management Cybernetics Basics for Urbanists and YIMBYs

https://www.governance.fyi/p/management-cybernetics-101-for-urbanists
3•RetiredRichard•1h ago•0 comments

Tech and Non-Tech Stacks to Run Listen Notes (2025)

https://www.listennotes.com/blog/tech-non-tech-stacks-to-run-listen-notes-2025-113/
1•wenbin•1h ago•0 comments

Some __nonstring__ Turbulence

https://lwn.net/Articles/1018486/
1•signa11•1h ago•0 comments

Alphabet's share price plunges on traffic drop testimony

https://finance.yahoo.com/news/alphabets-share-price-plunges-traffic-163947084.html
6•donohoe•1h ago•1 comments

A Conversation with Jony Ive

https://www.youtube.com/watch?v=wLb9g_8r-mE
2•Jhsto•1h ago•0 comments