

The Future of Programming

2•victor_js•9mo ago
I've been mulling over an idea about the long-term future of AI in programming that I believe is both inevitable and transformative (in a "brutal" way, to be honest).

I wanted to share it to get your thoughts and see if you foresee the same implications.

Beyond Co-Pilots and Snippet Generation

We're all seeing how AI can generate code, help debug, or explain snippets. But what if we take this much, much further?

- Powerful, Multilingual Base Models: We already have models like Qwen, Llama, Gemini, etc., which are proficient in programming and understand multiple languages. These are our starting point.

- The Real Leap: Deep Training on Our Specific Code: This is the game-changer. It's not just about using a generic pre-trained model with limited context. I'm talking about the ability to train (or perform advanced fine-tuning on) one of these models with our entire proprietary codebase: hundreds of megabytes or even gigabytes of our software, our patterns, our internal APIs, our business logic.

- The 'Program' Evolves into a Specification: Instead of writing thousands or millions of lines of imperative code as we do today, our primary "programming work" would involve creating and maintaining a high-level specification. This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose. This file would describe what the software should do, its modules, interactions, and objectives.

- 'Compiling' Becomes 'Training': The "compilation process" would take our specification (let's call it "program.json"). It would use the base model (which might already be pre-trained with our code, or would be trained at that moment using our code as the primary corpus). The result of this "compilation" wouldn't be a traditional executable binary, but a highly specialized and optimized AI model that is the functional application.
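The "program.json" idea and its training-as-compilation step could be sketched roughly as follows. Everything here is invented for illustration — the spec fields, the `fine_tune` stub, and `compile_to_model` are hypothetical names, not any existing tool or framework:

```python
# Hypothetical sketch of "compiling" a high-level spec into a specialized model.
# All names (spec fields, fine_tune, compile_to_model) are invented for
# illustration; fine_tune stands in for an expensive real training run.

def fine_tune(base_model: str, corpus: list[str], spec: dict) -> dict:
    """Stand-in for a fine-tuning run; returns a 'model' artifact."""
    return {
        "base": base_model,
        "trained_on_files": len(corpus),
        "objectives": spec["objectives"],
    }

def compile_to_model(spec: dict, corpus: list[str]) -> dict:
    """The 'compiler': spec + proprietary code in, specialized model out."""
    return fine_tune(spec["base_model"], corpus, spec)

# A toy "program.json"-style specification.
spec = {
    "base_model": "qwen-coder",
    "modules": ["billing", "inventory"],
    "objectives": ["expose REST API", "enforce business rules"],
}

model = compile_to_model(spec, corpus=["billing.py", "inventory.py"])
print(model["trained_on_files"])  # 2
```

The point of the sketch is only the shape of the pipeline: the spec plus the existing codebase are the inputs, and the artifact that comes out is the application itself.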

Hardware Will Make It Viable: I know that right now, training large models is expensive and slow. But let's think long-term: GPUs 100x more powerful than today's, with Terabytes of VRAM, would make this "training-compilation" process for an entire project feasible in a weekend, or even hours. The current "horror of training" would become a manageable process, similar to a large compilation today.
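As a rough illustration of that scaling claim (the 30-day baseline is an arbitrary figure chosen for the arithmetic, not an estimate for any real model):

```python
# Back-of-envelope: how 100x faster hardware shrinks a "training-compilation".
# The 30-day baseline is an illustrative assumption, not a measurement.
baseline_days = 30            # hypothetical full retrain on today's hardware
speedup = 100                 # the 100x GPU assumption from the post
hours = baseline_days * 24 / speedup
print(hours)  # 7.2 -> an overnight job rather than a month
```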

Why Would This Be Absolutely Revolutionary?

- Exponential Development and Evolution Speed: Need a new feature or a major change? Modify the high-level specification and "recompile" (retrain the model).

- Automatic and Continuous Refactoring: The hell of massive manual refactoring could disappear. If you change the specification or update the base model with new best practices, the "code" (the resulting model) is automatically "refactored" during retraining to align.

- The 'Language' is the Model, the 'Program' is the Training Data: The paradigm shifts completely. The true "programming language" lies in the capabilities of the base model and how it can interpret our specifications and learn from our code. The "software" we directly write becomes those specifications and the preparation of data (our existing code) for training.

- The Programmer's Role: Evolution or Extinction (Towards AI Analyst/Architect): Line-by-line coding would drastically decrease. The programmer would evolve into an AI systems analyst, an architect of these specifications, a "trainer" guiding the model's learning, and a validator of the generated models. We define the what and the how at a much higher level of abstraction.

- Custom-Tailored, Ultra-Optimized Software: Each application would be an AI model specifically fine-tuned for its purpose, potentially far more efficient and adapted than the modular software assembled piece by piece today.

I know this is years away, and there are many challenges (interpretability of the final model, debugging, security, etc.), but the direction seems clear. We're already seeing the early signs with models like Qwen and the increasing capabilities of fine-tuning.

Comments

jbellis•9mo ago
I've heard this idea from multiple smart people.

But spec-to-code with an LLM takes something like six orders of magnitude more work than a traditional compiler; solving two of those OOMs with faster GPUs just doesn't get you there.

proc0•9mo ago
> This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose.

That shouldn't be needed. The current "promise" is that AI should reason like a human, so in theory (or at least under the original definition of AGI) it should be literally the same as giving instructions to a human engineer.

The problem right now is that the models display higher-than-average expertise, but only in specific and narrow ways. In my opinion we still have narrow AI with LLMs; it's just narrow in language and context processing, which makes it seem like it's doing actual reasoning. If it's doing any reasoning, it is only indirectly, by some coincidence that transformers capture some higher-order structure of the world. What we need is an AI that thinks and reasons like a human, so that it can take a task from beginning to end without needing any assistance at all.