
The Future of Programming

2•victor_js•9mo ago
I've been mulling over an idea about the long-term future of AI in programming that I believe is both inevitable and transformative (in a "brutal" way, to be honest).

I wanted to share it to get your thoughts and see if you foresee the same implications.

Beyond Co-Pilots and Snippet Generation

We're all seeing how AI can generate code, help debug, or explain snippets. But what if we take this much, much further?

Powerful, Multilingual Base Models: We already have models like Qwen, Llama, Gemini, etc., which are proficient in programming and understand multiple languages. These are our starting point.

The Real Leap: Deep Training on Our Specific Code: This is the game-changer. It's not just about using a generic pre-trained model with limited context. I'm talking about the ability to train (or perform advanced fine-tuning on) one of these models with our entire proprietary codebase: hundreds of megabytes or even gigabytes of our software, our patterns, our internal APIs, our business logic.

The 'Program' Evolves into a Specification: Instead of writing thousands or millions of lines of imperative code as we do today, our primary "programming work" would involve creating and maintaining a high-level specification. This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose. This file would describe what the software should do, its modules, interactions, and objectives.

'Compiling' Becomes 'Training': The "compilation process" would take our specification (let's call it "program.json"). It would use the base model (which might already be pre-trained with our code, or would be trained at that moment using our code as the primary corpus). The result of this "compilation" wouldn't be a traditional executable binary, but a highly specialized and optimized AI model that is the functional application.
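As a thought experiment, such a "program.json" might look something like the following. Every field name here is invented purely for illustration; no such format exists today:

```json
{
  "name": "invoicing-service",
  "base_model": "any capable code model, e.g. Qwen or Llama",
  "training_corpus": ["./src/**", "./internal-apis/**"],
  "modules": {
    "billing": {
      "objective": "generate and send monthly invoices",
      "interacts_with": ["auth", "email-gateway"]
    },
    "auth": {
      "objective": "authenticate users against the internal directory"
    }
  },
  "constraints": ["PCI-DSS compliance", "p99 latency under 200ms"]
}
```

The key property is that the spec stays declarative: it names objectives and constraints, and the "compiler" (the training process) is responsible for realizing them.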

Hardware Will Make It Viable: I know that right now, training large models is expensive and slow. But let's think long-term: GPUs 100x more powerful than today's, with Terabytes of VRAM, would make this "training-compilation" process for an entire project feasible in a weekend, or even hours. The current "horror of training" would become a manageable process, similar to a large compilation today.
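The hardware argument can be sanity-checked with a back-of-envelope calculation. All numbers below are rough assumptions (model size, bytes per token, sustained throughput), using the standard ~6·N·D estimate for training FLOPs:

```python
# Back-of-envelope cost of "compilation as training".
# Every constant here is an assumption chosen for illustration.
params = 7e9                     # a 7B-parameter base model
codebase_gb = 1.0                # 1 GB of proprietary source code
tokens = codebase_gb * 1e9 / 4   # ~4 bytes per token -> 2.5e8 tokens
epochs = 3                       # a few passes over the corpus
flops = 6 * params * tokens * epochs   # standard ~6*N*D training-FLOPs estimate

gpu_flops = 1e15                 # one modern accelerator at assumed sustained rate
hours = flops / gpu_flops / 3600

print(f"{hours:.2f} h today, {hours / 100:.3f} h on 100x hardware")
```

Under these assumptions the fine-tune takes on the order of a working day on one accelerator today, and minutes on 100x hardware, which is roughly the "large compilation" regime the post describes. Whether spec-driven training actually fits in that budget is, of course, the open question.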

Why Would This Be Absolutely Revolutionary?

Exponential Development and Evolution Speed: Need a new feature or a major change? Modify the high-level specification and "recompile" (retrain the model).

Automatic and Continuous Refactoring: The hell of massive manual refactoring could disappear. If you change the specification or update the base model with new best practices, the "code" (the resulting model) is automatically "refactored" during retraining to align.

The 'Language' is the Model, the 'Program' is the Training Data: The paradigm shifts completely. The true "programming language" lies in the capabilities of the base model and how it can interpret our specifications and learn from our code. The "software" we directly write becomes those specifications and the preparation of data (our existing code) for training.

The Programmer's Role: Evolution or Extinction (Towards AI Analyst/Architect): Line-by-line coding would drastically decrease. The programmer would evolve into an AI systems analyst, an architect of these specifications, a "trainer" guiding the model's learning, and a validator of the generated models. We define the what and the how at a much higher level of abstraction.

Custom-Tailored, Ultra-Optimized Software: Each application would be an AI model specifically fine-tuned for its purpose, potentially far more efficient and adapted than the modular software assembled piece by piece today.

I know this is years away, and there are many challenges (interpretability of the final model, debugging, security, etc.), but the direction seems clear. We're already seeing the early signs with models like Qwen and the increasing capabilities of fine-tuning.
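The "modify spec, recompile, redeploy" loop described above can be sketched in a few lines. This is a thought-experiment skeleton, not a real API: `compile_to_model` is a hypothetical stand-in that just derives a reproducible artifact id, where a real system would launch a fine-tuning run:

```python
import hashlib
import json

def compile_to_model(spec: dict, corpus_digest: str) -> str:
    """Hypothetical 'training-as-compilation' step.

    In this sketch it only hashes the inputs into an artifact id;
    a real system would fine-tune a base model on the codebase,
    guided by the spec, and return the trained model.
    """
    blob = json.dumps(spec, sort_keys=True) + corpus_digest
    return "model-" + hashlib.sha256(blob.encode()).hexdigest()[:12]

spec = {"modules": ["billing", "auth"], "objective": "invoice customers monthly"}
corpus_v1 = hashlib.sha256(b"codebase-v1").hexdigest()

build_1 = compile_to_model(spec, corpus_v1)
build_2 = compile_to_model(spec, corpus_v1)
# Same spec + same corpus -> same artifact, like a reproducible build.
assert build_1 == build_2

spec["modules"].append("reporting")        # change the high-level specification...
build_3 = compile_to_model(spec, corpus_v1)
assert build_3 != build_1                  # ...and "recompilation" yields a new model
```

One caveat the sketch papers over: real training runs are stochastic, so the reproducible-build property asserted here is itself a hard open problem for this paradigm.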

Comments

jbellis•9mo ago
I've heard this idea from multiple smart people

but spec-to-code with an LLM takes something like six orders of magnitude more work than a traditional compiler; solving two of those OOMs with faster GPUs just doesn't get you there
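Taking the comment's figures at face value (both numbers are the commenter's rough estimates, not measurements), the arithmetic of the objection is:

```python
# Orders-of-magnitude (OOM) gap argument, using the comment's own figures.
gap_ooms = 6          # LLM spec-to-code vs. a traditional compiler (claimed)
gpu_speedup_ooms = 2  # what 100x-faster hardware buys you

remaining = gap_ooms - gpu_speedup_ooms
print(f"Remaining gap: 10^{remaining} = {10 ** remaining:,}x")
```

That is, even granting the 100x hardware, "compilation" would still be about 10,000x more expensive than it is today under these estimates.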

proc0•9mo ago
> This could be a highly structured JSON file, YAML, or a new declarative language designed for this purpose.

That shouldn't be needed. The current "promise" is that AI should reason like a human, so in theory (or at least in the original definition of AGI) it should be literally the same as giving instructions to a human engineer.

The problem right now is that the models display higher-than-average expertise, but only in specific and narrow ways. In my opinion we still have narrow AI with LLMs; it's just that it's narrow in language and context processing, which makes it seem like it's doing actual reasoning. If it's doing any reasoning, it's only indirect, a coincidence of transformers capturing some higher-order structure of the world. What we need is an AI that thinks and reasons like a human, so that it can take a task from beginning to end without needing any assistance at all.