
OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
1•schwentkerr•3m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
1•blenderob•5m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
1•gmays•5m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
1•gurjeet•6m ago•0 comments

Show HN: I built a toy compiler as a young dev

https://vire-lang.web.app
1•xeouz•7m ago•0 comments

You don't need a Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•8m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
1•nicholascarolan•10m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•10m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•10m ago•0 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•11m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
5•mindracer•12m ago•1 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•12m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
1•Brajeshwar•13m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
1•Brajeshwar•13m ago•0 comments

Show HN: Built a space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•13m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•14m ago•0 comments

Micro-Front Ends in 2026: Architecture Win or Enterprise Tax?

https://iocombats.com/blogs/micro-frontends-in-2026
1•ghazikhan205•16m ago•0 comments

These White-Collar Workers Actually Made the Switch to a Trade

https://www.wsj.com/lifestyle/careers/white-collar-mid-career-trades-caca4b5f
1•impish9208•16m ago•1 comments

The Wonder Drug That's Plaguing Sports

https://www.nytimes.com/2026/02/02/us/ostarine-olympics-doping.html
1•mooreds•17m ago•0 comments

Show HN: Which chef knife steels are good? Data from 540 Reddit threads

https://new.knife.day/blog/reddit-steel-sentiment-analysis
1•p-s-v•17m ago•0 comments

Federated Credential Management (FedCM)

https://ciamweekly.substack.com/p/federated-credential-management-fedcm
1•mooreds•17m ago•0 comments

Token-to-Credit Conversion: Avoiding Floating-Point Errors in AI Billing Systems

https://app.writtte.com/read/kZ8Kj6R
1•lasgawe•17m ago•1 comments

The Story of Heroku (2022)

https://leerob.com/heroku
1•tosh•18m ago•0 comments

Obey the Testing Goat

https://www.obeythetestinggoat.com/
1•mkl95•18m ago•0 comments

Claude Opus 4.6 extends LLM Pareto frontier

https://michaelshi.me/pareto/
1•mikeshi42•19m ago•0 comments

Brute Force Colors (2022)

https://arnaud-carre.github.io/2022-12-30-amiga-ham/
1•erickhill•22m ago•0 comments

Google Translate apparently vulnerable to prompt injection

https://www.lesswrong.com/posts/tAh2keDNEEHMXvLvz/prompt-injection-in-google-translate-reveals-ba...
1•julkali•22m ago•0 comments

(Bsky thread) "This turns the maintainer into an unwitting vibe coder"

https://bsky.app/profile/fullmoon.id/post/3meadfaulhk2s
1•todsacerdoti•23m ago•0 comments

Software development is undergoing a Renaissance in front of our eyes

https://twitter.com/gdb/status/2019566641491963946
1•tosh•23m ago•0 comments

Can you beat ensloppification? I made a quiz for Wikipedia's Signs of AI Writing

https://tryward.app/aiquiz
1•bennydog224•24m ago•1 comments

Ask HN: Is it still worth learning a new programming language?

16•xparadigm•1mo ago
I have been writing Python code for a few years now, but I feel like LLMs can write much better code than me. I used to keep myself updated with newer technology, but now I am losing interest. I was interested in learning Rust, but I don't find any motivation now since I can just vibe code with Rust. Any thoughts on that?

Comments

ben_w•1mo ago
I found myself losing interest, but for different reasons: a different magic tool besides LLMs, one that promised to solve problems I never had but which has become necessary as most job adverts call for it.

Reactive.

Not just React itself, but the paradigm, so also SwiftUI.

I never had a problem with "massive ViewControllers"; the magic that's supposed to glue it all together is just a little bit fragile and hard to debug when it does break, and the syntactic sugar (at least for SwiftUI) is just self-similar enough for me to keep mixing it up.

But learning new languages? Nah, I'm currently learning/getting experience with JavaScript and Ruby by code-reviewing LLM output.

Antibabelic•1mo ago
If you don't actually want to write code, there's no reason to learn anything. The question is: if LLMs can write much better code than you, what does your employer need you for?
kevin061•1mo ago
Your employer needs you because writing code was never the hardest part of programming and software engineering in general. The hardest part is managing expectations, responsibilities, cross-team communication, multi-domain expertise, and corporate bureaucracy, and pushing back against unnecessary requirements and constraints. None of that is something LLMs can solve, and they are especially terrible at pushing back.
gus_massa•1mo ago
I agree. I remember once the full specification I got was

> Enough

After talking for 4 hours over 3 coffee cups, I had enough corner cases and the main case to understand what they wanted. A week later I got a list of criteria that could be programmed. Five years later most of the unusual but annoying rough corners were fixed. We still had a button to manually approve the weird cases.

2rsf•1mo ago
LLMs are far from perfect; any production-grade code must go through human inspection unless it's a tiny ad hoc app. This means that you need to be familiar with the language and the environment to get good-quality code.
chistev•1mo ago
Specialize
gaganuk•1mo ago
What kind of LLM can write prod-grade code right now? I think LLMs should merely be used as a tool.
lordkrandel•1mo ago
Only juniors can think that. You can "vibe code" with Rust? And who is doing the reviews? Verifying requirements, performance, security? You must know the language very well to operate at a senior level.
kypro•1mo ago
Agreed. There's no way someone can vibe code production-quality code today...

Interestingly as AI models are becoming "more competent" I'm finding more and more issues with AI generated code in the project I work on...

Whenever AI is used by a more junior dev (or a senior dev who simply can't be assed), you always find strange patterns that a senior would never have produced...

Typically the code works, but there might be subtle security issues or just unusual coding patterns where it's apparent an LLM has written slop; and instead of taking a step back and reconsidering its approach when errors crop up, the LLM tends to just add layers of complexity to patch over that slop.

These problems obviously compound if left unchecked.

I actually prefer how things were last year, when coding models were less competent, because at least if a problem was hard enough they'd get nowhere. Today they're good enough to keep hacking until the slop they write is just about working.
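
To make that concrete, here is a made-up sketch (not code from the thread) of what "layers of complexity to patch over slop" tends to look like: a KeyError surfaces, and the generated fix swallows the symptom instead of questioning the data.

    # Made-up example of patching over slop rather than fixing the cause.
    # v1 assumes every record has a "price" key.
    def total_v1(records):
        return sum(r["price"] for r in records)

    # Typical generated "fix" once a KeyError crops up: wrap it and move on,
    # silently dropping malformed records.
    def total_v2(records):
        total = 0
        for r in records:
            try:
                total += r["price"]
            except (KeyError, TypeError):
                pass
        return total

    # What a reviewer would push for: make the assumption explicit and fail
    # loudly, so the bad data gets fixed where it is produced.
    def total_v3(records):
        missing = [r for r in records if "price" not in r]
        if missing:
            raise ValueError(f"{len(missing)} records missing 'price'; fix upstream")
        return sum(r["price"] for r in records)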

In regard to OP's question though, I suspect there's less point in playing around with different technologies just to get some basic understanding of how they work today (LLMs can do this). But if you want to be able to guide LLMs towards good solutions and ensure the code being produced in the era of AI is good, then having engineers with a deep understanding of the technologies they're using is very important.

apothegm•1mo ago
If you don’t know the language better than the LLM, you can’t notice when it’s making terrible decisions. Use the LLM to accelerate your ramp-up and make you more productive while learning, but it’s not a replacement for learning.
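
As a hypothetical illustration (mine, not the commenter's) of a decision that is easy to miss without knowing the language: a mutable default argument in Python looks perfectly reasonable in generated code, yet it shares state across calls.

    # Hypothetical example: the default list is created once, at function
    # definition time, so every call without an explicit list shares it.
    def add_tag_buggy(tag, tags=[]):
        tags.append(tag)
        return tags

    print(add_tag_buggy("a"))  # ['a']
    print(add_tag_buggy("b"))  # ['a', 'b']  <- state leaked from the first call

    # The idiomatic fix: default to None and build a fresh list per call.
    def add_tag(tag, tags=None):
        if tags is None:
            tags = []
        tags.append(tag)
        return tags

    print(add_tag("a"))  # ['a']
    print(add_tag("b"))  # ['b']
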
eevmanu•1mo ago
I find the premise reasonable, though I do have an observation.

The current AI hype may have placed us in a filter bubble or echo chamber, shaping our conclusions. These highly specialized algorithms can nudge or reward us for thinking in specific ways.

Regarding programming languages, there is immense value in understanding internal primitives.

As an example, consider concurrency primitives. Different languages provide different levels of abstraction: high-level library support in Python, the event-loop structure in JavaScript, compiler-level implementations in Rust and C++, runtime-intrinsic mechanisms in Go and Java, and virtual-machine intrinsics, as in Erlang.

By viewing languages through this lens, you recognize that each implements these primitives differently, allowing you to choose the most effective tool for the job.
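
To make the contrast concrete, here is a minimal sketch (my own illustration, not part of the original comment, assuming Python 3) of what "high-level library support" means in Python: the same two-tasks-at-once idea expressed once with the asyncio event loop and once with OS threads.

    # Illustration only: Python exposes concurrency as library-level primitives
    # (asyncio, threading) rather than compiler- or runtime-core features.
    import asyncio
    import threading
    import time

    async def fetch_async(name):
        await asyncio.sleep(0.1)   # cooperative yield back to the event loop
        return f"{name} done"

    def fetch_blocking(name, results):
        time.sleep(0.1)            # blocks only this OS thread
        results.append(f"{name} done")

    async def main():
        # Event-loop concurrency: tasks interleave at await points.
        print(await asyncio.gather(fetch_async("a"), fetch_async("b")))

        # Thread-based concurrency: the OS scheduler interleaves the threads.
        results = []
        threads = [threading.Thread(target=fetch_blocking, args=(n, results))
                   for n in ("c", "d")]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(results)

    asyncio.run(main())

In Go the same idea is a goroutine scheduled by the runtime, and in Rust it is a compiler-generated future driven by whichever executor you pick; knowing where each language draws that line is exactly the kind of understanding being described here.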

If your goal is to assess the short-term economic value of a technology, your logic is understandable. However, learning new languages and tools remains worthwhile. When AI agents begin invoking these tools on the fly, you may not know whether a specific choice is the most effective one. Without this knowledge, you will lack the grounding to challenge the AI's decisions.

In the long run, making the effort to master these concepts yields far greater value as a software engineer. It enables you to understand the rationale behind applying a precise tool to a precise task.

There are valid arguments supporting various perspectives on this. However, while any approach can be useful, this discussion highlights the need for wisdom: the awareness of one's own biases. As I noted earlier, filter bubbles can distort judgment. Continuously questioning your conclusions helps ensure you move toward the best outcomes. I hope you find this recommendation useful.

whatamidoingyo•1mo ago
For the longest time, I wanted to really dive deep into lower-level learning (e.g. C, Assembly, HDL, chips). LLMs temporarily killed my motivation to continue learning C. I wanted to build a clipboard history similar to Windows 11's, but for a Linux-based OS. I prompted ChatGPT for the code, and it spit some out. It was pretty bad, nowhere near a finished project. I deleted the LLM code and started anew.

I remembered why I wanted to learn this stuff. It's not for money, or to look cool.

It's for the fascination I have for computing.

How do electrons flow through a wire? How do the chips within a computer direct that flow to produce an image on a screen? These questions are mind-blowing for me. I don't think LLMs can kill this fascination. Although, for web programming, sure. I always hated front-end programming, and now I don't really have to do it (I don't have the same fascination for the why of such tech). So will I ever learn a new front-end framework? Most likely not.

sloaken•1mo ago
Nand to Tetris - you can thank me later
whatamidoingyo•1mo ago
Haha, I've taken it. Incredible course. I'll see your suggestion and raise you the Turing Complete game :)
softwaredoug•1mo ago
LLMs can write. Often with more clarity than I can. But I still like to write, because writing is thinking. And I want to hone my thinking about the problem.

The same can be said about coding. Code to think and explore a problem. See how different languages help you approach a problem. Get a deeper understanding about a topic by coding.

sky2224•1mo ago
My question for you is: how do you not learn a language by having an LLM aid in writing it for you?

What I've found now is LLMs allow me to use/learn new languages, frameworks, and other technologies while being significantly more productive than before. On the flip side, as others have mentioned, I shoot myself in the foot more often.

Basically, I output more, I see more pitfalls upfront, and I get proficient sooner. This can be the case for you too if you take an active development approach rather than a passive one where you just blindly accept whatever a model outputs.

gethly•1mo ago
If you want to be a serious programmer, you have to learn a compiled language, not the Python garbage. And definitely not Rust, as that is for seasoned programmers. Java/Scala/Kotlin, Zig, Odin, Jai (next year), C, C++, C#, Go, and Swift are safe bets with employment opportunities that allow you to also venture into other languages with ease.

Forget about LLMs. That's for normies.

I totally understand the lack of motivation. Doing something just for the sake of doing it is pointless. So you have to find a project to do, and the language will then become merely a tool instead of the whole point. So find out what you need or want to do - maybe a 3D engine, mobile application, database, search engine, image recognition/OCR, maybe robotics or some Arduino automation or whatever - and just start working on it.