frontpage.

Hosting a website on a disposable vape

https://bogdanthegeek.github.io/blog/projects/vapeserver/
459•dmazin•3h ago•112 comments

Launch HN: Trigger.dev (YC W23) – Open-source platform to build reliable AI apps

36•eallam•1h ago•18 comments

CubeSats are fascinating learning tools for space

https://www.jeffgeerling.com/blog/2025/cubesats-are-fascinating-learning-tools-space
70•warrenm•2h ago•14 comments

Programming Deflation

https://tidyfirst.substack.com/p/programming-deflation
46•dvcoolarun•2h ago•24 comments

How big a solar battery do I need to store all my home's electricity?

https://shkspr.mobi/blog/2025/09/how-big-a-solar-battery-do-i-need-to-store-all-my-homes-electric...
67•FromTheArchives•3h ago•109 comments

RustGPT: A pure-Rust transformer LLM built from scratch

https://github.com/tekaratzas/RustGPT
266•amazonhut•6h ago•118 comments

Removing newlines in FASTA file increases ZSTD compression ratio by 10x

https://log.bede.im/2025/09/12/zstandard-long-range-genomes.html
174•bede•3d ago•65 comments

Apple has a private CSS property to add Liquid Glass effects to web content

https://alastair.is/apple-has-a-private-css-property-to-add-liquid-glass-effects-to-web-content/
130•_alastair•1h ago•66 comments

PayPal to support Ethereum and Bitcoin

https://newsroom.paypal-corp.com/2025-09-15-PayPal-Ushers-in-a-New-Era-of-Peer-to-Peer-Payments,-...
95•DocFeind•2h ago•77 comments

Folks, we have the best π

https://lcamtuf.substack.com/p/folks-we-have-the-best
242•fratellobigio•9h ago•69 comments

Show HN: Daffodil – Open-Source Ecommerce Framework to connect to any platform

https://github.com/graycoreio/daffodil
22•damienwebdev•1h ago•2 comments

Language Models Pack Billions of Concepts into 12k Dimensions

https://nickyoder.com/johnson-lindenstrauss/
301•lawrenceyan•12h ago•97 comments

Show HN: I reverse engineered macOS to allow custom Lock Screen wallpapers

https://cindori.com/backdrop
41•cindori•8h ago•31 comments

The Mac App Flea Market

https://blog.jim-nielsen.com/2025/mac-app-flea-market/
159•ingve•9h ago•85 comments

Death to type classes

https://jappie.me/death-to-type-classes.html
75•zeepthee•3d ago•48 comments

Show HN: Semlib – Semantic Data Processing

https://github.com/anishathalye/semlib
25•anishathalye•2h ago•8 comments

Meta bypassed Apple privacy protections, claims former employee

https://9to5mac.com/2025/08/21/meta-allegedly-bypassed-apple-privacy-measure-and-fired-employee-w...
57•latexr•1h ago•22 comments

Pgstream: Postgres streaming logical replication with DDL changes

https://github.com/xataio/pgstream
36•fenn•4h ago•2 comments

A qualitative analysis of pig-butchering scams

https://arxiv.org/abs/2503.20821
146•stmw•12h ago•80 comments

A string formatting library in 65 lines of C++

https://riki.house/fmt
5•PaulHoule•40m ago•3 comments

Which NPM package has the largest version number?

https://adamhl.dev/blog/largest-number-in-npm-package/
133•genshii•13h ago•55 comments

Not all browsers perform revocation checking

https://revoked-isrgrootx1.letsencrypt.org/
76•sugarpimpdorsey•13h ago•62 comments

The Culture novels as a dystopia

https://www.boristhebrave.com/2025/09/14/the-culture-novels-as-a-dystopia/
34•ibobev•7h ago•55 comments

The madness of SaaS chargebacks

https://medium.com/@citizenblr/the-10-payment-that-cost-me-43-95-the-madness-of-saas-chargebacks-...
47•evermike•5h ago•67 comments

Creating a VGA Signal in Hubris

https://lasernoises.com/blog/hubris-vga/
6•lasernoises•1h ago•1 comment

Denmark's Justice Minister calls encrypted messaging a false civil liberty

https://mastodon.social/@chatcontrol/115204439983078498
354•belter•4h ago•222 comments

NASA's Guardian Tsunami Detection Tech Catches Wave in Real Time

https://www.jpl.nasa.gov/news/nasas-guardian-tsunami-detection-tech-catches-wave-in-real-time/
119•geox•2d ago•20 comments

Cory Doctorow: "centaurs" and "reverse-centaurs"

https://locusmag.com/2025/09/commentary-cory-doctorow-reverse-centaurs/
64•thecosas•3d ago•18 comments

Human writers have always used the em dash

https://www.theringer.com/2025/08/20/pop-culture/em-dash-use-ai-artificial-intelligence-chatgpt-g...
62•FromTheArchives•2d ago•80 comments

PythonBPF – Writing eBPF Programs in Pure Python

https://xeon.me/gnome/pythonbpf/
124•JNRowe•3d ago•28 comments

Programming Deflation

https://tidyfirst.substack.com/p/programming-deflation
46•dvcoolarun•2h ago

Comments

djoldman•1h ago
> Will this lead to fewer programmers or more programmers?

> Economics gives us two contradictory answers simultaneously.

> Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.

> Jevons’. Jevons’ paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.

The answer is a little more nuanced. Assuming the above, the economy will demand fewer programmers for the previous set of demanded programs.

However, the set of demanded programs will likely evolve. So, to oversimplify absurdly: if before we needed 10 programmers to write different Fibonacci generators, now we'll need 1 to write those and 9 to write more complicated stuff.

Additionally, the total number of people doing "programming" may go up or down.

My intuition is that the total number will increase but that the programs we write will be substantially different.
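
A toy way to put numbers on those two pulls (made-up figures, purely illustrative): whether total programmer demand rises or falls comes down to whether demand for programs grows faster than per-programmer productivity.

    # Toy model of substitution vs. Jevons (illustrative numbers only).
    # productivity_gain: how many times more programs one programmer can now produce
    # demand_gain: how many times more programs the market now wants at the lower cost
    def programmers_needed(baseline: int, productivity_gain: float, demand_gain: float) -> float:
        return baseline * demand_gain / productivity_gain

    print(programmers_needed(100, 10, 3))   # demand lags productivity -> 30.0 (substitution wins)
    print(programmers_needed(100, 10, 25))  # demand outruns productivity -> 250.0 (Jevons wins)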

virgilp•1h ago
> Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.

I feel this is a bit like the "don't be poor" advice (I'm being a little mean here, maybe, but not too much). Sure, focus on improving understanding and judgement - nobody really disagrees that good judgement is a valuable skill, but how do you improve it? That's a lot trickier to answer, and it's the part where most people struggle. We all intuitively understand that good judgement is valuable, but that doesn't make it any easier to develop.

ramesh31•12m ago
It's just experience, i.e. a collection of personal reference points from seeing how said judgements have played out over time in reality. That is what can't be replaced.

I think the current state of AI is absolutely abysmal, borderline harmful for junior inexperienced devs who will get led down a rabbit hole they cannot recognize. But for someone who really knows what they are doing it has been transformative.

sublinear•1h ago
I think this is a bit like attempting your own plumbing. Knowledge was never the barrier to entry nor was getting your code to compile. It just means more laypeople can add "programming" to their DIY project skills.

Maybe a few of them will pursue it further, but most won't. People don't like hard labor or higher-level planning.

Long term, software engineering will have to be more tightly regulated like the rest of engineering.

BinaryIgor•37m ago
I agree with the first part of your comment, but don't follow the rest - why should SE be more tightly regulated? It doesn't need to be; if anything, regulation will just stifle its progress and evolution.

sublinear•9m ago
I think AI will make more visible where code diverges from the average. Maybe auditing will be the killer app for near-future AI.

I'm also thinking about a world where more programmers are trying to enter the workforce self-taught using AI. The current world is the continued lowering of education standards and political climate against universities.

The answer to all of the above, from the perspective of those who don't know or really care about the details, may be to cut the knot and impose regulation.

Delegate the details to auditors with AI. We're kinda already doing this on the cybersecurity front. Think about all the ads you see nowadays for earning your "cybersecurity certification" from an online-only university. Those jobs are real and people are hiring, but the expertise is still lacking because there aren't clearer guidelines yet.

With the current technology and generations of people we have, how else but with AI can you translate NIST requirements, vulnerability reports, and other docs that don't even exist yet but soon will into pointers that send someone who doesn't really know how to code towards a line of code they can investigate? The tools we have right now, like SAST and DAST, are full of false positives, and non-devs are stumped about how to assess them.
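
As a rough illustration of that last point, here is a minimal sketch (assuming the scanner can emit SARIF, the common interchange format for SAST results, and a hypothetical findings.sarif file) that flattens findings into plain file:line pointers a non-dev, or an LLM-based auditor, could be handed to investigate:

    # Minimal sketch: turn a SARIF report from a SAST tool into "file:line - message"
    # pointers. Assumes a SARIF 2.1.0 file named findings.sarif (hypothetical).
    import json

    def summarize_sarif(path: str) -> list[str]:
        with open(path) as f:
            report = json.load(f)
        pointers = []
        for run in report.get("runs", []):
            for result in run.get("results", []):
                message = result.get("message", {}).get("text", "no description")
                for loc in result.get("locations", []):
                    phys = loc.get("physicalLocation", {})
                    uri = phys.get("artifactLocation", {}).get("uri", "unknown file")
                    line = phys.get("region", {}).get("startLine", "?")
                    pointers.append(f"{uri}:{line} - {message}")
        return pointers

    for pointer in summarize_sarif("findings.sarif"):
        print(pointer)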

thiago_fm•1h ago
Literally all new products nowadays come with a significant degree of software and hardware, whether they are a SaaS or a kitchen product.

Programming will still exist, it will be just different. Programming has changed a lot of times before as well. I don't think this time is different.

If programming became suddenly too easy to iterate upon, people would be building new competitors to SAP, Salesforce, Shopify and other solutions overnight, but you rarely see any good competitor coming around.

The work involved in understanding your customers' needs and iterating on them between product and tech is not to be underestimated. AI doesn't help with that at all; at most it's a marginal improvement to iteration.

Knowing what to build has been for a long time the real challenge.

rkozik1989•57m ago
This article is really only useful if LLMs are actually able to close the gap from where they are now to where they want to be in a reasonable amount of time. There are plenty of historical examples of technologies where the last few milestones are nearly impossible to achieve: hypersonic/supersonic travel, nuclear waste disposal, curing cancer, error-free language translation, etc. All of them have had periods of great immediate success, but development and research always get stuck in the mud (sometimes for decades) because the level of complexity to finish the race is exponentially higher than it was at the start.

Not saying you should disregard today's AI advancements, I think some level of preparedness is a necessity, but to go all in on the idea that deep learning will power us to true AGI is a gamble. We've dumped billions of dollars and countless hours of research into developing a cancer cure for decades but we still don't have a cure.

BinaryIgor•54m ago
100%; exactly as you've pointed out, some technologies - or their "last" milestones - might never arrive, or could be way further into the future than people initially anticipated.

bgroat•35m ago
We're 90%... we're almost halfway there!

germandiago•24m ago
Exactly this.

HumblyTossed•56s ago
It costs 10% to get 90% of the way there. Nobody ever wants to spend the remaining 90% to get us all the way there.

dakiol•8m ago
In software we are always 90% there. It's that last 10% that gives us jobs. I don't see LLMs as that different from, let's say, when compilers or high-level languages appeared.
mlhpdx•49m ago
A related idea is sub-linear cost growth where the unit cost of operating software gets cheaper the more it’s used. This should be common, right? But it’s oddly rare in practice.

I suspect the reality around programming will be the same - a chasm between perception and reality around the cost.
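
One way to picture that unit-cost curve (a sketch with assumed fixed and marginal costs, not figures from the comment): if total operating cost is a fixed platform cost plus a small per-use cost, the unit cost falls as usage grows.

    # Sub-linear unit cost under assumed numbers: fixed platform cost plus
    # a small marginal cost per request.
    FIXED_COST = 50_000.0   # assumed monthly platform/on-call/licensing cost
    MARGINAL_COST = 0.002   # assumed per-request compute and storage cost

    def unit_cost(requests: int) -> float:
        return FIXED_COST / requests + MARGINAL_COST

    for n in (100_000, 1_000_000, 10_000_000):
        print(n, round(unit_cost(n), 4))  # 0.502, 0.052, 0.007 -- cheaper per use as usage grows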

basfo•49m ago
I’ve been thinking about the impact of LLMs on software engineering through a Marxist lens. Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably. This contradiction (between productive capacity and limited demand) leads to bankruptcies, layoffs, and recessions until value and capital are destroyed, paving the way for the next cycle.

Something similar might be happening in software. LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb. In the short term this looks amazing: there’s always some backlog of features and technical debt to address, so everyone’s happy.

But a year or two from now, we may reach saturation. Businesses won’t be able to use or even need all the software we’re capable of producing. At that point, wages may fall, unemployment among engineers may grow, and some companies could collapse.

In other words, the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis. Only this time, not for physical goods, but for code.

BinaryIgor•44m ago
Interesting, but way too optimistic and biased towards the scenario where infinite progress of LLMs and similar tools is just a given, when it's not.

"Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.

Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary."

If only it were the same and so simple.

alphazard•27m ago
This mindset that the value of code is always positive is responsible for a lot of problems in industry.

Additional code is additional complexity, "cheap" code is cheap complexity. The decreasing cost of code is comparable to the decreasing costs of chainsaws, table saws, or high powered lasers. If you are a power user of these things then having them cheaply available is great. If you don't know what you're doing, then you may be exposing yourself to more risk than reward by having easier access to them. You could accidentally create an important piece of infrastructure for your business that gives the wrong answers, or requires expensive software engineers to come in and fix. You accidentally cost yourself more in time dealing with the complexity you created than the automation ever brought in benefit.

germandiago•17m ago
Well, this has happened to me with pieces of code written directly with an AI. You go 800% faster or more, and then you have to go and finish it. All the increase in speed is lost in debugging, fixing, fitting, and other mundane tasks.

I believe the reason for this is that we still need judgement to do those tasks: AIs are not perfect at it, and they spit out a lot of extra code and complexity at times. Now you need to reduce that complexity. But to reduce it, you need to understand the code in the first place. You cut here and there, you find a bug, but you are diving into code you do not yet fully understand.

So the human cognition has to go on par with what the AI is doing.

What ended up happening to me (not all the time; for one-off or small scripts this is irrelevant, as it is for authoring a well-known algorithm that is short enough to write without bugs) is that I get a sense of speed that turns out not to be real once you have to complete the task as a whole.

On top of that, you tend to lose more context if you generate a lot of code with AI, as a human, and the judgement must be yours anyway. At least, until AIs get really brilliant at it.

They are good at other things. For example, I think they do decently well at reviewing code and finding potential improvements. Because if they say bullsh*t, as any of us could in a review, you just move on to the next comment, and you can usually find something valuable there.

Same for "combinatoric thinking". But for tasks they need more "surgery" and precision, I do not think they are particularly good, but just that they make you feel like they are particularly good, but when you have to deal with the whole task, you notice this is not the case.

afpx•27m ago
"Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build."

How would one even market oneself in a world where this is what is most valued?

mangamadaiyan•9m ago
Question 1: Is this indeed what is most valued at the moment?

Question 2: Do you think this will ever become valuable?

mjr00•2m ago
> How would one even market oneself in a world where this is what is most valued?

That's basically the job description of any senior software development role, at least at any place I've worked. As a senior, pumping out straightforward features takes a backseat to problem analysis and architectural decisions, including being able to describe tradeoffs and how they impact the business.

hollowonepl•25m ago
Some valid questions are asked in the article, but I don't like the terminology used, from the title to the content, to assess the situation and options. I'd rather call it Commoditization of Software Engineering.

mola•53s ago
Today they told me to use an agent to code a pretty large feature. It's the first time they've done that. They wanted that feature in Q4 but didn't have enough capacity, so they had the idea that if I do it with AI, then it'll fit in the capacity planning!

That was an odd experience. I would use any tool that fits the job, and I don't mind using agentic AI for that, but I can't really own the feature and the timeline if I'm handed this directive.

If it's a PoC, then they still can't really count on having it around in Q4.

I tried my best to make them understand this. I hope they do.

recroad•52s ago
> AI isn't just redistributing the same pie; it's making the pie-making process fundamentally cheaper

Not if you believe most other AI-related articles posted here, including the one from today (from Singularity is Nearer).