frontpage.

The Deletion of Docker.io/Bitnami

https://community.broadcom.com/tanzu/blogs/beltran-rueda-borrego/2025/08/18/how-to-prepare-for-th...
58•zdkaster•1h ago•18 comments

Altered states of consciousness induced by breathwork accompanied by music

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0329411
216•gnabgib•5h ago•86 comments

Canaries in the Coal Mine? Recent Employment Effects of AI [pdf]

https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
40•p1esk•3h ago•25 comments

Yamanot.es: A music box of train station melodies from the JR Yamanote Line

https://yamanot.es/
190•zdw•8h ago•55 comments

Bookmarks.txt is a concept of keeping URLs in plain text files

https://github.com/soulim/bookmarks.txt
44•secwang•3h ago•28 comments

Sci-Hub has been blocked in India

https://sci-hub.se/sci-hub-blocked-india
48•the-mitr•1h ago•5 comments

Malicious versions of Nx and some supporting plugins were published

https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7p-598c
348•longcat•1d ago•382 comments

Toyota is recycling old EV batteries to help power Mazda's production line

https://www.thedrive.com/news/toyota-is-recycling-old-ev-batteries-to-help-power-mazdas-productio...
227•computerliker•4d ago•104 comments

Nvidia DGX Spark

https://www.nvidia.com/en-us/products/workstations/dgx-spark/
85•janandonly•3d ago•88 comments

Launch HN: Bitrig (YC S25) – Build Swift apps on your iPhone

124•kylemacomber•14h ago•89 comments

Unexpected productivity boost of Rust

https://lubeno.dev/blog/rusts-productivity-curve
350•bkolobara•14h ago•318 comments

Will Bardenwerper on Baseball's Betrayal of Its Minor League Roots

https://lithub.com/will-bardenwerper-on-baseballs-betrayal-of-its-minor-league-roots/
12•PaulHoule•2d ago•1 comment

Google has eliminated 35% of managers overseeing small teams in past year

https://www.cnbc.com/2025/08/27/google-executive-says-company-has-cut-a-third-of-its-managers.html
378•frays•8h ago•166 comments

VIM Master

https://github.com/renzorlive/vimmaster
252•Fluffyrnz•14h ago•86 comments

Show HN: Meetup.com and Eventbrite alternative for small groups

https://github.com/polaroi8d/cactoide
75•orbanlevi•9h ago•36 comments

Researchers find evidence of ChatGPT buzzwords turning up in everyday speech

https://news.fsu.edu/news/education-society/2025/08/26/on-screen-and-now-irl-fsu-researchers-find...
135•giuliomagnifico•8h ago•216 comments

The GitHub website is slow on Safari

https://github.com/orgs/community/discussions/170758
329•talboren•20h ago•245 comments

GMP damaging Zen 5 CPUs?

https://gmplib.org/gmp-zen5
179•sequin•13h ago•146 comments

The Therac-25 Incident (2021)

https://thedailywtf.com/articles/the-therac-25-incident
417•lemper•23h ago•253 comments

Certificates for Onion Services

https://onionservices.torproject.org/research/proposals/usability/certificates/
7•keepamovin•3h ago•0 comments

On the screen, Libyans learned about everything but themselves (2021)

https://newlinesmag.com/argument/on-the-screen-libyans-learned-about-everything-but-themselves/
19•thomassmith65•2d ago•1 comment

Object-oriented design patterns in C and kernel development

https://oshub.org/projects/retros-32/posts/object-oriented-design-patterns-in-osdev
214•joexbayer•1d ago•139 comments

Areal, Are.na's new typeface

https://www.are.na/editorial/introducing-areal-are-nas-new-typeface
120•g0xA52A2A•2d ago•78 comments

Beginning 1 September, we will need to geoblock Mississippi IPs

https://dw-news.dreamwidth.org/44429.html
184•AndrewDucker•10h ago•221 comments

About Containers and VMs

https://linuxcontainers.org/incus/docs/main/explanation/containers_and_vms/
64•Bogdanp•2d ago•44 comments

A failure of security systems at PayPal is causing concern for German banks

https://www.nordbayern.de/news-in-english/paypal-security-systems-down-german-banks-block-payment...
227•tietjens•12h ago•159 comments

Implementing Forth in Go and C

https://eli.thegreenplace.net/2025/implementing-forth-in-go-and-c/
143•Bogdanp•16h ago•20 comments

Partner with Product to pay down technical debt

https://dev.jimgrey.net/2025/08/19/unlocking-high-software-engineering-pace-partner-with-product-...
7•kiyanwang•2d ago•1 comment

Using information theory to solve Mastermind

https://www.goranssongaspar.com/mastermind
99•SchwKatze•4d ago•32 comments

Lago – Open-Source Usage Based Billing – Is Hiring in Sales, Eng, Ops (EU, US)

https://www.ycombinator.com/companies/lago/jobs
1•AnhTho_FR•13h ago

Canaries in the Coal Mine? Recent Employment Effects of AI [pdf]

https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
40•p1esk•3h ago

Comments

majormajor•1h ago
LLMs are very useful tools for software development, but focusing on employment doesn't really dig into whether they will automate or augment labor (to use the paper's words). Behaviors are changing not just because of outcomes but because of hype, expectations, and B2B sales. You'd expect the initial corporate behaviors to look much the same whether or not LLMs turn into fully fire-and-forget employee-replacement tools.

Some nits I'd pick along those lines:

>For instance, according to the most recent AI Index Report, AI systems could solve just 4.4% of coding problems on SWE-Bench, a widely used benchmark for software engineering, in 2023, but performance increased to 71.7% in 2024 (Maslej et al., 2025).

Something like this should come with the context that SWE-Bench did not exist before November 2023.

Pre-2023 systems were flying blind with regard to what they were going to be tested with. Post-2023 systems have been created in a world where this test exists. Hard to generalize from before/after performance.

> The patterns we observe in the data appear most acutely starting in late 2022, around the time of rapid proliferation of generative AI tools.

This is quite early for "replacement" of software development jobs, since by their own prior statement/citation the tools, even a year later when SWE-Bench was introduced, were only hitting that 4.4% task success rate.

Its timing lines up more neatly with the post-COVID-bubble tech industry slowdown, or with the start of hype about AI productivity as opposed to actual replaced-employee productivity.

eru•1h ago
Yes, even if the underlying AI stops advancing today, it will take a while for the economy to digest and adjust to the new systems. Eg a lot of the improvements in usefulness in the last few quarters came from better tooling, not necessarily better models.

But with progress continuing in the models, too, it's an even more complicated affair.

trhway•19m ago
Offshoring was similar - i.e., companies discovered that expensive labor here could be performed inexpensively there, with senior laborers/PMs here playing the overseeing role - and we can look at how long it took to digest and adjust to that. While 15-20 years ago it was all the rage, today it is just an established, well-understood practice, efficiently utilized where applicable.
ath3nd•57m ago
> LLMs are very useful tools for software development

That's an opinion many disagree with. As a matter of fact, the only study to date, limited as it is, showed that LLM usage decreases productivity for experienced developers by roughly 19%. Let's reserve opinions and link studies.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

My anecdotal experience, for example, is that LLMs are such a drain on both time and quality that one has to be really early in their career to benefit from their usage.

yakshaving_jgt•47m ago
I’m 15 years into my career and I write Haskell every day. I’m getting a massive productivity boost from using an LLM.
black_knight•17m ago
How do you find the quality of the Haskell code produced by the LLM? Also, how do you use the LLM when coding Haskell? Generating single functions, or more?
yakshaving_jgt•4m ago
I'm stuck in my ways with vim/tmux/ghci etc, so I'm not using some AI IDE. I write stuff into ChatGPT and use the output, copying manually, or writing it myself with inspiration from what I get. I feed it a fair bit of context (like, say, a production module with a load of database queries, and the associated spec module) so that it copies the structure and patterns that I've established.

Maybe one of the reasons I'm getting good results is because the LLM effectively has to argue with GHC, and GHC always wins here.

I've found that it's a superpower also for finding logic bugs that I've missed, and for writing SQL queries (which I was never that good at).
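
For illustration, a minimal sketch of the kind of query module I mean - the table, schema, and postgresql-simple usage here are invented for the example, not my production code:

    {-# LANGUAGE OverloadedStrings #-}
    module UserQueries (lookupEmail) where

    import Data.Text (Text)
    import Database.PostgreSQL.Simple (Connection, Only (..), query)

    -- A single query in the (invented) module. The value of handing modules
    -- like this to the LLM as context is that the new queries it drafts copy
    -- the same shape, and anything that doesn't type-check is rejected by GHC.
    lookupEmail :: Connection -> Int -> IO (Maybe Text)
    lookupEmail conn userId = do
      rows <- query conn "SELECT email FROM users WHERE id = ?" (Only userId)
      pure $ case rows of
        [Only email] -> Just email
        _            -> Nothing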

manmademagic•43m ago
I wouldn't call myself an 'experienced' developer, but I do find LLMs useful for once-off things, where I can't justify the effort to research and implement my own solution. Two recent examples come to mind:

1. Converting exported data into a suitable import format based on a known schema
2. Creating syntax highlighting rules for a language not natively supported in a Typst report

Neither situation had an existing solution, and while the outputs were not exactly correct, they only needed minor adjustments.

In any other situation, I'd generally prefer to learn how to do the thing myself, since understanding how to do something can sometimes be as important as the result.
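
To make the first of those concrete, that sort of conversion boils down to a small throwaway script along these lines - the export layout, the import schema, and the choice of Haskell are all just for illustration:

    module Main where

    import Data.List (intercalate)

    -- Hypothetical export layout: name <TAB> id <TAB> email <TAB> signup_date.
    -- Hypothetical import schema:  id,email,name (CSV). Both are made up.
    convert :: String -> Maybe String
    convert line =
      case splitOn '\t' line of
        [name, ident, email, _signup] -> Just (intercalate "," [ident, email, name])
        _                             -> Nothing  -- drop malformed rows

    -- Tiny field splitter so the script needs nothing beyond base.
    splitOn :: Char -> String -> [String]
    splitOn c s = case break (== c) s of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn c rest

    main :: IO ()
    main = interact $ \input ->
      unlines [ out | line <- lines input, Just out <- [convert line] ]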

wahnfrieden•36m ago
That's a skill issue. That lone study was observing untrained participants.

It's no surprise to me that devs who are accustomed to working on one thing at a time, due to fast feedback loops, have not learned to adapt to parallelizing their work (something that has been demonized at agile-style organizations), and instead sit and wait on agents and start watching YouTube, as the study found (productivity hits were due to the participants looking at fun non-work stuff instead of attempting to parallelize any work).

The study reflects usage of emergent tools without training, and with regressive training on previous generation sequential processes, so I would expect these results. If there is any merit in coordinating multiple agents on slower feedback work, this study would not find it.

ardit33•30m ago
LLMs help a lot with 'well-defined' tasks and things you already know you want; they just accelerate the development. You still have to rewrite some of it, but they do the boring stuff fast.

They are not great if your tasks are not well defined. Sometimes they surprise you with great solutions; sometimes they produce a mess that just wastes your time and deviates from your mission.

To me, LLMs have been great accelerants when you know what you want and can define it well. Otherwise, they can waste your time by creating a lot of code slop that you will have to rewrite anyway.

One huge positive side effect: when you create a component (i.e. UI, feature, etc.), you often need a setup to test it - view controllers, data - which is very boring, annoying, and time-wasting to deal with. An LLM can do that for you within seconds (even creating mock data), and since this is mostly test code, it doesn't matter if the code quality is not great; it just matters to get something on the screen to test the real functionality. AI/LLMs have been a huge time saver for this part.

hochstenbach•56m ago
One would expect that if such studies indeed indicate that AI has an effect on early-career workers in AI-exposed occupations, this would be a global effect. I wonder whether there are good comparable non-US studies available.
moi2388•18m ago
As a non-US citizen, in my EU country we’re still starving for new programmers.
trhway•17m ago
Poland? Some time ago I looked up salaries in Warsaw - they were like $10-20K/month, which as I understand is pretty high by EU standards.
yurishimo•6m ago
Really? That's crazy. I'm earning a bit over 5k in the Netherlands. Granted, not Amsterdam, but still.
NitpickLawyer•37m ago
> Hard to generalize from before/after performance.

While this is true, there are ways to test (open models) on tasks created after the model was released. We see good numbers there as well, so something is generalising there.

whatever1•1h ago
To me it seems that LLMs are a tool that only increases productivity, for a given headcount, in dimensions that were neglected in the past.

For example, everyone now writes emails with perfect grammar in a fraction of the time. So now the expectation for emails is that they will have perfect grammar.

Or one can build an interactive dashboard to visualize their spreadsheet and make it pleasing. Again the expectation just changed. The bar is higher.

So far I have not seen productivity increase in dimensions with a direct line of sight to revenue. (Of course there are the niches of customer service, translation services, etc. that were already in the process of being automated.)

Wololooo•1h ago
I'm sad to see this for several reasons. I do not expect or want everyone to use an LLM to converse with me via mail; the whole point is to exchange information. With everyone using an LLM for both output and input, the whole thing becomes a game of telephone.

You do not need to build a spreadsheet visualiser tool; there are plenty of existing options that are free and open source.

I'm not against advances, I'm just really failing to see what problem was in need of solving here.

The only use I can get behind is the translation, which admittedly works relatively well with LLMs in general due to the nature of the work.

manmademagic•53m ago
It's an interesting dilemma, since if I know that an email was written mostly with AI, it feels to me like the author didn't put effort in, and thus I won't put much effort into reading the email.

I had a conversation with my manager about the implications of everyone using AI to write/summarise everything. The end result will most likely be staff getting Copilot to generate a report, then their manager using Copilot to summarise the report and generate a new report for their manager, ad infinitum.

Eventually all context is lost, busywork is amplified, and nobody gains anything.

sschueller•48m ago
I don't have time to read paragraphs of AI slop emails. Please keep them short and to the point. No need to send it through an LLM.
dumbfoundded•47m ago
Corporations will require that everything go through an LLM to meet company standards.
monster_truck•1h ago
I've got a few buddies over at Microsoft; they've all said something along the lines of "I really hate using copilot. They at least let us use pre-approved models in VSCode, we get most that come out. But all AI metrics are tracked and there are layoffs every quarter. I have kids now man. Strange times. I know you would have quit months ago" and they're right.
mandeepj•54m ago
Hopefully they have racked up a few million.

https://www.fool.com/investing/2024/11/29/this-magnificent-s...

indymike•49m ago
Now that bs work has next to no cost, I see a lot more bs work being done, often on pointless bureaucratic activities like generating questionnaires and answering them. It's as if the activities add up to a big net zero.
ggm•45m ago
Any board which supports management hollowing out future profits by firing, or not hiring, junior staff deserves to have its bonus rescinded.

Think like a forestry investor, not a farmer chasing next season's cash crop.