frontpage.

Tell HN: Help restore the tax deduction for software dev in the US (Section 174)

1172•dang•5h ago•460 comments

Containerization is a Swift package for running Linux containers on macOS

https://github.com/apple/containerization
129•gok•1h ago•34 comments

Apple announces Foundation Models and Containerization frameworks, etc

https://www.apple.com/newsroom/2025/06/apple-supercharges-its-tools-and-technologies-for-developers/
406•thm•4h ago•255 comments

Show HN: Munal OS: a graphical experimental OS with WASM sandboxing

https://github.com/Askannz/munal-os
135•Gazoche•4h ago•52 comments

Apple introduces a universal design across platforms

https://www.apple.com/newsroom/2025/06/apple-introduces-a-delightful-and-elegant-new-software-design/
348•meetpateltech•5h ago•540 comments

What methylene blue can (and can’t) do for the brain

https://neurofrontiers.blog/what-methylene-blue-can-and-cant-do-for-the-brain/
63•wiry•3d ago•30 comments

Domains I Love

https://www.ahmedsaoudi.com/blog/domains-i-love/
30•ahmedfromtunis•1h ago•17 comments

Launch HN: Chonkie (YC X25) – Open-Source Library for Advanced Chunking

84•snyy•6h ago•30 comments

Go is a good fit for agents

https://docs.hatchet.run/blog/go-agents
86•abelanger•5d ago•67 comments

Show HN: Somo – a human friendly alternative to netstat

https://github.com/theopfr/somo
62•hollow64•4h ago•19 comments

Sly Stone Has Died

https://abcnews.go.com/US/sly-stone-pioneering-leader-funk-band-sly-family/story?id=122666345
4•brudgers•36m ago•0 comments

Doctors could hack the nervous system with ultrasound

https://spectrum.ieee.org/focused-ultrasound-stimulation-inflammation-diabetes
107•purpleko•7h ago•11 comments

Hokusai Moyo Gafu: an album of dyeing patterns

https://ndlsearch.ndl.go.jp/en/imagebank/theme/hokusaimoyo
119•fanf2•7h ago•13 comments

Bruteforcing the phone number of any Google user

https://brutecat.com/articles/leaking-google-phones
401•brutecat•8h ago•129 comments

Pi in Pascal's Triangle

https://www.cut-the-knot.org/arithmetic/algebra/PiInPascal.shtml
36•senfiaj•3d ago•4 comments

Algovivo an energy-based formulation for soft-bodied virtual creatures

https://juniorrojas.com/algovivo/
48•tzury•6h ago•3 comments

Why quadratic funding is not optimal

https://jonathanwarden.com/quadratic-funding-is-not-optimal/
88•jwarden•7h ago•69 comments

The new Gödel Prize winner tastes great and is less filling

https://blog.computationalcomplexity.org/2025/06/the-new-godel-prize-winner-tastes-great.html
85•baruchel•7h ago•23 comments

Show HN: Most users won't report bugs unless you make it stupidly easy

136•lakshikag•7h ago•75 comments

A bit more on Twitter/X's new encrypted messaging

https://blog.cryptographyengineering.com/2025/06/09/a-bit-more-on-twitter-xs-new-encrypted-messaging/
93•vishnuharidas•3h ago•58 comments

How do you prototype a nice language?

https://kevinlynagh.com/newsletter/2025_06_03_prototyping_a_language/
8•surprisetalk•3d ago•0 comments

Myanmar's chinlone ball sport threatened by conflict and rattan shortages

https://www.aljazeera.com/gallery/2025/6/5/myanmars-chinlone-ball-sport-threatened-by-conflict-and-rattan-shortages
13•YeGoblynQueenne•4d ago•0 comments

A man rebuilding the last Inca rope bridge

https://www.atlasobscura.com/articles/last-inca-rope-bridge-qeswachaka-tradition
55•kaonwarb•2d ago•14 comments

Finding Shawn Mendes (2019)

https://ericneyman.wordpress.com/2019/11/26/finding-shawn-mendes/
325•jzwinck•15h ago•51 comments

Astronomers have discovered a mysterious object flashing signals from deep space

https://www.livescience.com/space/unlike-anything-we-have-seen-before-astronomers-discover-mysterious-object-firing-strange-signals-at-earth-every-44-minutes
53•gmays•2h ago•29 comments

Show HN: Glowstick – type level tensor shapes in stable rust

https://github.com/nicksenger/glowstick
31•bietroi•6h ago•3 comments

RFK Jr. ousts entire CDC vaccine advisory committee

https://apnews.com/article/kennedy-cdc-acip-vaccines-3790c89f45b6314c5c7b686db0e3a8f9
52•doener•44m ago•4 comments

Maypole Dance of Braid Like Groups (2009)

https://divisbyzero.com/2009/05/04/the-maypole-braid-group/
32•srean•7h ago•3 comments

LLMs are cheap

https://www.snellman.net/blog/archive/2025-06-02-llms-are-cheap/
279•Bogdanp•10h ago•250 comments

Potential and Limitation of High-Frequency Cores and Caches (2024)

https://arch.cs.ucdavis.edu/simulation/2024/08/06/potentiallimitationhighfreqcorescaches.html
18•matt_d•3d ago•10 comments

Anthropic's AI-generated blog dies an early death

https://techcrunch.com/2025/06/09/anthropics-ai-generated-blog-dies-an-early-death/
73•Sourabhsss1•6h ago

Comments

paxys•5h ago
It's fascinating how creative these large AI companies are at finding ways to burn through VC funding. Hire a team of developers/content writers/editors, tune your models, set up a blog and build an entire infrastructure to publish articles to it, market it, and then...shut it all down in a week. And this is a company burning through multiple billions of dollars every quarter just to keep the lights on.
elzbardico•5h ago
The joys of wealth transfer from the poor and middle-class workers to the asset-owning class via inflation and the Cantillon Effect [1].

1- https://www.adamsmith.org/blog/the-cantillion-effect

stanford_labrat•4h ago
I've always thought of these VC fueled expeditions to nowhere as the opposite. Wealth transfer from the owning class to the middle class seeing as a lot of these ventures crash and burn with nothing to show for it.

Except for the founders/early employees who get a modest (sometimes excessive) paycheck.

chimeracoder•4h ago
> I've always thought of these VC fueled expeditions to nowhere as the opposite. Wealth transfer from the owning class to the middle class seeing as a lot of these ventures crash and burn with nothing to show for it.

That would be the case if VCs were investing their own money, but they're not. They're investing on behalf of their LPs. Who LPs are is generally an extremely closely-guarded secret, but it includes institutional investors, which means middle-class pensions and 401(k)s are wrapped up in these investments as well, just as they were tied up in the 2008 financial crisis.

It's not as clean-cut as it seems.

givemeethekeys•4h ago
Can VCs get their funding from mutual funds and pension plans?
rightbyte•2h ago
I think that is the 'find bag holders' part of the plan?
hinkley•3h ago
I think the chilling effect on mom and pop businesses undoes all of that. When they (we) disrupt an industry, the power consolidates, but in new hands. The idea is to get it away from the entrenched interests, but like in a good cultural revolution, the second tier ends up in charge when the first tier gets beheaded.
swyx•5h ago
it's fascinating how you think being creative is an insult.
an-honest-moose•5h ago
It's about how they're applying that creativity, not the creativity itself.
bowsamic•5h ago
What makes you think they think that? If someone says “finding creative ways to murder people” you think they’re saying the problem is the “creative” part?
pscanf•5h ago
People use AI to write blogs, passing them off as human-written. AI companies use humans to write blogs, passing them off as AI-written. :)
anon7000•5h ago
AI generated web content has got to be one of the most counterproductive things to use AI on.

If I want an AI summary of a topic or an answer to a question, a chatbot of my choice can easily provide that. There’s no need for yet another piece of blogspam that isn’t introducing new information into the world. That content is already available inside the AI model. At some point, we’ll get so oversaturated with fake, generated BS that there won’t be enough high-quality new information to feed the models.

echelon•5h ago
Using AI-generated content to torpedo the web at mass scale could be a tool to get people off of Google and existing social media platforms.

I'm certainly using Google less and less these days, and even niche subreddits are getting an influx of LLM drivel.

There are fantastic uses of AI, but there's an over-abundance of low-effort growth hacking at scale that is saturating existing conduits of signal. I have to wonder if some of this might be done intentionally to poison the well.

tartoran•2h ago
> Using AI-generated content to torpedo the web at mass scale could be a tool to get people off of Google and existing social media platforms.

How? By filling the web with AI-generated content, or by getting people to just use LLMs to search for information? As more junk gets poured into LLM training data, that too will take a hit at some point. I remember how great early web search was: one could find thousands to millions of hits for a request. At some point it got so polluted that it became nearly useless. It wasn't only spam that made it less useful; it was also the search providers twisting the rules to reap all the benefits.

h1fra•5h ago
hear me out: seo
saulpw•5h ago
This is pretty reductive. Many people want to pump some new thoughts they had into an AI to generate something tolerable to post on their blog. The writing isn't the point; the thoughts are. But they can't just post 200 words of bullet points (or don't feel like they can, anyway). So the AI is an assistant which takes their thoughts and makes them look acceptable for publication.
mjr00•5h ago
> The writing isn't the point; the thoughts are. But they can't just post 200 words of bullet points (or don't feel like they can, anyway).

Who or what is clamoring for that AI-generated padding which turns 200 words of bullet points into 2000 words of prose, though? It's not like there's suddenly going to be 10x more insight, it's just 10x more slop to slog through that dilutes whatever points the writer had.

If you have 200 words' worth of thoughts you want to share... you can just write 200 words.

staticman2•5h ago
Blogging is a pretty niche activity in general these days.

I think if writing more than 200 words is painful for you, blogging probably isn't for you?

Capricorn2481•5h ago
> The writing isn't the point; the thoughts are

This is so, so wrong. The writing is the thoughts. A person's un-articulated bullet points are not worth that much. And AI is not going to pull novel ideas out of your brain via your bullet points. It's either going to summarize them incorrectly or homogenize them into something generic. It would be like dropping acid with a friend and asking ChatGPT to summarize our movie ideas.

The idea that writing is an irrelevant way to gatekeep people with otherwise brilliant ideas is not reality. You don't have to be James Baldwin, but I will not get a sense for what your ideas even are via an AI summary.

ausbah•5h ago
if you can’t write your thoughts as something cohesive to begin with, i don’t think using LLMs is going to solve your problem. writing is absolutely the point if you’re trying to communicate with text. lack of clarity is usually a sign of a lack of understanding imo, i see it in my own writing
Veen•5h ago
The writing is the point. A well-structured, well-argued, and well-written article indicates the writer has devoted considerable time to understanding and thinking through the topic — if they haven't, it quickly becomes obvious. A series of bullet points indicates the opposite, and using an AI to hide the fact that the "writer" has invested minimal cognitive effort is dishonest.
nkrisc•4h ago
It’s ridiculous to expect people to read something you couldn’t even be bothered to write.

If you just want to get the information out then just post the bullet points, what do you care?

If you want to be recognized as a writer, then write.

4ndrewl•4h ago
> The writing isn't the point; the thoughts are.

Writing _is_ thinking.

fullshark•5h ago
Anthropic cares about that, every individual content creator does not. Their goal is to win the war for attention, which is now close to zero-sum with everyone on the internet, and there are only 24 hours in the day.
jerf•4h ago
This is the fundamental reason why I am in favor of a ban on simply posting AI-generated content in user forums. It isn't that AI is fundamentally bad per se, and to the extent that it is problematic now, that badness may well be a temporary situation. It's because there's not a lot of utility in you as a human being basically just being an intermediary to what some AI says today. Anyone who wants that can go get it themselves, in an interactive session where they can explore the answer themselves, with the most up-to-date models. It's fundamentally no different than pasting in the top 10 Google results for a search with no further commentary; if you're going to go that route just give a letmegooglethat.com link. It's exactly as helpful, and in its own way kind of carries the same sort of snarkiness with it... "oh, are you too stupid to AI? let me help you with that".

Similarly, I remember there was a lot of frothy startup ideas around using AI to do very similar things. The canonical one I remember is "using AI to generate commit messages". But I don't want your AI commit messages... again, not because AI is just Platonically bad or something, but because if I want an AI summary of your commit, I'd rather do it in two years when I actually need the summary, and then use a 2027 AI to do it rather than a 2025 AI. There's little to no utility to basically caching an AI response and freezing it for me. I don't need help with that.

verall•3h ago
I fully agree with this, except that if an AI can auto-generate a commit message that I then edit to make actually correct and comprehensive, it will probably be a better, more descriptive message than whatever I come up with in my usual ~3 minutes.

The value is in the nice starting point, but the message is still confirmed by the actual expert. If it's fully auto-generated or I start "accepting" everything, then I agree it becomes completely useless.
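
For concreteness, the flow I have in mind is roughly the sketch below; the model name is just a placeholder (assuming the OpenAI Python SDK for illustration), and the draft always gets hand-edited before it's committed:

    # Sketch only: draft a commit message from the staged diff, then hand-edit it.
    import subprocess
    from openai import OpenAI

    diff = subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Draft a concise, imperative-mood git commit message for this diff."},
            {"role": "user", "content": diff},
        ],
    )

    # Print the draft so a human can review and edit it before committing.
    print(response.choices[0].message.content)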

9rx•3h ago
> It's because there's not a lot of utility in you as a human being basically just being an intermediary to what some AI says today.

To be fair, there has never been a lot of utility in you as a human being involved, theoretically speaking. The users do not use a forum because you, a human, are pulling knobs and turning levers somewhere behind a meaningless digital profile. Any human involvement that has been required for the software to function is merely an implementation detail. The harsh reality, as software developers continually need to be reminded, is that users really don't care about how the software works under the hood!

For today, a human posting AI-generated content to a forum is still providing all the other necessary functions, like curation and moderation. That is just as important as the content itself, but it's something AI is still not very good at. A low-value poster may not put much care into that, granted, but "slop" would be dealt with the same way regardless of whether it was generated by AI or hand-written by a person. The source of the content is ultimately immaterial.

Once AI gets good, we'll all jump to AI-driven forums anyway, so those who embrace it now will be more likely to stave off the Digg/Slashdot future.

code_biologist•1h ago
It's been interesting to watch this play out in microcosm in different spaces. Danbooru and Gelbooru are two anime image boards that banned AI image content, largely to their benefit in my opinion. Rule34 is a similar image board that has allowed AI images, and they've needed to make tagging and searching adaptations to handle the high volume of AI images relative to human artists. I'm glad there's an ecosystem of different options, but I find myself gravitating to the ones that have banned AI content.
raincole•4h ago
What we wish for: better search.

What we got: more content polluting search, aka worse search.

vouaobrasil•3h ago
I don't think better search is exactly what we want. It would also be great to have less quantity and more quality. I think optimizing search alone (including with AI) only furthers the quantity aspect of content, not the quality. Optimizing search or trying to make it better is the wrong goal IMO.
tartoran•2h ago
Better search implies separating the wheat from the chaff. Unfortunately SEO spam took over and poisoned the whole space.
xorokongo•4h ago
This only means that the web (websites and Web 2.0 platforms) for public usage is becoming redundant, because any type of data that can be posted on the web can now be generated by an LLM. LLMs have only been around for a short while, but the web is already becoming infested with AI spam. Future generations that are not accustomed to the old, pre-AI web will prefer to use AI rather than the web, and LLMs will eventually be able to generate all aspects of the web. The web will remain useful for private communication and general data transfer, but not for surfing as we know it today.

Edit to add:

Projects like the Internet Archive will be even more important in the future.

fallinditch•4h ago
Editorial guidelines at many publications explicitly state that AI can assist with drafts, outlines, and editing, but not with generating final published stories.

AI is widely used for support tasks such as:

- Transcribing interviews
- Research assistance and generating story outlines
- Suggesting headlines, SEO optimization, and copyediting
- Automating routine content like financial reports and sports recaps

This seems like a reasonable approach, but even so I agree with your prediction that people will mostly interact with the web via their AI interface.

_fat_santa•4h ago
> AI generated web content has got to be one of the most counterproductive things to use AI on.

For something like a blog I would agree, but I've found AI to be fantastic at generating copy for some SaaS websites I run. I find it to be a great "polishing engine" for copy that I write. I will often write some very sloppy copy that just gets the point across and then feed that to a model to get a more polished version that is geared to a specific outcome. Usually I will generate a couple of variants of the copy I fed it, validate them for accuracy, slap them into my CMS, run an a/b test, and then stick with the variant that best accomplishes the content's goal based on user engagement, click-through, etc.
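
Roughly, the polishing step looks like the sketch below; the model name and prompts are placeholders for illustration (assuming the OpenAI Python SDK), and the real flow ends in my CMS and an a/b test rather than print statements:

    # Sketch only: turn rough copy into a couple of polished variants for an A/B test.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    rough_copy = "Hourly backups for your Postgres databases. Cheap. Set up in five minutes."

    def polish(rough: str, goal: str) -> str:
        # One variant per goal; a human still validates accuracy before publishing.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Rewrite this SaaS landing-page copy to {goal}. "
                            "Keep every factual claim; do not invent features."},
                {"role": "user", "content": rough},
            ],
        )
        return response.choices[0].message.content

    variants = {goal: polish(rough_copy, goal)
                for goal in ("maximize signups", "emphasize reliability")}
    for goal, text in variants.items():
        print(f"--- {goal} ---\n{text}\n")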

chermi•4h ago
While I largely agree, I don't think it's quite correct to say AI-generated blogs contain no new information. At least not in a practical sense. The output is a function of the LLM and the prompt. The output contains new information assuming the prompt does. If the prompt/context contains internal information no one outside the company has access to, then a public post thus generated certainly contains information new to the public.
hk1337•3h ago
Depends on the web content. I've been using Claude to generate posts for things I am selling in Facebook Marketplace with good results.
ysavir•3h ago
What do you feel sets this apart from the rest?
neya•5h ago
I can tell you this much - most people who are opposed to AI writing blog articles are usually from the editorial team. They somehow believe they're immune to being replaced by AI. And this stems from the misconception that AI content will always sound AI: soul-less, dry, boring, easy to spot and all that. This was true with ChatGPT-3.x. It's not anymore. In fact, the models have advanced so much that you will have a really hard time distinguishing between a real writer and an AI.

We actually tried this with a large Hollywood publisher in-house as a thought experiment. We asked some of the naysayers from the editorial + CXO team to sit in a room with us while we presented, on a large white screen, a comparison of two random articles - one written by AI (which, btw, wasn't trained; we just fed a couple of articles by the said writer - the ones on the slide - into the AI's context window), and another which was actually written by the writer themselves. Nobody in the room could tell which was AI and which wasn't. This is where we stand today. Many websites you read daily actually have so much AI in them, just that you can't tell anymore.
koakuma-chan•5h ago
Have you tried gptzero?
neya•4h ago
Yep, it is not able to recognize it. To be fair, it's not a just-dump-it-into-ChatGPT-and-copy-paste kind of AI. We feed it into the model in stages: we use 2-3 different models for content generation, and another 2 later on to smooth the tone. But all of these are just OTS models, not trained. For example, we do use Gemini in one of the flows.
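
Heavily simplified, the shape of the flow is something like the sketch below; the single SDK and model names are placeholders for illustration, since the real pipeline mixes vendors (Gemini in one stage) and stuffs the writer's past articles into the context window:

    # Sketch only: a staged pipeline - draft with one model, smooth the tone with another.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def run(model: str, system: str, user: str) -> str:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return response.choices[0].message.content

    writer_samples = "<a couple of the writer's past articles pasted here>"
    brief = "A 500-word piece on the studio's summer release slate."

    # Stage 1: draft in the writer's voice, using their samples as in-context examples.
    draft = run("gpt-4o", f"Write in the style of these samples:\n{writer_samples}", brief)

    # Stage 2: a second model smooths the tone without changing the substance.
    final = run("gpt-4o-mini", "Lightly edit for tone and flow; keep the meaning intact.", draft)
    print(final)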
A_D_E_P_T•4h ago
Counterpoint: GPT-4 and later variants, such as o3 and 4.5, have such a characteristic style that it's hard not to spot them.

Em dashes; "it's not just (x), it's (y)"; "underscoring (z)"; the limited number of ways it structures sentences and paragraphs; the way it likes to end things with an emphasized conclusion. I could go on all day.

DeepSeek is a little bit better at writing in a generic and uncharacteristic tone, but still... it's not good.

bpodgursky•3h ago
If you ask them to speak in a different voice, they will. It's only characteristic if the user has made no effort at all to mask that it is AI generated content.
whywhywhywhy•4h ago
> We asked some of the naysayers from the editorial + CXO team to sit in a room with us, while we presented on a large white screen - a comparison of two random articles - one written by AI, which btw wasn't trained

That's needlessly close to bullying as a way to try to prove your point.

neya•4h ago
> We asked some

Which part of this looks like bullying? It was opt-in. They attended the presentation because they were interested.

Powdering7082•5h ago
Did the reporter reach out to Anthropic for public comment on this? They cite a "source familiar" with some details about what the intended purpose was, but there's no mention of the why.
jasonthorsness•5h ago
Is there an archive anywhere? People can argue to no end based on some whimsical assumptions of what the blog was and why it was taken down, but it really comes down to the content. I have found even o3 cannot write high-quality articles on the topics I want to write about.
linkage•1h ago
Have you tried Perplexity's Discover feed? It's my go-to source of news these days. I don't know what model they use to generate content but it's really good.
jsemrau•4h ago
Up until a few weeks ago, my LinkedIn seemed to become better because of AI, but now it seems everything is lazy AI slop.

We meatbags are great pattern recognizers. Here is a list of my current triggers:

"The twist?",

"Then something remarkable happened",

That said, this is more of an indictment of the laziness of authors who don't provide clearer instructions on the style they need, so the app defaults to such patterns.

jsnider3•4h ago
We try things, sometimes they don't work.