Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
1•tablets•4m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•7m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•9m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•9m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•10m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•15m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•21m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•22m ago•1 comment

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•27m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•29m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•35m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•39m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•39m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•43m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•44m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•46m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•48m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•51m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•52m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•54m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•55m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•57m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Be Worried

https://dlo.me/archives/2025/10/03/you-should-be-worried/
106•theli0nheart•4mo ago

Comments

codr7•4mo ago
Or stand together and demand the madness stop, rather than pretend there's nothing to be done about it, which could actually help improve the situation.

The people involved in making these decisions deserve to be locked up for life, and I'm sure they will be eventually.

joshgree8859•4mo ago
What new human madness has ever been stopped?
ttctciyf•4mo ago
At risk of Godwinisation, there's a very obvious example.
MaxfordAndSons•4mo ago
Wouldn't exactly call that a grassroots effort, though...
ryandrake•4mo ago
As we are recently seeing, it was only paused temporarily.
alaithea•4mo ago
Human cloning, nuclear bombs (other than for sabre rattling)... to name a couple.
graydot•4mo ago
Are there any grassroots(?) organizations with local chapters doing activism in the AI space, such as the FSF and ACLU? If not, it might be time for something like that, though with all the money flooding into LLMs (to say nothing of LLMs' manipulative power, should they put their minds to it), we probably don't stand a chance.
AstroBen•4mo ago
The genie's out of the bottle. I think it's better that everyone have access to it, and are fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities
jplusequalt•4mo ago
>The genie's out of the bottle. I think it's better that everyone have access to it, and are fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities

The majority of people only have access to proprietary models, whose weights and training are closed-source. The prospect of a populace that all outsource their thinking to Google's LLM is horrifying.

AstroBen•4mo ago
I agree.. but what's the solution there? Somehow enforce global regulation on it?
jplusequalt•4mo ago
Why would this be a bad thing? You're acting like technology isn't already widely regulated.
jplusequalt•4mo ago
Technological inevitability is a plague. There was a good article shared on HN about this the other day.
randallsquared•4mo ago
> distracting us from a scarier notion

A more immediate notion, perhaps, but definitely not scarier than human extinction.

LostMyLogin•4mo ago
It's my opinion that there is going to be a market for wrangling AI content from a consumer perspective, to help maintain human-to-human knowledge transfer. I just have no idea what that looks like.
Bolwin•4mo ago
Honestly, the biggest way in which LLMs have changed society is the desperate, almost pathetic way every business leader, career influencer, and advice guru insists that they must use AI, that you should "learn" AI, that AI is taking over.

Anyway, in terms of cultural change, I think the emerging image and video models will be a lot more disruptive. Text has been easy to fake for a while now, and barely gets people's attention anymore.

TrainedMonkey•4mo ago
I think there is a big difference from other fads here. Listing some from memory: SL, Cloud Computing, Web 3.0, NFC, Big Data, Blockchain, 3D printing, IoT, VR, the metaverse, NFTs.

If we plot all of these on a scale of how much they impacted the day-to-day experience of an average user, there is something highly unusual about AI. The slop is everywhere; every single person who interacts with digital media is affected. I don't really know what this means, but it is pretty unusual when compared with other fads.

highwaylights•4mo ago
The author needn't regret not publishing this two years ago; it's a thought that had occurred to pretty much everyone long before then. It's just not clear that anything can be done to stop the snowball from gathering speed.
trhway•4mo ago
We've had more than half a century of, for example, sci-fi literature describing that future, and over all those decades nobody was able to come up with even a half-good, plausible/feasible idea of how to deal with it. That suggests it is beyond human intelligence to stop that snowball. Personally, I read a lot of sci-fi in my youth and I'm prepared to accept such a fate, even happily working where I can to speed it up (the faster the changes, the faster the human evolution, or at least adaptation, and what could be more exciting than that?).

Current humans can't even deal with the very simple and obvious issue of global warming. Thus it seems very unreasonable to expect any effective dealing with significantly more complex issues. And so, if not evolution, then at least a very accelerated adaptation is in order.

roxolotl•4mo ago
I think it’s more that there’s no will to do anything about it. As a piece earlier this week pointed out, nothing about tech is genuinely inevitable[0]. There are humans making decisions to keep the snowball gathering speed.

0: https://deviantabstraction.com/2025/09/29/against-the-tech-i...

dabockster•4mo ago
> nothing about tech is genuinely inevitable

This reminds me of when everyone was saying that "everything on the internet is written in ink", especially during the height of social media in the 2010s. So imagine my surprise in the first half of the 2020s when tons of content started getting effectively deleted from the internet, either through actual deletion or through things like link rot. Heck, I literally just said "the height of social media"; even that has pulled back.

So yeah, remember that tech ultimately serves people. And it only happens so long as people are willing to enable it to happen.

rsync•4mo ago
I think you are mistaken.

I suspect almost all of that data still exists - it just isn’t readily available.

In the desperate end-game of this most recent round of “it’s shit, but what if we collected enough of it?” every last bit of human generated content will be resurrected.

nyantaro1•4mo ago
nice username
dabockster•4mo ago
That’s a fair point. But I’ve learned from my own life experience to factor in things like Hanlon’s Razor and the Dunning-Kruger Effect when it comes to technology these days, especially post-Covid tech.

In this case, while it’s totally possible for this sort of data to still exist somewhere, I think the chances of it surfacing again in any accessible format are slim, purely because of the overall stupidity of the system. Keeping data "alive" like that for decades is a skill in itself, one that seems to happen only in heavily subsidized, "perfect" economic times (at least to the outside observer). Once the going gets tough, there isn’t really any business value in saving the data, and it likely gets deleted.

dheera•4mo ago
Meanwhile, I do a lot of photography and haven't posted anything in the past 2 years on Instagram because the AI garbage and influencer garbage now gets more attention than real photos of places on Earth you can actually go to. It feels not worth my time to post anything, considering how much effort it takes to post, time posts, and find hashtag soup, because if you don't do all of that, the platform doesn't show your images to people anyway.
socalgal2•4mo ago
switch to another platform? flickr? 500px? Or did you just want the likes? I still post a curated set of my photos to flickr. All CC licensed FWIW. There's no AI/influencer stuff there.
dheera•4mo ago
Nobody I know looks at those platforms. 500px is filled with bots, Flickr is unknown to people under 30. If humans don't look at it, it's not worth my time either.

I want a platform that real humans, including some sizeable chunk of my social circle, look at, and is filled with real content.

agent531c•4mo ago
Agreed, once Instagram started favoring reels/video, I stopped posting my photography there.

I've been looking at using Photo.glass, but the subscription cost puts me off a bit emotionally, after years of being told by the tech oligarchs that 'social media is free'. Logically, though, I know that it theoretically attracts a higher caliber of photographers, ones willing to pay for entry and to support a new form of ad-free internet through that subscription, similar to the idea of paid search engines.

agent531c•4mo ago
I only just realized that the site is actually https://glass.photo/, not photo.glass. I can't edit the comment now, but the suggested domain in the original comment leads to some spam site. Don't recommend visiting it
CPLX•4mo ago
While I agree, I do think this way of looking at things is kind of insightful. I hadn't thought of it this way, and it really rings true:

> I find my fear to be kind of an ironic twist on what the Matrix foresaw—the AI apocalypse we really should be worried about is one in which humans live in the real world, but with thoughts and feelings generated solely by machines. The images we see, the words we read, all generated with the intent to control us. With improved VR, the next step (out of the real world) doesn’t seem very far away, either.

channel_t•4mo ago
Yes, as far as I can tell, infinite scroll plus 2010s-era social media recommendation algorithms alone have already decimated the wider human collective's ability to think for itself, and have subsequently eroded sane discourse and democratic norms in societies all across the globe.
fny•4mo ago
In the long run, the internet will be so riddled with trash that no one will trust it. Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers. Information provenance will be a massive market.

The return to that world will be very painful and chaotic however.

jplusequalt•4mo ago
>Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers

I think a large portion of the population actively distrust experts.

fny•4mo ago
Which is why I said "authorities they trust."
orthecreedence•4mo ago
I think this is happening already, no? People seem to have found their enclaves, each with its distinct thought leaders, and now follow that enclave and believe all others to be full of lies and deceit. The return to a central truth seems like a pipe dream. The concept of truth and fact has been fractured seemingly beyond repair. If it is repaired, I don't think it will happen over any medium controlled by profiteering corporations: they have a vested interest in the fracture. And yet all forms of modern communication fall under this umbrella. So I believe we are at an impasse.
Cerium•4mo ago
Yes - but the authority most people have decided to trust is the Algorithm or their favorite LLM.
skziishs•4mo ago
> The concept of truth and fact has been fractured seemingly beyond repair.

It’s always been this way. That you thought otherwise is just evidence of how good a central power was at controlling “the truth”.

Trust doesn’t scale. There are methods that work better than others, but it’s a very hard problem.

fullshark•4mo ago
Nah, they will just find the content to confirm their bias and not seek truth. This is essentially already the state of affairs for the internet.
bee_rider•4mo ago
Propaganda and lies can defeat all sorts of human constructs; they can cause people to destroy their institutions and governments. Eventually, however, they come into contact with reality and lose miserably.
znpy•4mo ago
Not really. In my opinion the current behaviour mainly plagues big and very large platforms/communities (think instagram, facebook, reddit).

I think this will create a push for going back to smaller “gated” communities: think phpbb forums from the early 2000s, maybe with invitation-only sign up (similar to lobste.rs, where somebody already in must invite you, and admins can track who-invited-who).

It would probably be a better experience overall.

StrandedKitty•4mo ago
You are describing Discord. Small, gated, heavily moderated by actual humans, and often invite-only communities on specific topics.
api•4mo ago
> Therefore, increasing proportions of people consuming text online will be unwittingly mind-controlled by LLMs and their handlers.

The "and their handlers" part is the part I find frightening. I would actually be less concerned if the AIs were autonomous.

Reminds me of a random podcast I heard once where someone was asked: "if you woke up in the middle of the night and saw either a random guy or a grey alien in your bedroom, which would scare you more?" The person being interviewed said the dude, and I 100% agree. AI as proxy for oligarchs is much scarier than autonomous alien AI.

kqr•4mo ago
I think the solution is to not aim to go online to "consume content". Instead, go online to learn new techniques and investigate well-reasoned opinions.

Generic "content" is that which fills out the space between the advertisements. That's never been good for you, whether written by humans or matrix multiplication.

Nashooo•4mo ago
Or seek out specific entertainment.
alaithea•4mo ago
Respectfully, I think you're missing the point that this is a societal rather than an individual concern. What will the average person's response to AI be? Probably to not recognize it, let alone spurn it. The cumulative effects of your neighbors, particularly the young ones who will grow up amidst this, or the old and gullible, being led along by computers over years is the thing you need to be more concerned about.
criley2•4mo ago
When I look at the state of how humans have manipulated each other, how the media is noxious propaganda, how businesses have perfected emotional and psychological manipulation of us to sell us crap and control our opinions, I don't think AI's influence is worse. In fact I think it's better. When I have a spicy political opinion, I can either go get validated in an echo chamber like reddit or newsmedia, or let ChatGPT tell me I'm a f'n idiot and spell out a much more rational take.

Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.

alaithea•4mo ago
ChatGPT has been shown to spend much more time validating people's poor ideas than it does refuting them, even in cases where specific guardrails have supposedly been implemented, such as to avoid encouraging self-harm. See recent articles about AI usage inducing god-complexes and psychoses, for instance[1]. Validation of the user giving the prompt is what it's designed to do, after all. AI seems to be objectively worse for humanity than what we've had before it.

[1]: https://www.psychologytoday.com/us/blog/urban-survival/20250...

criley2•4mo ago
Strongly disagree, and you've misread what you've linked. These cases are situations where people stay in one chat and post thousands and thousands of replies into a single context, diluting the system prompt and creating a fever dream of hallucination and psychosis. These are also rarely thinking and tool-calling models; they rely more on raw LLM generation instead of thinking and sourcing (cheap/free models versus high-powered, subscriber-only thinking models).

As we all know, the longer the context, the worse the reply. I strongly recommend you delete your context frequently and never stay in one chat.
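
The "fresh chat per question" habit the comment recommends can be sketched as a simple context-management pattern. This is a hedged illustration, not any vendor's API: `ask_fresh` and `fake_send` are hypothetical names, and the message-list shape is just the common role/content convention; no network call is made.

```python
# Sketch of starting a brand-new context for every question, so the
# system prompt is never diluted by thousands of earlier turns.

def ask_fresh(send, system_prompt: str, question: str) -> str:
    # Build a fresh message list each time instead of appending to an
    # ever-growing chat history.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]
    return send(messages)

def fake_send(messages):
    # Stand-in for a real model call so the sketch runs offline; it
    # only reports how much context it was given.
    return f"answered with {len(messages)} messages in context"

reply = ask_fresh(fake_send, "Be factual and cite sources.",
                  "Walk through the major arguments for and against this idea.")
```

The point of the pattern is that the context stays two messages long no matter how many questions you ask, instead of accumulating the long histories the comment blames for degraded replies.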

What I'm talking about is using fresh chat for questions about the world, often political questions. Grab statistics on something and walk through major arguments for and against an idea.

If you think ChatGPT is providing worse answers than X.com and reddit.com for political questions, quite frankly, you've never used it before.

Try it out. Go to reddit.com/r/politics and find a +5,000 comment about something, or go to x.com and find the latest elon conspiracy, and run it by ChatGPT 5-thinking-high.

I guarantee you ChatGPT will provide something far more intellectual, grounded, sourced and fair than what you're seeing elsewhere.

UtopiaPunk•4mo ago
Why would an LLM give you a more "rational take"? It's got access to a treasure trove of kooky ideas from Reddit, YouTube comments, various manifestos, etc etc. If you'd like to believe a terrible idea, an LLM can probably provide all of the most persuasive arguments.
criley2•4mo ago
Apologies, it sounds like you have no experience with modern models. Yes, you can push and push and push to get it to agree with all manner of things, but off-rip, on the first reply in a new context, it will provide extremely grounded and rational takes on politics. It's a night-and-day difference compared to your average Reddit comment or X post.

In my years of use and thousands and thousands of chats, I have literally never seen ChatGPT provide a radical answer to a political question without me forcing it, heavy-handedly, to do so.

bloppe•4mo ago
ChatQanon is coming
kqr•4mo ago
Sure, and there are people who stuff themselves full of fast food, alcohol, and/or cigarettes. I get that those things are different in that it is possible to levy vice taxes on them, but the primary defense is and will be education.

What we can do as technologists is establish clear norms around information junk food for our children and close acquaintances, and influence others to do the same.

It's not going to happen overnight -- as with many such things, I expect it'll take decades of mistakes followed by decades of repairing them. What we've learned from other such mistakes is that saying "feel bad about the dumb thing" ("be worried") is less effective than "here's a smart thing you can do instead".

drdaeman•4mo ago
I’m not sure education or awareness is a solution. It doesn’t hurt, of course, but I think the real issue is that we’re frequently feeling “low energy” (for lack of a better term), so entry barriers become important and least-effort options start to win (“just picking up a phone/tablet” easily wins here most of the time), even if we’re well aware that they’re not as rewarding.

I blame all the background stress and I think it’s a more important factor.

ambicapter•4mo ago
You can't control other people, and this article is mostly about the effect of AI on other people, whom you can't control.
jacknews•4mo ago
It should be very obvious that the 'infosphere' (media, culture, education, day-to-day interactions and environment) heavily influences people, and ends up controlling them by setting the 'normal' for society and its affordances.
theli0nheart•4mo ago
I like this take. Unfortunately most people don't have this level of self-control.
chipdjjd5•4mo ago
> investigate well-reasoned opinions

...generated by AI?

kqr•4mo ago
I have yet to find such a thing.
disambiguation•4mo ago
Your point is valid, but i disagree with the framing.

The issue is the blind trust that the internet is built on top of.

It was that initial good faith that made the internet a special thing: you could come online and discover all the weird, interesting things people were up to, and have conversations with real people that weren't possible in the real world.

At this point most platforms have figured out how to exploit and profit off the blind trust - but AI is threatening to annihilate it completely.

I'm not worried about the generic content, the worthwhile stuff is what's at risk.

tim333•4mo ago
>the blind trust that the internet is built on top of

I've always thought of it as kind of the opposite - the one information network that is pretty much unpoliced where anyone can publish any nonsense or lies they feel like. It works because people do have some ability to distinguish truth from lies and the open nature means people can publish truth too.

disambiguation•4mo ago
> people do have some ability to distinguish truth from lies

In a pre-AI era, this is a reasonable heuristic.

But I think everyone has their own internal barometer - how much trash are they willing to sift through for gold?

I'm concerned that AI will poison the well to such an extent that people stop trusting, and subsequently stop using, the web altogether. This isn't only a sad state of affairs for the average user, but ties into an existential risk for the business model of the entire tech industry.

tim333•4mo ago
The main trick of looking who or what the source is should still work.

I've actually found the LLMs quite good for debunking false facts. It's quite funny seeing Elon Musk interact with Grok when it points out his untruths.

drdaeman•4mo ago
I have an issue with "inherently superior ... by dopamine output" part. It's the foundation of the whole article/worry but it's not supported by anything (The Matrix quotes don't count), making the whole article hang on a dubious premise of impending doom that is not shown to exist in reality.
jackphilson•4mo ago
At the risk of being overly utilitarian here: it's not really an issue that people are manipulable, but that they are manipulated into doing the wrong things (consumerism, political divide-and-conquer strategies).

And rejecting manipulation from a deontological stance reduces agency and output for doing good in the real world.

Manipulation = campaigns = advertisements = psyops (all the same, different connotations)

zetanor•4mo ago
Television and the commercial Internet are optimized to consume as much life as possible so that part of the captured attention can be auctioned to advertisers and other propagandists for pennies a minute. Returning to doing the same thing but Certified With No AI™ doesn't substantially reduce the badness of the thing.
mbgerring•4mo ago
Before LLMs were mainstream, rationalists and EA types would come on Hacker News to convince people that worrying about how "weak" AI would be used was a waste of time, because the real problem was the risk of "strong" AI.

Those arguments looked incredibly weak and stupid when they were making them, and they look even stupider now.

And this isn't even their biggest error, which, in my opinion, was classifying AI as a bigger existential risk than climate change.

An entire generation of putatively intelligent people lost in their own nightmares, who, through their work, have given birth to chaos.

drcode•4mo ago
Weak AI is a problem, but isn't going to lead to 100% human extinction.

Human extinction won't happen until a couple of years later, with stronger AI (if it does happen, which I unfortunately think it will, if we remain on our current trajectory).

mbgerring•4mo ago
"This theoretical event that I just made up would lead to 100% human extinction"

Neat, go write science fiction.

Hundreds of billions of dollars are currently being lit on fire to deploy AI datacenters while there's an ecosystem destabilizing heat wave in the ocean. Climate change is a real, measurable, present threat to human civilization. "Strong AI" is something made up by a fan fiction author. Grow up.

drcode•4mo ago
It can't be true because it sounds like science fiction to you?

Everything about every part of AI in 2025 sounds exactly like science fiction in every way. We are essentially living, this very moment, in the exact world described in science fiction books, even though I wish we weren't.

Have you ever used an AI chatbot? How is that not exactly like something you'd find in science fiction?

mbgerring•4mo ago
The idea of “Strong AI” as an “existential risk” is based entirely on thought experiments popularized by a small, insular, drug-soaked fan fiction community. I am begging you to touch grass.
Exuma•4mo ago
No, I shall not "be worried." Unless you are some spineless blob of agencyless ooze... you should be able to parse reality in such a way that your day need not be filled with worrying.
lockedinsuburb•4mo ago
I find it funny that the ending is an "in summary"... was it AI generated?

The thing to ask yourself: does what I'm reading provide any value to me? If it does, then what difference does it make where it comes from?

kmoser•4mo ago
> was it AI generated?

You're absolutely right!

But seriously, if you don't know that it's incorrect information, it does make a difference. Knowing it was produced by AI at least gives you foreknowledge that it may include hallucinations.

theli0nheart•4mo ago
Ok, as the author I have to admit that's a hilarious observation. But no, I wrote this myself.
mullingitover•4mo ago
> Increasing numbers of people who consume content on the Internet will completely sacrifice their ability to think for themselves.

Bless the author's heart.

All the major social media apps have been doing machine-learning-driven getNext() for years now, well before LLMs were even a thing. The YouTube algorithm was doing this a decade ago. This isn't on the horizon; we've already drowned in it.

bearjaws•4mo ago
Watch a teenager for 3 hours just endlessly scrolling.

Most of the content is basically Idiocracy's "Ow my balls".

sodality2•4mo ago
Algorithm-chosen human-made content is on some level preferable to algorithm-chosen and created content, right?
drdaeman•4mo ago
That's what most people would say - but why do they say this?

As I understand it:

1. Because machine-generated content is not as good. Recent models are (IMHO) showing obvious and significant improvements over last year's SOTA tech, indicating that the field is still very green. As long as machine-generated content is distinguishable, as long as there are quirks in there that we easily notice, of course it'll be less preferable.

2. Our innate "our vs foreign" biases. I suspect that until something happens to our brains, we'll always tend to prefer "human" to "non-human", just like we prefer "our" products (for arbitrary definition of "our" that drastically varies across societies, cultures and individuals) to other products because we love mental binary partitioning.

mullingitover•4mo ago
Not always! I have definitely had some AI-generated songs ("BBL Drizzy" being a notorious example) that were stuck in my head for weeks. I think the music industry is at the greatest risk in the near term.
sodality2•4mo ago
BBL Drizzy seems to me like a case where the cultural zeitgeist was more important than the actual additions made by AI. The lyrics were human - King Willonius admitted as much - so wasn't it just Udio AI reading them out + sampled backing track? Then Metro Boomin remixed the far more popular version by sampling bits and pieces, and I think that his contributions were 100% transformative. There's no way BBL DRIZZY BPM 150.mp3 could have been made by an AI any time soon
AndrewKemendo•4mo ago
A few years ago I was on an airplane back from Asia and I saw for the first time somebody using both hands to scroll tiktok.

A woman in front of me had her phone cradled in both hands, with index and thumb from both hands on the screen - one hand was scrolling and swiping and the other one was tapping the like and other interaction buttons. It was at such a speed that she would seemingly look at two consecutive posts in 1 second and then be able to like or comment within an additional second.

It left me really shaken as to what the actual interaction experience is like if you’re trying to consume short form content but you’re only seeing the first second before you move on.

It explains a lot about how thumbnails, screenshots, and the beginnings of videos have evolved over time in order to basically punch you right in the face with what they want you to know.

It’s really quite shocking the extent to which we’re at the lowest possible common denominator for attention and interaction.

lofaszvanitt•4mo ago
And they do a terrible job, at least on the surface, with what's being fed to humans.
rootnod3•4mo ago
Even as horrible as the current state of that already is, there is a difference between letting AI pick the next video in line and having the next video be DONE by AI.
bambax•4mo ago
Most comments seem to agree with the article, and I don't quite understand why.

People have been manipulated since forever, and coerced before that. You used to be burned or hanged if your opinions differed even a little from orthodoxy (and orthodoxy could change in a span of a couple of years!)

AI slop is mostly noise. It doesn't manipulate, it makes thinking a little more difficult. But so did TV.

elAhmo•4mo ago
This is not really comparable with TV, not even close.

There was/is a relatively small number of channels you have access to, and effectively all your neighbours and friends see the same content.

Short form video took this to the extreme by figuring out what specific content you like and feeding you just that - as a result, people spend significantly more time watching TikTok and YouTube than they (or the previous generation) did with TV. TV was also often on in the background, not actively watched, which is not the case on the internet.

Now, once you put AI generated content there combined with AI recommendation systems, this problem becomes even worse - more content, faster feedback loop, infinite amount of "creators" tailored to what your sweet spot is.

stellalo•4mo ago
> AI slop is mostly noise. It doesn't manipulate

Not until you start mass-producing fake photos, fake videos, fake audios, put all of it into social media, shake shake shake.

SoftTalker•4mo ago
> What am I personally going to do about this? Well, to start, I’m going to start taking content way less seriously unless it was created before 2022

There's an old fable about this, The Boy Who Cried "Wolf" about people adapting to false claims. They just discount the source, which is what is going to happen with social media once it is dominated by AI slop. Nobody will find it worth anything anymore, and the empires will melt down. I'm not on any of the big social sites, but I'm already watching a lot less on YouTube, basically only watching channels that I know to be real people. My other recommendations are mostly AI garbage now, outside of that.

keiferski•4mo ago
Because you and I will not be able to tell whether something is machine- or human-generated, and the machine-generated stuff will get more clicks than the human-generated stuff, it's likely that the majority of popular online content (and even printed content post-2023) will have been created by AI (and perhaps solely by AI).

Sorry, but when you make claims like this, it just tells me that you are not very familiar with popular culture. Most people hate AI content and at best find it a meme-esque joke. And young people increasingly get their news from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI) as possible in order to get followers. Platforms like YouTube do not benefit from their library being entirely composed of AI slop, and so will be implementing ways to filter AI content from "real people" content.

Ultimately AI tools are mostly going to be useful in situations where the author doesn't matter: sports scores, stock headlines, etc. Everything else will likely be out-competed by actual humans being human.

viraptor•4mo ago
> Most people hate AI content

I think you're overgeneralising here. People don't hate AI content, just content of such low quality that they recognise it as AI. That's not universal, and recognition rates will drop further: https://journals.sagepub.com/doi/10.1177/09567976231207095

> from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI)

AI content can be just as unique. It's not all-or-nothing. People can inject a specific style and direction into otherwise generated content to keep it on brand.

keiferski•4mo ago
Any attempt to create an AI “influencer” has been met with massive backlash.

At best you’re going to get some generically anonymous bot pretending to be human, that has limited reach because they don’t actually exist in the real world. Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.

I just don’t really see what scenario the doomsayers are imagining here. An entire media sphere of AIs that somehow shift public opinion without existing or interacting with the real media world? The practicalities don’t make sense.

AstroBen•4mo ago
> Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.

Have you not been following how fast video generation is improving? We're not far off convincing fake video interviews.

The backlash only happens when people can tell it's AI.

keiferski•4mo ago
Again, I don’t really see what scenario here is actually some kind of doomsday.

So someone makes a fake video of X famous person saying an absurd thing on Joe Rogan’s podcast.

It’s not on the official Rogan account, but just on some low quality AI slop channel. Maybe it fools a handful of people…but who cares? People are already pretty trained to be skeptical of video.

I think we’ll mostly just see a focus on identity verification. If the content isn’t verified as being by the real person, it’ll just be treated as mindless entertainment.

itsnowandnever•4mo ago
I think humans having a platform to tell the masses to "be worried" is as troublesome these days as AI content. Mass media that can be manipulated has been around for 100 years. I don't think AI is unique.
narag•4mo ago
Indeed. Totalitarians of the past century didn't need any AI to control the masses and cause more than 100M deaths. And those ideologies are far from dead.
theli0nheart•4mo ago
What's different is the available leverage.

It's fairly trivial to write code that can autogenerate hundreds or even thousands of AI-generated videos using Veo 3 with individual characters to push any narrative you'd like and push to Instagram or TikTok.

That's way scarier to me than a newspaper having a bias, or someone with an audience publishing a controversial blog post.

zahlman•4mo ago
> Best-in-class AI detection is barely better than random chance and will only get worse

Really? Because I still see blatantly obvious AI-generated results in web searches all the time.

viraptor•4mo ago
That's content where the author didn't care about masking. Don't mistake "there's lots of AI content that's easy to identify" for "AI content is always easy to identify".
zahlman•4mo ago
Sure, but the average level of effort out there is so low that I'm not worried about losing my ability to notice this content instinctively any time soon. That's kinda why these tools are popular in the first place, after all.

(Also, a lot of AI operators come across like they wouldn't be capable of fixing those issues even if they cared.)

zahlman•4mo ago
I'm not really sure that the title filter shortening "You Should Be Worried" to just "Be Worried" is making it any less clickbaity....
SirensOfTitan•4mo ago
LLMs are the latest progression in decades of technology and social changes that leave people less connected and less capable in exchange for more comfort. I think it's likely that AI technology eclipses humans at least partially by atrophying our own skills and abilities, particularly 1. our ability to endure discomfort in service of a goal and 2. our capacities to make decisions.

I don't really know what to do about it, even with ground rules of engagement, we all still need to participate in a larger culture where it seems like it's a runaway guarantee that LLMs erode more critical skills that leave us with less and a handful of companies who develop this tech with more.

I'm slowly changing my life around what LLMs tell me, but not necessarily in the ways you'd expect:

1. I have a very simple set of rules of engagement for LLMs. For work, I don't let LLMs write code, and I won't let myself touch an LLM before suffering on a project for an hour at least.

2. I am an experienced meditator with a lot of experience in the Buddhist tradition. I've dusted off my Christian roots and started exploring these ideas with new eyes, partially through a James Hillman-esque / Rob Burbea Soulmaking Dharma lens. I've found a lot of meaning in personal fabrication and myth, and my primary practice now is Centering Prayer.

3. I've been working for a little while on a personal edu-tech idea with the goal of using LLM tech as an auxiliary tech to help people re-develop lost metacognitive skills and not use LLMs as a crutch. I don't know if this will ever see the light of day, it is currently more of a research project than anything, and it has a certain kind of iconoclastic frame like Piotr Wozniak's around what education is and what it should look like.

axelpacheco•4mo ago
If the optimizing function is engagement, it wouldn't be too different from what we're doing now. It's just what humans want, isn't it?
sublinear•4mo ago
> In summary: [...] Therefore, increasing proportions of people consuming text online will be unwittingly mind-controlled by LLMs and their handlers.

The article never actually backs this up.

jacknews•4mo ago
"intelligence is not necessary to wield power."

Great quote, which should be obvious when we look at our 'leaders', especially in recent history.