This of course means that Freenow is now on the personal blacklist. People should not engage with companies who advertise with "AI" slop.
Never mind if the "things" are people or their lives!
If they didn't, we wouldn't be having these problems.
The problem isn't AI, it's how marketing has eaten everything.
So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."
You can't "build trust" if your primary motivation is to sell stuff to your contacts.
The SNR was already terrible long before AI arrived. All AI has done is automated an already terrible process, which has - ironically - broken it so badly that it no longer works.
Wired: "Build things society needs"
The people yearn for the casino. Gambling economy NOW! Vote kitku for president :)
PS. Please don't look at the stock market.
That is false. You build a different type of trust: people need to trust that when they buy something from you it is a good product that will do what they want. Maybe someone else is better, but not enough better to be worth the time they would need to spend evaluating that. Maybe someone else is cheaper, but you are still reasonably priced for the features you offer. They won't get fired for buying you, because you have so often been worthy of the trust they give you that in the rare case you do something wrong it reads as "nobody is perfect" rather than "you are no longer trustworthy" (you can only pull this trick off a few times before you become untrustworthy).
The above is very hard to achieve, and even when you have it, it is very easy to lose. If you are not yet there for someone, you still need to act like you are and don't want to lose it, even though they may never buy from you often enough to realize you are worth it.
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
Most ads are just manipulating me, but there are times I need the thing advertised if only I knew it was an option.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually, focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess
Larry Fink and The Money Owners.
Perhaps I am too optimistic...
The exact quote is: "I foresee the day where AI become so good at making a deep fake that the people who believed fake news as true will no longer think their fake news is true because they'll think their fake news was faked by AI."
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Wow. A new profile text for my Tinder account!
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is being reduced in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely to never be heard.
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forget who, but a historian was asked why they wouldn't cover civil war history, and responded with something to the effect of "there's no way to do serious work there because it's too political right now".
It’s also why things like calling your opponents dumb are so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc.), but if you signal "I don’t like you" they’re rightfully going to ignore you, because you’re signaling you’re unlikely to be trustworthy.
Trust is hard earned and easily lost.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush about Hurricane Katrina. The entire article was a glib expansion of an "expert" opinion from someone who was just a history teacher, who said that it was "worse than the Battle of Antietam" for America. No expertise in climate. No expertise in disaster response. No discussion of facts. "Area man says Bush sucks!" would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
Don’t get emotionally attached to content farms.
Regardless, clearly labeled opinions are standard practice in journalism. They're just not on the front page. If you saw that on the front page, then I'd need more context, because that is not common practice at NYT.
It’s simply reality, or else propaganda wouldn’t work so well.
Except those institutions have long lost all credibility themselves.
Wall Street, financier centric and biased in general. Very pro oligarchy.
The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.
Also, this is entirely hand-written ;)
hopefully soon we move on to judging content by its quality, not whether AI was used. Banning digital advertisement would also help align incentives against mass-producing slop (which has been happening since long before ChatGPT was released).
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
This is just how I write in the last few years
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me; I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?
How do you know that? Or is it just that your bias is that cowboys are bad, and so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job, and so you know how to inspect it to ensure it was done right. However, the average person doesn't have those skills, and so won't be able to tell the well-dressed person who does a bad job that looks good from the poorly dressed person who does a good job but doesn't look as good.
Our issue was water intrusion along a side wall that was flowing under our hardwoods, warping them and causing them to smell. The first contractor replaced the floor and added in an outside drain.
The drain didn't work, and the water kept intruding and the floor started to warp again.
When we got multiple highly rated contractors out, all of them explained that the drain wasn't installed correctly, that a passive drain couldn't prevent the problem at that location, and that the solution was to either add an actively pumped drain or replace the lower part of the wall with something waterproof. We ended up replacing that part of the wall, and that has fixed the issue along that wall. (We now have water intrusion somewhere else, sigh).
If anything, I was originally biased for the cowboy, as they came recommended, he and his workers were nice, and the other options seemed too expensive & drastic. Now I've learned my lesson, at least about these types of trickier housing issues.
Also, no one mentioned evaluating someone by how they're dressed - the issue was family/friend recommendations vs online reviews, and while I do take recommendations from friends and family into account, I've actually had better luck trusting online (local) reviews.
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
I'm using this
The modern software market seems like a total inversion of normal human bartering and trade relationships…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and associated mechanics, reputation and prestige). Now we’re negotiating an ongoing contracts for our everyday tools, it is totally bizarre.
Nit: that is not how it worked. You took your horse to the blacksmith and he (almost always he - blacksmiths benefit from testosterone, even if we ignore the rampant sexism) made shoes to fit. You knew it was good because the horse could still walk (if the blacksmith messes up, that puts a nail in the flesh instead of the hoof, and the horse won't walk for a few days while it heals). In 1600 he made the shoes right there for the horse; in 1800 he bought factory-made horseshoes and adjusted them. Either way, you never see the horseshoes until they are on the horse, and your only check is that the horse can still walk.
Well, no worries. If you subscribe to the post+ service I’ll fix it in a couple years, promise.
That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse: over-optimized sameness drowning out the few cues we used to rely on.
I follow even AI slop via reddit RSS.
I control however what comes in.
The last five times I've looked at something in case it was a legitimate user email it was AI promotion of someone just like in the article.
Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.
By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.
One thing about it, it's a very modern sort of dystopia!
But you can’t really even make the case to them anymore because like you said they can’t/won’t even read your email.
What mostly happens is they constantly provide free publicity to existing big players whose products they will cover for free and/or will do sponsored videos with.
The only real chance you have to be covered as a small player is to hope your users aggregate to the scale where they make a request often enough that it gets noticed and you get the magical blessing from above.
Not sure what my point is other than it kinda sucks. But it is what it is.
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse in the last few years, as people have been selling companies, not products, but AI will accelerate the race to the bottom. One of the things that AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more so).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
Made my day. So true.
I stopped accepting telephone calls before 2010. They still ring the phone.
How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.
I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.
Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.
But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?
You can't regress back to a being a kid just because the problems you face as an adult are too much to handle.
However this is resolved, it will not be anything like "before". Accept that fact up front.
Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.
If you try to “go back” you’ll just end up recreating the same structure but with different people in charge
Meet the New boss same as the old boss - biological humans cannot escape this state because it’s a limit of the species
The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
You're assuming they can be fixed.
> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".
And I refuse to examine my privilege - that's a brain rot narrative I won't be a part of.
I didn't grow up rich or even well-to-do. When all you have is two sticks, you find fun with the two sticks. You can come and take my sticks away from me, but you'll never take the joy I have, no matter what I have in front of me. Do I have more than two sticks now? Yes, but I care more about fun than what I have.
I don't know what the Holodomor is, but I'm going to go look it up and learn. Thank you for that. What do you think they did for fun?
One such example was call centers. In the 2000s, moving your call center to India was all the rage for cost cutting. The customer experience was terrible, and suddenly having a US-based call center (the thing companies had just abandoned) was now a feature.
I think we’ll see similar things with AI. Everyone will get flooded with AI slop. Folks will get annoyed and suddenly interacting with a real human or a real human writing original content will be a “feature” that folks flock to.
FINAL Financial hours of U.S.A. just before the 1929 crash
https://www.youtube.com/watch?v=dxiSOlvKUlA&t=1008s
The Volcker Shock: When the Fed Broke the Economy to Save the Dollar (1980)
https://www.youtube.com/watch?v=cTvgL2XtHsw
How Inflation Makes the Rich Richer
No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how do I implement a backend-for-frontend?
You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs etc.
A nightmare.
No sane developer would have made such a mess, between the time constraints and the bloat. You see and experience first-hand that the few gems are from the trenches, spelling mistakes and all.
Bloat for SEO, the mess for devs.
Growing on X is so simple I’m shocked it works.
100x comments a day
10x posts a day
15x DM’s a day
1x thread a day
1x email a day
This is how you grow your presence on X.
Even if having a presence matters, how can you actually say something meaningful if you post 10 times a day? There's no way (unless you just repeat yourself). Hopefully my algorithm's just gone weird, but sadly the people I used to follow have stopped posting.
We might be transitioning to a world where trust has value and is earned and stored in your reputation. Clickbait is a symptom of people valuing attention over trust. Clickbait spends a percentage of their reputation by trading it for attention.
In a world of many providers, most people have not heard of any particular individual provider. This means they have no reputation to lose, so their choice to act in a reputation losing manner is easy.
Beyond a certain scale, when everyone can play that game, we end up with the problem this article describes. The content is easy but vacuous. There are far more people vying for the same number of eyeballs now.
The solution is, I believe, earned trust. Curators select items from sources they trust. The ones that do a good job become trusted curators. In a sense HackerNews is a trusted curator. Reddit is one that is losing, or has lost, trust.
AI could probably take on some of the role of that curation. In the future perhaps more so. An AI can scan the sources of an article to see if the sources make the claims that the article says it makes. I doubt it can do so with sufficient accuracy to be useful right now, but I don't think that is too far off.
Perhaps the various fediverse reddit clones had the wrong idea. Maybe they should operate in a distributed fashion, where each node is a subreddit analogue run with its own ways of curation; then an upper level of curation can make a site out of the groups it trusts.
This makes a multi level trust mechanism. At each level there are no rules governing behaviour. If you violate the values of a higher layer, they lose trust in you. AI could run its own curation nodes. It might be good at it or it might be terrible, it doesn't really matter. If it is consistently good, it earns trust.
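The multi-level trust mechanism described above can be sketched in code. This is purely illustrative: the names (`Node`, `endorse`, `penalize`, `trust_in`), the multiplicative trust propagation, and the numbers are my assumptions, not anything from the comment itself.

```python
# Hypothetical sketch of a multi-level trust/curation mechanism.
# Assumption: trust in a distant source is the best chain of endorsements,
# with weights in [0, 1] multiplied along each hop.

class Node:
    """A curation node: a content source, or a curator of other nodes."""

    def __init__(self, name):
        self.name = name
        self.endorsements = {}  # node -> trust weight in [0, 1]

    def endorse(self, node, weight=1.0):
        self.endorsements[node] = max(0.0, min(1.0, weight))

    def penalize(self, node, factor=0.5):
        # Violating a higher layer's values costs trust at that layer.
        if node in self.endorsements:
            self.endorsements[node] *= factor

    def trust_in(self, target, depth=3):
        """Transitive trust: direct endorsement if one exists, otherwise
        the best weight-product path through endorsed curators."""
        if target in self.endorsements:
            return self.endorsements[target]
        if depth == 0:
            return 0.0
        return max(
            (w * node.trust_in(target, depth - 1)
             for node, w in self.endorsements.items()),
            default=0.0,
        )

reader = Node("reader")
hn = Node("HackerNews")   # a trusted curator, per the comment above
blog = Node("some blog")
reader.endorse(hn, 0.9)
hn.endorse(blog, 0.8)
print(round(reader.trust_in(blog), 2))  # 0.72 via the curator
hn.penalize(blog)  # the blog violates the curator's values
print(round(reader.trust_in(blog), 2))  # 0.36: trust lost at one layer
```

The point of the sketch is the layering: the reader never rates the blog directly, and there are no rules at any level, just endorsements that shrink when a lower layer misbehaves. An AI-run curation node would slot in as just another `Node` that earns or loses weight like any other.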
I don't mind there being lots of stuff, if I can still find the good stuff.
I predict a renaissance of meeting people in person.
I hope that will come to fruition.
Just kidding, that just goes into my RL trash can.
One small one I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years as long as the investors want it.