Upsides of AI: I can ask it if my farts are caused by the celery I ate earlier
I've found that LLMs don't give good advice regarding diet. They just agree with whatever your hunch is.
ChatGPT agreed with my hopeful self that I got diarrhea from VR sickness rather than from my poor food handling, which is what it turned out to be.
I take your other points, but I can't see the connection there. I've heard that they increase electricity rates in many cases (poorly managed electric utilities that can't build out grid capacity without raising rates for everyone), but not that they're affecting housing.
On top of that there is grid congestion. The energy grid is currently at capacity; if you add a data center, that means you will not be able to connect 20 to 30 newly built homes to power. There are new homes right now waiting for a grid connection before people can move in.
Space. In the densest (non-microstate) country in Europe, a hyperscale data center could have been a neighborhood.
Last point, maybe not the strongest, is construction workers. While the workers building a data center are different from those building homes, it doesn't really help with the labor shortages in construction if the electricians are all busy building data centers.
If the Dutch government was a bit smarter, they would buy out the farmers and create a mega-campus for ASML, including housing for all those expats.
Edit: I stand corrected, last month ASML was granted permission to expand by 20,000 employees.
This is an insane regulation, and I wonder if it was passed by NIMBYs whose actual goal is to prevent the construction of housing near them.
The municipality bought the emissions rights from the farmers that held those 8 cows and the farmers then had to move/remove/slaughter 8 cows.
Welcome to The Netherlands.
I'm ashamed that we don't care more about human dignity. I care about human dignity and wonder if I'm an outlier. Even a tiny pledge and affirmation, "Hey, we see you, we are working to bring relief and guaranteed dignity to your lives by doing xyz," would help. Instead, when I ask for peace in war [edit: and basic income, anything that is an essential part of dignity] [edit 2: and I hear it's not possible right now, while that isn't said of AI investments], I hear unaccountable leadership dodging their responsibility [to their constituents] and accelerating conflict while their friends' pockets get thicker.
This was said with a straight face like “people love puppies!”.
No self awareness at all.
Also, looking at the current market situation, how many people would be willing to say to their bosses, or even publicly, that they think AI is quite a lot of bullshit?
My new favorite game at work is "guess if this person is really into AI or they just have to be because their boss is and if they weren't they would get replaced by someone who is" and it's quite hard to say.
And since the "boss" of CEOs is the investors in the stock market, and the stock market is automated to a ridiculous degree, is this AI pushing for itself?
Meanwhile, I saw a survey where only about a third of Gen Z and younger are pro-AI.
Of course the survey also said like 70%+ of them still used it.
You can tell that everyone loves chain buffet restaurants by going to Golden Corral and asking everybody if they are enjoying their meals.
I’m honestly baffled. What’s not to like?
It's an expensive route to mediocrity, which doesn't offer an edge in a market where everyone is using the same snake oil.
So now you're wrangling an "AI" system and you're doing most of the work you would have had to do anyway. ...And when you don't, it can get really embarrassing.
https://www.abajournal.com/news/article/elite-wall-street-la...
Not the first time, surely not the last. The problem is that so much money is tied up in this thing, and the moment the music stops the bag holders are going to be utterly doomed.
A magnet for scum: boosters on X, middle-management types, LinkedIn AI influencers, people making fake videos on Facebook.
At least crypto does not take away more jobs than it creates, whereas we all know AI takes away more jobs, and no one can give a solution or explain what the "new jobs" are.
Because the value of AI is to automate jobs away from humans. Claiming otherwise is intellectually dishonest. Same goes for defining "AGI".
Except sometimes when there's a huge black swan event, or when the bubble pops. Such events can result in significant layoffs even though it's a completely different mechanism.
Then again, the CEOs of these companies want their company to win at any cost to society.
In fact it's a very sad story about a 20-year-old throwing his life away instead of fighting for what he believes is right through non-violent activism and/or regulation.
Last year I wrote an article asking the very question "Who will be the next Luddites?"; National Geographic followed up months later. I'm sure many before, after, or in between covered the same topic. There is truth to it, we will be impacted, but let's not forget we went through this during the industrial revolution, and we should be better equipped than ever to fight using meaningful non-violent acts and operations.
https://www.linkedin.com/pulse/who-neo-luddites-more-importa...
http://nationalgeographic.com/history/article/luddite-indust...
I’m sick and tired of AI hatred without people facing the truth. People hate AI because AI is on a trajectory to replace them and become better than a human. That is the fundamental reality.
Look, don’t get angry at me. If you are on HN, chances are you’re most likely delusional and completely wrong about AI. The majority of HN called vibe coding useless and said LLMs have no potential. Now my company won’t even hire someone who hasn’t used Claude, and I haven’t touched a text editor or IDE in half a year. Same with the teeming hordes of experts on HN who said driverless cars will never come. All wrong. People on this site need to stop jumping on these bandwagons of stupidity and pointless blame games.
Can we talk about that rather than blame corporations for being what they’ve been since before AI? Yeah, corporations are psychopaths and corrupt and nobody cares. Same story till the end of time. We are on the cusp of a paradigm shift, and your skills as a programmer are about to be utterly trashed because an AI is on a trajectory to dominate your skills.
Face reality.
The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
The other two critiques are trickier. The first is the environmental impact of AI, and the response here is harder. Doing work to make AI more efficient, and continuing to develop cleaner energy sources, is paramount. Taxes and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. It has to be non-negotiable if massive energy usage for AI is to continue.
The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
I guess there is another issue, related to the last one: how do we deal with the ability to use AI to mislead and commit fraud at scale? How do we deal with not being able to tell what was actually said or done by a human and what is AI pretending to be human? How do we avoid and mitigate AI generating massive amounts of custom content used to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that certain things take a lot of effort and time, and they can now be done instantly, thousands of times over.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
* A job guarantee like we had during the Great Depression
* Lowering the retirement age
* Raising the minimum wage
* Expanding Medicare to everyone
It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby!
Ah, you're worried it can't do that? Maybe it is mostly smoke and mirrors then.
The extra steps reduce costs and encourage offsetting production. Those are important steps!
^ this would be an accurate representation of your opinion then?
One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
So without AI, the path forward is obvious: those 3 will become worse. Lowering retirement age, raising minimum wage, and expanding medicare won't happen without AI. They can't.
We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else.
> It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff.
So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does is provide capital-based labor. You buy a robot. The robot costs a lot, but operational expenses are marginal: energy and (maybe) "tokens". Add solar power, and let's say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot.
Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve.
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
Historically, that "we'll figure something out" has usually meant the economical wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd.
I'd prefer if history would not repeat itself for once.
24k puts you near poverty level. $1k per month will cover food expenses, it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are expensive.
I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up.
> I get where you're coming from.
You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and I would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible.
With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance since there is no way in hell we could afford to pay the deductible.
The natural progression of this is always government price fixing, which always ends up in complete destruction of the economy.
Telling a bunch of people they should accept being poorer has always worked out historically.
$12k might be nice in parts of Asia, but when the average rent is $1200/month, it doesn't go very far anywhere in the US.
Many of us see the current US administration as being either real life modern nazis or heavily influenced by such.
So I was wondering; are you being serious?
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech which means restarting my career. Getting a job right now is not easy in any field especially not in anything like a living wage. If I did not have my parents I would be on the streets right now, thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing.
It's rough out there.
For high levels of UBI it’s not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good unless you do math.
If we go back to a 60% corporate tax rate, for sure.
A 60% corporate tax rate wouldn’t get to the levels needed for UBI proposals either.
The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine etc. There are a lot of people who would enjoy doing these things but are too time poor to do so.
Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid.
If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly, the absolute wealthiest should reduce the most, but on a global scale it is hard to defend that those in the top 10% of incomes should retain their position.
The proposal for how much a universal income should pay is a variable to be argued itself. I can certainly see it being argued for at a lower level than ultimately desired since something is better than none.
In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population.
Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either.
Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there; it just needs to be appropriately redistributed.
- employ you at 60k/yr
- replace you with a machine that costs a lot of money, and also send you UBI of 60k/yr
It should be obvious the latter is not an option that is ever going to happen.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this.
And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content".
OpenAI: We’re allowed to steal everything to train our AI and you can’t complain
Developer: Ok, I’ll use your AI to train mine
OpenAI: NO NOT LIKE THAT, UNFAIR
Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
This is the "safety" messaging that OpenAI and Anthropic keep harping on and on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of billions of dollars per year already.
The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also: read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line.
I'm not being paranoid, President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.
Here in Australia, the cyberpunk dystopia is already being rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new pet AI system, tasked with making recommendations for hiring, promotion, and demotion.
Welcome to 1984, citizen.
If the Epstein class would go for something like this even in a world where they needed workers to produce, the idea that they won't when we are surplus to requirements is inconceivable.
We erase it and call out the ghouls “creating” that shit, simple. They deserve being called out for creating shit and poisoning our minds.
In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
UBI has been a major donor priority, at least on the left.
However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose.
I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI currently certainly aren't going to stop developing and pushing AI, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that the wealthy wouldn't need the masses for labor like they do now in the AI future, they still need to not be killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not that easy to do even with an army of AI robots, and would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible.
This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base?
Well-paid software developers have scoffed at or been ignorant of worker organizing for, what, maybe forever? "But I have a good paycheck and equity..." Now what?
A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it.
Everyone, including governments will have to adapt.
And to add insult to injury, everything comes from the US and it's really expensive.
Perhaps we can plug those people into an AI-generated simulation of life so they don't notice.
Making it more efficient will probably *increase* the total energy devoted to AI, not reduce it. See the Jevons paradox.
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
The very same CEOs are extremely against social support, any taxes on themselves, and any governmental agencies that help or protect people.
How can this possibly be easiest in the world of Thiel, Musk, Trump, Vance, and Palantir, with the Overton window moving toward the economically conservative for years?
1. Lack of memory/continuity
2. Lack of agency
3. Lack of self-awareness
Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:
4. Lack of compute
To get anywhere near AGI we need massive context windows. The whole thing is a mess.
Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?
> Do you realize that "memory" requires eating your hilariously small context window?
I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can and commercially available systems do load it into context on demand when it's needed for some problem or another.
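The "let me write this down" pattern can be sketched in a few lines (a hypothetical toy, not any vendor's actual implementation): notes persist outside the model, and only the ones relevant to the current query get loaded into the context window.

```python
# Toy sketch of "write it down" long-term memory for an LLM (hypothetical).
# Notes live outside the model; recall() picks the relevant ones to load
# into the (scarce) context window on demand.
notes: list[str] = []  # in practice: a database or vector index

def remember(text: str) -> None:
    """Persist a note outside the model."""
    notes.append(text)

def recall(query: str, limit: int = 3) -> list[str]:
    """Rank notes by crude word overlap with the query.
    Real systems would use embedding similarity instead."""
    q = set(query.lower().split())
    ranked = sorted(notes,
                    key=lambda n: len(q & set(n.lower().split())),
                    reverse=True)
    return ranked[:limit]

def build_prompt(user_msg: str) -> str:
    """Load only the recalled notes into the context, then the message."""
    context = "\n".join(recall(user_msg))
    return f"Relevant notes:\n{context}\n\nUser: {user_msg}"

remember("User prefers metric units.")
remember("Project deadline is Friday.")
print(build_prompt("When is the project deadline?"))
```

The point of the sketch is the shape, not the retrieval method: memory is an external store plus a policy for deciding what earns a slot in the context window.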
The LLM only currently has the illusion of these things. Hence the bubble.
I know that you (or anyone) as a human being don't have the illusion of these things.
This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.
Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say oh hey SpicyLemonZest, I was thinking and had an idea the other day. Because it has nothing between each query.
UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible.
We are allowed to regulate businesses. We simply don't.
You can't put things back in the bag. Perhaps the true underlying social problems are:
1. There's too many humans and not enough jobs.
2. The capitalist system only rewards profit seeking and cost externalization.
3. Our democratic representation myth is dead and buried.
4. Even in the developed world, middle-class security is gone.
So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal economic rationalism driven means of governance and self-organization without turning in to an autocracy or reinforcing negative nationalist bloc-level thinking that will tie us in to the same old human-thump-human stone age ape-ism and environmental cost externalization?
Perhaps AI can help in areas like improved education, improved media, proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability?
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real main critique concerns jobs where AI is a one-to-one replacement: your taxi driver, your dock worker, etc. I don't think UBI is a viable solution (I used to), but nothing replaces the community and status that a real job gives you. This is going to be a tough one.
How much UBI do you want from this AI tax?
I don’t think they’d give me what I want
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn't bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world.
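The division in the parent comment can be sketched explicitly (both inputs are the comment's rough assumptions, not official statistics):

```python
# Back-of-envelope check of the parent comment's UBI-from-AI-tax math.
# Both figures below are rough assumptions: ~$100B/year in total AI
# provider revenue, ~340M US residents.
annual_ai_revenue = 100e9   # dollars per year (assumption)
us_population = 340e6       # people (assumption)

# A 100% tax on that revenue, split evenly across the population:
per_person_year = annual_ai_revenue / us_population
per_person_month = per_person_year / 12
per_person_day = per_person_year / 365

print(f"${per_person_month:.2f}/month, ${per_person_day:.2f}/day")
# → $24.51/month, $0.81/day
```

So even confiscating all of today's estimated AI revenue yields on the order of 80 cents per person per day, which is the point the comment is making.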
The work that is most replaceable by AI is work that is mostly digital. That work most easily moves to another country.
When the work is replaced by AI you can relocate it to another country much more easily than when you have to relocate workers.
Data centres popping up near you probably mean higher electricity prices, poor air quality, and water problems.
Sam Altman is a massive penis, with a gift for saying the wrong thing at the wrong time.
The two things that link them are "rich" people imposing their will on everyone else, publicly.
Nothing at this point will make people believe AI is good for the masses.
What will need to happen for people to like AI? I say they will need to get real money, month after month, covering more than inflation, not the dumb tax deductions Trump harps on. In this case, maybe $1,000 per month from AI, adjusted yearly for inflation, would end this trend.
Why a payment? Because all they see is the wealth of the top 1% increasing almost exponentially while they struggle to pay their 'fixed' expenses.
In reality, since 2008 the rich have been cashing in while workers have been footing the bill. That is the big issue.
People, especially many SWEs, like generating with AI, or, more tellingly, wouldn't want to give it up in their work.
On the other hand, people generally hate consuming the product of gen AI.
Consumer experience = mostly negative
Producer experience = mostly positive
Gone is all the experience in clean code, good idioms, etc. All replaced by easily generated shitty code that can be removed and generated again as we please, until it works. No thought about the quality of code itself. Some companies are straight up forcing programmers to live in Claude Code and never even see the code, just write the spec.
It’s disgusting. And the worst part is that you can’t opt-out. If you give even the slightest hint that you don’t like AI you’re seen as a Luddite and you’ll be put next in line for the upcoming layoff.
(a) loss of fulfillment (b) lower quality of output and nobody will care so the world will just "degrade" and (c) a perceived lack of autonomy ("forcing", "you can't opt out") around how adoption itself is executed
Although, full disclosure: I have quibbled with Gemini quite a bit over the trailing comma, which clutters the diff, and buries the lede at code review.
But it's been very gratifying to refer to modules entirely by their role in a given design pattern (eg "driven adapter") and be understood. To define the idiom, and see it adhered to.
But am I operating still at too low a level? Would I be penalized, at these "some companies" for not producing shitty code?
Ah, but in my particularly forward-deployed line, there's always an element of showmanship compelling me to write demonstrable code.
But also, how can I specify the behavior if I can't name the component? Is it really possible to "vibe" code a sophisticated piece of software entirely from the user's domain terminology, without any intermediate abstractions in mind? Inconceivable, frankly. There are invisible walls, invisible shapes beneath the surface.
Then again, I'm young enough to have never allocated memory manually in my professional life.
So we found something much worse than crypto.
You can opt-out of crypto, but you cannot opt-out of AI and have no choice but to participate.
The people: ??
Investors: Tell us more.
“Mythos is too dangerous to release.”
“OpenAI offers a bounty if you can get ChatGPT to teach you how to do a bioterrorism.”
“Agentic agents will replace entire categories of jobs. They’ll just be like, gone”
This is all signaling to their customers; no not you on their $20/month plan, the governments and corporations of the world who have deep pockets, fat to trim, and borders to defend and expand.
It’s no surprise that people don’t like AI. It’s not for people.
Billions use Windows and Gmail but have a poor opinion of Microsoft and Google, both for obvious reasons. I expect the same will be true of AI platforms and the usual suspects behind them.
As we do this, we promise that if we set enough houses on fire, we'll build hell. And imagine how rich we'll be if we sell fuel to keep the hell we built running.
This is creative destruction in a whole new sense. Just chugging through genuine (or human) creativity, then training on human prompting, then finally ascending near the cluster of Anthropic/AWS nuclear power plants. And people pay for the pleasure.
The fact that AI acolytes are positively giddy about the above is just icing on the cake.
The situation might be different in the States, but I'd wager Joe Sixpack, bass fisherman in Montana, couldn't care less about GPT-5.5 or whatever Musk is up to these days.
I don’t think Montana fishermen have a broad impact on society, or its decision making. There’s just not that many of them.
When you use ChatGPT for yourself, you have some sense of how much of what you see is made up; when someone you trust uses it and presents the output as if it were their own, you're left doing much more complex social math to figure out whether your trust in that person or entity can hold. It gets exhausting, personally.
Anyone who was in AI before 2022 can tell you about the last cycle, which ran from 2012 to 2018 or so: the metaverse failed, but we got TensorFlow, PyTorch, and GPGPUs.
The cool thing is that every hype cycle generates a lot of really good new AI tech and integrations that persist. This time we got GPTs, diffusion, and Gaussian splatting.
I think this previous cycle will be seen as the penultimate one, with the next cycle improving permanently, with no scale-back.
We'll be fine. We have survived every winter.
It’s easy to fixate on the OpenAI- and Anthropic-level companies, but the real inescapable flood of AI garbage is coming from the downstream companies building on the core AI providers. Communities like HN have some role to play here. Maybe some peer pressure on AI founders to not make the world a worse place?
My wife was shocked to learn how much she liked Claude after these forced experiences with AI.
Of course normal people found this incredibly off putting.
But there are a lot of areas where AI is helping that people don't see, like in medicine. Drug development, cancer research and early detection, CT and MRI analysis, just to name a few. These use cases are vastly more important but rarely get discussed. It's important to know that AI isn't this one singular thing, or else we risk throwing the baby out with the bathwater.
AI is massively marketed by AI people as a tool to replace your job. So either the AI people are bad at marketing, or the gains in other industries are insignificant and don't generate shareholder value.
When AI produces those meaningful advances in those fields, great, we can start having meaningful discussions about them. The greatest medical advancement of the 21st century is likely mRNA, or maybe GLP-1 for some. Neither were LLM assisted in any meaningful way as far as I know (they predate ChatGPT, perhaps more primitive models were involved in ways I’m not familiar with). Until those advances come, this argument is fanfic.
Plus, in the most morbid way possible: who gives a shit about living longer if they are stripped of their career, inundated with slop from every angle, and unable to trust any information? These are real problems that AI has already created, unlike the fanfic of curing cancer.
A person having a negative attitude about AI doesn't mean that they wouldn't keep the parts that are mostly positive if they could.
What I really hate is agentic customer support, sales, etc. When you have to use them, you realize how stupid the workflows, tool calls, MCP, and all the garbage glued around them are. It's all there just to reduce costs, not churn.
PS: Ironically, I'm working on coding an "agentic platform" for the product suite and its backend services. I simply don't feel confident about the product I'm building, but I guess it's paying my bills for the moment.
I wish articles like this would at least acknowledge the massive adoption AI has among programmers. It's not comparable to stuff like helping you write the occasional email, which I presume is the baseline for most people outside tech. Making it sound like a minor tool that some people are still just experimenting with completely misses the impact it has already had on software development.
Adoption in particular is a useless metric. They are forced to adopt even if it's not really helping in their case, or if it does help but using it makes them miserable, like being forced to switch jobs from something you enjoy to something you find boring and tedious. And then there's the "expertise debt" that will have who knows what impact in the coming decades.
As it stands though, the whole "the public hates AI" narrative is about as credible as that phase from a decade ago when random tweets were used to justify whatever position the writer wanted.
Isn't this fundamentally what MBAs do with their time? Keep going with this analysis, because it goes much deeper... In my experience, BI is often a house of cards. A lot of times it's just narrative crafting, just like we're all encouraged to do when we write our resumes.
Can you embellish a story? Can you invent a convincing political narrative? As far as I can tell, that's the fundamental unit of US corporation.
I am not condoning violence, but claiming it is not a politically effective tactic is disingenuous. I get that columnists are trying to cover their asses, but still.
Violence is the reason slavery ended in the US. Violence brought us civil rights laws. Gay rights. Women's rights. Labor laws. Environmental protection laws.
Every right granted by default to white Christian gentlemen at the founding of this great nation had to be taken in blood by everyone else. That's just how America is.
When, where and how violence is justifiable is a different question, of course. But the premise that "Naturally, violence is never an answer, nor is it a politically effective tactic" is simply false. If violence were politically ineffective, authoritarian states wouldn't use so much of it.
And yet, as the will of the people is ignored to the benefit of but few, violence will become the answer.
2. flooding social media with obviously fake AI content
3. only billionaires benefiting from it and gloating about it.
Think back on a time when you and a teammate (or teammates) spent hours or days debating different technological or architectural options and their trade-offs. How much nuance and detail went into those discussions. We used to take pride in our ability to make careful and measured trade-offs. And yet with this tech, all of that is thrown out the window.
The only people who still look positively at AI are either the ones working on it or building something with it, or the ones profiting from it, kind of like crypto a few years ago. And just as crypto is now mostly associated with scams, I imagine something similar will soon be associated with AI.
Even tech people who are not directly in the AI industry hate AI, due to all the chip shortages and prices increasing across the hardware board, from gamers to sysadmins to hobbyists. I mean, a Raspberry Pi now costs almost as much as a fully fledged NUC did a few years ago.
Edit: to add, did AI improve the average person's life? Nope. When it wasn't increasing costs or tracking and violating their privacy, it flooded the internet with slop or frustrating, useless AI chat support. From the average person's perspective, it added nothing to their quality of life: it didn't make things cheaper, it didn't improve their travels, it didn't magically make them teleport, and so on. Instead, AI was used for all hostile purposes against the average person. Even from a technical perspective, have we seen any breakthrough in tech given that AI is a "superior" assistant? Nope. Software is shittier and buggier now, SaaS prices are even increasing (probably to pay for AI tokens), software developers are saying coding isn't fun anymore, hardware designs didn't improve, and government processes still have the same bureaucratic system, plus AI. Unlike when automation was introduced decades ago, when people did notice an improvement in their quality of life.
This is hugely generalized and a little offensive, but there is definitely a core difference that could be more thoroughly described.