I used to be able to curate my feed and pay attention to people I knew. I could see the photo projects my friends were working on, hear about life updates from past acquaintances, or reach out if someone was having a rough time.
Now, all cohorts, from recent coworkers to childhood friends I made before the first web browser, routinely spew paragraphs of LLM slop to shill a career coaching podcast. It’s Invasion of the Body Snatchers.
Social networks are only a small part of it. It’s email, mailing lists, billboards, sheets of paper stapled to utility poles, newspaper articles, dentist office phone lines, jigsaw puzzles, home furnishings, homework assigned to grade schoolers, birthday cards and on and on.
(I'm going to guess you mean generative AI such as image/video/text generation used to create slop on Facebook, but I really wish posts like this would clarify.)
> Everyone seems to have their own personal definition of acceptable AI use. If you Vibecode an entire app, it's because you are lazy and unskilled. But use AI for code review and writing tests? You are smart and efficient.
> You could use AI to remove photo backgrounds or clean up artifacts, that's just good editing. But generating an image for your blog post? You are stealing from hardworking artists. You are a fraud! You probably use AI as a writing assistant like a monster. But using it to generate documentation from your code is indispensable.
I was making a point that saying "I hate AI" is intellectually lazy. The discourse here can be a whole lot better if people put more effort into clarity.
I want Hacker News to be a better place for technically sophisticated conversations than most of Reddit.
I love all computer technology except printers.
Gimme more - looking forward to further leaps in AI and LLMs - the party has just started.
Am I the only one that wants to print on dot-matrix printers again? Maybe find a copy of The Print Shop (Broderbund). It could just be nostalgia kicking in.
Before AI, when someone showed you a presentation or an Excel sheet, even if it was complete horseshit that they had made up, they knew what was in it: they knew more about it than you, by definition.
Now, not so much; people output things they know nothing about, and when they show it to you they are discovering it just as you are.
This is novel, and discomforting.
> CRC (Cyclic Redundancy Check) errors on Wi-Fi indicate that data frames were corrupted during transmission, often caused by high electromagnetic interference (EMI), physical layer issues, or faulty hardware. They cause packet loss, slow speeds, and intermittent connectivity. Common solutions include replacing cables, reducing interference, updating drivers, and adjusting radio power
This is all well and good except: read that answer carefully. It never actually says what a CRC is or how it works. This is the average AI user: working on, building, and fixing things without the slightest clue about what it is they're actually working on.
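For the curious, the missing piece is small: a CRC is just a checksum the sender computes over the frame and appends, and the receiver recomputes it to detect corruption. A rough sketch in Python using the standard library's zlib.crc32 (the frame bytes here are made up):

    import zlib

    frame = b"some 802.11 payload bytes"
    crc = zlib.crc32(frame)                      # sender computes and appends this

    corrupted = bytearray(frame)
    corrupted[3] ^= 0x01                         # one bit flipped by interference

    # receiver recomputes the checksum; a mismatch is a "CRC error"
    print(zlib.crc32(frame) == crc)              # True  -> frame accepted
    print(zlib.crc32(bytes(corrupted)) == crc)   # False -> frame dropped and retransmitted

Wi-Fi's frame check sequence is a 32-bit CRC of this flavor, which is why interference shows up as CRC error counters on the interface.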
He makes >6 figures lol
Honestly, quite a tragedy for many. I myself have to constantly fight against this to slow myself down.
Most Americans are using AI but are sick and tired of hearing about it (2 points, 12 days ago) https://news.ycombinator.com/item?id=47717956
I am definitely missing the pre-AI writing era (322 points, 23 days ago, 240 comments) https://news.ycombinator.com/item?id=47571279
I am leaving the AI party after one drink (121 points, 25 days ago, 130 comments) https://news.ycombinator.com/item?id=47545030
Is anybody else bored of talking about AI? (746 points, 28 days ago, 527 comments) https://news.ycombinator.com/item?id=47508745
I'm Sick of This AI Shit [video] (48 points, 2 months ago, 22 comments) https://news.ycombinator.com/item?id=47086392
Ask HN: AI Depression (56 points, 2 months ago, 28 comments) https://news.ycombinator.com/item?id=47001833
I am just sooo sick of AI prediction content, let's kill it already (74 points, 5 months ago, 71 comments) https://news.ycombinator.com/item?id=45982542
'Attention is all you need' coauthor says he's 'sick' of transformers (432 points, 6 months ago, 224 comments) https://news.ycombinator.com/item?id=45690840
Ask HN: Is anyone else sick of AI splattered code (89 points, 7 months ago, 84 comments) https://news.ycombinator.com/item?id=45278819
Watching the GPT image-generation release video reinforced my skepticism that society will adapt. Then I thought about AI analysis of people's movements in public and realized that governments already capture everything, and now they will be able to use infinite AI surveillance agents to watch all things all the time.
Any disobedience or crime (but really only against the government and gentry) can be instantly investigated by asking AI to analyze the behavior of all people and vehicles in the days prior to and after the incident. That's if they can't identify you immediately at the time of the crime.
When the time comes that civilian disorder is required to change the behavior of government, it will be impossible.
AI is the destruction of individual freedom. It is the destruction of citizens' ability to rebel against power.
We would be far better off without it.
[1] https://inv.thepixora.com/watch?v=CLo3e1Pak-Y
youtube version: https://youtube.com/watch?v=CLo3e1Pak-Y
I am an experienced developer, and if I know what I am doing, AI tools are like an average junior programmer that I can beckon.
I have also dabbled in music creation with AI, first generating the lyrics and then the music with vocals. Is it good? Nope. Is it average? Some might say so. Is it a great use of my time? Sure. Like a paid video game.
This all started with Zuck's obsession with virtual avatars, and you can really see this in VR.
Couple that with the frequent press about “AI is going to replace your job”, and the public image problem is pretty bad.
AI replacing jobs is a good thing. I don't know why people want to keep doing stupid jobs that even a machine can do. As for their income, they should vote for politicians that give them benefits rather than take them away.
I am sure there were people for a few decades that complained that horses were no longer in demand, but these people are now extinct, and the AI doomers will be too.
I do not dislike AI. It has potential to change and improve the human condition. With that being said, it has its downsides with workforce displacement being at the top of the list, for me at least. Unemployment, however, has been prevalent in the US for many decades, mostly due to political maneuvering of previous politicians. AI has just made things a bit more difficult for the workforce, especially the recent generations who were already dealing with unemployment due to unmarketable degrees from colleges. I am not ashamed to say that, though I've been in tech for years, I am one of those statistics, unfortunately.
To fix this, AI companies should refocus their goals to account for the displacement of human roles as they continue to improve AIs. They should start doing that sooner rather than later.
The reality is that AI already does things better than some humans ever could. From what some individuals have been telling me, in education, for example, AI is already disrupting the classrooms. Teachers are feeling the AI-burn in the already declining education sector.
Though I see a decline in human creativity and influence due to AI, I myself have used it to learn certain OS-related concepts and tweaks that would normally have taken me months to figure out had I relied solely on Google searches, Reddit threads, and the like.
If I could do more, I would, but I am limited by the lack of better, more powerful hardware, with prices being what they are.
I've adopted the tools because they're useful, but businesses need to chill. AI seems to amplify existing bottlenecks within organisations, so we should probably tread carefully when it comes to pushing the tech. Fix the organisational problems first and hedge our bets.
I wonder if anyone reading this was around during the dot-com bubble because maybe it felt the same...
I moved to a dot-com right at the tail end of it. We built a pretty decent startup from scratch within the first two months and debuted at one of the largest trade shows in the world. We had our own private label factoring credit card and we did credit card transactions over the internet and with handheld cellular devices. It was built to scale, colocated, and we were getting customers. When the floor dropped out it was done in less than two months. dot-com was a very negative thing for a while after that.
It was watching all the potential being squandered and the internet basically being relegated to click farming and selling people crap they don't need.
All the really cool stuff seems to have died with the bubble...
Apparently "on-device AI models" are a thing. And are downloaded separately after the install of Chrome.
Deeply frustrating on a mobile connection.
Oh and messages https://news.ycombinator.com/item?id=6090712
AI is the same, but amplified and affecting a lot more people.
So I just recall Web 2.0 era and know that this too, shall pass.
And it's only going to get worse. Is this what getting old feels like? Hating everything the rest of society is racing to embrace? I keep waiting for the backlash, for people to get sick of the plastic sheen on everything, but the conveyor belt just keeps moving. Maybe I'm just turning into my parents griping about all the weird music videos on MTV? =P
The linear function doesn't work any more: we'll all deal with AI on some level. Handwritten programs will be like assembly programs; there will be some, but not many.
But everyone is currently focused on the second derivative - using AI to further AI itself. That's a valid goal, but not in and of itself; AI is just a tool, and a tool that gets better is still a tool. It still needs to build something other than itself.
The first derivative is where the money is. Let me grab this tool and do something useful or fun with it. Thanks for the fierce competition to build me the best tool in the meantime.
Like adding erosion to this hydrology simulator that I felt was too complex a few years ago: https://aperocky.com/hydrosim
Here's to reading HN projected through the lens of manga comic strips sometime after we solve the GPU shortage..
But an AI tool in the hands of professionals who care about what they produce is becoming revolutionary. We are doing things we would never have done. Projects I never would even have started, I am now doing with new enthusiasm. The people I work with and I are using agents to learn new topics fast. AI makes mistakes all the time; I found myself getting gaslit last week into believing that refreshing my auth token would update my permissions (authentication and authorization are not the same thing).
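A minimal sketch of the distinction that tripped the model up (hypothetical token format; whether a refresh actually re-evaluates roles depends on the identity provider): authentication establishes who you are, while authorization is whatever was baked into the token when it was issued.

    import time

    def issue_token(user, roles):
        # authorization data (the roles) is fixed when the token is issued
        return {"sub": user, "roles": roles, "exp": time.time() + 3600}

    def refresh(token):
        # a refresh only extends the lifetime; it does not re-evaluate roles here
        return {**token, "exp": time.time() + 3600}

    token = issue_token("alice", ["viewer"])
    token = refresh(token)
    print(token["roles"])  # still ['viewer']; new permissions need a re-issued token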
If you are just looking at the output in images and garbage posts, then yes, it is an abomination that must be stopped. But I cannot imagine a world without it now. And for the better.
I'm a person who loves learning but I don't really understand this claim. My brain quickly reaches a saturation point when learning new topics. I need to leave and come back multiple times until I begin to understand, but this seems to me to be a normal part of the process. It's the struggle that forms the connections in my brain.
Being spoon-fed information isn't the same as learning, to me. Are you also using AI to test you on your new knowledge? Does it administer these tests periodically? Or are you just reviewing notes and saying to yourself "I know this now"?
How are you ensuring you've learned anything at all?
Cramming often feels more satisfying, more like you're learning, but actually leads to worse retention. Spaced repetition that includes the struggle of recalling something just at the edge of being forgotten, on the other hand, feels worse but leads to much higher retention.
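To make the scheduling idea concrete, a toy sketch (the one-day reset and 2.5x multiplier are invented for illustration, not any particular algorithm):

    def next_interval(days, recalled):
        if not recalled:
            return 1.0        # forgot it: see the card again tomorrow
        return days * 2.5     # recalled at the edge of forgetting: wait much longer

    interval = 1.0
    for recalled in [True, True, False, True, True]:
        interval = next_interval(interval, recalled)
        print(f"next review in {interval:g} days")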
It's like it distills it for you. I feel like you're thinking of an example like trying to learn operating systems by reading Wikipedia articles (i.e. it gives you a high-level summary but nothing more).
The way I see it, code says a lot, but it takes time to scroll through it and cmd+click back and forth. But if you just ask the AI "where's x thing happening around this file" it will just point you right to it. So I feel like less cognitive energy is spent dealing with the syntactic quirks of code and more is spent on the essential algorithmic task.
I don't really like using it to summarize natural language written by one author or group, like a paper for example, that just feels like laziness to me.
Personally I find AI great, and where I can, everything is AI-enhanced:
- Coding / Software dev (obvious one)
- Health... been super useful as I recently had a thyroidectomy; it's given me a lot of information the doctors didn't, and it also spotted a mistake my doctor made regarding post-surgery symptoms. I maintain my own set of .md files documenting all medical things now.
- Shopping. Super useful, though it still has a way to go; relative to Google I tend to use the AI results more often.
- Random problems... Insanely useful!
- Fact Checking. Pretty good for the most part, but you have to fact-check your fact checking.
- Market Research, surprisingly good
- Philosophy, really good and useful
So basically anything.
There’s something uninspiring about a machine that’s supposed to “do the hard things for you”, so to speak. I like using my mind and understanding things deeply.
Sure you could say that “managing the AI” can be deeply understood in a way but it’s just not exciting.
Your parents could afford a house, have kids, etc., at a far younger age, but now you are single with no kids, choosing between food, rent, and power. You spin the wheel! Lucky! You get to eat.
Progress!
- Write a new top 40 song no talent required
- Write a business email, a school paper, etc & no talent required
- Design a logo, a website, an app, a billboard, etc and no talent required.
AI is the best thing to happen to humanity, as it mimics and steals humanity for a few pie holes, and we, the majority, do nothing to stop it!
> do the hard things for you
The only people advocating for that are the same kind of people who were pitching the cloud as a solution for your hosting needs.
IME the sweet spot for development with LLMs is to figure out what you need to do and then do that through AI. Yes, it'll still make some decisions there, but did you really get satisfaction before from decisions like what to call a class? At least I didn't.
You can of course try to offload everything to the LLM and not tell it what to do, only specify what it should enable (spec-driven), but at that point you're gambling on whether the output will work and the project becomes unmaintainable - which may be fine too in certain scenarios; it's just pretty rare in a business context.
I know that AI has some different characteristics than those technologies, but my point is that I don’t think your issue is that does the hard things for you… there has to be something else going on.
Even the top post on HN about ChatGPT’s image generation is full of a bunch of comments just saying “wow this is epic”, “I can make so many mangas with this”, etc. Or a post about a new model where people are saying bland stuff like “this doesn’t write Typescript as well as Nut43-2.1-Max”. Compare those to a post about language design, for instance, and you’d see a lot more interesting discussion and opinions.
Just my opinion though. It seems like the more interesting topics in AI are related to its divisiveness, and even that is getting super old after years of it going on.
Step 2: insert reference to AI
I remember being frustrated at every company claiming to be "innovative" in a past job search.
PaulHoule•2h ago
Most of all I am sick of people being sick of it!
PaulHoule•1h ago
(2) Worked for a startup trying to teach RNNs to read clinical notes; people typecast me as the idealist but I would have preferred the cynical business plan of a product for medical offices to "rebill" insurance to maximize revenue, like the value is clear and nobody dies if it screws up.
(3) Worked at another startup that was training CNNs to read all sorts of documents and datasets you see in corporate environments. That summer I had a methodology I called "predictive evaluation" and a sheaf of notes proving that the variations of the system we had weren't really going to work (though they did get it to work well enough for at least one customer). Then there was that meeting when we talked about BERT and I said "that seems to avoid all my objections," but the team was through with developing new models, and my methodology would have underestimated what BERT could do because it didn't give credit for getting the right answer by the wrong method! It turned out transformers also fixed the problems those RNNs had.