My flavour of autism gives me a weakness in communication, and AI spits out better writing than I do. Personally, I love that it helps me. Autism is a disability, and AI helps me through it.
Imagine, however, that you're an expert in communication; then this is a new competitor that's unbeatable.
I don’t have much of a prediction about whether LLMs will conquer AGI or other hyped summits. But I’m nearly 100% certain development tooling will be mostly AI-driven in 5 years.
I'm being heavily downvoted, so it would seem people disagree with what I posted. I did preface it by saying that it's a disability for me.
From my pov, AI is amazingly helpful to me.
Also, HN loves to hate things; remember the welcome Dropbox got in 2007?
Any result comes at very high relative cost in terms of computing time and energy consumed.
AI is the polar opposite of traditional logic based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.
There are valid use cases for current AI, but it is not a universal replacement for the logic-based programming that we all know and love --- not even close. Suggesting otherwise smacks of snake oil and hype.
Legal liability for AI pronouncements is another ongoing concern that remains to be fully addressed in the courts. One example: an AI chatbot accused a pro basketball player of vandalism because of references to him "throwing bricks" during play.
In other words: Instead of buying a simple hammer for nailing a plank, all the marketing is about buying a bulldozer in a foreign country that they will ship to you, and in the process of using it for hammering the nail, you destroy the whole house.
I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.
I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.
1) A keyword to game out investment capital from investors
2) A crutch for developers who should probably be replaced by AI
I do believe there is some utility and value behind AI, but it's still so primitive that it's little more than a smarter auto-complete.
Is it 10x smarter than auto-complete on your iPhone or 10000x smarter?
It's a mixed bag, because it often provides plausible but incorrect completions.
Is it totally useless or is it the greatest thing ever? If neither, where in the middle do you put it?
How often does it provide plausible but incorrect completions? Is it every few minutes or is it a couple times a day?
This is my biggest issue with the AI complainers on here. It's always the broadest and most vague complaints. I'd rather somebody just say, "You know what I just don't like AI" rather than try to convince me it's bad through vagueness.
For code completion, I think it's close to useless in my experience; traditional code completion feels much more useful.
> How often does it provide plausible but incorrect completions? Is it every few minutes or is it a couple times a day?
It varies with the workload, but it's closer to every few minutes than a couple of times a day. For example, while writing Rust, the majority (like 95%) of the code-completion suggestions are incorrect. When writing a Python website it gets better, but you still get bad suggestions that look good, several times a day.
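To make "plausible but incorrect" concrete, here is a hypothetical illustration (the function names and the pagination scenario are invented, not from any actual completion): a suggestion can read naturally and run without errors while silently mishandling an edge case.

```python
# Hypothetical example of a plausible-but-wrong completion for
# "split a list into pages of size n".

def paginate_wrong(items, page_size):
    # Looks reasonable, but integer division truncates, so any
    # trailing partial page is silently dropped.
    pages = []
    for i in range(len(items) // page_size):
        pages.append(items[i * page_size:(i + 1) * page_size])
    return pages

def paginate_right(items, page_size):
    # Correct version: step through the list and keep the remainder.
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

data = [1, 2, 3, 4, 5]
print(paginate_wrong(data, 2))  # [[1, 2], [3, 4]] -- the 5 is lost
print(paginate_right(data, 2))  # [[1, 2], [3, 4], [5]]
```

Both versions type-check, run, and look fine at a glance; only a test with an odd-length input exposes the bug, which is why these suggestions cost review time.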
The killer feature is generating code, not as completion but after an explicit prompt. Most models are okayish on that task. But still you have to pay attention.
That's all from my experience across about a year; your mileage may vary.
So yes, there's a healthy criticism of blindly allowing a few multi-billionaires to own a tech that can rip apart the fabric of our societies.
And the results from all that "touching" are mixed at best.
Example: IBM and McDonald's spent 3 years trying to get AI to take orders at drive-thru windows. As far as a "job" goes, this is pretty low-hanging fruit.
Here are the results:
https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...
That sounds like there's a flawed assumption buried in there. Hype has very little correlation with usefulness. Investment has perhaps slightly more, but only slightly.
Investment tells you that people invested. Hype tells you that people are trying to sell it. That's all. They tell you nothing about usefulness.
It’s a shame that this “thing” has now monopolized tech discussions.
1. Failed expectations - hackers tend to dream big, and they felt we were that close to AGI. Then they faced the reality of a "dumb" (yet very advanced) auto-complete. It's very good, but not as good as they wanted.
2. Too many posts all over the internet from people who have zero idea how LLMs work, or about their actual pros, cons, and limitations. Those posts create a natural compensating force.
I don't see fear of losing one's job as a serious tendency (only in junior developers and wannabes).
It's the opposite - senior devs secretly waited for something that would offload a big part of the stress and dumb work off their shoulders, but it happened only occasionally and in a limited form (see point 1 above).
The "AI" we have now isn't actually "I".
1. Have not kept up with and actively experimented with the tooling, and so don't know how good they are.
2. Have some unconscious concern about the commoditization of their skill sets
3. Are not actively working in AI and so want to just stick their head in the sand
My concerns are:
1) Regardless of whether AI can do this, corporate leaders are pushing to replace humans with AI. I don't care whether AI can do it or not, but multiple mega-corporations are talking about this openly. This does not bode well for us ordinary programmers;
2) Now, if AI actually could do that -- maybe not now, or in a couple of years, but 5-10 years from now -- and even if it could ONLY replace junior developers, it's going to be hell for everyone. Just think about the impact on the industry. 10 years is actually fine for me, as I'm 40+, but hey, you guys are probably younger than me.
--> Anyone who is pushing AI openly && (is not in the leadership || is not financially free || is an ordinary, non-John-Carmack level programmer), if I may say so, is not thinking straight. You SHOULD use it, but you should NOT advocate it, especially to replace your team.
How exactly would someone find hype useful?
Hell, even the investment part is questionable in an industry that's known for "fake it till you make it" and "thanks for the journey" messages when it's inevitably bought by someone else and changes dramatically or is shut down.
2) Energy/Environment. This stuff is nearly as bad as crypto in terms of Energy Input & Emissions per Generated Value.
3) A LOT of creatives are really angry at what they perceive as theft and 'screwing over the little guy'. Regardless of whether you agree with them, you can't just ignore them and expect their arguments to go away.
2. Energy is an important issue. We need a sane energy policy and worldwide cooperation. Corporations should pay the full cost of their energy use, including pollution mitigation and carbon offsets. Pragmatism suggests that this is not likely to happen any time soon. The US will be out of any discussion of sane energy policy for the foreseeable future.
3. The training of many (all?) major LLMs included a step that was criminal. That is, downloading Z-Library or Library Genesis. The issue of Fair Use for training models on copyrighted text is unsettled. The legality of downloading pirated ebooks is well-defined. These books were stripped of their DRM, which itself is illegal under DMCA. It's a crime and CEO's should be held accountable. Training an LLM on copyrighted works might be legal, but stealing those copyrighted works is not. At least buy a copy of the book.
I don't claim to be able to predict when such an AI that is much more capable than people will be created beyond saying that if the AI labs are not stopped (i.e., banned by the major governments) it will probably happen some time in the next 45 years.
Most AI today can create/simulate a "Moment" but not the whole "Process". For example, you can create a "short Hollywood movie clip" but not a whole "Hollywood movie". I'm pretty sure my reasoning is incorrect, so I'm commenting here to get valid feedback.
The reactions basically seem to range from "AI is useless because it's inaccurate/can't do this" to "AI is evil because of how it takes jobs from humans, and should never have been invented".
Still, the former is probably the bigger reason here in particular. LLMs can be useful if you're working within very, very general domains with a ton of source material (like say, React programming), but they're usually not as good as a standard solution to the issue would be, especially when said issue isn't as set in stone as programming might be. So most of these solutions just come across as a worse way to solve an already solved problem, except with AI added as a buzzword.
It's even worse when you had made that fun your livelihood. Now it's sucked the fun out of everything and put you out of a job.