With my flavour of autism, I have a weakness in communication, and AI produces better writing than I do. Personally, I love that it helps me. Autism is a disability, and AI helps me through it.
Imagine, however, that you're an expert in communication; then this is a new competitor that's unbeatable.
I don’t have much of a prediction about whether LLMs will conquer AGI or other hyped summits. But I’m nearly 100% certain development tooling will be mostly AI-driven in 5 years.
I'm being heavily downvoted, so it would seem people disagree with what I posted, even though I did preface it by saying that it's a disability for me.
From my pov, AI is amazingly helpful to me.
Also, HN loves to hate things. Remember the welcome Dropbox got in 2007?
Any result comes at very high relative cost in terms of computing time and energy consumed.
AI is the polar opposite of traditional logic based computing --- instead of highly accurate and reliable facts at low cost, you get unreliable opinions at high cost.
There are valid use cases for current AI, but it is not a universal replacement for the logic-based programming we all know and love --- not even close. Suggesting otherwise smacks of snake oil and hype.
Legal liability for AI pronouncements is another ongoing concern that remains to be fully addressed in the courts. One example: an AI chatbot accused a pro basketball player of vandalism because of references to him "throwing bricks" during play.
I haven't seen people negatively comment on simple AI tooling, or cases where AI creates real output.
I do see a lot of hate on hype-trains and, for what it's worth, I wouldn't say it's undeserved. LLMs are currently oversold as this be-all end-all AI, while there's still a lot of "all" to conquer.
1) A keyword to extract investment capital from investors
2) A crutch for developers who should probably be replaced by AI
I do believe there is some utility and value behind AI, but it's still so primitive that it's little more than a smarter auto-complete.
Is it 10x smarter than auto-complete on your iPhone or 10000x smarter?
It's a mixed bag, because it often provides plausible but incorrect completions.
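A hypothetical sketch of what "plausible but incorrect" looks like in practice (the suggested method name below is invented for illustration, not taken from any real completion):

```python
# An LLM-style completion might suggest something like:
#
#     next_week = datetime.datetime.now().add_days(7)
#
# It reads naturally, but datetime has no add_days() method.
# The correct idiom uses timedelta arithmetic:
import datetime

next_week = datetime.datetime.now() + datetime.timedelta(days=7)
print(next_week > datetime.datetime.now())  # True
```

The failure mode is exactly that the wrong version compiles in your head: it only breaks when you run it.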
So yes, there's a healthy criticism of blindly allowing a few multi-billionaires to own a technology that can rip apart the fabric of our societies.
And the results from all that "touching" are mixed at best.
Example: IBM and McDonald's spent 3 years trying to get AI to take orders at drive-thru windows. As far as a "job" goes, this is pretty low-hanging fruit.
Here are the results:
https://apnews.com/article/mcdonalds-ai-drive-thru-ibm-bebc8...
That sounds like there's a flawed assumption buried in there. Hype has very little correlation with usefulness. Investment has perhaps slightly more, but only slightly.
Investment tells you that people invested. Hype tells you that people are trying to sell it. That's all. They tell you nothing about usefulness.
it’s a shame that this “thing” has now monopolized tech discussions
1. Failed expectations - hackers tend to dream big, and they felt we were that close to AGI. Then they faced the reality of a "dumb" (yet very advanced) auto-complete. It's very good, but not as good as they wanted.
2. Too many posts all over the internet from people who have zero idea how LLMs work and what their actual pros, cons, and limitations are. Those posts create a natural compensating force.
I don't see fear of losing one's job as a serious tendency (only among junior developers and wannabes).
It's the opposite - senior devs secretly waited for something that would offload a big part of the stress and dumb work off their shoulders, but it has happened only occasionally and in a limited form (see point 1 above).
1. Have not kept up with and actively experimented with the tooling, and so don't know how good it is.
2. Have some unconscious concern about the commoditization of their skill sets
3. Are not actively working in AI and so want to just stick their head in the sand
My concerns are:
1) Regardless of whether AI can actually do this, corporate leaders are pushing to replace humans with AI. I don't care whether AI can do it or not; multiple mega-corporations are talking about this openly. This does not bode well for us ordinary programmers;
2) Now, if AI could actually do that -- maybe not now, or in a couple of years, but 5-10 years from now -- even if it could ONLY replace junior developers, it's going to be hell for everyone. Just think about the impact on the industry. 10 years is actually fine for me, as I'm 40+, but hey, you guys are probably younger than me.
--> Anyone who is pushing AI openly && (is not in the leadership || is not financially free || is an ordinary, non-John-Carmack level programmer), if I may say so, is not thinking straight. You SHOULD use it, but you should NOT advocate it, especially to replace your team.
How exactly would someone find hype useful?
Hell, even the investment part is questionable in an industry that's known for "fake it till you make it" and "thanks for the journey" messages when it's inevitably bought by someone else and changes dramatically or is shut down.
I'm not exactly impressed by the results when it comes to actual work around building software.
For example, instead of spending 15 minutes reading a Wikipedia entry about someone or something that has absolutely zero effect on my life besides some curiosity, I can ask ChatGPT and learn enough in 2-3 minutes.
However, it usually creates more work when it's about something that does have an effect on my life, like my work. After the second or third hallucination, it's like GTFO.
2) Energy/Environment. This stuff is nearly as bad as crypto in terms of energy input and emissions per unit of generated value.
3) A LOT of creatives are really angry at what they perceive as theft and 'screwing over the little guy'. Regardless of whether you agree with them, you can't just ignore them and expect their arguments to go away.