When you see that, call them out on it. Not understanding copy+pasted code is one thing, but not testing it is a whole other level of garbage.
The same objections apply to the written word. A culture that succeeded in not succumbing to writing and reading may indeed have been better off in the short term, depending on the quality of the memes. But it would have been at a competitive disadvantage to cultures that were permissive with knowledge transfer.
The main advantage of AI so far has been as a distiller of the knowledge embedded in the written word. It's another leap in knowledge transfer. That's still a competitive advantage to any culture that doesn't abjure it. This particular consciousness intends to leverage the opportunity.
Though, when we try to use it as a synthesizer of new knowledge (software, article, review), that's when the OP's thinking about protection makes sense.
You can go back and read McLuhan, he's great, but a recent and more approachable book on this is _God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning_.
Way back in 1969 the utopian vision of technology put humans at the centre. The Whole Earth Catalog's slogan was “access to tools.” Just _tools_. That same year, technology put a man on the moon.
Unfortunately, if you realize the extent and the history of the problem, you see we're so far gone, miles away from getting a grip.
It’s embarrassing. Don’t rely on AI, guys. Have pride in yourselves.
I don't understand why many software engineers are so resistant to exploring AI. It's fascinating!
Relatedly, this blog quote[0] really resonated with me:
> reaching the end state of a task as fast as possible has never been my primary motivation. I code to explore ideas and problem spaces...
Using AI to code is like mountain biking but on a motor scooter, and you're riding in a sidecar looking at a map while a golem drives the bike. It's amazing that the golem can drive the bike at all, to be sure, and yes I'm wearing a helmet so the crashes aren't too bad, but...what are we doing here? I like riding my bike! It might be more physical work for me (but also with the golem I often have to get out and push the motorbike anyway, so it's not clear), but when I'm biking, I'm connected to the ground and I can get into a flow state with it. I'm also getting exercise and getting better at biking and learning the trails, and it's easy to hop off and explore some useless cave that's not accessible by bike, just because it looks interesting. I know the AI-proponent answer is "you still can!" but when I'm in the sidecar, my modality shifts. I'm no longer independent, I'm using a different kind of agency that's map- and destination-focused, and I'm not paying attention to the unfolding world around me.
So I understand why some people are excited about AI, and I don't think it's necessarily bad (though it does seem insidious in some pretty obvious ways, that even its proponents are aware/wary of). But why are many of those people, like yourself, seemingly unwilling/unable to understand why others of us are bouncing off it?
I feel like people keep explaining this, often in direct replies to your comments like this one, some of which you specifically even respond to. So if you still don't understand, maybe you're reading but not actually listening? Or maybe long-term memory deteriorates as one merges with the AI?
[0]https://handmadeoasis.com/ai-and-software-engineering-the-co...
What frustrates me is when people 1. claim it's entirely useless and that anyone who thinks it's useful is deceiving themselves (still very common, albeit maybe less so now than it was six months ago) or 2. claim that spending time writing about and understanding it "has turned a lot of them into drooling fanboys."
Hence my snappy response to the above comment. I took it a bit personally.
We're too lazy and too obsessed with getting ahead to use this technology responsibly in my opinion.
Particularly when they know that people like the commenter above are making sure it ultimately “works” by covering the incompetence of their colleagues.
The comment you are replying to is in my view a superb observation of the challenge of maintaining quality against systemic pressures to appear to be performing.
Most senior leaders in organisations cannot (or care not to) measure quality. Few (outside big tech, I assume, though I wouldn’t be surprised to see it overlooked there too) are even usefully measuring benefits realisation tied back to activity (such as software releases).
What they can measure and are systemically incentivised for is “what does it take to get the approval of the next leader above me”, and most of the time a plausible report that the software has been delivered to/ahead of schedule is the real objective to achieve this goal.
That doesn’t mean morally motivated managers aren’t out there driving quality. But doing so puts them at odds with these org systems, to the detriment (or risk of detriment) of their own careers compared to peers who optimise more for what the system rewards, and at the expense of greater energy, as they effectively have to hide their pursuit of better outcomes for the organisation under a veneer of performing as the organisation expects (that is, serving two goals simultaneously, one covert and one performative).
Something like this is a good exploration of the subject: https://spakhm.substack.com/p/how-to-get-promoted
or perhaps with others (potentially) getting ahead of us.
Why is that being allowed?
Either case is weird in absolute terms but in relative terms, it all goes to the same place. The human-like nature of AI seems to make people realize this more.
On the negative side: the LLM uses fancy language to present disinformation convincingly. The danger is that you don't notice the disinformation and it shapes your consciousness; that is the trap.
If you're lucky, though, you learn to distrust LLMs; they are not educated AIs.
On the positive side: you can still use it as a search engine or to get some ideas, but you should continue on your own to develop your creative skills.
Your consciousness and attention are stolen on a daily basis to keep you occupied. But this was already going on before LLMs, before the computer age.
I think at some point your consciousness will detect this brain rot and evolve beyond it.
Our bodies have evolved to copy traits from others from childhood on; copying is built into our very DNA. So the LLM is no different, but you should be aware of which traits you copy.
Don’t be lazy and stupid.
noir_lord•2h ago
I saw the danger of it as a form of learned helplessness down the line and swore off using LLMs for that reason. That, and I feel no need to delegate my thinking to a machine that can't think, and I like thinking.
Same reason snacks are upstairs in the kitchen and not in my office on the ground floor - I'm too lazy and if they are easily available I'll eat them.