Really? That's your solution?
A better way to put it is: don't run anything in production that you don't have the knowledge to understand yourself. You should be able to code review anything written by an LLM, and if you don't have sufficient knowledge to do that, don't be tempted to run the code, because you won't be able to maintain it responsibly.
A "C" student.
An embedded engineer.
For example: Ed Nite says he doesn't want to be a programmer anymore. Who is Ed Nite? Is he even a programmer at all?
As far as I can tell, Ed Nite: Programmer doesn't really exist; it must be a pen name. As for his content, he mostly talks about being a writer and using AI. There's no real technical content to speak of. He doesn't link to a GitHub or any work record. I found a YouTube page of his with a single AI video on it from 6 months ago. As far as I can tell, Ed Nite was invented 6 months ago to start blogging about blogging, self-improvement, and AI at mindthenerd.com.
So do I trust him? No. Assume AI and move on.
I could see this kind of AI astroturfing becoming a real problem for communities in the future: scrape the top posts on a community, generate blog content related to those, then post that content back to the community.
Rinse and repeat and you don't have to be a programmer anymore.
https://docs.google.com/document/d/1MxGi273kK-8lKSIrgOQTPWYn...
Celes: "You’re not a ghost. You’re a problem."
Mara didn’t flinch. She kicked the dumpster lid off—crack—and it slammed against the alley wall. Thud.
Mara: "You made me. You wrote me. You’re the ghost."
Celes: "I’m not the ghost. You are. You wrote me. Now I’m fighting you."
Mara charged. Her legs ached from the dance studio, but she ran—fast. She grabbed Celes by the collar of his hoodie and shoved him against the wall. His eyes widened. His hands went still.
Mara: "You don’t get to fight me. I’m the one who wrote you."
Celes: "I don’t get to choose who I fight. I’m the ghost you wrote."
Mara’s hand tightened. Celes’s face was pale, his voice trembling. He didn’t try to run. He didn’t try to win. He just stood there—waiting.
Then, with a sound like a broken phone, Celes’s eyes went dark. His voice dropped to a whisper:
Celes: "You wrote me. Now I’m fighting you. But you’re the only one who can make me stop."
It's like the AI made the Spider-Man meme into a story.

The blog's direction is still unclear to me. For now, I just want to share experiences and ideas, and if even one person finds them useful, that’s enough.
I’m not looking to profit from it. In fact, I turned AdSense off almost as quickly as it got approved. (Case in point: when I got started in April, ChatGPT suggested I apply, and I foolishly did.) One morning I woke up to see my blog plastered with ads, having forgotten I had applied. I nearly fell out of bed in horror and shame. I turned them off.
At first I leaned on AI pretty heavily as my “editor-in-chief” to save time. Since then, it’s become more of an opinionated buddy I bounce ideas off. The narrative has always been mine, though; I’ve always understood what I (or it) was writing. I still use AI when it saves me time, but what matters most is the story and the message.
I’m learning as I go, and my newer posts are less AI-shaped and more in my own voice. It’s a process I don’t regret. Thanks for your comment.
Just write your thoughts. I don’t care if it has mistakes or bad grammar. We only have so much time on this planet for each other.
That's exactly what I think. If I wished to read AI, I would ask AI itself to give me something to read.
Read a book about writing, think about writers whose writing touched you, discover the voice you want to have, the people you want to reach. Human connection is the point.
Hand-edit a piece until you are satisfied, then run your default AI loop on the original. Observe with clear eyes what was lost in the process, and what it missed that you discovered by thinking deeply about your own thoughts.
Don't explain. Don't argue. Simply confirm that the person fully understands what they're asking for despite using AI to generate it. 99% of the time the person doesn't. 50% of the time the person leaves the conversation better off. The other 50% of the time the lazy bastards get upset, and they can totally fuck off anyway; you've dodged a bullet.
In the case described in the article, the author believed they were the expert and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong, so either they weren't really drawing on their expertise or they weren't as much of an expert as they thought, which often happens when the topic is only adjacent to what you're actually an expert in. This is the "appeal to authority" logical fallacy. It's easy to believe you're the authority in question.
...we’ve allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.
The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that the argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying, though. The point is that sometimes it'll be right and sometimes it won't, and deciding which it is lies entirely with us humans.
In my experience, motivated reasoning rules the day. People have an agenda beyond their reasoning, and if your proposal goes against that agenda, you'll never convince them with logic and evidence. At the end of the day, it's not a marketplace of ideas, but a war of conflicting interests. To convince someone requires not the better argument, but the better politics to make their interests align with yours. And in AI there are a lot of adverse interests you're going to be hard pressed to overcome.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
I question that assertion. The other party has to be willing to engage too.
If my wife had made the same arguments in the same polished way, I probably would’ve caved just as fast. But she didn’t; AI did... and what struck me wasn’t the answer, it was how fast my own logic switched off, as if I’d been wrong all along.
That’s what feels new to me: sitting in a meeting for hours while a non-tech person confidently tells execs how “AI will solve everything”, and everyone nods along. The risk isn’t just being wrong; it’s expertise getting silenced by convincing answers, until no one stops to ask the right questions.
Again, this is my own reflection and experience, others may not feel this way. Thanks for your comment.
Don’t spend your time analyzing or justifying your position on an AI-written proposal (which by definition someone else did not spend time creating in the first place). Take the proposal, give it to YOUR AI, and ask it to refute it. Maybe nudge it in your desired direction based on a quick skim of the original proposal. I guarantee you the original submitter probably did something similar in the first place.
When people do this in their relationships, marriages fail, friendships are lost, children forget who you were without the veil.
There are already stories like this cropping up every day. Do you really not understand that connecting with other flawed, unpolished people is its own reward? There is beauty and value in those imperfections.
Everyone wants to be a “programmer”, but in reality no one wants to maintain the software; they assume an “AI” can do all of it, i.e., vibe coding.
What they’re really signing up for is the increased risk that someone more experienced will break their software left and right, costing them $$$ until they end up paying that person to fix it.
A great time to break vibe-coded apps for a bounty.
The domain name incident absolutely isn’t a strong enough case to justify pivoting a career.
The clients suggesting features and changes might be a reason to pivot a career, but towards programming and away from product/system development. I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it. AI ought to help get through these things faster, anyway, and you’ve saved time on design by outsourcing to the client. In theory, you should have spare time for a break, a hobby, or to repeat this process with the next client that’s done the design work for you.
I agree with all the points about agency, confidence, experience (the author used “authority”). We must not let LLMs rob us of our agency and critical thinking.
The client will still blame you when it doesn’t meet their real needs. And rightfully so, just as a doctor would still be blamed for following a cancer treatment plan the patient brought in from ChatGPT.
Shit, if LLMs have solved that unsolved problem in computer science, naming things, our profession really is over.
Every article on this website looks to be almost wholly AI generated. Pure slop.
Trust me, put in the work and you'll thank yourself for it; you'll learn to enjoy the process, and your content will be more interesting.
I think a lot of people are not in the habit of doing this (just look at politics) so they get easily fooled by a firm handshake and a bit of glazing.
I loved this article. It put in words a subliminal brewing angst I’ve been feeling as I happily use LLMs in multiple parts of my life.
Just yesterday, a debate between a colleague and me devolved into both of us quipping “well, the tool is saying…” as we tried to out-authoritate each other.
I stopped reading after the first paragraph or two.
ednite•2h ago
How are you handling this shift? Do you find yourself spending more time explaining “why not” than actually building?
rufius•1h ago
You take the input, mostly ignore it, and move on. YMMV on that strategy, but if you are deft with it then you can dodge a lot of bullshit.
It does require that the things you do decide to do pan out though. You’ll need results to back it up.