frontpage.

The Swift SDK for Android

https://www.swift.org/blog/nightly-swift-sdk-for-android/
528•gok•14h ago•200 comments

Unlocking Free WiFi on British Airways

https://www.saxrag.com/tech/reversing/2025/06/01/BAWiFi.html
288•vinhnx•19h ago•78 comments

People with blindness can read again after retinal implant and special glasses

https://www.nbcnews.com/health/health-news/tiny-eye-implant-special-glasses-legally-blind-patient...
110•8bitsrule•4d ago•24 comments

Valetudo: Cloud replacement for vacuum robots enabling local-only operation

https://valetudo.cloud/
273•freetonik•4d ago•79 comments

First shape found that can't pass through itself

https://www.quantamagazine.org/first-shape-found-that-cant-pass-through-itself-20251024/
384•fleahunter•20h ago•104 comments

Context engineering is sleeping on the humble hyperlink

https://mbleigh.dev/posts/context-engineering-with-links/
88•mbleigh•1d ago•39 comments

Meet the real screen addicts: the elderly

https://www.economist.com/international/2025/10/23/meet-the-real-screen-addicts-the-elderly
109•johntfella•6h ago•72 comments

Key IOCs for Pegasus and Predator Spyware Removed with iOS 26 Update

https://iverify.io/blog/key-iocs-for-pegasus-and-predator-spyware-cleaned-with-ios-26-update
75•transpute•7h ago•30 comments

Harnessing America's Heat Pump Moment

https://www.heatpumped.org/p/harnessing-america-s-heat-pump-moment
146•ssuds•14h ago•302 comments

Normalize.css

https://csstools.github.io/normalize.css/
36•Leftium•4d ago•21 comments

Luau's Performance

https://luau.org/performance
18•todsacerdoti•1d ago•3 comments

I invited strangers to message me through a receipt printer

https://aschmelyun.com/blog/i-invited-strangers-to-message-me-through-a-receipt-printer/
219•chrisdemarco•5d ago•83 comments

Euro cops take down cybercrime network with 49M fake accounts

https://www.itnews.com.au/news/euro-cops-take-down-cybercrime-network-with-49-million-fake-accoun...
56•ubutler•3h ago•14 comments

Public Montessori programs strengthen learning outcomes at lower costs: study

https://phys.org/news/2025-10-national-montessori-early-outcomes-sharply.html
293•strict9•2d ago•162 comments

Study: MRI contrast agent causes harmful metal buildup in some patients

https://www.ormanager.com/briefs/study-mri-contrast-agent-causes-harmful-metal-buildup-in-some-pa...
154•nikolay•13h ago•112 comments

What Is Intelligence? (2024)

https://whatisintelligence.antikythera.org/
79•sva_•9h ago•46 comments

Code Like a Surgeon

https://www.geoffreylitt.com/2025/10/24/code-like-a-surgeon
156•simonw•18h ago•97 comments

The persistence of tradition: the curious case of Henry Symeonis

https://blogs.bodleian.ox.ac.uk/archivesandmanuscripts/2023/12/13/the-persistence-of-tradition-th...
3•georgecmu•2d ago•0 comments

Twake Drive – An open-source alternative to Google Drive

https://github.com/linagora/twake-drive
328•javatuts•1d ago•188 comments

How to make a Smith chart

https://www.johndcook.com/blog/2025/10/23/smith-chart/
128•tzury•17h ago•22 comments

The Geometry of Mathematical Methods

https://books.physics.oregonstate.edu/GMM/book.html
22•kalind•5d ago•3 comments

Advice for New Principal Tech ICs (I.e., Notes to Myself)

https://eugeneyan.com/writing/principal/
74•7d7n•7h ago•49 comments

Why formalize mathematics – more than catching errors

https://rkirov.github.io/posts/why_lean/
189•birdculture•6d ago•66 comments

Fast TypeScript (Code Complexity) Analyzer

https://ftaproject.dev/
10•hannofcart•4h ago•1 comment

Deepagent: A powerful desktop AI assistant

https://deepagent.abacus.ai
31•o999•8h ago•1 comment

The fix wasn't easy, or C precedence bites

https://boston.conman.org/2025/10/20.1
19•ingve•3d ago•15 comments

Modern Perfect Hashing

https://blog.sesse.net/blog/tech/2025-10-23-21-23_modern_perfect_hashing.html
97•bariumbitmap•1d ago•16 comments

Mesh2Motion – Open-source web application to animate 3D models

https://mesh2motion.org/
206•Splizard•23h ago•34 comments

Diamond Thermal Conductivity: A New Era in Chip Cooling

https://spectrum.ieee.org/diamond-thermal-conductivity
3•rbanffy•4d ago•2 comments

Conductor (YC S24) Is Hiring a Founding Engineer in San Francisco

https://www.ycombinator.com/companies/conductor/jobs/MYjJzBV-founding-engineer
1•Charlieholtz•13h ago

"ChatGPT said this" Is Lazy

https://terriblesoftware.org/2025/10/24/chatgpt-said-this-is-lazy/
65•ragswag•18h ago

Comments

pavel_lishin•18h ago
I've come down pretty hard on friends who, when I ask for advice about something, come back with a ChatGPT snippet (mostly D&D-related, not work-related).

I know ChatGPT exists. I could have fucking copied and pasted my question myself. I'm not asking you to be the interface between me and it. I'm asking you what you think, what your thoughts and opinions are.

einsteinx2•18h ago
I’ve noticed this trend in comments across the internet. Someone will ask or say something, then someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”

ChatGPT is free and available to everyone, and so are a dozen other LLMs. If the person making the comment wanted to know what ChatGPT had to say, they could just ask it themselves. I guess people feel like they’re being helpful, but I just don’t get it.

Though with that said, I’m happy when they at least say it’s from an LLM. At least then I know I can ignore it. Worse is replying as if it’s their own answer when really it’s just copy-pasted from an LLM. Those are more insidious.

minimaxir•17h ago
The irony is that the disclosure of “I asked ChatGPT and it says…” is done as a courtesy to let the reader be informed. Given the increasing backlash against that disclosure, people will just stop disclosing, which is worse for everyone.

The only workaround is to just take text as-is and call it out when it's wrong/bad, AI-generated or otherwise, as we did before 2023.

einsteinx2•17h ago
That’s true. Unfortunately the ideal takeaway from that sentiment should be “don’t reply with copy-pasted LLM answers”, but I know that what you’re saying will happen instead.
StrandedKitty•12h ago
I think it's fine to not disclose it. Like, don't you find the "Sent from my iPhone" that iPhones automatically add to emails annoying? Technicalities like that don't bring anything to the conversation.

I think typically, the reason people are disclosing their usage of LLMs is that they want to offload responsibility. To me it's important to see them taking responsibility for their words. You wouldn't blame Google for bad search results, would you? You can only blame the entity that you can actually influence.

XorNot•10h ago
Except it isn't. It's a disclosure to say "If I'm wrong, it's not my fault".

Because if they'd actually read the output, then cross-checked it and developed some confidence in the opinion, they wouldn't put what they perceive as the most important part up front ("I used ChatGPT") - they'd put the conclusion.

Leherenn•15h ago
Isn't it the modern equivalent of "let me Google that for you"?

My experience is that the vast majority of people do 0 research (AI assisted or not) before asking questions online. Questions that could have usually been answered in a few seconds if they had tried.

If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.

einsteinx2•15h ago
> Isn't it the modern equivalent of "let me Google that for you"?

When you put it that way I guess it kind of is.

> If someone prefaces a question by saying they've done their research but would like validation, then yes, it's in incredibly poor taste.

100% agree with you there

nitwit005•12h ago
There's seemingly a difference in motive. The people sharing AI responses seem to be people fascinated by AI generally who want to share the response.

"Let me Google that for you" was more about trying to get people to look up trivial things on their own rather than query some forum repeatedly.

thousand_nights•11h ago
exactly, the "i asked chatgpt" people give off 'im helping' vibes but in reality they are just annoying and clogging up the internet with spam that nobody asked for

they're more clueless than condescending

pessimizer•11h ago
"Let me google that for you" was when a person asked e.g. "what's a tomato?", and you'd paste in the link http://www.google.com/search?q=what's+a+tomato

That's not like pasting in a screenshot or a copy/paste of an AI answer; it's being intentionally dismissive. You weren't actually doing the "work" for them, you were calling them lazy.

The way I usually see the AI paste being used is from people trying to refute something somebody said, but about a subject that they don't know anything about.

kbelder•11h ago
>Isn't it the modern equivalent of "let me Google that for you"?

Which was just as irritating.

plorkyeran•11h ago
It is the modern equivalent of "let me Google that for you", except that most of the people doing it don't seem to realize they're telling the person to fuck off, while that absolutely was the intent with lmgtfy.
noir_lord•13h ago
To modify a Hitchism:

> What can be asserted without evidence can also be dismissed without evidence.

Becomes

> That which can be asserted without thought can be dismissed without thought.

Since no current AI thinks but humans do, I’m just going to dismiss anything an AI says out of hand: you are pushing the cost of parsing what it said onto me and off of you, and nah, I ain’t accepting that.

greazy•9h ago
That's a wonderfully succinct argument.
lostmsu•8h ago
It hinges on assuming that ChatGPT does not think, which is clearly false.

Hell, Feynman said as much in 1985. https://www.youtube.com/watch?v=ipRvjS7q1DI

zenoprax•6h ago
Elegant and correct. It seems so obvious to me that if someone wanted a ChatGPT answer they would have sought it out for themselves, and yet... it's happened to me more than a few times. I think some people think they are being clever and resourceful (or 'efficient'), but it just dilutes their own authority on the matter they were asked to opine on.
globular-toast•11h ago
It must be the randomness built into LLMs that makes people think it's something worth sharing. I guess it's no different from sharing a cool Minecraft map with your friends or something. The difference is Minecraft is fun, reading LLM content is not.
tonyspiff•10h ago
Indeed. On the other hand, there's a difference between "I one-prompted some mini LLM" and "A deep-thinking LLM aided me through research with fact-checking, agents, tools and lots of input from me." While both can be phrased with “I asked ChatGPT and it says…” or “According to AI…”, the latter would not annoy me.
JumpCrisscross•10h ago
> someone else will reply with “I asked ChatGPT and it says…” or “According to AI…”

I had a consultant I’m working with have an employee do that to me. I immediately insisted that every hour they’d billed under that person’s name be refunded.

uberman•18h ago
This is an honest question. Did you try pasting your PR and the ChatGPT feedback into Claude and asking it for an analysis of the code and feedback?
verdverm•18h ago
Careful with this idea, I had someone take a thread we were engaged in and feed it to an LLM, asking it to confirm his feelings about the conversation, only to post it back to the group thread. It was used to attack me personally in a public space.

Fortunately

1. The person was transparent about it, even posting a link to the chat session

2. They had to use a follow-on prompt to really engage the sycophancy

3. The forum admins stepped in to speak to this individual even before I was aware of it

I actually did what you suggested, fed everything back into another LLM, but did so with various prompts to test things out. The responses were... interesting; the positive prompt did return something quite good. A (paraphrased) quote from it:

"LLMs are a powerful rhetorical tool. Bringing one to an online discussion is like bringing a gun to a knife fight."

That being said, how you prompt will get you wildly different responses from the same (other) inputs. I was able to get it to play sycophant to my (not actually) hurt feelings.

pavel_lishin•17h ago
Does that particularly matter in the context of this post? Either way, it sounds like OP was handed homework by the responder, and farming that out to yet another LLM seems kind of pointless, when OP could just ask the LLM for its opinion directly.
uberman•16h ago
While LLM code feedback might be wordy and dubious, I have personally found that asking Claude to review a PR and the related feedback provides some value. From my perspective anyway, Claude seems able to cut through the BS and say whether a recommendation is worth the squeeze, or in what contexts the feedback has merit or is just pedantic. Of course, your mileage may vary, as they say.
pavel_lishin•15h ago
Sure. But again, that's not what OP's post is about.
blitzar•12h ago
"Google said this" ... "Wikipedia said this" ... "Encyclopedia Britannica said this"
ahofmann•11h ago
It is not the same. It takes some searching, reading and comprehension to cite Google etc. Copying an LLM output "costs" almost no energy.
FlameRobot•10h ago
It is similar enough. In a disagreement, people would just find the first thing with a headline that corroborated their opinion; this was often Wikipedia or the summary on Google.

People did this with code as well. DDG used to show you the first Stack Overflow post that was close to what you searched. However, sometimes this was obviously wrong, and people would just copy and paste it wholesale.

Groxx•10h ago
well. "Google said this" is pretty close nowadays.

the other two are still incomparably better in practice though.

KalMann•11h ago
I think the difference is people use those as citations for specific facts, not to logically analyze your code. If you're asked how a technical detail of C++ works, then simply citing Google is acceptable. If you're asked about broader details that depend on certain technicalities specific to your codebase, Googling would be silly.
spot5010•11h ago
The scenario the author describes is bound to happen more and more frequently, and IMO the way to address it is by evolving the culture and best practices for code reviews.

A simple solution would be to mandate that while posting conversations with AI in PR comments is fine, all actions and suggested changes should be human-generated.

The human-generated actions can't be a lazy “Please look at the AI suggestion and incorporate as appropriate” or “What do you think about this AI suggestion?”

Acceptable comments could be:

- I agree with the AI for xyz reasons, please fix.

- I thought about the AI's suggestions, and here are the pros and cons. Based on that, I feel we should make xyz changes for abc reasons.

If these best practices are documented, and the reviewer does not follow them, the PR author can simply link to the best practices and kindly ask the reviewer to re-review.

globular-toast•11h ago
It's kinda hilarious to watch people make themselves redundant. Like you're essentially saying "you don't need me, you could have just asked ChatGPT for a review".

I wrote before about just sending me the prompt[0], but if your prompt is literally my code then I don't need you at all.

[0] https://blog.gpkb.org/posts/just-send-me-the-prompt/

lrvick•11h ago
I do not use AI for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no AI has seen before.

If anyone gives me an opinion from an AI, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.

If this pisses you off, ask yourself why.

GaryBluto•11h ago
Is copy-pasting from Wikipedia an "opinion" from Wikipedia?
lrvick•11h ago
Even that annoys me because who knows how accurate that is at any moment. Wikipedia is great for getting a general intro to a thing, but it is not a source.

I would rather people go find the actual whitepaper or source in the footnotes and give me that, and/or give me their own opinion on it.

XorNot•11h ago
No, but it's also equally not a useful contribution. If wikipedia says something then I'm going to link the article, then give a quick summary of what in the article relates to whatever my point is.

Not write "Wikipedia says..." and paste the entire article verbatim.

paulcole•11h ago
> If this pisses you off, ask yourself why.

Why would it piss me off that you’re so closed minded about an incredible technology?

lrvick•10h ago
Using an AI to think for me would be like going to a gym and paying a robot to lift weights for me.

Like, sure, it is cool that that is possible, but if I do not do the work myself I will not get stronger.

Our brains are the same way.

I also do not use GPS, because there are literally studies with MRI scans showing that an entire section of the brain goes dark in GPS users compared to London taxi drivers, who are required by law to navigate with their brains.

I also navigate life without a smartphone at all, and it has given me what feels like focus super powers compared to those around me, when in reality probably most people had that level of focus before smartphones were a thing.

All said AI is super interesting when doing specialized work at scale no human has time for, like identifying cancer by training on massive datasets.

All tools have uses and abuses.

paulcole•10h ago
Sounds fun!
lostmsu•8h ago
> Our brains are the same way.

How many IQ points do you gain per year of subjecting yourself to this?

lrvick•4h ago
No idea if it actually makes me smarter, but I have noticed I have an atypically high level of mental pain tolerance for pursuing things many told me were impossible and quickly gave up on.
lcnPylGDnU4H9OF•7h ago
> Using an AI to think for me

People are using LLMs to generate code without doing this.

markfeathers•11h ago
I do not use books for engineering work and never will, because doing the work of thinking for myself is how I maintain the neural capacity for forming my own original thoughts and ideas no writer has seen before.

If anyone gives me an opinion from a book, they disrespect me and themselves to a point they are dead to me in an engineering capacity. Once someone outsources their brain they are unlikely to keep learning or evolving from that point, and are unlikely to have a future in this industry as they are so easily replaceable.

If this pisses you off, ask yourself why.

(You can replace AI with any resource and it sounds just as silly :P)

bgwalter•10h ago
What is this new breed of interactive books that give you half-baked opinions and incorrect facts in response to a prompt?
theamk•10h ago
Yes, if you find a book that is as bad as AI advice, you should definitely throw it away and never read it. If someone is quoting a known-bad book, you should ignore their advice (and as a courtesy, tell them their book is bad).

It's so strange that pro-AI people don't see this obvious fact and keep trying to compare AI with things that are actually correct.

simonw•10h ago
It's so strange that anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book.
saulpw•10h ago
That "a good model (if you know how to operate it well)" is doing a lot of lifting. To be sure, there are a lot of bad books, and you can get negative advice from them, but a book has fixed content that can gain and lose a reputation, whereas a model (even a good one!) has highly variable content dependent on "if you know how to operate it well". So when someone or some group that I respect recommends a book, I can read the words with some amount of trust that the content is valuable. When someone quotes a model's response without any commentary or affirmation, it does not inspire any trust that the content is valuable. It just indicates that the person has abdicated their thought process.
simonw•9h ago
I agree that quoting a model's answer to someone else is bad form - you can get a model to say ANYTHING if you prompt it to, so a screenshot of a ChatGPT conversation to try and prove a point is meaningless slop.

I find models vastly more useful than most technical books in my own work because I know how to feed in the right context and then ask them the right questions about it.

There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method, and what edge cases do they have that would prevent them from being upgraded to the new .allowed() mechanism?"

theamk•9h ago
And as long as you don't copy-paste its advice into comments, that's fine.

No one really cares how you found all those .permission_allowed() calls to replace - was it grep, intense staring, or an AI model. All that matters is that you stand behind it and act as the author. The original post said it very well:

> ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.
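
For contrast, a minimal sketch in Python of the grep-style pass theamk mentions. The .permission_allowed() name is simonw's hypothetical from above, and the script only finds the call sites; judging each edge case still falls to the author.

    # Walk a codebase and list call sites of the (hypothetical) legacy method.
    import re
    from pathlib import Path

    OLD_METHOD = re.compile(r"\.permission_allowed\(")

    def find_call_sites(root="."):
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                if OLD_METHOD.search(line):
                    yield path, lineno, line.strip()

    for path, lineno, line in find_call_sites():
        print(f"{path}:{lineno}: {line}")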

000ooo000•7h ago
>There isn't a book on earth that could answer the question "which remaining parts of my codebase still use the .permission_allowed() method and what edge-cases do they have that would prevent them from being upgraded to the new .allowed() mechanism"?

You're so close to realising why the book counter argument doesn't make any sense!

JumpCrisscross•10h ago
> anti-AI people think the advice and information you can get from a good model (if you know how to operate it well) is less valuable than the advice and information you can get from a book

Those people exist and they’re wrong.

More frequently, however, I find I’m judging the model less than its user. If I get an email that smells of AI, I ignore it. That’s partly because I have the luxury to do so. It’s largely because engaging has commonly proven fruitless.

You see a similar effect on HN. Plenty of people use AI to think through problems. But the comments that quote it directly are almost always trash.

throwaway-0001•10h ago
So if anyone with a below-120 IQ gives you their opinion, is that disrespectful because they are stupid?

---

It’s interesting that we have to respect human “stupid” opinions but anything from AI is discarded immediately.

I’d advocate for respecting any opinion, and considering it good, or at least good-willed.

theamk•9h ago
Of course I respect humans, I am a human myself! And I learned a lot from others, asking them (occasionally stupid) questions and listening to their explanations. Doing the same for others is just being fair. Explain a thing and make someone more knowledgeable! Maybe next time _they_ will help you!

This does not apply to AI of course. In most cases, if a person did an AI PR/comment once, they will keep doing AI PRs/comments, so your explanation will be forgotten next time they clear context. Might as well not waste your time and dismiss it right away.

throwaway-0001•9h ago
You seem to be subjectively mad about AI.

Same as when white people thought Black people were not worth listening to, a couple of hundred years ago.

000ooo000•7h ago
The same, you say?
conartist6•10h ago
Yeah except it's not quite the same thing, is it?

The fact that you're presenting this as a comically absurd comparison tells me that you know well that it's an absurd comparison.

throwaway-0001•10h ago
At least you can counter with an argument. You just seem to agree both are absurd.
conartist6•7h ago
Nah, I thought OP was spot on. A book isn't in the same class of things as an automated bullshit generator.
satisfice•8h ago
Congratulations on misunderstanding and misrepresenting the point. (This is sarcasm, btw.)

It’s not the source that matters. It’s not the source that he’s complaining about. It’s the nature of the interaction with the source.

I’m not against watching video, but I won’t watch TikTok videos, because they are done in a way that is dangerously addictive. The nature of engagement with TikTok is the issue, not “I can’t learn from electrical devices.”

Each of us must beware of the side effects of using tools. Each kind of tool has its hazards.

ninkendo•10h ago
Eh… your complaint describes every single piece of information available on the internet.

Let’s try it with other stuff:

“Looking at solutions on stack overflow outsources your brain”

“Searching arxiv for literature on a subject outsources your brain”

“Reading a tutorial on something outsources your brain”

There’s nothing that makes ChatGPT et al appreciably different from the above, other than the tendency to hallucinate.

ChatGPT is a better search engine than search engines for me: it gives links to cite what it's talking about so I can check those, it pays attention to precisely what I asked about, and it generally doesn't include unrelated crap.

The only complaint I have is the hallucinations, but it just means I have to check its sources, which is exactly the case already for something as mundane as Wikipedia.

Ho hum. Maybe take some time to reevaluate your conclusions here.

rileymat2•10h ago
For me, I am not sure it has eliminated thinking.

I have recently started to use codex on the command line. Before I put the prompt in, I get an idea in my head of what should happen.

Then I give it the instructions, sometimes clarifying my own thoughts while doing it. These are high-level instructions, not "change this file". Then it bumps away for minutes at a time, after which I diff the results and consider if it matches up to what I would expect. At that point I give lower-level instructions if appropriate.

I consider whether its solution was better or not, then ask questions around the edges that I thought were wrong.

It turns my work from typing code in to pretty much code design and review. These are the hard tasks.

brookst•10h ago
Is it OK to outsource to engineers who are either more senior or more junior, or must one do every aspect of every project entirely oneself?
theamk•9h ago
Sure it is, as long as those engineers apply an honest effort and learn from their mistakes. Even if they don't do things faster than you initially, at least they learned something.

Unfortunately that logic does not apply to models.

brookst•9h ago
Then I’m lost. I thought this was about the laziness of outsourcing thinking. Why would the outsourcee’s ability to learn impact whether it’s lazy or not?
skylurk•1h ago
LLMs are plenty useful, don't get me wrong, but:

If your interaction with the junior dev is not much different than interacting with an LLM, something is off.

Training a junior dev will make you a better dev. Teaching is learning. And a junior dev will ask questions that challenge your assumptions.

It's the opposite of "outsourcing."

jasonlotito•10h ago
> I do not use AI for engineering work and never will

So, working with CLAUDE doesn't count. Gotcha.

> If this pisses you off, ask yourself why.

It doesn't piss me off, but your comment is disingenuous at best.

OptionOfT•5h ago
I really dislike how the companies try to anthropomorphize their software offerings.

At my previous company they called it 'sparring with <name of the software>'. You don't 'work' with Claude.

You use the software, you instruct it what to do. And it gives you an output that you can then (hopefully) utilize. It's not human.

lrvick•4h ago
I actually do not use any proprietary software of any kind in my work. Any tools I can not alter to my liking are not -my- tools and could be taken away from me or changed at any time.
conartist6•10h ago
<3
sanswork•10h ago
I do not use Jr developers for engineering work and never will, because doing the work of a Jr.....

You don't have to outsource your thinking to find value in AI tools; you just have to find the right tasks for them. The same as you would with any developer jr to you.

I'm not going to use AI to engineer some new complex feature of my system but you can bet I'm going to use it to help with refactoring or test writing or a second opinion on possible problems with a module.

> unlikely to have a future in this industry as they are so easily replaceable.

The reality is that you will be unlikely to compete with people who use these tools effectively. Same as the productivity difference between a developer with a good LSP and one without, or a good IDE, or a good search engine.

When I was a kid I had a text editor and a book and it worked. But now that better tools are around I'm certainly going to make use of them.

satisfice•8h ago
FFS stop it with the “it’s just the same as a human” BS. It’s not just like working with a junior engineer! Please spend 60 seconds genuinely reflecting on that argument before letting it escape like drool from the lips of your writing fingers.

We work with junior engineers because we are investing in them. We will get a return on that investment. We also work with other humans because they are accountable for their actions. AI does not learn and grow anything like the satisfying way that our fellow humans do, and it cannot be held responsible for its actions.

As the OP said, AI is not on the team.

You have ignored the OP’s point, which is not that AI is a useless tool, but that merely being an AI jockey has no future. Of course we must learn to use tools effectively. No one is arguing with that.

You fanboys drive me nuts.

sanswork•6h ago
I'm not saying it's the same as working with a jr developer. I'm saying that not using something less skilled than yourself for less-skilled tasks is stupid and self-defeating.

Yes, when someone builds a straw man you ignore it. There is a huge canyon between never using AI in engineering (OP's proposal) and only using AI for all your engineering (OP's complaint).

lrvick•4h ago
> The reality is that you will be unlikely to compete with people who use these tools effectively.

If you looked me or my work up, I think you would likely feel embarrassed by this statement. I have a number of world firsts under my belt that AI would have been unable to meaningfully help with.

It is also unlikely I would have ever developed the skill to do any of that other than by doing everything the hard way.

sanswork•4h ago
I just looked, and I'm not sure what I'm meant to be seeing that would cause me to feel embarrassed, but congrats on whatever it is. How much more could you have developed or achieved if you didn't limit yourself?

Do you do all your coding in ed or are you already using technology to offload brain power and memory requirements in your coding?

lrvick•3h ago
AI would have been near useless when I was creating https://stagex.tools (https://codeberg.org/stagex/stagex), for instance.

Also, I use Vim. Any FOSS tools with predictable, deterministic behavior I can fully control are fine.

sanswork•3h ago
I don't know, just a quick glance at that repo and I feel like AI could have written your shell scripts, which took several tries from multiple people to get right, about as well as the humans did.

So you're ok with using tools to offload thinking and memory as long as they are FOSS?

lrvick•1h ago
Take this one, for example: https://codeberg.org/stagex/stagex/src/branch/main/src/compa...

It took some iteration and hands-on testing to get that right across multiple operating systems. Also to pass shellcheck, etc.

Even if an LLM -could- do that sort of thing as well as my team and I can, we would lose a lot of the arcane knowledge required to debug things, and spot sneaky bugs, and do code review, if we did not always do this stuff by hand.

It is kind of like how writing things down helps commit them to memory. Typing to a lesser extent does the same.

Regardless those scripts are like <1% of the repo and took a few hours to write by hand. The rest of the repo requires extensive knowledge of linux internals, compiler internals, full source bootstrapping, brand new features in Docker and the OCI specs, etc.

Absolutely 0 chance an LLM could have helped with bootstrapping a primitive C toolchain from 180 bytes of x86 machine code like this: https://codeberg.org/stagex/stagex/src/branch/main/packages/...

That took a lot of reasoning from humans to get right, in spite of the actual code being just a bunch of shell commands.

There are just no significant shortcuts for that stuff, and again if there were, taking them is likely to rob me of building enough cache in my brain to solve the edge cases.

Also yes, I only use FOSS tools with deterministic behavior I can modify, improve, and rely on to be there year after year, and thus any time spent mastering them is never wasted.

ebb_earl_co•10h ago
I’m in the same boat, and what tipped me there is the ethical non-starter that OpenAI and Anthropic represent. They strip-mined the Web and ripped off copyrighted works in meatspace, admitting that going through the proper channels was a waste of business resources.

They believe that the entirety of human ingenuity should be theirs at no cost, and then they have the audacity to SELL their ill-gotten collation of that knowledge back to you? All the while persuading world governments that their technology is the new operating system of the 21st century.

Give me a dystopian break, honestly.

beached_whale•10h ago
As with any reference or other person, one needs to question whether those ideas fit into one's mental models and verify things anyhow. One never could just trust that something is true without at least quick mental tests. AI is no different from other sources here. As was drilled into us in high school: use multiple sources and verify them.
OptionOfT•6h ago
This is absolutely spot-on with how I feel about all of it.
insin•11h ago
I'm starting to run into the other end of this as a reviewer, and I hate it.

Stories full of nonsensical, clearly LLM-generated acceptance requirements containing implementation details which are completely unrelated to how the feature actually needs to work in our product. Fine, I didn't need them anyway.

PRs with those useless, uniformly formatted LLM-generated descriptions which don't do what a PR description should do, with a half-arsed LLM attempt at a summary of the code changes and links to the files in the PR description. It would have been nice if you had told me what your PR is for and what your intent as the author is, and maybe called out implementation details I might have "why?" questions about. But fine, I guess; being able to read, understand and evaluate the code is part of my job as a reviewer.

---- < the line

PRs littered with obvious LLM comments you didn't care enough to take out, where something minor and harmless, but _completely pointless_ has been added (as in if you'd read and understood what this code does, you'd have removed it), with an LLM comment left in above it AND at the end of the line, where it feels like I'm the first person to have tried to read and understand the code, and I feel like asking open-ended questions like "Why was this line added?" to get you to actually read and think about what's supposed to be your code, rather than a review comment explaining why it's not needed acting as a direct conduit from me to your LLM's "You're absolutely right!" response.

stickfigure•10h ago
Counterpoint: "Chatgpt said this" is an entirely legitimate approach in many contexts and this attitude is toxic.

One example: Code reviews are inherently asymmetrical. You may have spent days building up context, experimenting, and refactoring to make a PR. Then the reviewer is expected to have meaningful insight in (generously) an hour? AI code reviews help bring balance; the AI may notice stuff a human wouldn't, and it's ok for the human reviewer to say "hey, ChatGPT says this is an issue but I'm not sure - what do you think?"

We run all our PRs through automated (Claude) reviews, and it helps a LOT.

Another example: Lots of times we have several people debugging an issue and nobody has full context. Folks are looking at code, folks are running LLM prompts, folks are searching Slack, etc. Sometimes the LLMs come up with good ideas but nobody is sure, because none of us have all the context we need. "ChatGPT says..." is a way of bringing it to everyone's attention.

I think this can be generalized to forum posts. "ChatGPT says" is similar to "Wikipedia says". It's not the end of the conversation, but it helps get everyone on the same page, especially when nobody is an expert.
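
A minimal sketch of the automated-review setup stickfigure describes, using the Anthropic Python SDK; the model name, prompt, and diff base are assumptions, and a real pipeline would also post the result back to the PR from CI.

    # Feed the branch diff to Claude and print a review (a sketch, not the
    # commenter's actual pipeline). Assumes ANTHROPIC_API_KEY is set.
    import subprocess
    import anthropic  # pip install anthropic

    def review_diff(base="origin/main"):
        diff = subprocess.run(
            ["git", "diff", base],
            capture_output=True, text=True, check=True,
        ).stdout
        client = anthropic.Anthropic()
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumed model name
            max_tokens=1024,
            system="You are a code reviewer. Flag likely bugs; skip style nits.",
            messages=[{"role": "user", "content": f"Review this diff:\n\n{diff}"}],
        )
        return message.content[0].text

    print(review_diff())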

627467•10h ago
We - humans - are getting ready for A"G"I
ottah•9h ago
Relying heavily on information supplied by LLMs is a problem, but so is this toxic negativity towards technology. It's a tool, sometimes useful, and other times crap. Critical thinking and literacy are the key skills that help you tell the difference, and a blanket rejection (just like absolute reliance) is the opposite of critical thinking.
Terretta•1h ago
Counterpoint: asking GPT can provide useful calibration, not to facts but to median mental models.

Think of it as a dynamic opinion poll -- the probabilistic take on this thing is such and such.

As a bonus you can prime the respondent's persona.

// After posting, I see another comment at the bottom opening with "Counterpoint:"... Different point, though.