
I'm Not Consulting an LLM

https://lr0.org/blog/p/gpt/
29•birdculture•1h ago

Comments

mark_l_watson•48m ago
I understand the author’s sentiment, but I would like to give a counterexample:

I like to read philosophy, and after I read a passage and think about it, I find it useful to copy the passage into a decent model and ask for its interpretation, or, if it is something old, to ask about word choice or meaning.

I realize that I may not be getting perfect information, but LLM output gives me ideas drawn from a combination of live web searching and whatever innate knowledge the model holds in its weights.

Another counterexample: I have never found runtime error traces from languages like Haskell and Common Lisp to be particularly clear. If an error is not clear to me, sometimes using a model gets me past it quickly.

All that said, I think the author is correct that using LLMs should not be an excuse not to think for oneself.

XenophileJKO•45m ago
I mean, it can also depend on scale. I use hundreds of sub-agent instances to do analysis that I just would not be able to do in a reasonable timeframe. That is a TON of thinking done for me.
4ndrewl•2m ago
Is "not having to think" a good metric now?
zaphirplane•18m ago
I was hoping for a deeper article
NitpickLawyer•13m ago
Yawn. Just another post about high-horse attitudes regarding "muh expertise". And yet the top of the top experts in their fields (Terence Tao, Karpathy, hell, even Linus) are finding ways to make these tools useful for themselves. That's the crux, imo. If you can't find a way to make these tools useful for you, you are the problem, not the LLMs. There's something there, even if currently not much, but there's something there for everyone at this point.
xyzal•7m ago
"If you can't find a way to make today's tech du jour useful for you, you are the problem".

Hmmmmm.

simianwords•8m ago
If people used LLMs more, we would have fewer instances of misinformation. Lots of claims on social media could easily be dispelled by a single LLM search.
chrysoprace•7m ago
LLMs can be injected with biases as well. Just look at Grok's responses any time it's tagged in anything mildly political.
simianwords•1m ago
That’s like saying journalism is not useful because journalists are biased.

Bias is useful and inevitable

Nursie•1m ago
Or it could give out bad information and make everything worse, because a subset of people seem to think LLMs are infallible, or gods, rather than aggregates of the knowledge they’ve consumed.