
A new project aims to predict how quickly AI will progress

https://www.economist.com/science-and-technology/2025/11/10/a-new-project-aims-to-predict-how-qui...
1•bananis•1m ago•0 comments

An AI-Generated Country Song Is Topping a Billboard Chart

https://www.whiskeyriff.com/2025/11/08/an-ai-generated-country-song-is-topping-a-billboard-chart-...
1•CharlesW•2m ago•0 comments

Don't Fall into TrustPilot's Trap

https://archie6.com/trustpilot/
1•Archie627•4m ago•0 comments

You need an AI policy for docs

https://passo.uno/ai-docs-policy-contributions/
1•theletterf•4m ago•0 comments

Stolen Foot

https://www.stolenfoot.com/
1•julian-shalaby•6m ago•1 comments

"Why are my stakeholders so unreasonable?"

https://thoughtfuleng.substack.com/p/why-are-my-stakeholders-so-unreasonable
1•zdw•6m ago•0 comments

The end of 0% interest rates: what it means for tech startups and the industry

https://newsletter.pragmaticengineer.com/p/zirp
1•gmays•6m ago•0 comments

The 4.5T dollar elephant in the room

https://stevenadler.substack.com/p/the-45-trillion-dollar-elephant-in
1•DustinEchoes•8m ago•0 comments

VelociDB: a modern, high-performance SQLite reimplementation in Rust

https://github.com/niklabh/velocidb
1•niklabh•8m ago•0 comments

CSS Extraction Library for Vite and Preact

https://github.com/aziis98/preact-css-extract
1•todsacerdoti•9m ago•0 comments

Show HN: Exploring the boundaries of AI-generated creativity

https://kinkora.fun
1•heavenlxj•9m ago•0 comments

Politico.eu: Brussels knifes privacy to feed the AI boom

https://www.politico.eu/article/brussels-knifes-privacy-to-feed-the-ai-boom-gdpr-digital-omnibus/
1•purpleKiwi•10m ago•0 comments

Minisforum Stuffs an Arm Homelab in the MS-R1

https://www.jeffgeerling.com/blog/2025/minisforum-stuffs-entire-arm-homelab-ms-r1
1•corvad•12m ago•0 comments

Rapid method recycles nylon from fishing nets and car parts

https://phys.org/news/2025-10-rapid-method-recycles-nylon-fishing.html
1•PaulHoule•13m ago•0 comments

Show HN: Tracking AI Code with Git AI

https://usegitai.com/blog/introducing-git-ai
1•addcn•13m ago•0 comments

Show HN: Keep your personal notes private with MySecureNote

https://apps.microsoft.com/detail/9phmrjpnvp6s?hl=en-US&gl=US
1•mdimec4•14m ago•0 comments

Show HN: Complex ζ(s) and Γ(s) in pure JavaScript (works for Re(s)<0)

https://github.com/cpuXguy/vanilla_zeta
1•cpuXguy•14m ago•0 comments

ChezScheme v10.3.0

https://github.com/cisco/ChezScheme/releases/tag/v10.3.0
1•swatson741•14m ago•0 comments

Marriott broke up with Sonder. Now, guests who've been booted are scrambling

https://www.businessinsider.com/marriott-sonder-licensing-termination-guest-apartment-new-booking...
1•jaredwiener•15m ago•0 comments

Ask HN: Raycast Notes Alternative for Windows?

1•moonAA•18m ago•0 comments

Complex Analysis

https://complex-analysis.com/
1•griffzhowl•18m ago•0 comments

The Write Last, Read First Rule

https://tigerbeetle.com/blog/2025-11-06-the-write-last-read-first-rule/
1•ashvardanian•19m ago•0 comments

Patches Proposed for Radeon GCN 1.1 GPUs to Use Amdgpu Linux Driver by Default

https://www.phoronix.com/news/AMD-GCN-1.1-Driver-Default-Prop
1•speckx•19m ago•0 comments

Don't Eat Before Reading This (1999)

https://www.newyorker.com/magazine/1999/04/19/dont-eat-before-reading-this
2•mitchbob•21m ago•1 comments

Why do browsers match CSS selectors from right to left?

https://stackoverflow.com/questions/5797014/why-do-browsers-match-css-selectors-from-right-to-left
1•jameslk•22m ago•0 comments

Synchronicity Engine

1•van_lizard•23m ago•1 comments

Show HN: Local-first observability for AI SDK on Next.js in 10 lines of code

https://github.com/The-Context-Company/observatory
1•armank-dev•24m ago•0 comments

Tariff Rebates?

https://thehill.com/homenews/administration/5597771-what-to-know-trumps-2k-tariff-check-proposal/
1•jqpabc123•26m ago•1 comments

Reexamination of the 9–10 November 1975 Storm Using Today's Technology (2006) [pdf]

https://www.michiganseagrant.org/lessons/wp-content/uploads/sites/3/2019/07/Reexamination-of-the-...
1•shagie•26m ago•0 comments

Show HN: A daily newsletter for random content across the web

https://randomdailyurls.com/
1•RandomDailyUrls•26m ago•0 comments

LLMs are steroids for your Dunning-Kruger

https://bytesauna.com/post/dunning-kruger
59•gridentio•2h ago

Comments

Brendinooo•1h ago
>I think LLMs should not be seen as knowledge engines but as confidence engines.

This is a good line, and I think it tempers the "not just misinformed, but misinformed with conviction" observation quite a bit, because sometimes moving forward with an idea at less than 100% accuracy will still bring the best outcome.

Obviously that's a less than ideal thing to say, but imo (and in my experience as the former gifted student who struggles to ship) intelligent people tend to underestimate the importance of doing stuff with confidence.

shermantanktop•1h ago
Confidence has multiple benefits. But one of those benefits is social - appearing confident triggers others to trust you, even when they shouldn’t.

Seeing others get burned by that pattern over and over can encourage hesitation and humility, and discourage confident action. It’s essentially an academic attitude and can be very unfortunate and self-defeating.

Chabsff•1h ago
> I feel like LLMs are a fairly boring technology. They are stochastic black boxes. The training is essentially run-of-the-mill statistical inference. There are some more recent innovations on software/hardware-level, but these are not LLM-specific really.

This is pretty ironic, considering the subject matter of that blog post. It's a super-common misconception that's gained very wide popularity due to reactionary (and, imo, rather poor) popular science reporting.

The author parroting that with confidence in a post about Dunning-Krugering gives me a bit of a chuckle.

miningape•1h ago
I also find it hard to get excited about black boxes - imo there's no real meat to the insights they give, only the shell of a "correct" answer
yannyu•1h ago
What's the misconception? LLMs are probabilistic next-token prediction based on current context, right?
Chabsff•1h ago
Yeah, but that's their interface. That informs surprisingly little about their inner workings.

ANNs are arbitrary function approximators. The training process uses statistical methods to identify a set of parameters that approximate the function as best as possible. That doesn't necessarily mean that the end result is equivalent to a very fancy multi-stage linear regression. It's a possible outcome of the process, but it's not the only possible outcome.

Looking at an LLM's I/O structure and training process is not enough to conclude much of anything. And that's the misconception.
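
(To make "arbitrary function approximator" concrete, here is a minimal sketch assuming only NumPy: a one-hidden-layer network fit to sin(x) by plain gradient descent. The width, learning rate, and step count are made-up illustrative choices, not anything from the thread; the point is only that run-of-the-mill statistical training yields a nonlinear function, not a fancy multi-stage linear regression.)

    import numpy as np

    # Fit a tiny ANN to sin(x) with ordinary gradient descent.
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
    y = np.sin(x)

    H = 32                                    # hidden width (illustrative)
    W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

    lr = 0.01
    for _ in range(20_000):
        h = np.tanh(x @ W1 + b1)              # forward pass
        pred = h @ W2 + b2
        err = pred - y                        # gradient of squared error
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print("max |error|:", np.abs(np.tanh(x @ W1 + b1) @ W2 + b2 - y).max())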

yannyu•1h ago
> Yeah, but that's their interface. That informs surprisingly little about their inner workings.

I'm not sure I follow. LLMs are probabilistic next-token prediction based on current context, that is a factual, foundational statement about the technology that runs all LLMs today.

We can ascribe other things to that, such as reasoning or knowledge or agency, but that doesn't change how they work. Their fundamental architecture is well understood, even if we allow for the idea that maybe there are some emergent behaviors that we haven't described completely.

> It's a possible outcome of the process, but it's not the only possible outcome.

Again, you can ascribe these other things to it, but to say that these external descriptions of outputs call into question the architecture that runs these LLMs is a strange thing to say.

> Looking at an LLM's I/O structure and training process is not enough to conclude much of anything. And that's the misconception.

I don't see how that's a misconception. We evaluate pretty much everything by inputs and outputs. And we use those to infer internal state. Because that's all we're capable of in the real world.

parineum•1h ago
I'm not sure what claim you're disputing or making with this.

What more are LLMs than statistical inference machines? I don't know that I'd assert that's all they are with confidence, but all the configuration options I can play with during generation (Top K, Top P, Temperature, etc.) are ways to _not_ select the most likely next token, which leads me to believe that they are, in fact, just statistical inference machines.
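
(For readers who haven't met those knobs: below is a minimal sketch of temperature / top-k / top-p sampling over a toy vocabulary, assuming only NumPy. The function name, defaults, and five-token example are illustrative, not any particular vendor's API.)

    import numpy as np

    def sample_next_token(logits, temperature=1.0, top_k=50, top_p=0.9, rng=None):
        """Toy next-token sampler. Greedy decoding (always argmax) is the
        temperature -> 0 limit; these knobs deliberately sample away from
        the single most likely token."""
        rng = rng or np.random.default_rng()
        # Temperature: <1 sharpens the distribution, >1 flattens it.
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Top-k: keep only the k most probable tokens.
        order = np.argsort(probs)[::-1]
        keep = order[:top_k]
        # Top-p (nucleus): smallest prefix whose cumulative probability >= p.
        cum = np.cumsum(probs[keep])
        keep = keep[: max(1, np.searchsorted(cum, top_p) + 1)]
        renorm = probs[keep] / probs[keep].sum()
        return rng.choice(keep, p=renorm)

    # Five-token toy vocabulary: index 0 is most likely but not guaranteed.
    logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
    print(sample_next_token(logits, temperature=0.8, top_k=3, top_p=0.95))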

sho_hn•1h ago
I'm not sure this is something I really worry about. Whenever I use an LLM I feel dumber, not smarter; there's a sensation of relying on a crutch instead of having done the due diligence of learning something myself. I'm less confident in the knowledge and less likely to present it as such. Is anyone really cocksure on the basis of LLM received knowledge?

> As a ChatGPT user I notice that I'm often left with a sense of certainty.

They have almost the opposite effect on me.

Even with knowledge from books or articles I've learned to multi-source and question things, and my mind treats the LLMs as a less reliable averaging of sources.

deadbabe•1h ago
If you feel dumber, it's because you're using the LLM to do raw work instead of using it for research. It should be a Google/Stack Overflow replacement, not a really powerful IntelliSense. You should feel no dumber than using Google to investigate questions.
Insanity•1h ago
I don't think this is entirely accurate. If you look at this: https://www.media.mit.edu/publications/your-brain-on-chatgpt..., it shows that search engines do engage your brain _more_ than LLM usage. So you'll remember more through search engine use (and crawling the web 'manually') than by just prompting a chatbot.
Insanity•1h ago
I remember back when I was in secondary school, something commonly heard was

"Don't just trust wikipedia, check it's resources, because it's crowdsourced and can be wrong".

Now, almost 2 decades later, I rarely hear this stance and I see people relying on wikipedia as an authoritative source of truth. i.e, linking to wikipedia instead of the underlying sources.

In the same sense, I can see that "Don't trust LLMs" will slowly fade away and people will blindly trust them.

aabhay•1h ago
There’s also the fact that both Wikipedia and LLMs are non-stationary. The quality of Wikipedia has grown immensely since its inception, and LLMs will get more accurate (if not explicitly “smarter”).
derektank•1h ago
I'm not entirely convinced that the quality of Wikipedia has improved substantially in the last decade.
sho_hn•1h ago
I don't think the cases are really the same. With Wikipedia, people have learned to trust that the probability of the information being at least reasonably good is pretty high, because there's an editing crucible around it and the ability to correct misinformation surgically. No one can hotpatch an LLM in 5 minutes.
tayo42•1h ago
Wikipedia is usually close enough, and most users don't require perfection for their "facts".

I've noticed things like Gemini summaries on Google searches are also generally close enough.

rsynnott•1h ago
> Now, almost 2 decades later, I rarely hear this stance, and I see people relying on Wikipedia as an authoritative source of truth, i.e., linking to Wikipedia instead of the underlying sources.

That's a different scenario. You shouldn't _cite wikipedia in a paper_ (instead you should generally use its sources), but it's perfectly fine in most circumstances to link it in the course of an internet argument or whatever.

miningape•1h ago
> "Don't just trust wikipedia, check it's resources, because it's crowdsourced and can be wrong"

This comes from decades of teachers misremembering what the rule was, until it eventually morphed into the Wikipedia-specific form we see today. The actual rule is that you cannot cite an encyclopaedia in an academic paper, full stop.

Wikipedia is an encyclopaedia and therefore should not be cited.

Wikipedia is the only encyclopaedia most people have used in the last 20 years, therefore Wikipedia = encyclopaedia in most people's minds.

There's nothing wrong with using an encyclopaedia for learning or introducing yourself to a topic (in fact this is what teachers told students to do). And there's nothing specifically wrong about Wikipedia either.

freejazz•1h ago
> Is anyone really cocksure on the basis of LLM received knowledge?

Yeah, the stupid.

code_for_monkey•1h ago
Unfortunately I'm like you, and we are in the minority. The manager class loves the LLM and doesn't seem to consider its flaws like that.
lukan•1h ago
Nah, I feel smart using it in a smart way to get stuff done faster than before.
everdrive•1h ago
This captures my experience quite well. I can "get a lot more done," but it's not really me doing the things, and I feel like a bit of a fraud. And as the workday and the workweek roll on, I find myself needing to force myself to look things up and experiment rather than just asking the LLM. It's quite clear that for most people LLMs will make them more dependent. People with better discipline, I think, will really benefit in big ways, and you'll see this become a new luxury belief; the disciplined geniuses around us will genuinely be perplexed why people are saying that LLMs have made them less capable, much in the same way they wonder why people can't just limit their drug use recreationally.
Brendinooo•1h ago
>it's not really me doing the things, and I feel like a bit of a fraud

I've been thinking about this a bit. We don't really think this way in other areas; is it appropriate to think this way here?

My car has an automatic transmission, am I a fraud because the machine is shifting gears for me?

My tractor plows a field, am I a fraud because I'm not using draft horses or digging manually?

Spell check caught a word, am I a fraud because I didn't look it up in a dictionary?

everdrive•1h ago
I've been thinking about that comparison as well. A common fantasy is that civilization will collapse and the guy who knows how to hunt and start a fire will really excel. In practice, this never happens and he's sort of left behind unless he also has other skills relevant to the modern world.

And, for instance, I have barely any knowledge of how my computer works, but it's a tool I use to do my job. (and to have fun at home.)

Why are these different than using LLMs? I think at least for me the distinction is whether or not something enables me to perform a task, or whether it's just doing the task for me. If I had to write my own OS and word processor just to write a letter, it'd never happen. The fact that the computer does this for me facilitates my task. I could write the letter by hand, but doing it in a word processor is way better. Especially if I want to print multiple copies of the letter.

But for LLMs, my task might be something like "setting up Apache is easy, but I've never done it, so just tell me how to do it so I don't fumble through learning and make it take way longer." The task was setting up Apache. The task was assigned to me, but I didn't really do it. There wasn't necessarily some higher-level task that I merely needed Apache for. Apache was the whole task! And I didn't do it!

Now, this will not be the case for all LLM-enabled tasks, but I think this distinction speaks to my experience. In the previous word-processor example, the LLM would just write my document for me. It doesn't allow me to write my document more efficiently; it's efficient only in the sense that I no longer need to actually do it myself, except maybe to act as an editor (and most people don't even do much of that work). My skill in writing either atrophies or never fully develops, since I don't actually need to spend any time doing it or thinking about it.

In a perfect world, I use self-discipline to have the LLM show me how to set up Apache, then take notes, and then research, and then set it up manually in subsequent runs; I'd have benefited from learning the task much more quickly than if I'd done it alone, but also used my self-discipline to make sure I actually really learned something and developed expertise as well. My argument is that most people will not succeed in doing this, and will just let the LLM think for them.

Brendinooo•1h ago
I remember seeing a tweet a while back that talked about how modernity separated work from physicality, and now you have to do exercise on purpose. I think the Internet plus car-driven societies have done something similar to being social, and LLMs are doing something similar both to thinking and to the kind of virtue that enables one to master a craft.

So, while it's an imperfect answer that I haven't really nailed down yet, maybe the answer is just to realize this and make sure we're doing hard things on purpose sometimes. This stuff has enabled free time; we just can't use it to doomscroll.

everdrive•1h ago
>Internet plus car-driven societies had done something similar to being social,

That's an interesting take on the loneliness crisis that I had not considered. I think you're really onto something. Thanks for sharing. I don't want to dive into this topic too much since it's political and really off-topic for the thread, but thank you for suggesting this.

agumonkey•1h ago
Most of the time it feels like a crutch to me. There have been a few moments where it unlocked deep motivation (by giving me a feel for the size of a solution based on ChatGPT output), and one time, on a research project, whatever crazy idea I threw at it, it would imagine what that idea would entail in terms of semantics, and then I was inspired even more.

The jury is still out on what value these things will bring.

rsynnott•1h ago
> Is anyone really cocksure on the basis of LLM received knowledge?

Some people certainly seem to be. You see this a lot on webforums; someone spews a lot of confident superficially plausible-looking nonsense, then when someone points out that it is nonsense, they say they got it from a magic robot.

I think this is particularly common for non-tech people, who are more likely to believe that the magic robots are actually intelligent.

cachius•1h ago
From the title I thought this was a repost of 'AI is Dunning-Kruger as a service' https://news.ycombinator.com/item?id=45851483

It is not.

gopheryourshelf•1h ago
>“the problem with the world is that the stupid are cocksure, while the intelligent are full of doubt.”

Is it just me, or does everyone find that dumb people use this statement more than ever?

stevenwoo•1h ago
It appears to be a paraphrase of William Butler Yeats: https://en.wikipedia.org/wiki/The_Second_Coming_(poem)
0xdeadbeefbabe•1h ago
Ugh. You can be cocksure of your doubts. It's still confidence, duh.
aeve890•1h ago
Everyone thinks they're the intelligent ones, of course. Which reinforces the repetition ad nauseam of Dunning-Kruger. Which is in itself dumb AF, because the effect described by Dunning and Kruger has been repeatedly exaggerated and misinterpreted. Which in turn is even dumber, because the Dunning-Kruger effect is debatable and its reproducibility is weak at best.
the_af•1h ago
Yeah, nobody who ever mentions the DK effect (myself included) ever stops to consider they might be in the "dumb" cohort ;)

We are all geniuses!

vehemenz•1h ago
I hate to comment on just a headline (though I did read the article), but it's wrong enough to warrant correcting.

This is not what the Dunning-Kruger effect is. It's about lacking the metacognitive ability to understand one's own skill level; overconfidence resulting from ignorance isn't the same thing. Joe Rogan propagated the version of this phenomenon that infiltrated public consciousness, and we've been stuck with it ever since.

Ironically, you can plug this story into your favorite LLM, and it will tell you the same thing. And, also ironically, the LLM will generally know more than you in most contexts, so anyone with a degree of epistemic humility is better served taking it at least as seriously as their own thoughts and intuitions, if not at face value.

lukev•1h ago
I very much agree. I've been telling folks in trainings that I do that the term "artificial intelligence" is a cognitohazard, in that it pre-consciously steers you to conceptualize an LLM as an entity.

LLMs are cool and useful technology, but if you approach them with the attitude you're talking with an other, you are leaving yourself vulnerable to all sorts of cognitive distortions.

roywiggins•1h ago
It certainly isn't helped by the RLHF and chat interface encouraging this. LLM providers have every incentive to make their users engage it like an other. It was much harder to accidentally do when it was just a completion UI and not designed to roleplay as a person.
cgriswald•1h ago
I don't think that is actually a problem. For decades people have believed that computers can't be wrong. Why, now, suddenly, would it be worse if they believed the computer wasn't a computer?

The larger problem is cognitive offloading. The people for whom this is a problem were already not doing the cognitive work of verifying facts and forming their own opinions. Maybe they watched the news, read a Wikipedia article, or listened to a TED talk, but the results are the same: an opinion they felt confident in without a verified basis.

To the extent this is on 'steroids', it is because they see it as an expert (in everything) computer and because it is so much faster than watching a TED talk or reading a long form article.

roywiggins•16m ago
It can also dispense agreeable confirmation on tap, with very little friction and hardly any chance of accidentally encountering something unexpected or challenging. Even TED talks occasionally have a point of view that isn't perfectly crafted for each hearer.
mmaunder•1h ago
Use an agent to create something with a non-negotiable outcome, e.g. software that does something useful, or fails to, in a language you don’t program in. This is a helpful way to calibrate your own understanding of what LLMs are capable of.
AndrewKemendo•1h ago
Humans broadly have a tenuous grasp of “reality” and “truth.” Propagandists, spies, and marketers know all too well what philosophers of mind keep demonstrating: most humans do not perceive or interact with reality as it is, but rather through their perception of it, as it contributes to or contradicts their desired future.

Provide a person confidence in their opinion and they will not challenge it, as that would risk the reward of believing they live in a coherent universe.

The average person has never heard the term “epistemology,” despite the concept being central to how people derive coherence. Yet all these trite pieces written about AI and its intersection with knowledge claim some important technical distinction.

I’m hopeful that a crisis of epistemology is coming, though that’s probably too hopeful. I’m just enjoying the circus at this point.

balderdash•1h ago
The effect of LLMs strikes me as similar to reading the newspaper: when I learn about something I have no knowledge base in, I come away feeling like I learned a lot. When I interact with a newspaper or an LLM in an area where I have real domain expertise, I realize they don’t know what they are talking about, which is concerning for the information I get from them about topics where I don’t have that level of domain expertise.
moffkalast•1h ago
And why stop at newspapers? It's been a while since one could say books have any integrity; pretty much anyone can get anything into print these days, from political shenanigans to self-help books designed to confirm people's biases to sell more units. Video is by far the hardest to fake, but that's changing as well.

Regardless of what media you get your info from, you have to be selective about which sources you trust. That's more true today than ever before, because the bar for creating content has never been lower.

avree•1h ago
The title makes this incomprehensible. The author seemingly defines Dunning-Kruger as the... opposite of the Dunning-Kruger effect.
kraftman•1h ago
I feel like when I talk to someone and they tell me a fact, that fact goes into a kind of holding space, where I apply a filter of 'who is this person that is telling me this thing?' There's how well I know them, there are the other beliefs I know they have, there's their professional experience and their personal experience. That fact then gets marked as 'probably a true fact' or 'Mark believes in aliens'.

When I use ChatGPT I do the same before I've even asked for the fact: how common is this problem? How well known is it? How likely is it that ChatGPT both knows it and can surface it? Afterwards I don't feel like I know something; I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.

giraffe_lady•1h ago
The important part of this is the "I feel like" bit. There's a fair and growing body of research suggesting that the "fact" is more durable in your memory than the context, and that over time, across a lot of information, you will lose some of the mappings and integrate things you "know" to be false into your model of the world.

This more closely fits our models of cognition anyway. There is nothing really very like a filter in the human mind, though there are things that feel like one.

kraftman•1h ago
Maybe, but then that's the same whether I talk to ChatGPT or a human, isn't it? Except with ChatGPT I can instantly verify what I'm looking for, whereas with a human I can't do that.
giraffe_lady•13m ago
I wouldn't assume that it's the same, no. For all we knock them, unconscious biases seem to get a lot of work done; we do all know real things that we learned from other unreliable humans, somehow. Not a perfect process at all, but one we are experienced at and have lifetimes of intuition for.

The fact that LLMs seem like people but aren't, and specifically carry a lot of the signals of a reliable source, makes me unsure how these processes will map. I'm skeptical of anyone who is confident about it either way, in fact.

medstrom•1h ago
Reminds me of "default to null":

> The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense so I couldn’t possibly have parsed it”, except that in the first case, I assume that the error lies with me rather than the text. This is not a safe assumption in a post-GPT2 world. Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something) the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all.)

https://www.greaterwrong.com/posts/4AHXDwcGab5PhKhHT/humans-...

jancsika•41m ago
> Afterwards I don't feel like I know something, I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.

Can you cite a specific example where this happened for you? I'm interested in how you think you went from "broad idea" to building actual knowledge.

kraftman•31m ago
Sure. I wanted to tile my bathroom; from ChatGPT I learned about laser levels, ledger boards, and levelling spacers (I'd only seen those cross-corner ones before).
chaostheory•1h ago
There are so many guardrails now, and they are being improved daily. This blog post is a year out of date. Not to mention that people know how to prompt better these days.

To make his point, you need specific examples from specific LLMs.

jakubmazanec•1h ago
It's possible that the Dunning-Kruger effect is not real, but only a measurement or statistical artefact [1]. So it probably needs more and better studies.

[1] https://www.mcgill.ca/oss/article/critical-thinking/dunning-...
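
(A quick way to see the artefact argument: simulate self-estimates that are statistically independent of actual skill, bucket by skill quartile as the classic plots do, and the familiar Dunning-Kruger shape appears anyway. A hedged NumPy sketch with made-up uniform distributions, not data from [1]:)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    skill = rng.uniform(0, 100, n)        # actual percentile
    estimate = rng.uniform(0, 100, n)     # self-estimate, independent of skill

    # Bucket by actual-skill quartile, as in the classic plots.
    quartile = np.digitize(skill, [25, 50, 75])
    for q in range(4):
        mask = quartile == q
        print(f"Q{q + 1}: actual ~{skill[mask].mean():5.1f}, "
              f"self-estimate ~{estimate[mask].mean():5.1f}")

    # The bottom quartile "overestimates" (~12.5 vs ~50) and the top quartile
    # "underestimates" (~87.5 vs ~50) purely from bounded scales plus
    # bucketing -- no metacognitive deficit required.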

travisgriggs•1h ago
8 months or so ago, my quip regarding LLMs was “stochastic parrot.”

The term I’ve been using of late is “authority simulator.” My formative experience of “authority figures” was of a person who can speak with breadth and depth about a subject and who seems to have internalized it, because they can answer quickly and thoroughly. Because LLMs do this so well, it’s really easy to feel like you’re talking to an authority on a subject. And even though my brain intellectually knows this isn’t true, emotionally, the simulation of authority is comforting.

GMoromisato•1h ago
Speaking of uncertainty, I wish more people would accept their uncertainty with regards to the future of LLMs rather than dash off yet another cocksure article about how LLMs are {X}, and therefore {completely useless}|{world-changing}.

Quantity has a quality of its own. The first chess engine to beat Garry Kasparov wasn't fundamentally different from earlier ones--it just had a lot more compute power.

The original Google algorithm was trivial: rank web pages by incoming links--its superhuman power at giving us answers ("I'm feeling lucky") was/is entirely due to a massive trove of data.
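
(That triviality is easy to underestimate; the core of link-based ranking fits in a few lines. A toy power-iteration sketch of PageRank, assuming NumPy; the three-page web is invented, and 0.85 is just the commonly cited damping value:)

    import numpy as np

    def pagerank(links, damping=0.85, iters=50):
        # links: {page: [pages it links to]}; every page appears as a key.
        pages = sorted(links)
        idx = {p: i for i, p in enumerate(pages)}
        n = len(pages)
        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            new = np.full(n, (1.0 - damping) / n)    # random-surfer term
            for page, outs in links.items():
                if outs:                              # split rank across outlinks
                    share = damping * rank[idx[page]] / len(outs)
                    for out in outs:
                        new[idx[out]] += share
                else:                                 # dangling page: spread evenly
                    new += damping * rank[idx[page]] / n
            rank = new
        return dict(zip(pages, rank.round(3)))

    # "c" gets the most incoming links, so it ranks highest.
    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))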

And remember all the articles about how unreliable Wikipedia was? How can you trust something when anyone can edit a page? But again, the power of quantity--thousands or millions of eyeballs identifying errors--swamped any simple attacks.

Yes, LLMs are literally just matmul. How can anything useful, much less intelligent, emerge from multiplying numbers really fast? But then again, how can anything intelligent emerge from a wet mass of brain cells? After all, we're just meat. How can meat think?

svieira•17m ago
> How can meat think?

Some of us used to think that meat spontaneously generated flies. Maybe someday we'll (re-)learn that meat doesn't spontaneously generate thought either?

phamson02•1h ago
I partly share the author's point that ChatGPT users (myself included) can "walk away not just misinformed, but misinformed with conviction". Sometimes I want to criticise aloud, to write a post blaming this technology for the colourful, sophisticated, yet empty bullshit I hear from a colleague or read in an online post.

But I always resist the urge, because I think: aren't there always going to be some people like that, with or without this LLM thing?

If there is anything to hate about this technology for the more and more bullshit we see and hear in daily life, it is: (1) its reach: more people of all ages, backgrounds, expertise, and intents are using it, and some are heavily misusing it; and (2) its ever-increasing capability: it has already become pretty easy for ChatGPT or any other LLM to produce a sophisticated but wrong answer on a difficult topic, and I think the trend is that with later, more advanced versions it will become harder and take more effort to spot a hidden failure lurking in a more information-dense answer.

bryanlarsen•37m ago
My opinion: if LLMs speed you up, you're doing it wrong. You have to carefully review and audit every line that comes out of an LLM. You have to spend a lot of time forcing LLMs to prove that the code they wrote is correct. You should be nit-picking everything.

Despite that, LLMs are useful. I could write the code faster without an LLM, but then I'd have code that wasn't carefully reviewed line-by-line, because my coworkers trust me (the fools). It'd have far fewer tests, because nobody forced me to prove everything. It'd have worse naming, because every once in a while the LLM does that better than me. It'd be missing a few edge cases the LLM thought of that I didn't. It'd have forest/trees problems, because if I were writing the code I'd be focused on the code instead of the big picture.

pants2•26m ago
I've seen this! Following some math and physics subreddits, it's a regular occurrence for a new submitter to come in and post some 40 pages of incomprehensible bullshit, claiming they developed a unifying theory of physics with ChatGPT and that ChatGPT has told them it's a breakthrough in the field. Of course that happened regularly before LLMs too, but not nearly as often.