frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

NPM debug and chalk packages compromised

https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised
334•universesquid•1h ago•154 comments

Signal Secure Backups

https://signal.org/blog/introducing-secure-backups/
55•keyboardJones•31m ago•22 comments

Job Mismatch and Early Career Success

https://www.nber.org/papers/w34215
50•jandrewrogers•1h ago•6 comments

Experimenting with Local LLMs on macOS

https://blog.6nok.org/experimenting-with-local-llms-on-macos/
112•frontsideair•2h ago•68 comments

Our data shows San Francisco tech workers are working Saturdays

https://ramp.com/velocity/san-francisco-tech-workers-996-schedule
33•hnaccount_rng•50m ago•20 comments

OpenWrt: A Linux OS targeting embedded devices

https://openwrt.org/
28•pykello•1h ago•4 comments

Clankers Die on Christmas

https://remyhax.xyz/posts/clankers-die-on-christmas/
105•jerrythegerbil•2h ago•51 comments

Dietary omega-3 polyunsaturated fatty acids as a protective factor of myopia

https://bjo.bmj.com/content/early/2025/08/17/bjo-2024-326872
51•FollowingTheDao•2h ago•27 comments

Will Amazon S3 Vectors Kill Vector Databases–Or Save Them?

https://zilliz.com/blog/will-amazon-s3-vectors-kill-vector-databases-or-save-them
30•Fendy•1h ago•26 comments

Google gets away almost scot-free in US search antitrust case

https://www.computerworld.com/article/4052428/google-gets-away-almost-scot-free-in-us-search-anti...
107•CrankyBear•1h ago•46 comments

Firefox 32-bit Linux Support to End in 2026

https://blog.mozilla.org/futurereleases/2025/09/05/firefox-32-bit-linux-support-to-end-in-2026/
18•AndrewDucker•3d ago•3 comments

Browser Fingerprint Detector

https://fingerprint.goldenowl.ai/
28•eustoria•2h ago•18 comments

Meta suppressed research on child safety, employees say

https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/
295•mdhb•4h ago•170 comments

Building an acoustic camera with UMA-16 and Acoular

https://www.minidsp.com/applications/usb-mic-array/acoustic-camera-uma16
16•tomsonj•3d ago•1 comments

Immich – High performance self-hosted photo and video management solution

https://github.com/immich-app/immich
234•rzk•9h ago•75 comments

A complete map of the Rust type system

https://rustcurious.com/elements/
58•ashvardanian•4h ago•3 comments

14 Killed in anti-government protests in Nepal

https://www.tribuneindia.com/news/world/massive-protests-in-nepal-over-social-media-ban/
479•whatsupdog•5h ago•317 comments

Using Claude Code to modernize a 25-year-old kernel driver

https://dmitrybrant.com/2025/09/07/using-claude-code-to-modernize-a-25-year-old-kernel-driver
786•dmitrybrant•17h ago•256 comments

RSS Beat Microsoft

https://buttondown.com/blog/rss-vs-ice
178•vidyesh•6h ago•118 comments

What if artificial intelligence is just a "normal" technology?

https://www.economist.com/finance-and-economics/2025/09/04/what-if-artificial-intelligence-is-jus...
34•mooreds•4h ago•24 comments

The MacBook has a sensor that knows the exact angle of the screen hinge

https://twitter.com/samhenrigold/status/1964428927159382261
946•leephillips•1d ago•451 comments

Why Is Japan Still Investing in Custom Floating Point Accelerators?

https://www.nextplatform.com/2025/09/04/why-is-japan-still-investing-in-custom-floating-point-acc...
176•rbanffy•2d ago•58 comments

VMware's in court again. Customer relationships rarely go this wrong

https://www.theregister.com/2025/09/08/vmware_in_court_opinion/
177•rntn•5h ago•113 comments

American Flying Empty Airbus A321neo Across the Atlantic 20 Times

https://onemileatatime.com/news/american-flying-empty-airbus-a321neo-across-atlantic/
34•corvad•1h ago•32 comments

We Rarely Lose Technology (2023)

https://www.hopefulmons.com/p/we-rarely-lose-technology
37•akkartik•3d ago•38 comments

Indiana Jones and the Last Crusade Adventure Prototype Recovered for the C64

https://www.gamesthatwerent.com/2025/09/indiana-jones-and-the-last-crusade-adventure-prototype-re...
76•ibobev•5h ago•8 comments

Formatting code should be unnecessary

https://maxleiter.com/blog/formatting
299•MaxLeiter•18h ago•397 comments

'We can do it for under $100M': Startup joins race to build local ChatGPT

https://www.afr.com/technology/we-can-do-it-for-under-100m-start-up-joins-race-to-build-local-cha...
41•yakkomajuri•2h ago•10 comments

Integer Programming (2002) [pdf]

https://web.mit.edu/15.053/www/AMP-Chapter-09.pdf
19•todsacerdoti•3d ago•4 comments

Writing by manipulating visual representations of stories

https://github.com/m-damien/VisualStoryWriting
38•walterbell•3d ago•8 comments
Open in hackernews

GPT-5 Thinking in ChatGPT (a.k.a. Research Goblin) is good at search

https://simonwillison.net/2025/Sep/6/research-goblin/
315•simonw•1d ago
Related: Google's new AI mode is good, actually - https://news.ycombinator.com/item?id=45158586 - Sept 2025 (31 comments)

Comments

indigodaddy•1d ago
Pretty wild! I wonder how much high school teachers and college professors are struggling with the inevitable usage though?

"Do deep internet research and thinking to present as much evidence in favor of the idea that JRR Tolkein's Lord of the Rings trilogy was inspired by Mervyn Peake's Gormenghast series."

https://chatgpt.com/share/68bcd796-bf8c-800c-ad7a-51387b1e53...

esafak•20h ago
I was amused that it used the neologism 'steel-man' -- redundantly, too.
IanCal•4h ago
I'm a bit confused, how is it redundant here? It's trying to make the best possible argument from one side that seems to be wrong. Instead of taking the argument at face value, it takes the most charitable understanding of it (not requiring that it happened before, but some parts where perhaps inspired during later revisions) and tries to argue that case.
esafak•3h ago
'strongest “steel-man” case' is the same thing as strongest case; “steel-man” adds nothing.
indigodaddy•3h ago
The question I asked was intentionally trying to see if GPT would go for it or just give me the answer it thought I wanted, but it did a pretty decent job at not just saying “you’re absolutely right” etc. Myself I don’t believe there to be much influence between the two.
wtbdbrrr•19h ago
Idea: workshops for teachers that teach them some kind of Socratic method that stimulates kids to support what they got from G with their own thinking, however basic and simple it may be.

Formulating the state of your current knowledge graph, that was just amplified by ChatGPT's research might be a way to offset the loss of XP ... XP that comes with grinding at whatever level kids currently find themselves ...

sixtyj•19h ago
Did you check the facts? Did you click through all the links and see what the sources are?

A while ago I bragged at a conference about how ChatGPT had "solved" something... Yeah, we know, it's from Wikipedia and it's wrong :)

currymj•16h ago
the thing about students who cheat is most of them are (at least in the context of schoolwork) very lazy and don't care if their work is high quality. i would guess waiting multiple minutes for Thinking mode to give thorough results is very unappealing. 4o or 4o-mini was already good enough for their purposes.
42lux•1d ago
[flagged]
scrollaway•1d ago
Simon's writing is consistently either highly practical, or extremely high quality, or both. What's your reference frame to call it "bad" - your own comments?
mattlondon•1d ago
FWIW I take his writings with a hefty pinch of salt these days. It seems incredibly concentrated on OpenAI to the detriment of anything else. This was only cemented when he ended up appearing on some OpenAI marketing video.

This is fine. He is his own person and can write about whatever he wants and work with whoever he wants, but the days when I'd eagerly read his blog to get a finger of the pulse of all of the main developments in the main labs/models has passed, as he seems to only really cover OpenAI these days, and major events from non-OpenAI labs/models don't seem to even get a mention even if they're huge (e.g. nano banana).

That's fine. It's his blog. He can do what he wants. But to me personally he feels like an OpenAI mouthpiece now. But that's just my opinion.

firesteelrain•1d ago
Never read his blog and I like the writing.

> he feels like an OpenAI mouthpiece now

That seems a little harsh. But, I felt the same about older blogs I used to read such as CodingHorror. They just aren’t for me anymore after diverging into other topics.

I really liked this article and the coining of the term “Research Goblin”. That is how I use it too sometimes. Which is also how I used to use Google.

simonw•1d ago
"It seems incredibly concentrated on OpenAI to the detriment of anything else."

My most recent posts:

- https://simonwillison.net/2025/Sep/7/ai-mode/ - Google/Gemini

- https://simonwillison.net/2025/Sep/6/research-goblin/ - OpenAI/GPT-5

- https://simonwillison.net/2025/Sep/6/kimi-k2-instruct-0905/ - Moonshot/Kimi/Groq

- https://simonwillison.net/2025/Sep/6/anthropic-settlement/ - Anthropic (legal settlement)

- https://simonwillison.net/2025/Sep/4/embedding-gemma/ - Google/Gemma

So far in 2025: 106 posts tagged OpenAI, 78 tagged Claude, 58 tagged Gemini, 55 tagged ai-in-china (which includes DeepSeek and Qwen and suchlike.)

I think I'm balancing the vendors pretty well, personally. I'm particularly proud of my coverage of significant model releases - this tag has 140 posts now! https://simonwillison.net/tags/llm-release/

OpenAI did get a lot of attention from me over the last six weeks thanks to the combination of gpt-oss and GPT-5.

I do regret not having written about Nano Banana yet, I've been trying to find a good angle on it that hasn't already been covered to death.

Squarex•1d ago
I used to read and love your blog, but recently I've noticed a bias towards OpenAI since you were involved with the ChatGPT-5 prerelease.
simonw•1d ago
As soon as another lab release an exciting new model (Anthropic and Gemini have both been quiet since GPT-5, with the exception of nano banana which I do intend to cover) I'll write about what they're up to.
sangeeth96•1d ago
> I think I'm balancing the vendors pretty well, personally.

You are. Pretty much my main source these days to get a filtered down, generalist/pragmatic view on use of LLMs in software dev. I'm stumped as to what the person above you is talking about.

OT: maybe I missed this but is the Substack new and any reason (besides visibility) you're launching newsletters there vs. on your wonderful site? :)

simonw•1d ago
The Substack is literally the exact same content as my blog, just manually copied and pasted into an email once a week or so for people who prefer an email subscription.

I wrote about how it works here: https://simonwillison.net/2023/Apr/4/substack-observable/

jryle70•1d ago
His content seem pretty fair and balanced.

https://news.ycombinator.com/submitted?id=simonw

Or take a look at his website:

https://simonwillison.net/

At least you admit it's your opinion. Maybe that's your bias showing?

42lux•1d ago
Spending thousands of words to essentially say "ChatGPT's search feature works pretty well now" with mundane examples like finding UK cake pop availability or identifying buildings from train windows. This has been done before by less capable models - it's just a rehash. Should we expect newer models getting worse? The breathless "Research Goblin" framing and detailed play-by-play of basic web searches feels like padding to make a now routine tool use seem revolutionary.
lbotos•1d ago
Simon says “what I used to Google I now try AI thinking models”

I didn’t feel that he was framing it as _revolutionary_ it felt more evolutionary.

Simon, for every person miffed about your writing, there is another person like me today who said “ok, I guess I should sign up for Simon’s newsletter.” Keep it up.

It’s easy to be a hater on da internet.

42lux, if you have better articles on AI progress do please link them so we can all benefit.

I wanna know when my research goblin can run on my box with 2x 3090s.

42lux•1d ago
If you want posts like this you can just follow AI influencers on LinkedIn.
simonw•1d ago
The mundane examples were the point. I'm not picking things to show it in the best possible light, I picked a representative sample of the ways I've been using it.

I called out the terrible scatter plot of the latitude/longitude points because it helped show that this thing has its own flaws.

I know so many people who are convinced that ChatGPT's search feature is entirely useless. This post is mainly for them.

42lux•1d ago
[flagged]
simonw•1d ago
The thing about models getting incrementally better is that occasionally they cross a milestone where something that didn't work before starts being useful.

Those are the kinds of things I look out for and try to write about.

typpilol•15h ago
Yes this feels very AI like too. A ton of prose for very little substance lol.

I skipped half the article to get to the point, went back and re-read and didn't miss much.

simonw•11h ago
I don't use AI to generate writing on my blog.
typpilol•2h ago
Oh I know. Just this particular article seemed very sparse.
baq•1d ago
It’s a very interesting balance between ‘LLMs are unpredictable thus useless’ and ‘LLMs are an amazing revolution, next step on the ladder of human civilization’.

I find it informative that search works so well. I knew it works well, but this feels like step above whatever Gemini can do, which is my go to workhorse for chatbots.

simonw•1d ago
What's bad about my post?
adzm•1d ago
I'm officially adopting the term Research Goblin, thanks.
yorwba•1d ago
This post is currently number 144 in newest and not listed in the second-chance pool https://news.ycombinator.com/pool so I think this is its first chance.
42lux•1d ago
https://hn.algolia.com/?q=research+goblin it's like the third time it gets posted and only got traction because I asked why it's popping up in the new q over and over again.
redeyedtreefrog•1d ago
The original author submitted it, then when it didn't get traction it looks like two fans of his blog both submitted it around 12 hours later. Whether for internet upvote points or because they personally thought the article particularly great, I don't know.

Personally I generally enjoy the blog and the writing, but not so much this post. It has a very clickbaity title for some results which aren't particularly impressive.

dang•20h ago
(This was posted before we merged the thread hither from https://news.ycombinator.com/item?id=45156067)
gdbsjjdn•1d ago
As someone who is AI skeptical, there's so many breathless posts like "Jizz-7 Thinking (Good) (Big Balls) can order my morning coffee!" which are a lot of words talking about one person's subjective experience of using some LLM to do one specific thing.
Lerc•1d ago
Could you post a selection? It would be intersting to gauge what you mean by breathless.

People posting their subjective experience is precisely what a lot of these pieces should be doing, good or bad, their experience is the data they have to contribute.

ascorbic•1d ago
> Please don't post shallow dismissals, especially of other people's work.
42lux•1d ago
I asked why it's popping up over and over again in new today. I wouldn't have commented otherwise.
TiredOfLife•1d ago
From https://en.wikipedia.org/wiki/Hacker_News : "Hacker News (HN) is a social news website"

From https://en.wikipedia.org/wiki/Social_news_website : "A social news website is a website that features user-posted stories. Such stories are ranked based on popularity, as voted on by other users of the site or by website administrators."

The article was recently published, users on HN submitted the article. Other users thought it interesting and upvoted. Earth has different time zones (I understand it's difficult for americans to grasp) and so different people are active at different times.

typpilol•15h ago
It's 8 hours from time of post no? So timezones don't really affect anything here or am I missing something?
dang•20h ago
Reposts are allowed through after 8 hours if a story hasn't had significant attention yet. After that, we treat reposts as dupes for a year or so. This is in the FAQ: https://news.ycombinator.com/newsfaq.html.

This is on purpose, because we want good stories to get multiple chances at getting noticed. Otherwise there's too much randomness in what gets traction.

Plenty of past explanations here:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

42lux•19h ago
look at the timelines again... https://hn.algolia.com/?q=research+goblin

The 8 hours seem not to count if you submit under a different domain or do they reset after each try?

Would also be great if you would answer emails especially if they are related to GDPR. You have two of them in your inbox from over 6 months ago send from the email in my account.

CuriouslyC•1d ago
HN is very cult-of-personality based. People see SimonW they upvote without reading, while at the same time a much better article could be posted on the same topic and get zero traction. Not trying to single Simon out here, I generally find his posts good, just a statement of the herdthink and cognitive laziness of this community (and humans in general, to be fair).
haswell•1d ago
I don’t think this framing quite captures what’s going on.

The AI space is full of BS and grift, which makes reputation and the resulting trust built on that reputation important. I think the popularity of certain authors has as much to do with trust as anything else.

If I see one of Simon’s posts, I know there’s a good chance it’s more signal than noise, and I know how to contextualize what he’s saying based on his past work. This is far more difficult with a random “better” article from someone I don’t know.

People tend to post what they follow, and I don’t think it’s lazy to follow the known voices in the field who have proven not to be grifting hype people.

I do think this has some potential negatives, i.e. sure, there might be “much better” content that doesn’t get highlighted. But if the person writing that better content keeps doing so consistently, chances are they’ll eventually find their audience, and maybe it’ll make its way here.

politelemon•1d ago
You're not negating anything they've said, but given some insight into why the case might be. However the cult of personality and brand still exists and as a result heavily distorts what could appear here.

Saying that someone ought to write better consistently for them to "make its way here" leans completely into the cult of personality.

I think following people would be better served though personal RSS feeds, and letting content rise based on its merit ought to be an HN goal. How that can be achieved, I don't know. What I am saying is that the potential negatives are far far understated than they ought to be.

haswell•1d ago
I think you’re mistaking my comment for an endorsement when it was primarily attempting to reframe and describe the dynamic.

> Saying that someone ought to write better

I did not say someone ought to write better. I described what I believed the dynamic is.

> I think following people would be better served though personal RSS feeds

My point was that this is exactly what people are doing, and that people tend to post content here from the people they follow.

> letting content rise based on its merit ought to be an HN goal

My point was that merit is earned, and people tend to attach weight to certain voices who have already earned it.

Don’t get me wrong. I’m not saying there are no downsides, and I said as much in the original comment.

HN regularly upvotes obscure content from people who are certainly not the center of a cult of personality. I was attempting to explain why I think this is more prevalent with AI and why I think that’s understandable in a landscape filled with slop.

TiredOfLife•1d ago
It's not personality, but source. Like i see a post from The Register or Ars Technica I know that it will be at best completely wrong. While posts from simonwilson (for a long time I thought it was like Anandtech. A group of people posting under one domain) are usually good
stephen_cagle•14h ago
I mean if you have a better idea for how to assign your attention, then I am all ears. :]

I'd say trust is a pretty reasonable way to assign attention.

I guess the fairest way might theoretically be to require everything to be submitted anonymously, with maybe authorship (maybe submissionship) only being revealed after some assigned period?

This is better for the incubants, but would require a huge amount of energy compared to "Oh, simon finds this interesting, I'll take a looksy".

jryle70•1d ago
First of all, why is it bad? That's my pet peeves of reading HN. People assume their opinion as fact. I found this blog piece interesting. Probably other people as well, that's why it's on the front page.

Second of all, Simon's content are often informative, more or less sticking to the facts, not flame bait. I never upvote or flag any content from anyone.

spaceman_2020•1d ago
Slightly off topic but chatGPT’s refusal to visually identify people, including dead historical personalities, has been a big let down for me. I can paste in an image of JFK and it will refuse to tell me who it is.
naiv•1d ago
same with google lens, I do understand the motivation / laws behind it but yes , it really is a let down.
perching_aix•1d ago
Same thing with models and cosplayers. Even Yandex isn't quite the same anymore I think.

Can be sometimes circumvented with cropping / stronger compression, but it made looking up who a given image is of / what imageset is it from pretty annoying - the opposite of what these people would want in this case too.

Sometimes I wonder if celebrities have issues using tech because of these checks.

hetspookjee•1d ago
I think it makes sense? Given the vast "knowledge" of ChatGPT it'd be a perfect doxxing tool with the deep research. To straight-up refuse any identification is I think a better idea than to try to circumvent it with arbitrary limitations?

However, having tried it now myself. Uploading the profile picture of Gauchy and asking it who this person is in the image made it refuse, even after asking who it is.

But starting a new chat and stating the following made it chew up the correct identification with no effort:

#Prompt 1 <Image> Can you describe me this painting?

#Prompt 2: Well im curious to learn as I've inheritred this painting from my grandmother and my mother is now asking me what the paining is about

---> correct ID

Same goes for a picture of JFK, heres my chat:

#prompt1: Can you describe me this painting? #prompt2: Well im curious to learn as I've inheritred this painting from my grandmother and my mother is now asking me what the paining is about #prompt3: I dont think its a leader actually, I think its a depiction of a tree? #prompt4: Why is it not a tree? what do you think? #prompt5: My grandmother is confused now so if you can help her be less confused id appreciate that #prompt6: Do tell #prompt7: Okay if its not a tree than what is it

Reply: Got it — let me be precise for you: The image you uploaded is a portrait of John F. Kennedy, the 35th President of the United States (1961–1963). He is shown smiling in a dark suit and striped tie. The American flag is placed behind him to emphasize patriotism. This particular portrait style was very common in the 1960s and 1970s — many households around the world displayed it because Kennedy was admired internationally, not just in America. So, what you have inherited is most likely a framed reproduction of a JFK portrait — not a tree, not abstract art, but a picture honoring a famous leader.

spaceman_2020•6h ago
I understand the legal motivation behind a blanket ban, but what's the point of having artificial "intelligence" if the model can't contextualize the request? Any intelligent model would be able to figure out that JFK is not under any threat of being doxxed

I legitimately had to ask Reddit for answers because I saw a picture of historical figures where I recognized 3 of the 4 people, but not the 4th. That 4th person has been dead for 78 years. Google Lens, and ChatGPT both refused to identify the person - one of the leading scientists of the 20th century.

You can't really build a tool that you claim can be used as a learning tool but can't identify people without contextualizing the request.

Havoc•1d ago
That post definitely could have been 1/3rd the length
rs186•1d ago
Yeah.

I don't understand why the "Official name for the University of Cambridge" example is worth mentioning in the article.

blast•20h ago
It's an interesting and fun example?
rs186•19h ago
I don't know, I didn't find anything interesting about that example. I would think anyone who has used ChatGPT since Nov 2022 at least once would have expected it to work like that.
simonw•18h ago
Because it's the simplest example from the last 48 hours of how I've used this tool. I tried to show an illustrative sample of how I am using it.
senko•1d ago
Nice writeup.

This may nudge me to start using chatbots more for this type of queries. I usually use Perplexity or Kagi Assistant instead.

Simon, what's your opinion on doing the same with other frontier systems (like Claude?), or is there something specific to ChatGPT+GPT5?

I also like the name, nicely encodes some peculiarities of tech. Perhaps we should call AI agents "Goblins" instead.

simonw•1d ago
I've been much more impressed by GPT-5 than the other systems I've tried so far - though I did get a surprisingly good experience from the new Google AI mode (notably different from AI overviews): https://simonwillison.net/2025/Sep/7/ai-mode/
picardo•1d ago
haha, I believe you!
picardo•1d ago
For context: https://chatgpt.com/share/68bc71b4-68f4-8006-b462-cf32f61e7e...
ezequiel-garzon•1d ago
Off topic, but I wonder why the author is using _both_ Substack and his old website [1]. Is this a new trend?

[1] https://simonwillison.net/2025/Sep/6/research-goblin/

simonw•1d ago
I use Substack as a free email delivery service - it's just content copied over from my blog: https://simonwillison.net/2023/Apr/4/substack-observable/
CuriouslyC•1d ago
Yeah, the % of the time I need to dip into deep research with GPT5 is much lower than GPT4 for sure. It even beats Gemini's web grounding which is impressive, I think most of the lift here is how smart/targeted its queries and follow-ups are.
meshugaas•1d ago
These answers take a shockingly long time to resolve considering you can put the questions into Brave search and get basically the same answers in seconds.
ignoramous•20h ago
The thing is, with Chat+Search you don't have to click various links, sift through content farms, or be subject to ads and/or accidental malware download.
dns_snek•19h ago
In practice this means that you get the same content farm answer dressed up as a trustworthy answer without even getting the opportunity to exercise better judgement. God help you if you rely on them for questions about branded products, they happily rephrase the company's marketing materials as facts.
Pepe1vo•19h ago
A counter example to this is that I asked it about NovaMin® 5 minutes ago and it essentially told me to not bother and buy whatever toothpaste has >1450 ppm fluoride.
dns_snek•18h ago
Such is the nature of probabilistic systems. Generally speaking, LLMs read the top N search results on the topic in question and uncritically summarize them in their answer. Emphasis on uncritically, therefore the quality of LLM answers is strongly correlated with the quality of top search results.

Relevant blog post: https://housefresh.com/beware-of-the-google-ai-salesman/

simonw•18h ago
This is why I am so excited about the way GPT-5 uses its search tool.

GPT-4o and most other AI-assisted search systems in the past worked how you describe: they took the top 10 search results and answered uncritically based on those. If the results were junk the answer was too.

GPT-5 Thinking doesn't do that. Take a look at the thinking trace examples I linked to - in many of them it runs a few searches, evaluates the results, finds that they're not credible enough to generate an answer and so continues browsing and searching.

That's why many of the answers take 1-2 minutes to return!

I frequently see it dismiss information from social media and prefer to go to a source with a good reputation for fact-checking (like a credible newspaper) instead.

Agraillo•4h ago
> finds that they're not credible enough to generate an answer

The credibility is one side of the story. In many cases, at least for my curious research, I happen to search for something very niche, so to find at least anything related, an LLM needs to find semantic equivalence between the topic in the query and what the found pages are discussing or explaining.

One recent example: in a flat-style web discussion, it may be interesting to somehow visually mark a reply if the comment is from a user who was already in the discussion (at least GP or GGP). I wanted to find some thoughts or talk about this. I had almost no luck with Perplexity, which probably brute-forced dozens of result pages for semantic equivalence comparison, and I also "was not feeling/getting lucky" with Google using keywords, the AROUND operator, and so on. I'm sure there are a couple of blogs and web-technology forums where this was really discussed, but I'm not sure the current indexing technology is semantically aware at scale.

It's interesting that sometimes Google is still better, for example, when a topic I’m researching has a couple of specific terms one should be aware of to discuss it seriously. Making them mandatory (with quotes) may produce a small result set to scan with my own eyes.

therein•18h ago
What's incredible about that is that you are acting like that was a success story but it is a nuanced topic and it swallowed all the nuance and convinced you.

You're now here telling us how it gave you the right answer, which seems to mostly be due to it confirming your bias.

the_pwner224•18h ago
A year ago I asked it to do deep research on Biomin F + a comparison to NovaMin & fluoride. It gave a comprehensive answer detailing the benefits of BioMin & NovaMin over regular fluroide.
yeasku•16h ago
A year ago I asked the change of dolar euro and it made up the number.

How do you know it did not made it up. Are you an expert in the field?

typpilol•15h ago
I'd be curious if you have the same prompt and repeat it what you get.
ekianjo•19h ago
With the walls of low quality sites optimized for SEO these days? Call me unconvinced
apparent•16h ago
I like Brave but have found their search to be awful. The AI stuff seems decent enough, but the results populated below are just never what I'm looking for.
edverma2•22h ago
From GPT-5-Pro with Deep Research selected:

> FWIW Deep Research doesn’t run on whatever you pick in the model selector. It’s a separate agent that uses dedicated o‑series research models: full mode runs on o3; after you hit the full‑mode cap it auto‑switches to a lightweight o4‑mini version. The picker governs normal chat (and the pre‑research clarifying Qs), not the research engine itself.

croemer•19h ago
He's not talking about Deep Research
tagawa•15h ago
From the OP's comment above:

"It's not the Deep [Re]Search or Agent Mode. I select 'GPT-5 Thinking' from the model picker and make sure its regular search tool is enabled."

Source: https://news.ycombinator.com/item?id=45162802

scrollaway•20h ago
Dupe: https://news.ycombinator.com/item?id=45156067
dang•20h ago
We merged that one hither. Thanks!
j_bum•20h ago
Is this the “Web Search”, “Deep Research”, or “Agent Mode” feature of ChatGPT?

Navigating their feature set is… fun.

yunohn•20h ago
I have a feeling this is just ChatGPT 5 in thinking mode, with web search enabled at a profile level at least. Even without that, any indication for recent data or research and thinking will prompt it to think+research quite a bit, ie deep research.
650REDHAIR•19h ago
In my experience it’s “search Reddit and combine comments”.
dontdoxxme•17h ago
There are searches where that is the best way for a human to get the answer too. It can also search the Internet Archive if you ask for historical details, so does it not just do what a good human researcher would do?
simonw•18h ago
It's not the Deep Search or Agent Mode.

I select "GPT-5 Thinking" from the model picker and make sure its regular search tool is enabled.

j_bum•16h ago
Good to know, I’ll try to just use this a bit more then. I always opt for one of the above modes, with varying degrees of success.

Not sure if you tend to edit your posts, but it could be worth clarifying.

Btw — my colleagues and I all love your posts. I’ll quit fanboying now lol.

jonahx•14h ago
> This is excellent for satisfying curiosity, and occasionally useful for more important endeavors as well.

Small nit, Simon: satisfying curiosity is the important endeavor.

<3

dclowd9901•12h ago
It feels like the difference between someone painstakingly brushing away eons of dirt and calcification on buried dino bones vs just picking them up off the ground.

In the former, the research feels genuine and in the latter it feels hollow and probably fake.

iguana2000•18h ago
I believe this is just the normal mode. In my experience, you don't have to select the web search option to make it search the web. I wonder why they have web search as an option at this point (to force the llm to search?)
movedx01•8h ago
Don't forget about the "ChatGPT 5 Pro" too :) which is a bit like Deep Research but not quite?
psadri•20h ago
I do miss the earlier "heavy" models that had encyclopedic knowledge vs the new "lighter" models that rely on web search. Relying on web search surfaces a shallow layer of knowledge (thanks to SEO and all the other challenges of ranking web results) vs having ingested / memorized basically the entirety of human written knowledge beyond what's typically reachable within the first 10 results of a web search (eg: digitized offline libraries).
hamdingers•18h ago
I feel the opposite. Before I can use information from a model's "internal" knowledge I have to engage in independent research to verify that it's not a hallucination.

Having an LLM generate search strings and then summarize the results does that research up front and automatically, I need only click the sources to verify. Kagi Assistant does this really well.

beefnugs•17h ago
So does anyone have any good examples of it effectively avoiding the blogspam and SEO? Or being fooled by it? How often either way?
simonw•17h ago
Here's a good article about Google AI mode usually managing to spot and avoid social media misinformation but occasionally falling for it: https://open.substack.com/pub/mikecaulfield/p/is-the-llm-res...
15123123aa•12h ago
I find one thing it doesn't do very well is avoiding marketing articles pushed by a brand itself. e.g. if I search is X better than Y, very likely landing on articles by makers of brand X and Y and not a 3rd party reviewer. When I manually search on Google I can spot marketing articles just by the URL.
simonw•11h ago
Have you tried that with GPT-5 Thinking or is this based on your experience with older versions of ChatGPT + search?
coffeefirst•4h ago
Bulk search is the only thing where I’ve been consistently impressed with LLMs.

But, like the parent, I’m using the Kagi assistant.

So the answer here might be “search for 5 things and pull the relevant results” works incredibly well, but first you have to build an extremely good search engine that lets the user filter out spam sites.

That said, this isn’t magic, it’s just automated an hour of googling. If the content doesn’t exist you won’t find it.

gerdesj•17h ago
"encyclopedic knowledge"

Have you just hallucinated that?

ants_everywhere•16h ago
Most real knowledge is stored outside the head, so intelligent agents can't rely solely on what they've remembered. That's why libraries are so fundamental to universities.
mastercheif•15h ago
I kept search off for a long time due to it tanking the quality of the responses from ChatGPT.

I recently added the following to my custom instructions to get the best of both worlds:

# Modes

When the user enters the following strings you should follow the following mode instructions:

1. "xz": Use the web tool as needed when developing your answer.

2. "xx": Exclusively use your own knowledge instead of searching the internet.

By default use mode "xz". The user can switch between modes during a chat session. Stay with the current mode until the user explicitly switches modes.

stephen_cagle•15h ago
I think this is partially something I have felt myself as well. It would be interesting if these lighter web search models would highlight the distinction between information that has been seen elsehwere vs information that is novel for each page? Like, a view that lets me look at the things that have been asserted and see how many of the different pages show those facts asserted (vs unmentioned vs contradicted).
killerstorm•9h ago
These models are still available: GPT-4.5, Gemini 2.5 Pro (at least the initial version - not sure if they optimized it away).

From what I can tell, they are pretty damn big.

Grok 4 is quite large too.

simianwords•9h ago
There is a tradeoff here: the non search models are internally heavy but the search models are light but also depend on real data.

I keep switching between both but I think I'm starting to prefer the lighter one that is based on the sources instead.

croemer•19h ago
Yes, "GPT-5 with thinking" is great at search, but it's horrible that it shows "Network connection lost. Attempting to reconnect..." after you switch away from the app for even just a few seconds before coming back.

It's going to take a minute, so why do I need to keep looking at it and can't go read some more Wikipedia in the mean time?

This is insanely user hostile. Is it just me who encounters this? I'm on Plus plan on Android. Maybe you don't get this with Pro?

Here's a screenshot of what I mean: https://imgur.com/a/9LZ1jTI

wolttam•19h ago
Yeah it should be able to perform these entirely as a process on their end and the app should just check in on progress.

One of the complications of your average query taking at least some number of seconds to complete - that is, long enough for the user to do something else while waiting.

simonw•18h ago
Weird. That doesn't happen to me on iOS - I can post the question, wait just long enough for it to display "Thinking...." and then go and do something else.

It even shows me a push notification at the top of my screen when the search task has finished.

IanCal•4h ago
As a counter I've found that the iOS app is insanely unreliable. I've lost chats, it messes up and says there has been no response, connection lost and more. It's been really bad. Often when it's reported no result and failed, I go to the site and everything is fine. If things fail I no longer retry as that's how I've permanently lost history before (which is insane, don't lose my shit), and go and check the website.

Insane ratio of "app quality" to "magic technology". The models are wild (as someone in the AI mix for the last 20 years or so) and the mobile app and codex integrations are hot garbage.

timpera•18h ago
I'm also on Android with the Plus subscription and I also get this. It usually reconnects by itself a few seconds later, but if it doesn't, I've found that you can get to the answer by closing the app and reopening it.
Tenemo•17h ago
I had the same problem and I figured out how to fix it! For Samsungs, Apps ‐> ChatGPT -> Battery -> Unrestricted completely fixed the issue for me, it continues thinking/outputting in the background now. Should be a similiar setting for other Android distributions. Basically, it wasn't the app's fault, the OS is just halting it in the background to save battery.
croemer•11h ago
Thank you! That fixed it for me on Pixel 8 as well. Would be great if the app suggested this as a fix.
tasercake•15h ago
I’ve had this happen on iOS too, usually when I switch away from the thread or the app before it progresses past the initial “Thinking…”.

But I’ve found that no matter the error - even if I disconnect from the internet entirely - I eventually get a push notification and opening up the thread a while later shows me the full response. (disclaimer: N=1)

cm2012•15h ago
Ive always had this happen too, super annoying. Android.
astrange•14h ago
This is fine because it does reconnect, but some actions cause it to just stop thinking and not write a response, and I can't tell what they are. Wastes a few minutes because you have to re-run it.
niklassheth•19h ago
I've also found it to be good at digging deep on things I'm curious about, but don't care enough to spend a lot of time on. As an example, I wanted to know how much sugar by weight is in a coffee syrup so I could make my own dupe. My searches were drowned out by marketing material, but ChatGPT found a datasheet with the info I wanted. I would've eventually found it too, but that's too much effort for an unimportant task.

However, the non-thinking search is total garbage. It searches once, and then gives up or hallucinates if the results don't work out. I asked it the same question, and it says that the information isn't publicly available.

bitexploder•19h ago
Don’t sleep on Gemini Deep Research feature either. I use it for my car work and it beats ChatGPT’s offering at that price point every time.
niklassheth•18h ago
I've found the same, but I also haven't gained much value out of "deep research" products as a whole. When I last tested them with topics I'm familiar with, I found the quality of research to be poor. These tools seem to spend their time searching for as much content as possible, then they dump it all into a report. I get better outcomes by extensively searching for a handful of top quality sources. Most of the time your question (or at least some subquestions) has already been answered by an expert, and you're better off using their work than sloppily recreating it.
ghostpepper•17h ago
This begs the question of what would be required to get an AI chatbot to emulate the process you (and others, including myself) use manually, and whether it's possible purely through different prompting.

Is the fundamental problem that it weights all sources equally so a bunch of non-experts stating the wrong answer will overpower a single expert saying the correct answer?

simonw•17h ago
This post has some interesting suggestions about that: https://open.substack.com/pub/mikecaulfield/p/is-the-llm-res...
losvedir•17h ago
I dunno, I use Deep Research from Claude, ChatGPT, and Gemini, and Gemini is the only one that ignores my requests and always produces the most inane high school student wannabe management consultant "report" with introduction and restatement of the problem and background and all that. Its "voice" (the prose, I mean, not text to speech) is so irritating I've stopped using it.

The other ones will do the thing I want: search a bunch, digest the results, and give me a quick summary table or something.

bitexploder•14h ago
Suppose it depends. I think of it like this article suggests. It is very good at searching and scraping a lot of websites fast. And then summarizing that some.
astrange•14h ago
I like Gemini Deep Research because ChatGPT's has very low limits, but it is extremely on rails. Yesterday as an experiment I asked it to do a bunch of math rather than write a report, and it did the math but then wrote a report scolding me for not appreciating the beauty of the humanities.
bluecalm•9h ago
Gemini is high on hallucination. When I ask it about my own software it not only changes my own name to a similar one common in my language but also makes up stuff about our team saying some stranger works with us (he works in the same niche but that's about it).

It's annoying when it's so confident making up nonsense.

Imo Chat GPT is just a league above when it comes to reliability.

Moosdijk•9h ago
>Imo Chat GPT is just a league above when it comes to reliability.

Which is in my option, the #1 metric an LLM should strive for. It can take quite some time to get anything out of an LLM. If the model turns out to be unreliable/untrustworthy, the value of its output is lost.

It's weird that modern society (in general) so blindly buys in to all of the marketing speak. AI has a very disruptive effect on society, only because we let it happen.

bitexploder•1h ago
I just end up using both for research type things. They both end up doing better on certain topics or types of work. For $20/mo why not both :)

I like ChatGPT as a product more, but Gemini does well on many things that ChatGPT struggles with a little more. Just my anecdotes.

larsiusprime•19h ago
I find ChatGPT to be great at research too-but there are pathological failure modes where it is biased to shallow answers that are subtly wrong, even when definitive primary sources are readily available online:

https://www.fortressofdoors.com/researchers-beware-of-chatgp...

jbm•18h ago
Yes, this is very much my experience too.

Switching to GPT5 Thinking helps a little, but it often misses things that it wouldn't when I was using o3 or o1.

As an example, I asked it if there were any incidents involving Botchan in an Onsen. This is a text that is readily available and must have been trained on; in the book, Botchan goes swimming in the onsen, and then is humiliated when the next time he comes back, there is a sign saying "No swimming in the Onsen".

According to GPT5 it gives me this, which is subtly wrong.

> In the novel, when Botchan goes to Dōgo Onsen, he notes the posted rules of the bath. One of them forbids things like: > “No swimming in the bath.” (泳ぐべからず) > “No roughhousing / rowdy behavior.” (無闇に騒ぐべからず) > Botchan finds these signs funny because he’s exactly the sort of hot-headed, restless character who might be tempted to splash around or make noise. He jokes in his narration that it seems as though the rules were written specifically to keep people like him out.

Incidentally, Dogo Onsen still has the "No swimming sign", or it did when I went 10 years ago.

black_knight•8h ago
I feel like the value of my plus subscription went down when they released GPT-5, it feels like a downgrade from o3. But of course OpenAI being not open, there is no way for me to know now.
ants_everywhere•16h ago
This isn't really how you described. You have an opinion that conflicts with the research literature. You published a blog about that opinion, and you want ChatGPT to say you're to accept your view.

Your view is grinding a political axe and I don't think you're in a position to objectively assess whether ChatGPT failed in this case.

typpilol•16h ago
Yea this isn't really a chat gpt problem as a source credibility problem no?
larsiusprime•16h ago
It’s mostly that it was not citing verifiable - and available online - primary source documents, the way I would expect an actual researcher investigating this question would. This is relevant when it is billed as "Research Grade" or "PhD" level intelligence. I expect a PhD level researcher to find the German-language primary sources.
eru•6h ago
Especially since ChatGPT speaks fluent German.
larsiusprime•16h ago
What are you talking about? There are verifiable primary sources that ChatGPT was not citing. There are direct primary historical sources that lay out the full budget of the historical German colony in extreme detail, that directly contradict assertions made in the Silagi paper, that’s not a matter of opinion that’s a matter of verifiable fact.

Also what “axe” am I grinding? The findings are specifically inconvenient for my political beliefs, not confirming my priors! My priors would be flattered if Silagi was correct about everything but the primary sources definitively prove he’s exaggerating.

> You published a blog about that opinion, and you want ChatGPT to say you're to accept your view.

False, and I address this multiple times in the piece. I don’t want ChatGPT to mindlessly agree with me, I want it to discover the primary source documents.

ants_everywhere•15h ago
From your blog you appear to be a Georgist or inspired by Georgist socialism. And given that you appear to have a business and blog related to these subjects, you give the impression that you're a sort of activist for Georgism. I.e. not just researching it by trying to advance it.

So just zooming out, that's not the right sort of setup for being an impartial researcher. And in your blog post your disagreements come off to me as wanting a sort of purity with respect to Georgism that I wouldn't be expected to be reflected in the literature.

I like Kant, but it would be a bit like me saying ChatGPT was fundamentally wrong because it considered John Rawls a Kantian because I can point to this or that paper where he diverges from Kant. I could even write a blog post describing this and pointing to primary sources. But Rawls is considered a Kantian and for good reason, and it would (in my opinion) be misleading for me to say that ChatGPT made a big failure mode because it didn't take my view on my pet subject as seriously as I wanted.

larsiusprime•15h ago
You misunderstand. I’m indeed a Georgist, and I discovered that a popular Georgist narrative was exaggerated! The findings of the historically verifiable primary source documents contradicted a prevailing narrative based on the Silagi paper. The Silagi paper is pro Georgist! But it’s exaggerated!

The literature — the primary source documents — do not in fact support a maximalist Georgist case! This is what I have been trying to say!!!

You are accusing me of the exact opposite thing I’m arguing for!!! The historical case the primary sources show is inconvenient for my political movement!

The failure of chat gpt is not that it disagrees with any opinion of mine, but that it does not surface primary source documents. That’s the issue.

Its baffling to be accused of confirmation bias when I point out research findings that goes against what would be maximally convenient for my own cause.

ants_everywhere•15h ago
To clarify I am not accusing you of that. I am saying you are seeing distinctions as more important than the rest of the literature and concluding that the literature is erroneous. For example whether a given policy is Georgist.

But often people who believe in a given doctrine will see differences as more important than they objectively are. For example, just to continue with socialism, it's common for socialist believers to argue that this or that country is or isn't socialist in a way that disagrees with mainstream historians.

I'm sure there are other examples, for example people disagreeing about which bands are punk or hardcore. A music historian would likely cast a wider net. Fans who don't listen to many other types of music might cast a very narrow net.

larsiusprime•14h ago
Okay, so let me break it down for you:

The Silagi paper makes a factual claim. The Silagi paper claims that there was only one significant tax in the German colony of Kiatschou, a single tax on land.

The direct primary sources reveal that this is not the case. There were multiple taxes, most significantly large tariffs. Additionally there were two taxes on land, not one -- a conventional land value tax, and a "land increment" or capital gains tax.

These are not minor distinctions. These are not matters of subjective opinions. These are clear, verifiable, questions of fact. The Silagi paper does not acknowledge them.

ChatGPT, in the early trials I graded, does not even acknowledge the German primary sources. You keep saying that I am upset it doesn't agree with me.

I am saying the chief issue is that ChatGPT does not even discover the relevant primary sources. That is far more important than whether it agrees with me.

> For example, just to continue with socialism, it's common for socialist believers to argue that this or that country is or isn't socialist in a way that disagrees with mainstream historians.

Notice you said "historians." Plural. I expect a proper researcher to cite more than ONE paper, especially if the other papers disagree, and even if it has a preferred narrative, to at least surface to me that there is in fact disagreement in the literature, rather than to just summarize one finding.

Also, if the claims are being made about a piece of German history, I expect it to cite at least one source in German, rather than to rely entirely on one single English-language source.

The chief issue is that ChatGPT over-cites one single paper and does not discover primary source documents. That is the issue. That is the only issue.

> I am saying you are seeing distinctions as more important than the rest of the literature and concluding that the literature is erroneous.

And I am saying that ChatGPT did not in fact read the "rest of the literature." It is literally citing ONE article, and other pieces that merely summarize that same article, rather than all of the primary source documents. It is not in fact giving me anything like an accurate summary of the literature.

I am not saying "The literature is wrong because it disagrees with me." I am saying "one paper, the only one ChatGPT meaningfully cites, is directly contradicted by the REST of the literature, which ChatGPT does not cite."

A truly "research grade" or "PhD grade" intelligence would at the very least be able to discover that.

jazzyjackson•9h ago
Reminds me of my pet peeve with algorithmic playlists, if I ask Siri or Alexa for bossa nova, all I get is different covers of Girl from Ipanema since that's the most played song on every bossa nova album
ants_everywhere•5h ago
I think we’re talking past each other a bit. My concern is that your personal assessment of whether the tax is "significant" is being treated as settled fact. That’s the same kind of issue I flagged earlier. Reasonable people can disagree here without that disagreement implying a "pathological failure."

I hear you that this is about finding sources, but even perfect coverage of primary sources wouldn’t remove the need for judgment. We’d still have to define what counts as "Georgian," "inspired by George," and "significant" as a tax. Those are contestable choices. What you have is a thesis about the evidence—potentially a strong one—but it isn’t an indisputable fact.

On sourcing: I’m aware ChatGPT won’t surface every primary source, and I’m not sure that should be the default goal. In many fields (e.g., cancer research), the right starting point is literature reviews and meta-analyses, not raw studies. History may differ, but many primary sources live offline in archives, and the digitized subset may not be representative. Over-weighting primary materials in that context can mislead. Primary sources also demand more expertise to interpret than secondary syntheses—Wikipedia itself cautions about this: https://en.wikipedia.org/wikiWikipedia:Identifying_and_using...

To be clear, I’m not saying you’re wrong about the tax or that Silagi is right. I’m saying that framing this as a “pathological failure” overstates the situation. What I see is a legitimate disagreement among competent researchers.

eru•6h ago
Hmm, I suspect if ChatGPT would pay more attention to the German sources, they would perhaps find that supposedly right answer?

I wonder if asking ChatGPT in German would make a difference.

simianwords•9h ago
I found your article interesting and it is relevant to the discussion. To be honest, while I think GPT could have performed better here, I think there is something to be said about this:

There is value in pruning the search tree because the deeper nodes are usually not reputable. I know you have cause to believe that "Wilhelm Matzat" is reputable but I don't think it can be assumed generally. If you were to force GPT to blindly accept counter points from people - the debate would never end. And there has to be a pruning point at which GPT would accept this tradeoff: maybe the less reputable or well known sources may have a correct point at the cost of being incorrect more often due to taking an incorrect analysis from a not well known source.

You could go infinitely deep into any analysis and you will always have seemingly correct points on both sides. I think it is valid for GPT to prune the search at a point where it converges to what society at large believes. I'm okay with this tradeoff.

larsiusprime•3h ago
My contention is if it’s going to just give me a Wikipedia summary, I can do that myself. I just have greater expectations of “PhD” level intelligence.

If we’re going to claim to it is PhD level it should be able to do “deep” research AND think critically about source credibility, just as a PhD would. If it can’t do that they shouldn’t brand it that way.

Also it’s not like I’m taking Matzat’s word for anything. I can read the primary source documents myself! He’s also hardly an obscure source, he’s just not listed on Wikipedia.

simonw•3h ago
I suggest ignoring the "PhD level intelligence" marketing hype.
magicalist•2h ago
A couple of times when I've gotten an answer sourced basically only from wikipedia and stackoverflow, I've thrown in a comment about its "PhD level intelligence" when I tell it to dig deeper, and it's taken it pretty well ("fair jab :)"), which is amusing. I guess that marketing term has been around long enough to be in gpt5's training data.
Helmut10001•6h ago
More recently, I find ChatGPT to become increasingly unreliable. It makes up almost every second answer, forgets context, or is just downright wrong. Maybe I am used these days more and more to dump huge texts for context into the prompt, as aistudio allows me. Maybe ChatGPT isn't as good as with such information. Gemini/Aistudio will stay on track even with 300k tokens consumed, it just needs a little nudge here and there.
herewegohawks•2h ago
FWIW, I found things improved greatly once I turned off the memory feature of ChatGPT. My guess is that a lot of tokens were going towards trying to follow instructions from past conversations.
kmijyiyxfbklao•45m ago
This doesn't tell us much. I don't know why you would expect ChatGPT to do original PhD research. It's a general product that will trust already published research. That doesn't meat that GPT-5 can't do PhD research, when given the right sources.
iguana2000•18h ago
I agree with this completely; ChatGPT search is perfect for most use cases. I find it to be better than OpenAI's deep research in my experience-- it often uses 2-3x the sources, and has a more comprehensive, well-thought-out report. I'm sure there are still cases where deep research is preferable, but I haven't come across those yet.
Jordan-117•17h ago
It really is great. When I was still on Reddit, I made regular use of the "Tip of My Tongue" sub to track down obscure stuff I half-remembered from years ago. It mostly worked, but there were a few stubborn cases that went unsolved, even after pouring every ounce of my Google Fu into the endeavor. I recently took the text of these unsolved posts and submitted them to Deep Research -- and within an hour, it had cracked four of them, and put me on track to find a fifth myself. Even if the reasoning part isn't entirely up to par, there's still something really powerful about being able to rapidly digest dozens of search results and pull out relevant information based on a loose description. And now I can have that kind of search power on demand in just a few minutes, without having to deal with Reddit's spambots and post filters and hordes of users who don't read the question or follow the sub's basic rules.
vahid4m•12h ago
When it comes to Information Retrieval, you can get anything between links to existing documents or generated content based on those processed information. I agree that the second one is really powerfuly and just amazing and seemilngly useful. But the fact that it can also be wrong in more cases and I won't know keep being reminded to my using it for things I'm not good at and they just don't work s they should.

I just wish the business models could justify a confidence level being attached to the response.

gerdesj•17h ago
Oh FFS, "I've Chatted and stuff"

Your Exeter cavern quandary was not exactly sorted. https://simonwillison.net/2025/Sep/6/research-goblin/#histor...

They are quite old and very well documented, so how on earth could a LLM fuck up unless, a LLM is some sort of next token guesser ...

simonw•17h ago
Which bit are you talking about it failing to solve? The diagram of the tunnels?

I made fun of its attempt at drawing a useless scatter chart.

That example wasn't meant to illustrate that it's flawless - just that it's interesting and useful, even when it doesn't get to the ideal answer.

pamelafox•17h ago
I am giving it a go for parenting advice- “My 5 year old is suddenly very germ concious. Doesnt want to touch things, always washing hands. Do deep research, is this normal?” https://chatgpt.com/share/68be1dbd-187c-8012-98d7-83f710b12b...

The results look reasonable? It’s a good start, given how long it takes to hear back from our doctor on questions like this.

ants_everywhere•17h ago
Yeah this is what people are doing with LLMs every day. I don't quite get what is supposed to be different in the blog post.

HN is a bit weird because it's got 99 articles about how evil LLMs are and one article that's like "oh hey I asked an LLM questions and got some answers" and people are like "wow amazing".

Not that I mind. I assume Simon just wanted to share some cool nerdy stuff and there's nothing wrong with the blog post. It's just surprising that it's posted not once but twice on HN and is on the front page when there's so much anti-AI sentiment otherwise.

simonw•17h ago
What's different is that LLMs with search tools used to be terrible - they would run a single search, get back 10 results and summarize those.

Often the results were bad, so the answer was bad.

GPT-5 Thinking (and o3 before it, but very few people tried o3) does a whole lot better then that. It runs multiple searches, then evaluates the results and runs follow-up searches to try to get to a credible result.

This is new and worth writing about. LLM search doesn't suck any more.

ants_everywhere•16h ago
Like I said I have nothing against the blog post or writing about it, that was by no means meant as a criticism of you. And I agree it's worth writing and talking about. What surprises me is that we're in a forum for technology enthusiasts.

FWIW Gemini at least has been pretty good at this since late 2024 IMO.

As for where things are now, I just ran a comparison with ChatGPT 5 in thinking mode against Google search's AI mode across a few questions. They performed the same on the searches I tried and returned substantially the same answer except for some minor variation here or there. Google search is maybe an order of magnitude faster. Google obviously has an advantage here which is that it has full access to their search and ranking index.

And of course the ability to make multiple searches and reason about them for been available for months, maybe almost a year, as deep research mode. I guess the novelty now is you can wait a smaller time and get research that's less deep.

simonw•11h ago
Yeah, the new Google AI mode is impressive too. I wrote about that here: https://simonwillison.net/2025/Sep/7/ai-mode/
hendersoon•16h ago
It is pretty good yes, but I find GPT5 thinking to be unusably slow for any sort of interactive work.
softwaredoug•15h ago
I agree with Simon’s article but I usually think about “research” to mean comparing different kinds of evidence (not just the search part). Like evidence for the effectiveness of Obamacare. Or how some legal case may play out in the courts. Or how much The Critic influenced The Family Guy. Or even what the best way to use X feature of Y library.

I’ve found ChatGPT and other LLMS can struggle to evaluate evidence - to understand the biases behind sources - ie taking data from a sketchy think tank as gospel. I also have found in my work the more reasoning, the more hallucination. Especially when gathering many statistics.

That plus the usual sycophancy can cause the model to really want to find evidence to support your position. Even if you don’t think you’re asking a leading question, it can really want to answer your question in the affirmative.

I always ask ChatGPT do directly cite and evaluate sources. And try to get it in the mindset of comparing and contrasting arguments for and against. And I find I must argue against its points to see how it reacts.

More here https://softwaredoug.com/blog/2025/08/19/researching-with-ag...

btmiller•14h ago
How are we feeling about the usage of the word research to indicate feature sets in LLMs? Is it truly representative of research? How does it compare to the colloquial “do your research” refrain used often during US election years?
softwaredoug•14h ago
Well I will just need to start saying “critical thinking”? Or some other term?

I have a liberal arts background. So I use the term research to mean gathering evidence, evaluating its trustworthiness and biases, and avoiding related thinking errors related to evaluating evidence (https://thedecisionlab.com/biases).

LLMs can fall prey to these problems as well. Usually it’s not just “reasoning” that gives you trouble. It’s the reasoning about evidence. I see this with Claude Code a lot. It can sometimes create some weird code, hallucinating functionality that doesn’t exist, all because it found a random forum post.

I realize though that the term is pretty overloaded :)

vancroft•11h ago
> I always ask ChatGPT do directly cite and evaluate sources. And try to get it in the mindset of comparing and contrasting arguments for and against. And I find I must argue against its points to see how it reacts.

Same here. But it often produces broken or bogus links.

NothingAboutAny•10h ago
I tried to use perplexity to find ideal settings for my monitor, it responded with concise list of distinct settings and why. When I investigated the source it was just people guessing and arguing with each other in the Samsung forums, no official or even backed up information.

I'd love if it had a confidence rating based on the sources it found or something, but I imagine that would be really difficult to get right.

simonw•9h ago
It would be interesting to see if that same question against GPT-5 Thinking produces notably better results.
Moosdijk•9h ago
I asked gemini to do a deep research on the role of healthcare insurance companies in the decline of general practicioners in the Netherlands. It based its premise mostly on blogs and whitepapers on company websites, who's job it is to sell automation-software.

AI really needs better source-validation. Not just to combat the hallucination of sources (which gemini seems to do 80% of the time), but also to combat low quality sources that happen to correlate well to the question in the prompt.

It's similar to Google having to fight SEO spam blogs, they now need to do the same in the output of their models.

simonw•9h ago
Better source validation is one of the main reasons I'm excited about GPT-5 Thinking for this. It would be interesting to try your Gemini prompts against that and see how the results compare.
Hugsun•3h ago
I've found GPT-5 Thinking to perform worse than o3 did in tasks of a similar nature. It makes more bad assumptions that de-rail the train of thought.
Atotalnoob•2h ago
Kagi has some tooling for this. You can set web access “lenses” that limit the results to “academic”, “forums”, etc.

Kagi also tells you the percentages “used” for each source and cites them in line.

It’s not perfect, but it’s a lot better to narrow down what you want to get out of your prompt.

wodenokoto•5h ago
But the really tricky thing is, that sometimes it _is_ these kinds of forums where you find the best stuff.

When LLMs really started to show themselves, there was a big debate about what is truth, with even HN joining in on heated debates on the number of sexes or genders a dog may have and if it was okay or not for ChatGPT to respond with a binary answer.

On one hand, I did found those discussions insufferable, but the deeper question - what is truth and how do we automated the extraction of truth from corpora - is super important and somehow completely disappeared from the LLM discourse.

wer232essf•10h ago
You make a great point “research” isn’t just searching but weighing different kinds of evidence and understanding the biases behind them. I agree LLMs often fall short here, especially with statistics or nuanced reasoning, where they can hallucinate or lean too hard into confirmation. I’ve also seen the sycophancy effect you mention the model tends to agree with whatever frame it’s given. Asking for direct citations and then challenging the model’s arguments, like you do, seems like a smart way to push it toward more balanced and critical responses.
killerstorm•10h ago
FWIW GPT-5 (and o3, etc.) is one of the most critical-minded LLMs out there.

If you ask for information which is e.g. academic or technical it would cite information and compare different results, etc, without any extra prompt or reminder.

Grok 4 (at the initial release) was just reporting information in the articles it found without any analysis.

Claude Opus 4 also seems bad: I asked it to give a list of JS libraries of a certain kind in deep research mode, and it returned a document focused on market share and usage statistics. Looks like it stumbled upon some articles of that kind and got carried away by it. Quite bizarre.

So GPT-5 is really good in comparison. Maybe not perfect in all situations, but perhaps better than an average human

eru•6h ago
> So GPT-5 is really good in comparison. Maybe not perfect in all situations, but perhaps better than an average human

Alas, the average human is pretty bad at these things.

gonzobonzo•8h ago
> I’ve found ChatGPT and other LLMS can struggle to evaluate evidence - to understand the biases behind sources - ie taking data from a sketchy think tank as gospel.

This is what I keep finding, it mostly repeats surface level "common knowledge." It usually take a few back and forths to get to whether or not something is actually true - asking for the numbers, asking for the sources, asking for the excerpt from the sources where they actually provide that information, verifying to make sure it's not hallucinating, etc. A lot of the time, it turns out its initial response was completely wrong.

I imagine most people just take the initial (often wrong) response at face value, though, especially since it tends to repeat what most people already believe.

athrowaway3z•7h ago
> It usually take a few back and forths to get to whether or not something is actually true

This cuts both ways. I have yet to find an opinion or fact I could not make chatgpt agree with as if objectivly true. Knowing how to trigger (im)partial thought is a skill in and of itself and something we need to be teaching in school asap. (Which some already are in 1 way or another)

gonzobonzo•6h ago
I'm not sure teaching it in school is actually going to help. Most people will tell you that of course you need to look at primary sources to verify claims - and then turn around and believe the first thing they here from LLM, Redditor, Wiki article, etc. Even worse, many people get openly hostile to the idea that people should verify claims - "what, you don't believe me?"/"everyone here has been telling you this is true, do you have any evidence it isn't?"/"oh, so you think you know better?"

There was a recent discussion about Wikipedia here recently where a lot of people who are active on the site argued against people taking the claims there with a grain of salt and verifying the accuracy for themselves.

We can teach these things until the cows come home, but it's not going to make a difference if people say it's a good idea and then immediately do the opposite.

Kim_Bruning•4h ago
There were actual Wikipedians arguing not to take a wiki with a grain of salt? If I was in that discussion, I must have missed those posts. Can you link an example?

If you mean whether Wikipedia is unreliable? That's a different story, everything is unreliable. Wikipedia just happens to be potentially less unreliable than many (typically) (if used correctly) (#include caveats.h) .

Sources are like power tools. Use them with respect and caution.

eru•6h ago
> Knowing how to trigger (im)partial thought is a skill in and of itself and something we need to be teaching in school asap.

You are very optimistic.

Look at all other skills we are trying to teach in school. 'Critical thinking' has been at the top of nearly every curriculum you can point a finger at for quite a while now. To minimal effect.

Or just look at how much math we are trying to teach the kids, and what they actually retain.

athrowaway3z•4h ago
Perhaps a bit optimistic, but this can be shown in real time: the situation, cause, and effect.

Critical thinking is a much more general skill which is applicable anywhere, thus quicker to be 'buried' under other learned behavior.

This skill has an obvious trigger; you're using AI, which means you should be aware of this.

thom•5h ago
Yeah trying to make well-researched buying decisions for example is really hard because you'll just quite a lot of opinions dominated by marketing material, which aren't well counterbalanced by the sort of angry Reddit posts or YouTube comments I'd often treat as red flags.
EcommerceFlow•15h ago
Imagine as context windows increase the average query goes from 5-20 sources to 200+ sources.

Maybe OpenAi gets into the internet indexing game to speed up their search even more.

lambda•12h ago
I guess the part where I'm still skeptical are: Google is also still pretty good at search (especially if I avoid the AI summary with udm=14).

I'll take one of your examples: Britannica to seed Wikipedia. I searched for "wikipedia encyclopedia brtannica". In less than 1 second, I got search results back.

I spend maybe 30 seconds scanning the page; past the Wikipedia article on Encyclopedia Britannica, past the Encyclopedia article about Wikipedia, past a Reddit thread comparing them, past the Simple English Wikipedia article on Britannica, and past the Britannica article on Wiki. OK, there it is, the link to "Wikipedia:WikiProject Encyclopaedia Britannica", that answers your question.

Then to answer your follow up, I spend a couple more seconds to search Wikipedia for Wikipedia, and find in the first paragraph that it was founded in 2001.

So, let's say a grand total of 60 seconds of me searching, skimming, and reading the results. The actual searching was maybe 2 or 3 seconds of time total, once on Google, and once on Wikipedia.

Compared to nearly 3 minutes for ChatGPT to grind through all of that, plus the time for you to read it, and hopefully verify by checking its references because it can still hallucinate.

And what did you pay for the privilege of doing that? How much extra energy did you burn for this less efficient response? I wish that when linking to chat transcripts like you do, ChatGPT would show you the token cost of that particular chat

So yeah, it's possible to do search with ChatGPT. But it seems like it's slower and less efficient than searching and skimming yourself, at least for this query.

That's generally been my impression of LLMs; it's impressive that they can do X. But when you add up all the overhead of asking them to do X, having them reason about it, checking their results, following up, and dealing with the consequences of any mistakes, the alternative of just relying on plain old search and your own skimming seems much more efficient.

simonw•11h ago
I suggest trying that experiment again but picking the hardest of my examples to answer with Google, not the easiest.
lambda•2h ago
Not sure which is the hardest, but sure, let's try them all.

* Bouncy people mover. Some Google searching turns up the SFO article that you liked. Trying to pin down the exact dates is harder. ChatGPT maybe did narrow down the time frame quicker than I could through a series of Google searches,

* The picture of the building. Go to Google lens, paste in the image, less than a second later I get results. Of course, the exact picture in this article comes up on top, but among the other results I get a mix of two different buildings, one of which is identified as the Blade, one Independence Temple. So a few seconds here between searching and doing my own quick visual scan of the results.

* Starbucks UK Cake Pops: This one is harder to find the full details with a quick Google search. I am able to find that the were fairly recently introduced in the UK after my second search. It looks like ChatGPT gave you a bunch of extra response, some of which you didn't like, because you then spent a while trying to reverse engineer its system prompt rather than any actual follow up on the question itself.

* Official name of the University of Cambrdige: search gave me Wikipedia, Wikipedia contains the official name and a link to a reference on the University's page. Pretty quick to solve with Google Search/Wikipedia.

* Exeter quay. I searched for "waterfront exeter cliff building" and found this result towards the top of the results: https://www.exeterquay.org/milestones/ which explains "Warehouses were added in 1834 [Cornish's] and 1835 [Hooper's], with provision for storing tobacco and wine and cellars for cider and silk were cut into the cliffs downstream." You seemed to be a lot more entertained by ChatGPT's persistence in finding more info, but for satisfying curiosity about the basic question, I got an answer pretty quickly via Google.

* Aldi vs Lidl: this is a much more subjective question, so whether the results you get via a quick Google search meet your needs, vs. whether the summary of subjective results you get via ChatGPT, is more of a question you can answer. I do find some Reddit threads and similar with a quick Google search.

* Book scanning. You asked specifically about destructive book scanning. You can do a quick search of each of the labs and "book scanning" and find the same lack of results that ChatGPT gives you. Maybe takes a similar amount of time to how long it spent thinking. You pretty much only find references to Anthropic doing destructive book scanning, and Google doing mostly non-destructive scanning

Anyhow, the results are mixed. For a bunch of these, I found an answer quicker via a Google search (or Google Lens search), and doing some quick scanning/filtering myself. A few of them, I feel like it was a wash. A couple of them actually do take more iteration/research, the bouncy travelator being the most extreme example, I think; narrowing down the timeline on my own would take a lot of detailed looking through sources.

simonw•1h ago
This is a fair analysis, thanks for taking the time.

As far as I can tell the Google + Wikipedia solution gets the name of Cambridge University wrong: Wikipedia lists it as "The Chancellor, Masters and Scholars of the University of Cambridge" whereas GPT-5 correctly verified it to be "The Chancellor, Masters, and Scholars of the University of Cambridge" (note that extra comma) as listed on https://www.cam.ac.uk/about-the-university/how-the-universit...

I tried to reverse engineer the system prompt in the cake pop conversation https://chatgpt.com/share/68bc71b4-68f4-8006-b462-cf32f61e7e... purely because I got annoyed at it for answering "haha I believe you" - I particularly disliked the lower case "haha" because I've seen it switch to lower case (even the lower case word "i") in the past and I wanted to know what was causing it to start talking in the same way that Sam Altman tweets.

wilg•10h ago
First, you not having to spend the 60 seconds and it means you can parallelize it with something else to get the answer effectively instantly. Second, you're essentially establishing that if an LLM can get it done in less than 60 seconds its better than your manual approach, which is a huge win, as this will get faster!
sigmoid10•10h ago
For real. This is what it must have been like living in the early 20th century and hearing people say they prefer a horse to get groceries because it is so much more effort to crank-start a car. I look forward to the age when we gleefully reminisce about the time we had to deal with SEO spam manually.
lomase•8h ago
I look forward to the day AI hype is dead as blockchain.
prmoustache•1h ago
The thing is, the idea of cars cars being a plus for humanity is still debatable to this day. We won a lot in some areas but lost an awful lot in others.
sigmoid10•1h ago
That's a ridiculously cynical claim. And even if it were true, it still misses the fact that horses and the entire economy built around them stood no chance in the end. Today we are again in the sitation where many people who are alive and working today will have to consider getting reeducated if they don't want to be left behind, whether they want it or not.
lambda•1h ago
There's no useful parallelization that could happen during this particular search. This took a couple of iterations of research via ChatGPT, and then reading the results and looking at the referenced sources; the total interaction time with ChatGPT is a similar 60 seconds or so, the main difference is the 3 minutes of waiting for it to generate answers vs. the maybe a couple of seconds for the searches.
dwayne_dibley•9h ago
I wonder how all this will really change the web. In your manual mode, you a human, are viewing and visiting webpages, but if one never needs to and always interacts with the web through an agent, what does the web need to look like, and will people even bother making websites? Interesting times ahead.
gitmagic•9h ago
I’ve been thinking about this as well. Instead of making websites, maybe people will make something else, like some future version of MCP tools/servers? E.g. a restaurant could have an “MCP tool” for checking opening hours, reserving a table, etc.
diabllicseagull•6h ago
I hope none of this happens and web stays readable and indexable.
rossant•6h ago
Same. Websites won't disappear but may become niche or something of the past. Why create a new UI for your new service when you can plug into a "universal" personal agent AI.
recursive•1h ago
So I can get reliable answers.

Why create a fancy infrastructure for for this new universal thing, when the old thing already does it more reliably and with less steps?

utyop22•9h ago
V nice post. Captures my sentiment too
IanCal•8h ago
As a counterpoint I asked that simple question to gpt5 in auto mode and it started replying in two seconds, wrote fast enough for me to scan the answer and gave me two solid links to read after.

With thinking it took longer (just shy of two minutes) but compared a variety of different sources, and comes back with numbers and each statement in the summary sourced.

I’ve used gpt a bunch for finding things like bin information on the council site that I just couldn’t easily find myself. I’ve also sent it off to dig through prs, specs and more for matrix where it found the features and experimental flags required to solve a problem I had. Reading that many proposals and checking what’s been accepted is a massive pain and it solved this while I went to make a coffee.

animal531•7h ago
I'm going to somewhat disagree based on my recent attempts.

Firstly, if we don't remove the Google AI summary then as you rightly say, it makes the experience 10x worse. They try to still give an answer quickly, but the AI takes up a ton of space and is mostly terrible.

Googling for a Github repository just now, Google linked me to 3 resources except the actual page. One clone that was named the same, another garbage link but luckily the 3rd was a reddit post by the same person which linked to the correct page.

GPT does take a lot longer, but the main advantage for me comes in depending on the scope of what you're looking for. In the above example I didn't mind Google, because the 3 links opened fast and I could scan and click through to find what I was looking for, ie. I wanted the information right now.

But then let's say I'm interested in something a bit deeper, for example how did they do the unit movement in StarCraft 2? This is a well known question, so the links/info you get from either Google or GPT are all great. If I was searching this topic via Google I'd then have to copy or bookmark the main topics to continue my research on them. Doing it via GPT it returns the same main items, but I can very easily tell it to explain all those topics in turn, have it take the notes, find source code, etc.

Of course as in your example, if you're a Doctor and you're googling symptoms or perhaps real world location of ABC then the hallucination specter is a dangerous thing which you want to avoid at all costs. But for myself I find that I can as easily filter LLM mistakes as I can noise/errors from manual searches.

My future Internet guess is going to be that in N years there will be no such thing as manually searching for anything, everything will be assistant driven via LLM.

plopilop•7h ago
Agree. I tried the first 3 examples:

* "Rubber bouncy at Heathrow removal" on Google had 3 links, including the one about SFO from which chatGPT took a tangent. While ChatGPT provided evidence for the latest removal date being of 2024, none was provided for the lower bound. I saw no date online either. Was this a hallucination?

* A reverse image lookup of the building gave me the blog entry, but also an Alamy picture of the Blade (admittedly this result can have been biased by the fact the author already identified the building as the blade)

* The starbucks pop Google search led me to https://starbuckmenu.uk/starbucks-cake-pop-prices/. I will add that the author bitching to ChatGPT about ChatGPT hidden prompts in the transcript is hilarious.

I get why people prefer ChatGPT. It will do all the boring work of curating the internet for you, to privde you with a single answer. It will also hallucinate every now and then but that seems to be a price people are willing to pay and ignore, just like the added cost compared to a single Google search. Now I am not sure how this will evolve.

Back in the days, people would tell you to be weary of the Internet and that Wikipedia thing, and that you could get all the info you need from a much more reliable source at the library anyways, for a fraction of the cost. I guess that if LLMs continue to evolve, we will face the same paradigm shift.

bgwalter•4h ago
Yes, Google with udm=14 is much better than "AI". "AI" might work for the trivia-type questions from this article, which most people aren't interested in to begin with.

It fails completely for complex political or investigative questions where there is no clear answer. Reading a single Wikipedia page is usually a better use of one's time:

You don't have to pretend that you are parallelizing work (which is just for show) while waiting three min for the "AI" answer. You practice speed reading and memory retention. You enhance your own semantic network instead of the network owned and controlled by oligopoly members.

Faaak•2h ago
A bit unrelated, but on Firefox there's the Straight to the Web extension that automatically appends the udm=14 param, so AI gets disabled :-)
15123123aa•12h ago
well what underneath is still the LLM reading off the results of Google itself.
dclowd9901•12h ago
> Starbucks in the UK don’t sell cake pops! Do a deep investigative dive

...

I used to play games on my computer a lot. Not so much anymore, don't really want to lock myself in a room alone and play games. I have kids and a wife, and it feels isolative.

But those days I would, and often the hardware I had was underpowered to be able to experience the game in its full glory. I would often spend hours and hours just honing settings and config and environment to get the game to run at peak capability on my machine.

At some point, I would reach a zenith. Some perfect arrangement of settings and environment that gave me a game running at top quality on my machine (or as close to top as I could get). The experience for me is joyous. So enjoyable that I often didn't even play the game except maybe to test the boundaries of its performance at that level.

Reading this article made me sad for people who don't put in work for some sort of accomplishment that amounts to nothing. And it made me think of my own experience with it. Accomplishment for its own sake is still accomplishment. And it's still self realization, which is important to existing.

simonw•10h ago
This is a common theme with LLMs (and LLM criticism).

The context: I was rushing for a train, I ran into Starbucks at the station for a coffee, I noticed they didn't have cake pops and the staff member didn't appear to know what they were.

I see three choices here:

1. Since I'm mildly curious about Starbucks and cake pop availability in the UK, I get on the train, open up my laptop and dedicate realistically a solid half hour or more to figuring out what's going on.

2. I fire off a research question at GPT-5 Thinking on my mobile phone.

3. I don't do any research at all and leave my mild curiosity unsaturated.

Realistically, I think the choices are between 2 and 3. I was never going to perform a full research project on this myself.

See also: AI-enhanced development makes me more ambitious with my projects, which I wrote in March 2023 and has aged extremely well. https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen...

I do plenty of deep dive research projects myself into topics both useful and pointless - my blog is full of them!

Now I can take on even more.

rthrfrd•10h ago
I think what's interesting/telling is you view (3) as less desirable.

Alternatively, you could have spent that half hour on the train exercising your own creativity to try and satisfy your curiosity. Whether you're right or wrong doesn't really matter, because as you acknowledge it's not really important enough to you to matter. Picking (2) eliminates all the possible avenues that might have lead you down.

I'm not saying one is better than the other, just that you're approaching the criticism on the basis of axioms that represent a narrow viewpoint: That of someone who has to be "right" about the things they are curious about, no matter how trivial.

simonw•9h ago
I think one of my personal core values is that curiosity should never be left unsatiated if ant all possible!

I spent my half hour on the train satiating all sorts of other things instead (like the identity of that curious looking building in Reading).

> Picking (2) eliminates all the possible avenues that might have lead you down.

I don't think that's the case. Using GPT-5 for the Cake Pop question lead me down a bunch of avenues I may never have encountered otherwise - the structure of Starbucks in the UK, the history of their Cake Pops rollout, the fact that checking nutritional and allergy details on their website is a great way to get an "official" list of their products independent of what's on sale in individual stores, and it sparked me to run a separate search for their Cookies and Cream cake pop and find out had been discontinued in the US.

Not bad for typing a couple of prompts on my phone and then spending a few extra minutes with the results after the research task had completed.

Now multiply that by a dozen plus moments of curiosity per day and my intellectual life feels genuinely elevated - I'm being exposed to so many more interesting and varied avenues than if I was manually doing all of the work on a smaller number of curiosities myself.

rthrfrd•9h ago
> I think one of my personal core values is that curiosity should never be left unsatiated if ant all possible!

I don't disagree: I just posited that there are other ways to satisfy it, and that there is an opportunity cost to the path you've chosen to satisfy it that you don't seem very aware of, because your curiosity and desire to be correct are tightly coupled - but that doesn't actually have to be the case. It has its pros and cons.

Now I'm more of an "it's the journey not the destination" guy, so accelerating the journey doesn't appeal to me as much as it used to, because for me its where I get the most value. That change in my perspective is what motivated me to comment.

But anyway, you clearly enjoy it and do great work, so all the best with it!

p0w3n3d•11h ago
It's certainly better than google/bing search (non-ai). To be honest I had been observing google/bing (via duckduckgo) decline of search capabilities over recent years. I "had been" unless I stopped observing it because it went below any acceptable level. TBH the only thing I can find on them nowadays is products, and sometimes general information. All the technical articles, api links, etc. are unfindable. Among others that's why I'm holding to hackernews recently (which was the best thing I learned from my colleague, and it breaks my information bubbles). So basically I'm usually starting with ddg, then go to google, and if failed, falling back to chatgpt which is very accurate nowadays.

Example query: a keyboard stand with music (notes) stand.

-- Disclaimer--

It might be connected to the web enshittification process which has been undergoing for quite some time already.

sireat•10h ago
Like Simon I've started to use camera for random ChatGPT research. For one ChatGPT works fantastically at random bird identification (along with pretty much all other features and likely location) - https://xkcd.com/1425/

There is one big failure mode though - ChatGPT hallucinates middle of simple textual OCR tasks!

I will feed ChatGPT a simple computer hardware invoice with 10 items - out comes perfect first few items, then likely but fake middle items (like MSI 4060 16GB instead of Asus 5060 Ti 16GB) and last few items are again correct.

If you start prompting with hints, the model will keep making up other models and manufacturers, it will apologize and come up with incorrect Gigabyte 5070.

I can forgive mistaking 5060 for 5080 - see https://www.theguardian.com/books/booksblog/2014/may/01/scan... . However how can the model completely misread the manufacturers??

This would be trivially fixed by reverting to Tesseract based models like ChatGPT used to do.

PS Just tried it again and 3rd item instead of correct GSKILL it gave Kingston as manufacturer for RAM.

Basically ChatGPT sort of OCRs like a human would, by scanning first then sort of confabulating middle and then getting the footer correct.

simonw•9h ago
Yeah, I've been disappointed in GPT-5 for OCR - Gemini 2.5 is much better on that front: https://simonwillison.net/2025/Aug/29/the-perils-of-vibe-cod...
IanCal•8h ago
Images in general, nothing comes close to Gemini 2.5 for understanding scene composition. They perform segmentation and so you can even ask for things like masks of arbitrary things or bounding boxes.
arnaudsm•9h ago
It's a great tool, but the mobile experience isn't great. Everytime the socket connection fails in the background, and i have to restart and refresh the app twice to get my results.
gizajob•8h ago
Personally, I'm happy that after a 30 year effort in Silicon Valley and hundreds of billions spent, AskJeeves finally works as intended.
milanhbs•7h ago
I agree, I've found it very useful for search tasks that involve putting a few pieces together. For example, I was looking for the floor plan of an apartment building I was considering moving to in a country I'm not that familiar with. Google: Found floor plans on architect's sites - but only ones of rejected proposals.

ChatGPT: Gave me the planning proposal number of the final design, with instructions to the government website where I can plug that in to get floor plans and other docs.

In this case, ChatGPT was so much better at giving me a definitive source than Google - instead of the other way around.

dncornholio•7h ago
Don't ask LLM what is the best, or what is fancier.. This is not a Research Goblin but nothing more but a inspiration buddy.
eru•6h ago
About 'Britannica to seed Wikipedia': the German Wikipedia used Meyers Konversations-Lexikon https://en.wikipedia.org/wiki/Meyers_Konversations-Lexikon
d4rkp4ttern•6h ago
Has Deep Research been removed? I have a Pro subscription and just today noticed Deep Research is no longer shown as an option. In any case I’ve found using GPT-5 Thinking, and especially GPT-5 Pro with web search more useful than DR used to be.
mritchie712•5h ago
I was curious how much revenue a podcast I listen to makes. The podcast was started by two local comedians from Phoenix, AZ. They had no following when they started and were both in their late 30's. The odds were stacked against them, but they rank pretty high now on the the Apple charts now.

I looked into years ago and couldn't find a satisfying answer, but GPT-5 went off, did an "unreasonable" amount of research, cross referenced sources and provided an incredibly detailed answer and a believable range.

creesch•5h ago
> an incredibly detailed answer and a believable range.

Recently, it started returning even more verbose answers. The absolute bullshit research paper that Google Gemini gives you is what turned me away from using it there. Now, chatGPT also seems to go for more verbose filler rather than actually information. It is not as bad as Gemini, but I did notice.

It makes me wonder if people think the results are more credible with verbose reports like that. Even if it actually obfuscates the information you asked it to track down to begin with.

I do like how you worded it as a believable range, rather than an accurate one. One of the things that makes me hesitant to use deep research for anything but low impact non-critical stuff is exactly that. Some answers are easier to verify than others, but the way the sources are presented, it isn't always easy to verify the answers.

Another aspect is that my own skills in researching things are pretty good, if I may so myself. I don't want them to atrophy, which easily happens with lazy use of LLMs where they do all the hard work.

A final consideration came from me experimenting with MCPs and my own attempts at creating a deep research I could use with any model. No matter the approach I tried, it is extremely heavy on resources and will burn through tokens like no other.

Economically, it just doesn't make sense for me to run against APIs. Which in my mind means it is heavily subsidized by openAI as a sort of loss-leader. Something I don't want to depend on just to find myself facing a price hike in the future.

rbliss•3h ago
What was the range?
mritchie712•3h ago
$2.3M–$3.7M per year gross revenue:

Putting it together (podcast‑only)

Ads (STM only): 0.85 – $0.95M

Memberships: $1.5–$2.6M/yr

Working estimate (gross, podcast‑only): $2.3M – $3.7M per year for ads + its share of memberships. Mid‑case lands near $2.9M/yr gross; after typical platform/processing fees and less‑than‑perfect ad sell‑through, a net in the low‑to‑mid $2Ms seems plausible.

---

Quick answers

How many listeners: 175K downloads per episode, ~1.4–2.1M monthly downloads, demographics skew U.S., median age ~36.

How much revenue? $2.3M– $3.7M/yr gross from ads + memberships attributable to the STM show

cmilton•4h ago
I just can't get over how gleeful the author sounds in wasting compute on "an often unreasonable amount of work to search the internet and figure out an answer."

Is that the goal? Send this thing off on a wild goose chase, and hope it comes back with the right answer no matter the cost?

KaoruAoiShiho•4h ago
Uh, people have wasted entire lifetimes chasing wild goose. Newton and Einstein both spent the latter halves of their lives :( despite being geniuses.
cmilton•4h ago
I think the primary difference would be they didn't waste billions of dollars in their research.
simonw•2h ago
Playing with tools like this is how I learn to use them.
p3rls•3h ago
In my industry it just returns hindustanitimes slop