frontpage.

Show HN: Thinkmoon.ai – Build your own Alpha Arena, trade crypto with AI Agents

https://demo.thinkmoon.ai/
1•thinkmoon•1m ago•0 comments

The American West's most iconic tree (Ponderosa pine) is disappearing

https://phys.org/news/2025-12-american-west-iconic-tree.html
1•bikenaga•1m ago•0 comments

How to Create a Design System Optimized for AI Coding

https://www.braingrid.ai/blog/design-system-optimized-for-ai-coding
1•acossta•2m ago•1 comment

Show HN: ZON-TS – 50–65% fewer LLM tokens, zero parse overhead

https://zonformat.org
1•ronibhakta•2m ago•0 comments

As humanoid robots enter the mainstream, security pros flag risk of botnets on legs

https://www.theregister.com/2025/12/09/humanoid_robot_security/
1•Bender•2m ago•0 comments

Hollywood-hungry Gulf states bankroll Paramount's Warner Bros bid

https://www.reuters.com/business/finance/hollywood-hungry-gulf-states-bankroll-paramounts-warner-...
1•geox•2m ago•0 comments

Show HN: Made an iPad app to sync handwritten notes to Notion

https://shorthand.ink/
1•rnmp•3m ago•0 comments

Show HN: CapSummarize – Summarize videos and generate thumbnails and clips

https://www.capsummarize.app/
1•samuxbuilds•5m ago•0 comments

Show HN: I built a system to collect feedback, more like force it

https://www.msgmorph.com/
1•hamzaawan•5m ago•0 comments

React2Shell serves as good reminder why JavaScript is no fun

https://zarar.dev/react2shell-serves-as-good-reminder-why-javascript-is-no-fun/
1•speckx•6m ago•0 comments

Linux 6.19's Hung Task and System Lockup Detectors Can Provide Greater Insight

https://www.phoronix.com/news/Linux-6.19-Detectors-More-Info
1•Bender•10m ago•0 comments

How Russia's Largest Private University Is Linked to $25M Essay Mill

https://krebsonsecurity.com/2025/12/drones-to-diplomas-how-russias-largest-private-university-is-...
1•Bender•11m ago•0 comments

Tempo's Testnet Is Live

https://tempo.xyz/blog/testnet
1•simonebrunozzi•11m ago•0 comments

The Married Scientists Torn Apart by a Covid Bioweapon Theory

https://www.nytimes.com/2025/12/07/us/china-virologist-li-meng-yan-coronavirus.html
2•bookofjoe•12m ago•1 comment

Will the digital control grid inevitably fail, or is it here?

https://libresolutionsnetwork.substack.com/p/teachable-digital-prison
1•Noaidi•12m ago•1 comment

Why a College Fighting for Survival Is Slashing Econ and Physics Majors

https://www.bloomberg.com/news/features/2025-12-09/albright-college-budget-cuts-are-eliminating-m...
2•littlexsparkee•13m ago•1 comment

Two Githubs, One Laptop

https://blog.djnavarro.net/posts/2025-12-08_two-github-one-laptop/
1•speckx•14m ago•1 comment

Interview with Calvin Rose, the Janet Creator

https://alexalejandre.com/programming/interview-with-bakpakin/
1•smartmic•15m ago•0 comments

Meta Avocado

https://www.cnbc.com/2025/12/09/meta-avocado-ai-strategy-issues.html
2•noslenwerdna•15m ago•0 comments

Barnum's Law of CEOs

https://www.antipope.org/charlie/blog-static/2025/12/barnums-law-of-ceos.html
1•LaSombra•15m ago•0 comments

Launch HN: Mentat (YC S16) – Controlling LLMs with Runtime Intervention

https://playground.ctgt.ai
5•cgorlla•16m ago•0 comments

Nobel Winner Machado's Briefing Delayed as Oslo Arrival Unclear

https://www.bloomberg.com/news/articles/2025-12-09/nobel-winner-machado-s-briefing-delayed-as-osl...
1•wslh•16m ago•1 comment

Iain Douglas-Hamilton, Elephant Expert and Protector, Dies at 83

https://www.nytimes.com/2025/12/09/world/africa/iain-douglas-hamilton-dead.html
1•quapster•18m ago•1 comment

PyCharm 2025.3 – Unified IDE, Jupyter notebooks in remote dev, uv as default

https://www.jetbrains.com/pycharm/whatsnew/
2•indigodaddy•18m ago•0 comments

I think jj-vcs is worth your time

https://schpet.com/note/why-i-think-jj-vcs-is-worth-your-time
1•steveklabnik•18m ago•0 comments

Show HN: MPL – A Python DSL that transpiles logic to Pine Script

https://github.com/hakanovski/MPL
1•hknyrgnc•18m ago•1 comment

The AI Price Hike

https://molodtsov.me/2025/12/ai-price-hike/
1•speckx•18m ago•0 comments

The open hardware water cooling controller project

https://github.com/kennycoder/waku-ctl
1•kennycoder•21m ago•1 comment

Datadog built a low-latency, multi-tenant data replication platform

https://www.datadoghq.com/blog/engineering/cdc-replication-search/
2•mooreds•21m ago•0 comments

Odies – AI Companions living on your screen

https://www.youtube.com/watch?v=uwQjUHvm0JM
1•omoistudio•21m ago•0 comments

Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

140•embedding-shape•51m ago
As various LLMs become more and more popular, so do comments with "I asked Gemini, and Gemini said ....".

While the guidelines were written (and iterated on) during a different time, it seems like it might be time to have a discussion about whether those sorts of comments should be welcome on HN or not.

Some examples:

- https://news.ycombinator.com/item?id=46164360

- https://news.ycombinator.com/item?id=46200460

- https://news.ycombinator.com/item?id=46080064

Personally, I'm on HN for the human conversation, and large LLM-generated texts just get in the way of reading real text from real humans (assumed, at least).

What do you think? Should responses that basically boil down to "I asked $LLM about $X, and here is what $LLM said:" be allowed on HN, with the guidelines updated to state that people shouldn't critique them (similar to other guidelines currently)? Or should a new guideline be added asking people to refrain from copy-pasting large LLM responses into the comments, or something else completely?

Comments

JohnFen•38m ago
I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.

But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.

Scene_Cast2•36m ago
There is friction to asking AI yourself. And a comment typically means "I found the AI answer insightful enough to share".
codechicago277•32m ago
The problem is that the AI answer could just be wrong, and there’s another step required to validate what it spit out. Sharing the conversation without fact checking it just adds noise.
ben_w•30m ago
Unfortunately it's easier to train an AI to be convincing than to be correct, so it can look insightful without being true.

Like horoscopes, only not actually that bad: roll a D20, and on a set of numbers known only to the DM (and varying with domain and task length) you get a textbook answer; on the rest you get convincing nonsense.
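
To make the dice metaphor concrete, here is a toy simulation in Python (purely illustrative; the hidden "success numbers" stand in for an unknown, domain- and task-dependent accuracy):

import random

def llm_answer(secret_numbers: set[int]) -> str:
    # Roll a D20; on the DM's secret numbers you get a textbook answer,
    # on everything else you get equally confident nonsense.
    roll = random.randint(1, 20)
    return "textbook answer" if roll in secret_numbers else "convincing nonsense"

# Hypothetical secret set; the reader never sees the roll, only the confident output.
secret = {14, 15, 16, 17, 18, 19, 20}
print(llm_answer(secret))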

ManlyBread•18m ago
"Friciton" in this case is just plain old laziness and I don't think that it should be encouraged.
WesolyKubeczek•15m ago
Then state your understanding of what it said in your own words; maybe you'll realize it's bunk mid-sentence.
shishy•38m ago
People are probably copy pasting already without that disclosure :(
coffeecat•22m ago
I'm sure there are people who spend their time doing this, but I don't understand the motive. Doesn't one post in comment threads because one wishes to share one's thoughts with other humans?
chemotaxis•37m ago
This wouldn't ban the behavior, just the disclosure of it.
xivzgrev•32m ago
Agreed - in fact these folks are going out of their way to be transparent about it. It's much easier to just take credit for a "smart" answer
AlwaysRock•27m ago
I guess... That is the point in my opinion.

If you just say "here is what the LLM said" and it turns out to be nonsense, you can fall back on "I was just passing along the LLM response, not my own opinion."

But if you take the LLM response and present it as your own, at least there is slightly more ownership over the opinion.

This is kind of splitting hairs but hopefully it makes people actually read the response themselves before posting it.

gortok•36m ago
While we will never be able to get folks to stop using AI to "help" them shape their replies, it's super annoying to have folks think that by using AI they're doing others a favor. If I want to know what an AI thinks, I'll ask it. I'm here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.

sbrother•27m ago
I strongly agree with this sentiment and I feel the same way.

The one exception for me though is when non-native English speakers want to participate in an English-language discussion. LLMs produce by far the most natural-sounding translations nowadays, but they imbue their output with that "AI style". I'm not sure what the solution is here, because it's great for non-native speakers to be able to participate, but I find myself discarding any POV that was obviously expressed with AI.

justin66•25m ago
As AIs get good enough, dealing with someone struggling with English will begin to feel like a breath of fresh air.
tensegrist•21m ago
one solution that appeals to me (and which i have myself used in online spaces where i don't speak the language) is to write in a language you can speak and let people translate it themselves however they wish

i don't think it is likely to catch on, though, outside of culturally multilingual environments

internetter•15m ago
> i don't think it is likely to catch on, though, outside of culturally multilingual environments

It can if the platform has built-in translation with an appropriate disclosure! For instance, on Twitter or Mastodon.

https://blog.thms.uk/2023/02/mastodon-translation-options

AnimalMuppet•20m ago
Maybe they should say "AI used for translation only". And maybe we English speakers who don't care what AI "thinks" should still be tolerant of it for translations.
kps•20m ago
When I occasionally use MTL into a language I'm not fluent in, I say so. This makes the reader aware that there may be errors unknown to me that make the writing diverge from my intent.
sejje•15m ago
I think multi-language forums with AI translators are a cool idea.

You post in your own language, and the site builds a translation for everyone, but they can also see your original, etc.

I think building it as a forum feature rather than a browser feature is maybe worth it.
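
A minimal sketch of what such a forum feature might store per post (hypothetical names throughout; the point is keeping the original alongside machine translations labeled as such):

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    original_lang: str   # e.g. "fr"
    original_text: str   # always preserved and viewable
    translations: dict[str, str] = field(default_factory=dict)  # lang -> machine translation

    def render(self, reader_lang: str) -> str:
        if reader_lang == self.original_lang:
            return self.original_text
        translated = self.translations.get(reader_lang)
        if translated is None:
            return self.original_text  # fall back to the original
        # Disclose the translation so readers can weigh possible errors.
        return f"[machine-translated from {self.original_lang}]\n{translated}"

post = Post("alice", "fr", "Bonjour tout le monde", {"en": "Hello everyone"})
print(post.render("en"))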

pjerem•12m ago
You know that this is the most hated feature of Reddit? (Because the translations are shitty, so maybe that can be improved.)
sejje•9m ago
I didn't, but I don't think it would work well on an established English-only forum.

It should be an intentional place you choose, and probably niche, not generic in topic like Reddit.

I'm also open to the thought that it's a terrible idea.

emaro•8m ago
Agreed, but if someone uses LLMs to help them write in English, that's very different from the "I asked $AI, and it said" pattern.
sejje•18m ago
This is the only reasonable take.

It's not worth polluting human-only spaces, particularly top-tier ones like HN, with generated content--even when it's accurate.

Luckily I've not found a lot of that here. What I do find has usually been downvoted plenty.

Maybe we could have a new flag option that becomes visible to everyone once a comment gets enough "AI" votes, so you could skip reading it.
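
A minimal sketch of how that flag might behave, with hypothetical names and an arbitrary threshold:

from dataclasses import dataclass

AI_FLAG_THRESHOLD = 5  # hypothetical: votes needed before the label shows for everyone

@dataclass
class Comment:
    text: str
    ai_flags: int = 0

    def flag_as_ai(self) -> None:
        self.ai_flags += 1

    def label(self) -> str:
        # Once enough readers flag it, everyone sees the marker and can skip reading.
        return "[flagged as AI]" if self.ai_flags >= AI_FLAG_THRESHOLD else ""

c = Comment("I asked ChatGPT and it said...")
for _ in range(5):
    c.flag_as_ai()
print(c.label())  # "[flagged as AI]"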

hotsauceror•16m ago
I agree with this sentiment.

When I hear "ChatGPT says..." on some topic at work, I interpret that as "Let me google that for you, only I neither care about nor respect you enough to bother confirming that the answer is correct."

gardenhedge•11m ago
I disagree. It's a potential avenue for further investigation. IMO, AI should always be consulted.
skobes•36m ago
I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.
llm_nerd•19m ago
I think people are just presuming that people are regurgitating AI pablum regardless.

People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" cynical jeering. People need to display how world-weary and jaded they are, expressing their discontent with the rise of AI.

And yes, I used an em dash above. I've always been a heavy user of the punctuation (being scatter-brained, with lots of parenthetical asides and little ability to self-edit), but suddenly now it makes my comments bot-like and AI-suspect.

I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much less capable of sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they are no better than rolling dice.

tpxl•36m ago
I think they should be banned if there isn't a contribution besides what the LLM answered. It's akin to "I googled this", which is uninteresting.
mattkrause•29m ago
I do find it useful in discussions of LLMs themselves. (E.g., "Gemini did this; Claude did it too, but it used to get tripped up like that.")

I do wish people wouldn't do it when it doesn't add to the conversation, but I would advocate for collective embarrassment over a ham-fisted regex.
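
For concreteness, the kind of ham-fisted regex being argued against might look something like this (hypothetical, and trivially evaded by just dropping the disclosure):

import re

# Matches disclosures like "I asked ChatGPT and it said ..."
AI_DISCLOSURE = re.compile(
    r"\bI asked (ChatGPT|Claude|Gemini|Grok|an? (?:AI|LLM))\b.{0,40}?\b(?:said|says)\b",
    re.IGNORECASE,
)

def looks_like_ai_paste(comment: str) -> bool:
    return AI_DISCLOSURE.search(comment) is not None

print(looks_like_ai_paste("I asked Gemini and it said the answer is 42."))  # True
print(looks_like_ai_paste("Here is my own analysis of the RFC."))           # False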

MBCook•23m ago
That provides value as you’re comparing (and hopefully analyzing) output. It’s totally on topic.

In a discussion of RISC-V and whether it can beat ARM, someone just posting "ChatGPT says X" adds absolutely nothing to the discussion but noise.

Ekaros•12m ago
I think "I googled this" can be valid and helpful contribution. For example looking up some statistic or fact or an year. If that is also verified and sanity checked.
sejje•5m ago
Yes, while citing an LLM in the same way is probably not as useful.

"I googled this" is only helpful when the statistic or fact they looked up was correct and well-sourced. When it's a reddit comment, you derail into a new argument about strength of sources.

The LLM skips a step, and gets you right to the "unusable source" argument.

syockit•35m ago
You can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLM-esque structure of most of those bot answers, how could you tell if someone has crafted the answer so carefully that it looks like a genuine human answer?

Obligatory xkcd https://xkcd.com/810/

card_zero•13m ago
15 years ago ... needs updating in regard to how things panned out.
gruez•34m ago
What do you think about other low-quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?
everdrive•32m ago
It depends on whether you're saying "Infowars has the answer, check out this article" vs. "I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."
newsoftheday•7m ago
Your point conflates a potentially low-quality source with AI output, while also making the judgement that <fill in the blank site> is a low-quality source to be disregarded 100% of the time, ignoring that an informative POV may be present even on a potentially low-quality site.
ben_w•34m ago
Depends on the context.

I find myself downvoting (flagging) them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.

Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here whether an AI-generated TL;DR of a 74 (75?) page PDF is correct is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360

LeoPanthera•33m ago
Banning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.
AdamH12113•32m ago
To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.

I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.

michaelcampbell•32m ago
Related: Comments saying "this feels like AI". It's this generation's "Looks shopped" and of zero value, IMO.
whimsicalism•24m ago
Disagree, find these comments valuable - especially if they are about an article that I was about to read. It's not the same as sockpuppeting accusations, which I think are right to be banned.
sbrother•24m ago
Fair, but then that functionality should be built into the flagging system. Obvious AI comments (worse, ones that are commercially driven) are a cancer that's breaking online discussion forums.
ruuda•17m ago
I find them helpful. It happens semi-regularly now that I read something that was upvoted, but after a few sentences I think "hmm, something feels off", and after the first two paragraphs I suspect it's AI slop. Then I go to the comments, and it turns out others noticed too. Sometimes I worry that I'm becoming too paranoid in a world where human-written content feels increasingly rare, and it's good to know it's not me going crazy.

In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.

yodon•14m ago
> Comments saying "this feels like AI" should be banned.

Strong agree.

If you can make an actually reliable AI detector, stop wasting time posting comments on forums and just monetize it to make yourself rich.

If you can't, accept that you can't, and stop wasting everyone else's time with unvalidated guesses about whether something is AI or not.

The least valuable and lowest signal comments are "this feels like AI." Worse, they never raise the quality of the discussion about the article.

It's "does anyone else hate those scroll bars" and "this site shouldn't require JavaScript" for a new generation.

masfuerte•32m ago
Does it need a rule? These comments already get heavily downvoted. People who can't take a hint aren't going to read the rules.
rsync•6m ago
This is my view.

I tend to dislike these types of posts, but a properly designed and functioning vote mechanism should take care of it.

If not, it is the voting mechanism that should be tuned - not new rules.

63stack•31m ago
Yes
ekjhgkejhgk•31m ago
I don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to spot them.
tehwebguy•31m ago
It should be allowed and downvoted
lproven•31m ago
I endorse this. Please do take whatever measures are possible to discourage it, even if it won't stop people. It at least sends a message: this is not wanted, this is not helpful, this is not constructive.
testdelacc1•31m ago
Maybe I remember the Grok ones more clearly, but it felt like "I asked Grok" was more prevalent than the others.

I feel like the HN guidelines could take inspiration from how Oxide uses LLMs (https://rfd.shared.oxide.computer/rfd/0576), specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care, effort, and time into a comment than the reader. The reader reads it because they assume it is something a person put more time into than they need to. LLMs break that social contract.

Of course, if it’s banned maybe people just stop admitting it.

josefresco•31m ago
As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.

superfishy•21m ago
I agree. The alternative is prohibiting this practice and having these posters not disclose their use of LLMs, which in many cases cannot really be easily detected.
AlwaysRock•30m ago
Yes. Unless something useful is actually added by the commenter, or the post is about "I asked LLM X and it said Y (which was unexpected)".

I have a coworker who does this somewhat often and... I always just feel like saying: well, that is great, but what do you think? What is your opinion?

At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.

Arainach•27m ago
I keep this link handy to send to such coworkers/people:

https://distantprovince.by/posts/its-rude-to-show-ai-output-...

exasperaited•27m ago
I have a client who does this — pastes it into text messages! as if it will help me solve the problem they are asking me to solve — and I'm like "that's great, I won't be reading it". You have to push back.
dylan604•24m ago
> At the very least, the copy-paster should read what the LLM says, interpret it, fact-check it, and then write their own response.

Then write their own response, using an AI to improve its quality? The implication here is that an AI user is going to do some research, when using the AI was their research. To do the "fact check" you suggest would mean doing actual work, and clearly that's not something the user is up for, as indicated by their use of the AI.

So, to me, your suggestion is fantasy-level thinking.

0x00cl•30m ago
This is what DeepSeek said:

> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.

debo_•18m ago
I was hoping someone would do this.
dominotw•29m ago
i asked chatgpt and it said no its not a good idea to ban
yomismoaqui•29m ago
I think disclosing the use of AI is better than hiding it. The alternative is people using it but not telling, for fear of a ban.
gAI•28m ago
I asked AI, and it said yes.
exasperaited•28m ago
No, don't ban it. It's a useful signal for value judgements.
bryanlarsen•28m ago
What is annoying about them is that they tend to be long, with a low signal-to-noise ratio. I'd be fine with a comment saying, "I think the ChatGPT answer is informative: [link]". It'd still likely get downvoted to the bottom of the discussion, where it likely belongs.
moomoo11•28m ago
Honestly, I judge people pretty harshly. I ask people a question in honest good faith. If they're trying to help me out and genuinely care and use AI, fine.

But most of the time it's like they were bothered that I asked, and they copy-paste what an AI said.

Pretty easy. Just add their name to my “GFY” list and move on in my life.

Mistletoe•28m ago
I don’t see how it is much different than using Wikipedia. They are usually about the same answer and at least in Gemini it is usually a correct answer now.
breckinloggins•27m ago
If it’s part of an otherwise coherent post making a larger point I have no issue with it.

If it’s a low effort copy pasta post I think downvotes are sufficient unless it starts to obliterate the signal vs noise ratio on the site.

sans_souse•27m ago
There be a thing called Thee Undocumented Rules of HN, aka etiquette, which states - and I quote: "Thou shall not post AI generated replies"

I can't locate them, but I'm sure they exist...

tastyfreeze•19m ago
I've seen that document. It also has a rule that states "Thou shall not be a bot."

Unfortunately, I can't find them. It's a shame. Everybody should read them.

Zak•27m ago
I don't think people should post the unfiltered output of an LLM as if it has value. If a question in a comment has a single correct answer that is so easily discoverable, I might downvote the comment instead.

I'm not sure making a rule would be helpful though, as I think people would ignore it and just not label the source of their comment. I'd like to be wrong about that.

prpl•26m ago
were lmgtfy links ever forbidden?
PeterStuer•25m ago
For better or worse, that ship has sailed. LLMs are now as omnipresent as web search.

Some people will know how to use it in good taste; others will try to abuse it in bad taste.

It might not be universally agreed which is which in every case.

whimsicalism•25m ago
I think comments like this should link to their generation rather than copy-pasting it. Not sure if this should be a rule or whether we can just let downvoting do the work - I worry that a rule would be overapplied, and I think there are contexts that are okay.
zoomablemind•25m ago
There's hardly a standard for a 'quality' contribution to discussion. Many styles, many opinions, many ways to react and support one's statements.

If anything, it has been quite customary to supply references for important facts, thus letting readers explore further and interpret the facts.

With AI in the mix, references become even more important, in view of hallucinations and fact poisoning.

Otherwise, it's a forum. Voting, flagging, ignoring are the usual tools.

stego-tech•25m ago
Formalizing it within the community rules removes ambiguity around intent or use, so yes, I do believe we should be barring AI-generated comments and stories from HN in general. At the very least, it adds another barometer of sorts to help community leaders do the hard work of managing this environment.

If you didn’t think it, and you didn’t write it, it doesn’t belong here.

a_wild_dandan•25m ago
No. I like being able to ignore them. I can’t do that if people chop off their disclaimers to avoid comment removal.
ruined•24m ago
you got a downvote button
ilc•23m ago
No, I put them in the same bucket as lmgtfy. Most of the time, you are being told that your question is easy to research and you didn't do the work.

Also, heaven forbid, AI can be right. I realize this is a shocker to many here. But AI has its uses, especially in easy cases.

watwut•16m ago
1.) They are not replies to people asking questions.

2.) Posting an AI response has as much value as posting a random Reddit comment.

3.) AI has value where you are able to factually verify it. If someone asks a question, they do not know the answer and are unable to validate the AI.

emaro•3m ago
I don't think LLM responses mean a question is easy to research - they will always give an answer.
mindcandy•22m ago
Is the content of the comment productive to the conversation? Upvote it.

Is the content of the comment counter-productive? Downvote it.

I could see cases where large walls of text that are generally useless should be downvoted or even removed, AI or not. But the first example,

> faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?

to be frank, is a service to all HN readers. Yes, it is possible that a few of us would benefit from sitting down with a nice cup of coffee, putting on some ambient music, and taking in 74 pages of... whatever this is. But, faced with far more interesting and useful content than I could possibly consume all day every day, having a summary to inform my time investment is of great value to me. Even If It Is Imperfect.

mistrial9•21m ago
The system of long-lived nicks on YNews is intended to build a mild and flexible reputation system. This is valuable for complex topics, and for noticing zealots, among other things. The feeling while reading that it is a community of peers is important.

AI-LLM replies break all of these things. AI-LLM replies must be declared as such, for certain IMHO. It seems desirable to have off-page links for (inevitable) lengthy reply content.

This is an existential change for online communications. Many smart people here have predicted it and acted on it already. It is certainly trending hard for the foreseeable future.

ManlyBread•20m ago
I think that the whole point of a discussion forum is to talk to other people, so I am in favor of banning AI replies. There's zero value in these posts, because anyone can type chatgpt.com into the browser and ask whatever question they want at any time, while getting input from another human being is not always guaranteed.
pembrook•20m ago
No, this is not a good rule.

What AI regurgitates about a topic is often more interesting and fact/data-based than the emotionally driven human pessimists spewing constant cynicism on HN, so in fact I much prefer having more rational AI responses added in as context within a conversation.

nlawalker•18m ago
No, just upvote or downvote. I think the site guidelines could take a stance on it though, encouraging people to post human insights and discouraging comments that are effectively LLM output (regardless of whether they actually are).
WesolyKubeczek•18m ago
Yes. If I wanted an LLM’s opinion, I would have asked it myself.
jpease•17m ago
I asked AI if “I asked AI, and it said” replies should be forbidden, and it said…
AnonC•9m ago
Are you a new HN mod (with authority over the guidelines) and are asking for opinions from readers (that’d be new)? Or are you just another normal user and are loudly wondering about this so that mods get inputs (as opposed to writing a nice email to hn@ycombinator.com)?

I think just downvoting by committed users is enough. What matters is the content and how valuable it seems to readers. There is no need to do any gatekeeping via the guidelines on this matter. That's my opinion.

jdoliner•6m ago
I've always liked that HN typically has comments that are small bits of research relevant to the post, research I could have done myself but don't have to because someone else did it for me. In a sense, the "I asked $AI, and it said" comments are just the evolved form of that. However, the presentation does matter a little, at least to me. Explicitly stating that you asked AI feels a little like an appeal to authority... and a bad one at that. And it makes the comment feel low-effort. Oftentimes comments that frame themselves this way are missing the "last-mile" effort that tailors the LLM's response to the context of the post.

So I think maybe the guidelines should say something like:

HN readers appreciate research in comments that brings information relevant to the post. The best way to make such a comment is to find the information, summarize it in your own words, explain why it's relevant to the post, and then link to the source if necessary. Adding "$AI said" or "Google said" generally makes your post worse.

---------

Also I asked ChatGPT and it said:

Short Answer

HN shouldn’t outright ban those comments, but it should culturally discourage them, the same way it discourages low-effort regurgitation, sensationalism, or unearned certainty. HN works when people bring their own insight, not when they paste the output of a stochastic parrot.

A rule probably isn’t needed. A norm is.

MBCook•6m ago
Yes, please. It's extremely low effort. If you're not adding anything of value (and typing into another window, then copying and pasting the output, is not), then it serves no purpose.

It’s the same as “this” of “wut” but much longer.

If you’re posting that and ANALYZING the output that’s different. That could be useful. You added something there.

satisfice•5m ago
Only if they also do a google search, provide the top one hundred hits, and paste in a relevant Wikipedia page.
incanus77•4m ago
Yes. This is the modern equivalent of "I searched the web and this is what it said". If I could do the same thing and get the same results, you're not adding any value.

Though this scenario is unlikely to have actually happened, I'd equate this with someone asking me what I thought about something, and me walking them over to a book on the shelf to show them what that author thought. It's just an aggregated and watered-down average of all the books.

I’d rather hear it filtered through a brain, be it a good answer or bad.