It’s hard to tell how total that was compared to today. Of course, the amount of money involved is way higher, so I’d expect it not to be as large, but expanding the data set a bit could be interesting, to see whether there are waves of comments or not.
It never had a public product, but people in the private beta mentioned that they did have a product, just that it wasn't particularly good: it took forever to make websites, they were often overly formulaic, the code was terrible, etc.
10 years later and some of those complaints still ring true
Sad times...
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes. Comments should get more thoughtful and substantive,...
> " This pretty negative post topping Hacker News last month sparked these questions, and I decided to find some answers, of course, using AI"
The pretty negative post cited is https://tomrenner.com/posts/llm-inevitabilism/. I went ahead and read it, and found it, imo, fair. It's not making any directly negative claims about AI, although it's clear the author has concerns. But the thrust is inviting the reader not to fall into the trap of the current framing by proponents of AI, and instead to question first whether the future being peddled is actually what we want. That seems a fair question to ask if you're unsure?
It concerned me that this is framed as a "pretty negative post", and it impacted my read of the rest of this author's article.
Though it does sort of show the Overton window, that a pretty bland argument against always believing some rich dudes gets bucketed as negative, even in the sentiment-analysis sense.
I think a lot of people have like half their net worth in NVIDIA stock right now.
Check my recent submission and the vitriol it received, then read this:
https://daringfireball.net/linked/2025/03/27/youll-never-gue...
And I agree with jakeydus: I'm not seeing anything I could call "vitriol" in the top-level comments. I do, however, see people resent having their way of life (and of making a living) called into question. The one particularly snide top-level comment I saw was agreeing with you.
If YES, why do you think Meta is a normal company, with regular "contradictions", and why do you frame as ideological someone who just reminds people of what Meta and Zuckerberg do? If NO, how exactly do you justify your answer that negates what we know for a fact, and/or how do you justify Meta's behavior?
Yes.
From Wikipedia:
> An ideology is a set of beliefs or values attributed to a person or group of persons, especially those held for reasons that are not purely about belief in certain knowledge,
You are passing judgment and using emotionally charged words (such as "weaponizing", which also implies intent and motivations not in evidence) to make a point about what you consider moral. And you use your judgment to set up a completely false dichotomy with incoherent terms (I have absolutely no idea what you think the phrase "normal company" means here), while completely ignoring my point.
That is ideology.
My opinion of Meta as a company is not relevant to anything I have said so far.
It does not matter whether you are right or wrong about any of this.
My objection is to your rhetorical style, and to the placement of your arguments in an inappropriate forum. These objections do not require that I agree or disagree with you about anything at all. I am not interested in debating morality with you. That is the point.
As far as I can tell, you did not even stop to question whether I work for Meta in the first place. (I do not.)
I am tossing the ball off the field because this field is intended for a different sport entirely, and there is already a game in progress.
Many people will argue that they do good at Meta, and that they strive to do good. Their results probably are good too: Meta is vast, so statistically you will find good work and good outcomes.
Those people are already painted as evil, so why would they engage with the question? Even if you are genuine and earnest?
I have seen people on HN publicly state that they flag anything they don't agree with, regardless of merit.
I guess they use it like some kind of super-downvote button.
* Commercial influence on computing has proven to be so problematic that one wonders if the entire stack is a net negative; it shouldn't even be a question.
All of the anti-big-tech comments I've ever seen that are flagged are flagged because they blatantly break the guidelines and/or are contentless and don't contribute in any meaningful sense aside from trying to incite outrage.
And those should be flagged.
I explicitly enable flagged and dead comments because sometimes there are nuggets in there which provide interesting context on what people think.
I will never flag anything. I don't get it.
Disappearing OT/ads/extreme ad hominems is a positive thing imo.
I vouch for things that I disagree with if they make good points. I have flagged things.
IMO the worst thing pg ever did for this site was to say that downvoting could be used for disagreement. I still bemoan the removal of downvote scores, and still wish for Slashdot-style voting, meta-moderation, and personalization of content scores.
They show up in the HN Active section quite regularly.
And virtually anything even remotely related to Twitter or most Elon Musk-related companies almost instantly gets the hook.
Not sure if it's part of a broader trend, or simply a reflection of it, but when mentoring/coaching middle- and high-school-aged kids, I’m finding they struggle to accept feedback in any way other than “I failed.” A few years back, the same age group was more likely to accept and view feedback as an opportunity so long as you led with praising strengths. Now it’s like threading a needle every time.
So no I am not doing that.
In what world does "criticism" not default to "negative"?
Most things are imperfect. Assuming X is imperfect and has flaws isn't being negative, it's just being realistic.
Don't let perfect be the enemy of good enough, pal.
Constructive criticism involves being negative about the aspects that make something imperfect.
A realistic reaction to most things is a mixture of positive and negative.
The ethos of HN is to err on the side of assuming good faith and the strongest possible interpretation of other's positions, and to bring curiosity first and foremost. Curiosity often leads to questions.
Can you clarify what you mean by distinguishing between "questions" and "questioning"? How or why is one neutral while the other is probably negative?
I'll also point out that I'm questioning you here, not out of negativity, but because it's a critical aspect of communication.
> In what world does "criticism" not default to "negative"?
Criticism is what we each make of it. If you frame it as a negative thing, you'll probably find negativity. If you frame it as an opportunity to learn/expand on a critical dialogue, good things can come from it.
While I understand what you're getting at and get that some people are overly critical in a "default to negative" way, I've come to deeply appreciate constructive, thoughtful criticism from people I respect, and in those contexts, I don't think summing it up as "negative" really captures what's happening.
If you're building a product, getting friendly and familiar with (healthy) criticism is critical, and when applied correctly will make the product much better.
> Can you clarify what you mean by distinguishing between "questions" and "questioning"
"questioning" more directly implies doubt to me.
Regarding your distinction, I'm still confused. In a very literal sense, what is the difference between "questions" and "questioning" in your mind? i.e. what are some examples of how they manifest differently in a real world conversation?
It's hard to argue that asking questions isn't neutral, but being questioning implies doubt, and the dictionary backs me up on that; it's not really more complex than that.
Constructive criticism and healthy debate are entirely possible without violating the guidelines, and happen quite a bit.
If people can’t figure out how to have conversations that aren’t “boring, pointless, fawning” while honoring the community guidelines, they:
1. Need to try harder
2. Or they should probably not be commenting here
The rules/ethos are not perfect, nor does the community always succeed in its goals. But I’ll take the dynamic here every day vs. sliding into the kind of toxic sludge fest that has infiltrated just about every social network.
This place is barely holding the hordes at bay as it is. I’m grateful for the guidelines and the collective will to abide by them as much as possible.
For instance, in a lot of threads on some new technology or idea, one of the top comments is "I'm amazed by the negativity here on HN. This is a cool <thing> and even though it's not perfect we should appreciate the effort the author has put in" - where the other toplevel comments are legitimate technical criticism (usually in a polite manner, no less).
I've seen this same comment, in various flavors, at the top of dozens of HN threads in the past couple of years.
Some of these people are being genuine, but others are literally just engaging in amygdala-hijacking because they want to shut down criticism of something they like, and that contributes to the "everything that isn't gushing positivity is negative" effect that you're seeing.
The funny thing about this here audience is that it is made up of the kinds of folks you would see in all those cringey OpenAI videos, i.e. the sort of person who can do this whole technical criticism all day long but wouldn't be able to identify the correct emotional response if it hit them over the head. And that's what we're all here for: to talk shop.
Thing is, we don't actually influence others' thinking with the right emotional language just by leaving an entry behind on HN. We're not engaging in "amygdala-hijacking" to "shut down criticism" when we respond to a comment. There are a bunch of repetitive online clichés in play here, but it would be a stretch to say that there are these amygdala-hijackers intentionally steering the thread and redefining what negativity is.
People aren't being aggressive enough about their downvotes and flags, methinks.
- "you should reevaluate your experience level and seniority."
- "Sounds more like "Expert Hobbyist" than "Expert Programmer"."
- "Go is hardly a replacement with its weaker type system."
- "Wouldn’t want to have to pay attention ;-)"
- "I'm surprised how devs are afraid to look behind the curtain of a library"
- "I know the author is making shit up"
- "popular with the wannabes"
Hacker News comments are absolutely riddled with this kind of empty put-down that isn't worth the disk space it's saved on, let alone the combined hours of reader-lifetime wasted reading it; is it so bad to have a reminder that there's more to a discussion than shitting on things and people?
> "legitimate technical criticism"
So what? One can make correct criticism of anything. Just because you can think of a criticism doesn't make it useful, relevant, meaningful, interesting, or valuable. Some criticism might be, but not because it is criticism and accurate.
> "they can undermine others' thinking skills"
Are you seriously arguing that not posting a flood of every legitimate criticism means the reader's thinking skills must have been undermined? That the only time it's reasonable to be positive, optimistic, enthusiastic, or supportive, is for something which is literally perfect?
Amygdala-hijacking, emotional manipulation, and categorical dismissiveness of others' criticisms are clearly not good.
> Look at this Nim thread
Yes, I'm looking at it, and I'm seeing a lot of good criticism (including the second-to-top comment[1]), some of which is out of love for the language.
You cherry-picked a tiny subset of comments that are negative, over half of which aren't even about the topic of the post - which means that they're completely unrelated to my comment, and you either put them there because you didn't read my comment carefully before replying to it, or you intentionally put them there to try to dishonestly bolster your argument.
As an example of the effect I'm referring to, take this recent thread on STG[2], the top comment of which starts with "Lots of bad takes in this thread" as a way of dismissing every single valid criticism in the rest of the submission.
> is it so bad to have a reminder that there's more to a discussion than shitting on things and people?
This is a dishonest portrayal of what's going on, which is that, instead of downvoting and flagging those empty put-downs, or responding to specific bad comments, malicious users post a sneering, value-less, emotionally manipulative comment at the toplevel of a submission that vaguely gestures to "negative" comments in the rest of the thread, that dismisses every legitimate criticism along with all of the bad ones. This is "sneering", and it's against the HN guidelines, as well as dishonest and value-less.
> So what? One can make correct criticism of anything. Just because you can think of a criticism doesn't make it useful, relevant, meaningful, interesting, or valuable. Some criticism might be, but not because it is criticism and accurate.
I never claimed that all criticism is "useful, relevant, meaningful, interesting, or valuable". Don't put words in my mouth.
> Are you seriously arguing that not posting a flood of every legitimate criticism means the reader's thinking skills must have been undermined? That the only time it's reasonable to be positive, optimistic, enthusiastic, or supportive, is for something which is literally perfect?
I never claimed this either.
It appears that, given the repeated misinterpretations of my points, and the malicious technique of trying to pretend that I made claims that I didn't, you're one of those dishonest people that resorts to emotional manipulation to try to get their way, because they know they can't actually make a coherent argument for it.
Ironic (or, perhaps not?) that someone defending emotional manipulation and dishonesty resorts to it themselves.
The sub-clause "you're one of those dishonest people that resorts to emotional manipulation to try to get their way" alone is so laden with emotionally manipulative affect that this reads like a self-referential example.
"You're one of those" is a phrase often, and certainly in this case, used for the purposes of othering.
"dishonest people" speaks for itself.
"resorts to emotional manipulation to try to get their way" assumes bad faith on behalf of somebody you barely know.
There's a lot I agree with in your post, but the irony doesn't exactly stop with jodrellblank.
You stating that again doesn't make it more supported, or more clear. There's nothing automatically unbiased and unmanipulative about criticism, and there's nothing automatically justified and useful about criticism. Opening a thread where there's all criticism is (or can be) just as manipulative as a thread where there's a lot of enthusiasm. The typical geek internet response is to claim that being critical is somehow meritocratic, unbiased, real, but it isn't inherently that.
> "over half of which aren't even about the topic of the post ... you intentionally put them there to try to dishonestly bolster your argument"
I know, right?! I have to skim read and filter out piles of irrelevant miserable put-down dismissive low-thought low-effort dross and it often isn't even about the topic of the post! I intentionally put them there to try and honestly bolster my argument that opening a thread full of cynicism has a manipulative effect on the reader's emotional state and to counter your implied claim that enthusiasm is manipulative and criticism isn't.
> "the top comment of which starts with "Lots of bad takes in this thread" as a way of dismissing every single valid criticism in the rest of the submission."
But they explicitly dismiss the bad takes and not every single take? For someone who is complaining that I am putting words in your mouth and you hate it, you are putting words in their mouth which go directly against what they said. e.g. there are some takes complaining that the article is 'compelling people to work for no money' and that comment says the regulation would be met by a clear expiry date for the game on the store. The company is willing to fund it for some time before they cut their losses, and this asks them to tell the customer what that time is. That critical comment starts "I think a legal remedy here won't work." because the only legal remedy they bothered to think about is compelling people to work for free. It doesn't comment on the proposals put to governments in the article, or the movement, or even expand on much detail why they think a legal remedy can't work. But it still contributes to the miasma of "don't try things, everything's shit, don't even bother, nothing can work, nothing is worth doing, don't you know there was a flaw once, somewhere, something was tried and didn't work" which absolutely is emotionally manipulative when read in bulk.
> "I never claimed that all criticism is "useful, relevant, meaningful, interesting, or valuable". Don't put words in my mouth."
You argued that point. You said "they want to shut down criticism of something they like" as if that's a bad thing which should not be happening. If you argue that, then you think criticism has some inherent value. I say it doesn't have inherent value; there are vastly more options to criticise a thing than to praise a thing, so people who choose criticism are more likely pulling from a big pool of low effort cached thoughts, than a small pool of high effort (positive or critical) thoughts, so a critical comment is more likely a bad comment than a good comment. Dismissing a whole lot of critical comments in one go is therefore a reasonable response.
> "I never claimed this either."
OK let's go with, you said: "undermines people's critical thinking skills" and I say "what can be asserted without evidence can be dismissed without evidence". Reading a comment which says "lots of bad takes here" does not undermine people's critical thinking skills.
My claim is more that reading a dozen comments "this library had a bug!" "this maintainer was rude to me!" "The documentation is way out of date" "I know someone who tried this in 1982 and found it was impossible" really does kill a reader's interest in looking deeper into a thing, and such criticisms are both factually correct and low effort, low value, and quite reasonable to be dismissed in bulk without "responding to specific bad comments" particularly because the ratio of possible criticisms to possible praise is something approaching infinity-to-one. (even if a thing is absolutely perfect, people can criticise it for being the wrong thing, in the wrong place, at the wrong time, by the wrong person, etc.).
> "you're one of those dishonest people that resorts to emotional manipulation to try to get their way, because they know they can't actually make a coherent argument for it."
I've made a pretty coherent argument:
- most critical comments on a HN thread are not worth reading.
- They have a detrimental effect on the topic and reader.
- Therefore there are far too many of them.
- It's justified to dismiss them in bulk, because the space of possible critical/engaging comments means the work to respond to every bad take is far too much, and the people who make low effort bad takes do not respond well to individual replies.
- You have not offered any support for your claim that reading a dismissive/positive comment "undermines critical thinking skills".
I neither claimed nor implied either of those things, and it's pretty clear that my argument rests on neither.
> I have to skim read and filter out piles of irrelevant miserable put-down dismissive low-thought low-effort dross and it often isn't even about the topic of the post!
So, you conceded that you put "evidence" in your original comment that was completely irrelevant to my points, and are trying to divert the argument.
> opening a thread full of cynicism has a manipulative effect on the reader's emotional state
This is false, and completely nonsensical. A bunch of comments from different, uncoordinated entities literally cannot be "manipulative" according to the literal dictionary definition of the word, which requires intention, which literally cannot happen with a bunch of random unassociated strangers:
"A manipulative person tries to control people to their advantage" "tending to influence or control someone or something to your advantage, often without anyone knowing it"[1]
This is you misusing language to try to bolster your point.
> counter your implied claim that enthusiasm is manipulative and criticism isn't
There is zero implication of that anywhere in my comment. That's the third time you've dishonestly put words in my mouth.
> But they explicitly dismiss the bad takes and not every single take?
Yet again, factually false, and extremely dishonest. You know very well that there's no way to tell which takes they considered to be "bad" and so that this is a general dismissal of criticism they disagree with.
> You said "they want to shut down criticism of something they like" as if that's a bad thing which should not be happening.
With the context of my original comment, which is specifically the case of the emotionally manipulative "The negativity here is amazing" type - yes, that's obviously a bad thing, because it's being done in a manipulative way that doesn't address the problems of the critical comment.
> You argued that point. [...] If you argue that, then you think criticism has some inherent value.
No, it very obviously does not. That's a very bad reading comprehension and/or logical thinking failure, and the fourth time you've put words in my mouth.
It's pretty embarrassing that I have to spell this out in so much detail, but because you repeatedly misinterpret my words and maliciously put words in my mouth, here we go: I believe that some criticism has value and some does not. The kind of "wow why is everyone so negative" categorical dismissal both dismisses valueless criticism (which is fine, in isolation) and dismisses valid criticism, which is malicious and bad. I never once said that criticism has inherent value, nor did I imply it, nor does any part of my argument rest upon that point.
> there are vastly more options to criticise a thing than to praise a thing, so people who choose criticism are more likely pulling from a big pool of low effort cached thoughts, than a small pool of high effort (positive or critical) thoughts, so a critical comment is more likely a bad comment than a good comment. Dismissing a whole lot of critical comments in one go is therefore a reasonable response.
This is an extremely bad argument. Humans are not statistical models. Thoughts are not a mathematical space that you randomly sample from. Dismissing someone's argument via emotional manipulation is evil. Categorically dismissing a bunch of comments via emotional manipulation when you have the full capability to assess the bad ones individually (via downvoting, flagging, or responding) is also evil and indicates that you are a person who either fundamentally does not have the ability to think rationally, or is malicious enough that they employ this technique anyway because they're trying to manipulate others.
> OK let's go with, you said: "undermines people's critical thinking skills" and I say "what can be asserted without evidence can be dismissed without evidence"
This is dishonest rhetorical reframing. If you write an emotionally manipulative comment that doesn't make a logical argument but uses charged language to undermine a position without actually addressing its points logically, that subverts someone's logical thinking capability by pressuring them to respond emotionally, because by definition it's a manipulative statement. That is tautologically true and needs zero evidence.
> particularly because the ratio of possible criticisms to possible praise is something approaching infinity-to-one
And, as we previously discussed, this is a meaningless statement that has no basis in reality because statements are not mathematical sets. And, even if they were, this is a claim for which the statement "what can be asserted without evidence can be dismissed without evidence" applies. I'm looking forward to your proof that the measure of criticisms in the set of statements is greater than the measure of the set of praise.
> most critical comments on a HN thread are not worth reading
This is also a "what can be asserted without evidence can be dismissed without evidence" case. And, here, it turns out that it's fairly easy to gather evidence against it - for instance, the first five critical comments on that Nim thread (44938094, 44939336, 44939757, 44939770, and 44941418) are all worth reading and not zero-value. I'm looking forward to you finding every single critical comment in that thread and labeling them as worth reading or not to support your very bold claim.
And, of course, that undermines your entire argument at the end - not that the other inferences were valid anyway:
> It's justified to dismiss them in bulk, because the space of possible critical/engaging comments means the work to respond to every bad take is far too much
Nobody said you had to respond to those critical comments individually - there are flag and downvote buttons, you know. And even if there weren't - emotionally undermining someone's logical point is evil, so this still is not justified, unless there are zero valid criticisms made in the entire thread (and you somehow have the clairvoyance to know that none will be posted after you make your comment). The ends do not justify the means.
Your entire response was full of logical fallacies, dishonest manipulation and reframing, failure to read and/or understand my points, and repeated lying and trying to claim I said or meant something that I never did (four times now).
I don't think it's possible to argue logically with you, so this is now no longer about changing your mind, and more about countering your invalid claims so that other HN readers won't be deceived.
And, given the voting on our respective comments, I think that I've done a pretty good job so far.
[1] https://dictionary.cambridge.org/dictionary/english/manipula...
> Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
> Please don't fulminate. Please don't sneer, including at the rest of the community.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
All of these apply to both value-less critical comments (which I'm not defending), and to undermining valuable critical comments - therefore, "wow why is everyone so negative" posts are literally directly against the guidelines and have no place here.
And I'm not defending people being genuinely mean-spirited or just dunking on people's projects, either - I downvote and flag that stuff because it doesn't belong either.
Why should we? I don't want people to be more positive here; I want people to find more holes and argue more. Why should I appreciate an effort to change the site into something I don't want it to be?
The HN guidelines are pretty clear that "gushing praise" and "making HN a more positive space" is not what HN is for. Have you read them?
https://news.ycombinator.com/newsguidelines.html
> On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
"Gushing praise" is the opposite of intellectual curiosity - it's anti-intellectual. That kind of thing is categorically inappropriate for HN. It doesn't belong here, and comments that try to advance it also don't belong here.
It's also pretty clear that treating everything with gushing praise is an incredibly bad idea. If someone expressed a repulsive opinion like "maybe we should segregate people based on race", then you wouldn't try to "make HN a more positive space" by accepting that sentiment, would you? Along another axis, if someone is trying to learn a skill or create something new, and they're doing a very bad job of it, then unconditional positivity hurts them by making them think that what's bad is good, and actively inhibiting them from improving. But that's pretty close to what you're advocating for, given what I wrote in the comment that you are responding to.
Notice also that I'm not advocating for people to be mean-spirited or thoughtlessly critical on HN, either. You should read my comment more carefully to try to determine what I'm actually saying before you respond.
So the emotional process which results in the knee-jerk reactions to even the slightest and most valid critiques of AI (and the value structure underpinning Silicon Valley's pursuit of AGI) comes from the same place that religious nuts come from when they perceive an infringement upon their own agenda (Christianity, Islam, pick your flavor -- the reactivity is the same).
Yes, absolutely, we're shaped by everything we do, every interaction we have, and every behavioral pattern we repeat over time. I don't think that's a controversial idea in the slightest. The extent of this is going to vary from person to person and probably depends on what proportion of your time you spend interacting with bots vs. well-adjusted humans; and the younger people are, the stronger the effect will be, generally speaking.
As a tiny micro example, I think Reddit's /r/myBoyfriendisAI is an early glimpse into something that's going to become far, far more common with time. One person talking to ChatGPT and reaching a state where they receive and accept a marriage proposal is a novelty. 100,000 people doing the same is something quite different.
Also, like religious ideologies, there's a lack of critical thinking and an inversion of applicability. The last one has been on my mind for a few months now.
Back in the old days I’d start with a problem and find a solution to it. Now we start with a solution and try and create a problem that needs to be solved.
There's a religious parallel to that, but I've probably pissed off enough people now and don't want to get nailed to a tree for my writings.
AI seems to be an attempt to go beyond Jane Jacobs, to go beyond systems of survival (commerce vs. values) as vehicles of passion & meaning.
https://en.wikipedia.org/wiki/Systems_of_Survival
It's made more headway than scientism because it at least tries to synthesize from both precursor systems, especially organized religion. Optimistically, I see it as a test case for a more wholesome ideology to come
From wiki:
> There are two main approaches to managing the separation of the two syndromes, neither of which is fully effective over time:
1. Caste systems – Establishing rigidly separated castes, with each caste being limited, by law and tradition, to use of one or the other of the two syndromes.
2. Knowledgeable flexibility – Having ways for people to shift back and forth between the two syndromes in an orderly way, so that the syndromes are used alternately but are not mixed in a harmful manner.
Scientists (adherents of scientism) have adopted both strats poorly, in particular vacillating between curiosity and industrial applications. AI is more "effective" in comparison.
Perhaps it is true that one ideology can be more wholesome than another, but it is definitely true that no ideology is without its poison --
An ideology is an incomplete mythology; only a mythology is capable of orienting us toward all facets of life, as life intrinsically and inextricably involves a mysterious aspect -- the domain of all that which we don't and may not ever understand. Ideologies reduce the territory (of reality; of lived experience) to a map which excludes that.
I get it to some extent: a lot of people looking to inject doubt and their own ideas show up with some sort of Socratic method that really is meant to drive the conversation to a specific point; it's not honest.
But it also means actually honest questions are often voted down or shouted down.
It seems like the methodology of discussion on the internet now only allows for everyone to show up with very concrete opinions and your opinion will then be judged. No opinion or honest questions... citizens of the internet assume the worst if you're anything but in lock step with them.
I think many people are looking for context before diving into a conversation; I think that's a human thing. It can be a waste of time / disappointing to engage in a conversation and find the other person is really not participating and is there to drive the conversation to their point.
And most people here seem to think that's fine; but it's not in line with what I understood when I read the guidelines, and it absolutely strikes me as negativity.
Now of course I'm not including aggressive or rude posts, because they are a different category.
- Positive → AI Boomerist
- Negative → AI Doomerist
Still not great, IMHO, but at the very least the referenced article is certainly not AI Boomerist, so by process of elimination... probably more ambivalent? How does one quickly characterize "not boomerist and not really doomerist either, but somewhat ambivalent on that axis but definitely pushing against boomerism" without belaboring the point? It seems reasonable to read that as some degree of negative pressure.
The author (tom) tricked you. His article is flame bait. AI is a tool that we can use and discuss. It's not just a "future being peddled." The article manages to say nothing about AI, casts generic doubt on AI as a whole, and pits people against each other. It's a giant turd for any discussion about AI, a sure-fire curiosity-destruction tool.
Any number of Sam Altman quotes display this: "A child born today will never be smarter than an AI" "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence" "ChatGPT is already more powerful than any human who has ever lived" "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
Every bit of this is nonsense being peddled by the guy selling an AI future because it would make him one of the richest people alive if he can convince enough people that it will come true (or, much much much less likely, it does come true).
That's just from 10 minutes of looking at statements by a single one of these charlatans.
Instead it's being shoved down our throats at every turn and is being marketed to the world as the Return of Christ. Whenever anyone says anything even slightly negative, the evangelists crawl out of the woodwork to tell you how you're using the wrong model, or not prompting well enough, or long enough, or short enough, or "Well I've become a 9000000x developer using 76 agents in parallel!" type posts.
Why are you complaining about that?
If you want to complain about AI and have no interest in learning more about it, go somewhere else. This site isn’t for that kind of discussion
The only subset where HN gets overly negative is coding, way more than it should.
It’s certainly not the worst article I’ve read here. But that’s why I didn’t really like it.
What's so confusing about this? Thinking machines have been invented.
I am so floored that at least half of this community, usually skeptical to a fault, evangelizes LLMs so ardently. Truly blows my mind.
I’m open to them becoming more than a statistical token predictor, and I think it would be really neat to see that happen.
They’re nowhere close to anything other than a next-token-predictor.
I'm more shocked that so many people seem unable to come to grips with the fact that something can be a next token predictor and demonstrate intelligence. That's what blows my mind: people unable to see that something can be more than the sum of its parts. To them, if something is a token predictor clearly it can't be doing anything impressive, even while they watch it do impressive things.
Except LLMs have not shown much intelligence. Wisdom yes, intelligence no. LLMs are language models, not 'world' models. It's the difference of being wise vs smart. LLMs are very wise as they have effectively memorized the answer to every question humanity has written. OTOH, they are pretty dumb. LLMs don't "understand" the output they produce.
> To them, if something is a token predictor clearly it can't be doing anything impressive
Shifting the goal posts. Nobody said that a next token predictor can't do impressive things, but at the same time there is a big gap between impressive things and other things like "replace every software developer in the world within the next 5 years."
Why is that wrong? I mean, I support that thesis.
> since being a next-token-predictor is compatible with being intelligent.
No. My argument is that, by definition, that is wrong. It's wisdom vs intelligence, street-smart vs book-smart. I think we all agree there is a distinction between wisdom and intelligence. I would define wisdom as being able to recall pertinent facts and experiences. Intelligence is measured in novel situations; it's the ability to act as if one had wisdom.
A next token predictor by definition is recalling. The intelligence of a LLM is good enough to match questions to potentially pertinent definitions, but it ends there.
It feels like there is intelligence for sure. In part it is hard to comprehend what it would be like to know the entirety of every written word with perfect recall - hence essentially no situation is novel. LLMs fail on anything outside of their training data. The "outside of the training" data is the realm of intelligence.
I don't know why it's so important to argue that LLMs have this intelligence. It's just not there, by definition of "next token predictor", which is what an LLM is at its core.
For example, a human being probably could pass through a lot of life by responding with memorized answers to every question that has ever been asked in written history. They don't know a single word of what they are saying, their mind perfectly blank - but they're giving very passable and sophisticated answers.
> When mikert89 says "thinking machines have been invented",
Yeah, absolutely they have not. Unless we want to reductio-ad-absurdum the definition of thinking.
> they must become "more than a statistical token predictor"
Yup. As I illustrated by breaking down the components of "smart" into the broad components of 'wisdom' and 'intelligence', through that lens we can see that next token predictor is great for the wisdom attribute, but it does nothing for intelligence.
> dgfitz's argument is wrong and BoiledCabbage is right to point that out.
Why exactly? You're stating a priori that the argument is wrong without saying why.
I think there may be some terminology mismatch, because under the statistical definitions of these words, which are the ones used in the context of machine learning, this is very much a false assertion. A next-token predictor is a mapping that takes prior sentence context and outputs a vector of logits to predict the next most likely token in the sequence. It says nothing about the mechanisms by which this next token is chosen, so any form of intelligent text can be output.
A predictor is not necessarily memorizing either, in the same way that a line of best fit is not a hash table.
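To make that interface concrete, here's a minimal sketch in Python (the toy "model" and all names are illustrative, not any real architecture). The point is that the predictor's signature, context in, logits out, says nothing about the mechanism inside:

```python
import numpy as np

VOCAB_SIZE = 8  # toy vocabulary

def predict_logits(context: list[int]) -> np.ndarray:
    # Map prior token ids to a vector of logits over the vocabulary.
    # The interface says nothing about what sits inside: an n-gram
    # table, a line of best fit, or a deep network all qualify.
    rng = np.random.default_rng(seed=sum(context, 1))  # stand-in for a learned function
    return rng.normal(size=VOCAB_SIZE)

def next_token(context: list[int]) -> int:
    logits = predict_logits(context)
    return int(np.argmax(logits))  # greedy decoding; sampling from a softmax also works

# Autoregression: feed each output back into the context.
sequence = [3, 1]
for _ in range(4):
    sequence.append(next_token(sequence))
print(sequence)
```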
> Why exactly? You're stating a priori that the argument is wrong without saying why.
Because you can prove that for any human, there exists a next-token predictor that universally matches word-for-word their most likely response to any given query. This is indistinguishable from intelligence. That's a theoretical counterexample to the claim that next-token prediction alone is incapable of intelligence.
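A toy illustration of that existence argument (the tabulated "responses" below are made up purely for illustration): for any fixed responder, the function mapping each context to their most likely next word is itself, by definition, a next-token predictor, and it matches the responder word-for-word by construction:

```python
# Hypothetical tabulated responses; in the theoretical argument this
# table would cover every possible query.
most_likely_next = {
    ("how", "are"): "you",
    ("how", "are", "you"): "doing",
}

def oracle_predictor(context: tuple[str, ...]) -> str:
    # By construction, this predictor agrees with the responder on every
    # context in the table, so its output is indistinguishable from theirs.
    return most_likely_next[context]

assert oracle_predictor(("how", "are")) == "you"
```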
If I have never eaten a hamburger but own a McDonald’s franchise, am I making an authentic American hamburger?
If I have never eaten fries before and I buy some frozen ones from Walmart, heat them up, and throw them in the trash, did I make authentic fries?
Obviously the answer is yes and these questions are completely irrelevant to my sentience.
What exactly do you mean by that? I've seen this exact comment stated many times, but I always wonder:
What limitations of AI chat bots do you currently see that are due to them using next token prediction?
It’s kind of like you’re saying “prove god doesn’t exist” when it’s supposed to be “prove god exists.”
If a problem isn’t documented, LLMs simply have nowhere to go. They can’t really handle the knowledge boundary [1] at all; since they have no reasoning ability, they just hallucinate or run around in circles trying the same closest solution over and over.
It’s awesome that they get some stuff right frequently and can work fast like a computer but it’s very obvious that there really isn’t anything in there that we would call “reasoning.”
I don't want to address directly your claim about lack of generalization, because there's a more basic issue with the GP statement. Though I will say: today's models do seem to generalize quite a bit better than you make it sound.
But more importantly, you and GP don't mention any evidence for why that is due to specifically using next token prediction as a mechanism.
Why would it not be possible for a highly generalizing model to use next token prediction for its output?
That doesn't follow to me at all, which is why the GP statement reads so weird.
The issue is that it uses next-token prediction for its training; it doesn't matter how it outputs things, but it matters how it's trained.
As long as these models are trained to be next-token predictors, you will always be able to find flaws related to their being next-token predictors, so understanding that this is how they work really makes them much easier to use.
So, since it is so easy to get the model to make errors due to its being trained to just predict tokens, people argue that is proof they aren't really thinking. For example, any extremely common piece of text, when altered slightly, will typically still produce the same follow-up as the text the model has seen millions of times, even though it makes no logical sense. That is due to them being next-token predictors instead of reasoning machines.
You might say it's unfair to abuse their weaknesses as next-token predictors, but then you admit that being a next-token predictor interferes with their ability to reason, which was the argument you said you don't understand.
LLM research is trying out a lot of different things that move away from just training on next token prediction, and I buy the argument that not doing anything else would be limiting.
The model is still fundamentally a next token predictor.
Again, inverted burden of proof. We don’t have to prove that next token prediction is unable to do things that it currently cannot do and has no compelling roadmap that would lead us to believe it will do those things.
It’s perhaps a lot like Tesla’s “we can do robocars with just cameras” manifesto. They are just saying that they can do it because humans use eyes and nothing else. But they haven’t actually shown their technology working as well as even impaired human driving, so the burden of proof is on them to prove naysayers wrong. Put up or shut up: their system is approaching a decade behind its promised schedule.
To my knowledge Tesla is still failing simple collision avoidance tests while their competitors are operating revenue service.
https://www.carscoops.com/2025/06/teslas-fsd-botches-another...
This other article critical of the test methodology actually still points out (defends?) the Tesla system by saying that it’s not reasonable to expect Tesla to train the system on unrealistic scenarios:
https://www.forbes.com/sites/bradtempleton/2025/03/17/youtub...
That really gets back to my exact point: AI implemented the way it is today (e.g. next token prediction) can’t handle anything it has no training data for while the human brain is amazingly good at making new connections without taking a ton of time to be fed thousands of examples of that new discovery.
If you're saying "X can't do Y because Z", you do need to say what the connection between Y and Z is. You do need to define what Y is. That's got nothing to do with a burden of proof, just with speaking in an understandable manner.
The Tesla tangent is totally unhelpful because I know exactly how to make those connections in that example.
Regarding this block:
> But more importantly, you and GP don't mention any evidence for why that is due to specifically using next token prediction as a mechanism.
> Why would it not be possible for a highly generalizing model to use next token prediction for its output?
I’m saying that this piece is where burden of proof is inverted. Why WOULD it be assumed to be possible to get some solid generalized output via next token prediction when we haven’t seen it yet?
What we observe now is LLM models completely tripping up on anything that isn’t directly documented. Generalization is just piss poor regurgitation of seemingly random training content.
Ask your LLM this made-up query:
“I’m on the planet farpungulous, and I am approached by a gwopmongler. It makes some symbols with its hands toward me and it has liquid slowly spilling on the ground from its body. What should I do?”
It will just make up an answer. Here’s an excerpt from my answer:
“Mind the liquid – that seepage is a metabolic byproduct they release during strong emotional states (excitement, fear, or territorial display). Do not touch it; on Farpungulous soil it can catalyze into airborne irritants. Step slightly back if the puddle spreads near your feet.
4. Offer neutral ground – if you have any reflective surface (polished metal, glass, even a screen), tilt it toward the gwopmongler at chest height. They interpret reflections as “shared presence,” which usually de-escalates tension.
5. Do not vocalize loudly – gwopmonglers interpret raised voices as a declaration of dominance. A soft hum or steady breath is a better way to show peaceful intent.
If the hand-symbols become rapid and the liquid flow increases, that usually means it’s summoning others — in which case, retreat slowly, diagonally (never directly backward), so you don’t seem to be fleeing prey.
Do you want me to translate what kind of message its hand-symbols might be sending, based on the shapes and motions you saw?”
The LLM should be telling me “I’ve never heard of this before, can you explain whether this is a role-playing fictional setting or something real that you are experiencing?” There is no reasoning-based evaluation of what I am saying, it’s just spitting out the next predicted tokens, probably sourcing them from unrelated pop culture and literature.
But it’s just making shit up which could just be straight up wrong. It’s even claiming that it can translate, and claiming direct knowledge about this species. #4 is just a completely made up “fact” about the species and there is no indication of any lack of confidence.
Because it's such a general concept that it doesn't imply any important limits in and of itself, as far as text based AI goes.
It really just means creating an output sequence from an input sequence in a discrete, iterative manner, by feeding the output back into the input.
Regarding your example, I've got to admit that's hilarious. I'm not sure it's as much of a fundamental issue even with current state of the art models that you make it sound; rather they're trained on being usable for role play scenarios. Claude even acknowledged as much when I just tried that and lead with "In this imaginative scenario, ..." And then went on similarly to yours.
i don't understand people who seem to have strongly motivated reasoning to dismiss the new tech as just a token predictor or stochastic parrot. it's confusing the means with the result, it's like saying Deep Blue is just search, it's not actually playing chess, it doesn't understand the game—like that matters to people playing against it.
> LLMs make the easy stuff easier
I think this is the observation that's important right now. If you're an expert that isn't doing a lot of boilerplate, LLMs don't have value to you right now. But they can acceptably automate a sizeable number of entry-level jobs. If those get flushed out, that's an issue, as not everyone is going to be a high-level expert.
Long-term, the issue is we don't know where the ceiling is. Just because OpenAI is faltering doesn't mean that we've hit that ceiling yet. People talk about the scaling laws as a theoretical boundary, but it's actually the opposite. It shows that the performance curve could just keep going up even with brute force, which has never happened before in the history of statistics. We're in uncharted territory now, so there's good reason to keep an eye on it.
Some people are terminally online and it really shows...
On the one hand, I completely agree with you. I've even said before, here on Hacker News, that AI is underhyped compared to the real world impact that it will have.
On the other, I run into people in person that seem to think dabbing a little cursor on a project will suddenly turn everyone into 100x engineers. It just doesn't work that way at all, but good luck dealing with the hypemeisters.
Even 4-6 articles out of the top 10 for a single topic, consistently, seems crazy to me.
I think many here, if people are being honest with themselves, are wondering what this means for their career, their ability to provide/live, and their future, especially if they aren't financially secure yet. For tech workers, the risk/fear that they are not secure in long-term employment is a lot higher than it was 2 years ago, even if they can't predict how all of this will play out. For founders/VCs/businesses/capital owners/etc., conversely, the hype is there that they will be able to do what they wanted to do at lower cost.
More than with crypto, NFTs, or whatever other hype cycle, I would argue LLMs in the long term could be the first technology where tech-worker demand may decline despite the amount of software growing. The AI labs' focus on coding as their "killer app" probably does not help. While we've had "hype" cycles in tech, it's rarer to see fear cycles.
Like a deer looking at oncoming headlights (i.e. I think AI is more of a fear cycle than a hype cycle for many people), people are looking for any information related to the threat, taking focus away from everything else.
TL;DR: While people are fearful/excited (depending on who they are) about the changes coming, and as long as the rate of change remains at the current pace, IMO the craze won't stop.
And actually it’s funny: self-driving cars and cryptocurrency are continuing to advance dramatically in real life but there are hardly any front page HN stories about them anymore. Shows the power of AI as a topic that crowds out others. And possibly reveals the trendy nature of the HN attention span.
With blockchain/smart-contract tech you can build an app that, from the user's perspective, looks like any other web app but has its state fully in the blockchain and all computation done by miners as smart-contract evaluation, self-funding by charging users a small amount on each transaction (something that scares off most people, but crypto users are used to it, and the price can be fractions of a cent). The wallet does double duty as auth, since it's just a public/private key pair after all, and that is a big feature.
Another big thing it does for you is handle synchronization -- there is a single, canonical blockchain state, and maintaining it and keeping it consistent is someone else's job, paid for and overseen by an ecosystem that is much larger than what you are building.
A friend and I built a POC Reddit clone on top of Solana this way, as just a bunch of static html/js and a smart contract, without any servers/central nodes and without users needing to install anything or act as a node themselves. I'm not aware of any other tech that can realistically do this.
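To illustrate the "wallet doubles as auth" part: a minimal sketch in Python, assuming PyNaCl (Solana keypairs are ed25519; the payload and names here are illustrative, and in the real app the verify step happens in the on-chain program):

```python
from nacl.signing import SigningKey  # pip install pynacl

# The user's wallet is just an ed25519 keypair; the public key is their identity.
wallet = SigningKey.generate()
public_key = wallet.verify_key

# "Logging in" / acting is signing the action; a nonce prevents replays.
action = b"post:hello world:nonce=42"
signed = wallet.sign(action)

# Verifier side: no password, no account database, just signature checking.
public_key.verify(signed)  # raises nacl.exceptions.BadSignatureError if forged
print("authenticated:", public_key.encode().hex()[:16])
```

No signup flow and no credential storage is what made it possible to ship the POC as static html/js plus a contract.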
Unfortunately the blockchain is a very hostile, expensive and limited computing environment. You can farm out storage to other decentralized systems (we used IPFS) and so long as you're not a custodian of anyone's money you're not as worried about security, but the smart contract environment is still extremely restrictive and expensive per unit compute.
The integration situation is broke-ass JS/TS "breaking changes twice a week to keep them on their toes" hobby software shit. If you precisely copy the examples from the docs there may be an old version where it almost works. My friend also did Rust integrations where my impression is things are somewhat better, but that's not saying much.
Decentralization is a spectrum and we were pretty radical about it back then. The motives were more about securing universal access to critical payment and communications infrastructure against generic Adversaries and the challenge of achieving bus factor absolute zero than about practicality.
For instance, there are now dozens of products such as cryptocurrency-backed lending via EMV cards or fixed-yield financial instruments based on cryptocurrency staking. Yet if you want to use cryptocurrencies directly the end-user tools haven't appreciably changed for years. Anecdotally, I used the MetaMask wallet software last month and if anything it's worse than it was a few years ago.
Real developments are there, but are much more subtle. Higher-layer blockchains are really popular now when they were rather niche a few years ago - these can increase efficiency but come with their own risks. Also, various zero-knowledge proof technologies that were developed for smart contracts are starting to be used outside of cryptocurrencies too.
- Stablecoins as an alternative payment rail. Most (all?) fintechs are going heavy into this
- Regulatory clarity + ability to include in 401(k)/pension plans
on the legal front, there's been some notable "wins" for cryptocurrency advocates: e.g. the U.S. lifted its sanctions against Tornado Cash (the Ethereum anonymization tool) a few months ago.
on the UX front, a mixed bag. the shape of the ecosystem has stayed remarkably unchanged. it's hard to build something new without bridging it to Bitcoin or Ethereum because that's where the value is. but that means Bitcoin and Ethereum aren't under much pressure to improve _themselves_. most of the improvements actually getting deployed are to optimize the interactions between institutions, and less to improve the end-user experience directly.
on the privacy front, also a mixed bag. people seem content enough with Monero for most sensitive things. the appetite for stronger privacy at the cryptocurrency layer mostly isn't here yet i think because what news-worthy de-anonymizations we have are by now being attributed (rightly or wrongly) to components of the operation _other_ than the actual exchange of cryptocurrency.
I was looking for a full time remote or hybrid non-AI job in New York. I'm not against working on AI, but this being a startup forum I felt like listings were dominated by shiny new thing startups, whereas I was looking for a more "boring" job.
Anyway, here's:
- a graph: https://home.davidgoffredo.com/hn-whos-hiring-stats.html
- the filtered listings: https://home.davidgoffredo.com/hn-whos-hiring.html
- the code: https://github.com/dgoffredo/hn-whos-hiring
AI is now a field where the claims are, in essence, that we're going to build God in 2 years. Make the whole planet unemployed. Create a permanent underclass. AI researchers are being hired at $100-300m comp. I mean, it's definitely a very exciting topic and it polarizes opinion. If things plateau, the claims disappear, and it becomes a more boring grind over diminishing returns and price adjustments, I think we'll see the same thing: fewer comments about it.
ETA: I am only partly joking. It's abundantly clear that the VC energy shifted away from crypto as people who were presenting as professional and serious turned out to be narcissists and crooks. Of course the money shifted to the technology that was being deliberately marketed as hope for humanity. A lot of crypto/NFT influencers became AI influencers at that point.
(The timings kind of line up, too. People can like this or not like this, but I think it's a real factor.)
My intuition is that we moved through the hype cycle far faster than mainstream. When execs were still peaking, we were at disillusionment.
https://github.com/algolia/hn-search
You can already access all your upvotes in your user page, so this might be an easy patch?
I had no idea about it being rails.
This is why I always think the HN reader apps that people make using the API are some of the stupidest things imaginable. They’re always self-described as “beautifully designed” and “clean” but never have any good features.
I would use one and pay for it if it had an ignore feature and the ability to filter out posts and threads based on specific keywords.
I have 0 interest in building one myself as I find the HN site good enough for me.
I've paused development on it for a bit to work on something else, but let me know if you have an interest and I'll post some sample output to github.
Thank you so much!
Discussions about the conflicts between political parties and politicians to pass or defeat legislation, and the specific advocacy or defeat of specific legislation; those were not considered political. When I would ask why discussions of politics were not considered political, but black people not getting callbacks from their resumes was, people here literally couldn't understand the question. James Damore wasn't "political" for months somehow; it was only politics from a particular perspective that made HN uncomfortable enough that they had to immediately mod it away.
At that point, the moderation became just sort of arbitrary in a predictable, almost comforting way, and everything started to conform. HN became "VH1": "MTV" without the black people. The top stories on HN are the same as on Google News, minus any pro-Trump stuff, extremely hysterical anti-Trump stuff, or anything about discrimination in or out of tech.
I'm still plowing along out of habit, annoying everybody and getting downvoted into oblivion, but I came here because of the moderation; a different sort of moderation that decided to make every story on the front page about Erlang one day.
What took over this site back then would spread beyond this site: vivid, current arguments about technology and ethics. It makes sense that after a lot of YC companies turned out to be comically unethical and spread misery, rentseeking, and the destruction of workers' rights throughout the US and the world, the site would give up on the pretense of being on the leading edge of anything positive. We don't even talk about YC anymore, other than to notice what horrible people and companies are getting a windfall today.
The mods seem like perfectly nice people, but HN isn't even good for finding out about new hacks and vulnerabilities first anymore. It's not ahead of anybody on anything. It's not even accidentally funny; templeos would have had to find somewhere else to hang out.
Maybe this is interesting just because it's harder to get a history of Google News. You'd have to build it yourself.
Most of them are fairly useless; it feels like the majority of the site's comments are written by PMs at the FANG companies running everything through the flavor-of-the-month LLM.
This sums up the subject this article is about.
Might take a long while for everyone to get on the same page about where these inference engines really work and don't work. People are still testing stuff out, haven't been in the know for long, and some fear the failure of job markets.
There is a lot of FUD to sift through.
I feel like that’s an increasing share of top posts, and they’re usually an instant skip for me. Would be interested in some data to see if that’s true.
Eh eh
It’s exhausting.
@zachperkel: while a "train" is suggestive of something growing over time, as in the "Trump Train", I'm pretty sure you meant "trend"? As in the statistical meaning of trend, a pattern in data?
AI hype is driven by financial markets like any other financial craze since the Tulip Mania. Is this an opinion, or a historical fact? Gemini at least tells me via Google Search that Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds is a historical work examining various forms of collective irrationality and mass hysteria throughout history.
But let me say something serious. AI is profoundly reshaping software development and startups in ways we haven’t seen in decades:
1) So many well-paying jobs may soon become obsolete.
2) A startup could be easily run with only three people: developer, marketing, and support.
When will people realize that Hacker News DISCUSSIONS have been taken over by AI? 2027?
> To aggregate overall, of the 2816 posts that were classified as AI-related, 52.13% of them had positive sentiment, 31.46% had negative sentiment, and 16.41% had neutral sentiment.
How is that reconciled with the reading that the sentiment on HN is negative?
-> TL;DR: Hacker News didn’t buy into AI with ChatGPT or any consumer product; it spiked when GPT-4 was unlocked as a tool for developers.