The increasing AI/LLM domination of the site has made it much less appealing to me.
I am wondering what the ratio is for VC and angel dealflow in the valley right now.
Hanging out on the "new" page and upvoting quality non-AI articles is an effective method of resistance.
A bigger impact for me has been the number of mentions of AI in the comments. It's not just that a large part of the front page is dominated by LLM hype posts, it's that every single post has at least one guy near the top somehow bringing AI into the discussion. I don't even care if it's "AI will fix this" or "haha, AI sucks at this too". I just don't want to hear anything about AI ever again.
I've started downvoting them, the same way I always downvote "I fed this to an LLM and here's what it spat out".
Genuinely curious: Why?
Don’t get me wrong, I upvoted this post, and would love to see AI separated out, or at least tagged (like a root comment suggests) so that I can filter them out if I want.
But I can’t say I’d never want to hear anything about AI ever again (though I’m headed in that direction).
What field are you in, and what are your interests, such that you’d want to visit HN without ever seeing mentions of AI?
The hype around it is ridiculous. I don't personally find it nearly as useful as people are saying, so everything feels like people are trying to gaslight me.
Don't get me wrong, it's cool tech. Amazing stuff. I just personally don't have much interest in it until it's much more reliable for the things I want to use it for.
And I'm really exhausted, tired of hearing about how this is going to replace people like me any minute now.
I'm kind of exhausted in general (year after year) of frankly unimaginative engineers who should know better, latching on to whatever is the latest soup of the month, and touting it here as the greatest human achievement since fire.
HN threads very often feel like whichever side posted first winds up dominating the thread, it's bizarre
I'll see the comments on one article massively pro-AI and all upvoted to the top, then look at another article and it's all the negative AI comments upvoted
It's weirdly echo-chambery on a post by post basis
Personally I just started treating this site as a sophisticated shitposting place and started actually talking about tech in group chats with friends who work in the industry. Increasingly I see folks refer to the content here in the same breath as Reddit so I don't think I'm the only one.
It's probably just a scale problem. When a website becomes big enough it becomes dominated by the folks with the most time to post and the most passionate opinions.
There are some people who are having genuine crises over this stuff, some of it existential, and some of it “wow I thought my friends had some basic agreements about the world that we actually don’t,” and seeing this stuff on the regular just fans these sorts of issues.
Also, in a simpler sense, there are a limited number of homepage spots, and if you don’t want to see a topic, it effectively shrinks your homepage. If HN only showed five stories to me it would be less useful than it is now.
Yes, I feel like all these shallow "[Someone] vibe-coded [thing] with AI using [Claude whatever]" articles are hitting the front page and muscling out other, more interesting ones. Just like the "[Common unix utility] re-written in Rust!" articles of years past.
https://ontology2.com/essays/HackerNewsForHackers/
I wrote that years ago, but I don't stand by that article because I don't feel that way anymore. I do stand by the sequel
https://ontology2.com/essays/ClassifyingHackerNewsArticles/
because that's the operating principle of YOShInOn which is something a little more sophisticated applied to RSS feeds and productized.
I'm a software engineer. I consider this some of the most important work of our generation. The hardware we've made today has unlocked control over the world that was, until now, impossible. We don't have to mechanically devise a way to make a clock that tracks the stars. We can just program it into a microchip, and it'll just do it. We don't have to manage untold thousands of people to calculate our taxes. We can write it into a computer and it can just do it. Forever and perfectly. We're just not applying it.
I've reached the point of despair. It's not an AI-doom kind of despair, where I believe that AI is going rogue or whatever. It's a much more pedestrian kind of despair. We have tremendous problems ahead of us, both when it comes to the climate and when it comes to just doing the things that society always has to do, and AI doesn't offer anything to any of the actual problems of society.
While people are dying of Ebola in Africa and Americans are dying because they can't pay for healthcare, we are talking about automating software development for ad-tech companies. It's embarrassing. This is my field, these are my people, and this is the best we have to offer.
I try to abstain from that despair by just not engaging with it. Either AI will happen and we'll take it from there, or it won't, and then we'll have wasted a lot of effort and will hopefully never have any credibility as an industry again. I can't make a difference in either of those outcomes, so I just want it to go away.
Let me make it clear though. I too love the math behind recent AI. I even love the engineering behind how we do fast GEMM on GPUs. The challenges are really fun technically. That just can't be what decides our direction.
I hope that answers it, at least a little. It's hard to fit such a large topic, rooted so deeply in me, into a comment. Thinking about the future in relation to these billion dollar companies and what they make does actually make me emotional.
And that triggers the culture war, because Urban/Rural and other major factions have wildly different experiences, incentives, and goals on these fronts. And anyone trying to tackle those real problems who is noticed by one side or the other will inevitably get attacked.
And rather than sit down and really consider what we (as a nation!) want overall, make compromises, and agree to work together, we’d rather sit in our comfortable air conditioned places and stab each other in the back over the internet - or just check out into a comfortable bubble.
And unfortunately that means that the real problems are escalating.
Fully agree, and in fact I find more stories I'm interested in that way than by looking at the front page. For whatever reason, I'm increasingly getting out of sync (interests-wise) with broader HN. So many stories I think are great HN material (and would have been a few years ago) languish with almost no activity.
So there are two reasons IMHO to browse new: Surface better stories to front page for engagement, and find better stories
Very common in computer science contexts. Young undergraduates always pick up the new tech first and make something that seems alien and wrong. It's not even the master's students.
Possibly the same Kiro - Agentic IDE post would have been as interesting to you as the launch of Atom or something related to VS Code, etc.
Res ipsa loquitur
I hang out in /ask and /asknew for my part.
PS: Hey, Paul... When are you going to close my 2021 issue[0], you already merged the pull request[1] :D
Come on, man!
Buy my AI/LLM RAG Agentic bot to handle pull-requests and follow-ups based on HN conversations.
If that is done first, we might not need to separate subjects.
HN lacks even the most basic aspects of human verification.
Like most it too will come to pass (as it is further adopted in the mainstream and becomes commonplace).
Most of "the political posts" seem to happen because someone shares a news article and everyone else uses it as an excuse to discuss the general topic (or at least something that gets general agreement as being the topic). I'm not really clear on how LLMs get involved there.
On the contrary, LLM-based AIs create a lot of new problems.
I enjoy the website as-is, and simply use search when I want to get to the topics that interest me.
One, let's be honest, HN won't do it; part of their secret sauce is that they don't change, and they know that.
Two, fragmenting the community would just reduce engagement and risk making both feel like a ghost town.
Three, LLMs are (one of) the forefronts of our industry. State of the art is advancing fast. It has properties that no one knows the best practises for. And it has implications that are wide ranging. To try and bury this because it has a lot of new developments goes against why most of us are on this site.
I believe in the meritocracy of the upvote button.
I've had the exact same feeling a lot over the past couple of years, and especially the last 6 months. I used to hit the front page and find 5 to 10 stories I was interested in. Exhausting those and needing to read the second or third page wasn't common. Now I find maybe one story I want, and I routinely will scan through 4 or 5 pages (down to 120 to 160) and only find a handful (4 or 5) that I want to read.
I've long found myself wishing for mini-HNs on different broad topics that interest me. Sadly this was the whole point/idea behind reddit. For example, besides the actual and venerable and loved real HN, I'd love an HN for:
1. Politics: Where disagreements are encouraged and any claims are challenged, but only with factual arguments/counterarguments, and any emotional arguments are moderated (basically how we encourage HN comments to be). There have been some reddit communities over the years doing this, but IME they frequently devolve into echo chambers. It almost always comes down to bad moderators.
2. General News: Where stuff that is of broad interest (and not really tech-related) can be posted and commented on in thoughtful ways. Particularly local news would be fun
3. <placeholder>: Had an idea and forgot it as I was making the list. Will edit and insert when I remember!
I've kind of accepted that my dream just can't work (at least, looking at Reddit as the great experimentation of that). People on the internet are just (generally speaking) incapable of consistently humanizing the user(s) on the other end, and proceed to treat others very poorly. Pride and inability to be wrong strongly exacerbate that tendency.
In my experience:
Most of them are basically designed to be echo chambers from the start — opposition is only admitted in to the extent that it allows easy targets to knock down. Most people just aren't that good at explaining why they believe what they believe, let alone making a convincing argument for it; so all you need to do is set up an environment where one side's position is the default.
There have been a few attempts at explicitly avoiding that problem. They do eventually collapse. But I don't think it's due to bad moderation. It's more that certain factions simply refuse to engage civilly and unemotionally with each other. They will see statements as inherently provocative that the other side genuinely consider matter-of-fact.
I was a moderator for a place like that once. It was remarkable to me how, on the "hot topics" that were polarizing and led to a lot of bans and suspensions, on one side people who were suspended would argue and whine and complain basically as long as we'd listen to them, maybe even the entire duration of the suspension, and they would never get it into their head what our standards were for respectful discourse; and they would even suggest that having such standards was inherently oppressive; and when they got back they would immediately go back to their old ways. And on the other side, people would basically say "LOL, see you on <suspension end date>" and disappear, and come back as promised, and behave themselves for a while.
And while there were a very few people who simply couldn't kick the habit of using slurs or other disparaging terms to refer to identifiable groups of people, there were far more — almost all on the opposite side — who simply couldn't kick the habit of openly insulting the people they were directly responding to. Or of insinuating negative character traits and hidden motivations not in evidence, or other such "dark hinting" as we call it. Or even just of using obnoxious, brutal sarcasm all the time when we expected people to speak plainly.
There’s only one other community I’ve encountered like it, run by a small liberal arts college.
From a signals perspective, HN is incredibly valuable. You get to watch in real time what’s capturing the minds of technically inclined readers. Sure, that means lots of lurkers and a few dominant topics (right now: AI). But that’s also kind of the point. HN works as a reflection of where the collective attention is, whether we like it or not.
Anyways...just two cents.
They always seem to take the form of "Should we divide this group into A and B, A stays here and B goes over there and that way everybody is happy"
Invariably the person who proposes this wants to remain in group A and will not be a participant in group B.
To me this seems like the subtext is "Those people are not welcome here, they are not like us. It's not like we have anything against them, we just don't want them ramming it down our throats"
Anyone is free to make a website with whatever content they want; they can invite people to it and grow their own community. Directing a community to divide to remove an element you dislike is an attempt to appropriate the established community.
It could just as easily be "I don't feel like there is a place here for me anymore and I wish I had another place to go"
People with that sentiment ask about what alternative places exist, some of them make their own places.
My post above mentioned something I notice on Reddit. I hardly ever visit Reddit these days. It doesn't really feel like the place for me now. I am not posting this comment on Reddit.
I don't think that's overall very true
Most of those people are just lonely and isolated, and that's a big part of why we are living in what people are calling a "loneliness epidemic"
It's easier than ever to make a new niche area. It's more difficult than ever to get your niche area discovered by others, because you are drowned out by the noise
It feels quite hopeless for many people in my experience
How does this sound? It’s about a religion not the people.
I don't disagree with this observation about Reddit. However, I feel HN readers are more topic-oriented. Folks really do come to HN to read the articles and then maybe get drawn into a discussion.
I grant there are some topics here that tend to be more engagement driven but on balance I think the above holds.
based on the number of comments i see that are oblivious to the actual content of the articles, i'm pretty sure the user flow is "Folks come to HN to read headlines and have a conversation, and then maybe get drawn into reading an article"
Past that, I don't see non-reading commenters being a dominant presence. Some topics draw a few more than normal but that's the worst of it.
Then the people wanting to filter "x" could just do it via simple grease monkey scripts or if HN natively supported it.
Sure, it wouldn't be perfect, but neither does it have to be.
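For what it's worth, a minimal sketch of such a userscript (assuming HN's current markup, where each story link sits in a .titleline span inside a .submission row; the term list is just an illustration):

```javascript
// Hypothetical userscript: hide HN front-page rows whose titles
// contain any excluded term as a whole word.
const EXCLUDED = ['llm', 'ai', 'agentic']; // illustrative list

// Pure helper: does this title contain any excluded term as a word?
function shouldHide(title, terms) {
  const words = title.toLowerCase().split(/\W+/);
  return terms.some(term => words.includes(term));
}

// DOM glue, guarded so the helper stays testable outside a browser.
if (typeof document !== 'undefined') {
  for (const link of document.querySelectorAll('.titleline > a')) {
    if (shouldHide(link.innerText, EXCLUDED)) {
      const row = link.closest('.submission');
      row.nextElementSibling.remove(); // the subtext row below the title
      row.remove();                    // the title row itself
    }
  }
}
```

Whole-word matching is deliberate: a substring filter on "ai" would also hide every title containing "email" or "maintain".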
Similar to nest usurpation with eusocial insects, this is by definition parasitism when the energy-redirection is unwanted or unavoidable.
In the specific case of AI it's way worse than the usual suspects where everyone is affected and so everyone has to have some opinion (looking at you, politics). Because even some rant about how much you hate AI is directly feeding it in at least 3 ways: first there's the raw data, then there's the free-QA aspect, then there's the free-advertisement aspect when others speak up to disagree with your rant. So yeah, even people who like some of the content sometimes quickly start to feel hijacked.
This is very hard to do. But hey, I'll give it a try.
Starting now a new community for AI-assisted coding: https://kraa.io/vibecoding
> vibecoding
These should not be deemed equivalent.
I find I can do that with granular enough subreddits, or the (maybe old) feature in Twitter where you could group people you follow into lists and see multiple "homepages".
This for me has solved the issue of dividing community, which at the least from a practical level can be tricky.
I've been exploring how to achieve this effect "on top" of HN lately, rather than by controlling followers, by popping a very simple AI filter on top that re-ranks it for me, and found it quite satisfying, but I'm not sure what the ultimate value/usecase might be.
I am truly tired of AI being rammed down my throat, not just via the tech news, but in article content (slop), in un-asked-for tech product features, and at my own tech job. The solution is not to divide the community and make people unwelcome, but to provide at least some minimal set of filters and ways to opt out of the hype frenzy. I don't want people to feel unwelcome, but I do wish there was a way to turn the AI firehose off.
If, say, a third to two thirds of the articles on any given front page, for months to years, do not fit this description - can you see how one's ability to find what they're looking for gets hampered?
Like yes, you can grow nice flowers on the beautiful fertile soil there, it just sucks we need to get rid of these protected grasslands harboring endangered species on top of it.
I do think it's worthwhile to occasionally have a discussion about what content we want to see, and if a particular topic is getting too much attention.
It's also totally reasonable for a group of people to not want their agenda hijacked.
So, IMO, let the discussion continue. Let's see what comes out of it.
Every day it was the same discussion over again, from someone who didn't bother to do a Google search or look at what was posted the day prior. After a week or so of seeing the same discussion over and over again, I stopped reading the news site.
Needless to say, it's important to occasionally have discussions like this. I also think we under-appreciate the amount of moderation that goes on here. Sometimes I look at the "new" feed and it is just loaded with lots and lots of nonsense, so I get that someone has to put their finger on the scale to keep the quality up.
I don't think the poster believes some kind of democracy could bring this about.
I do believe that by entertaining the idea, the subsequent discussion will be useful for moderators to get a feel of what their userbase thinks of the current state of things.
From my understanding, the soul of HN and what makes it what it is is the moderation - having discussions on issues is an efficient way to signal to them.
This is one of those things that is kind of hard to say without people getting triggered because of negative stereotypes but sometimes you have to stand up for principles and kick people out of social groups to keep a good thing going.
It shows you the Hacker News page with ai and llm stories filtered out.
You can change the exclusion terms and save your changes in localStorage.
o3 knocked it out for me in a couple of minutes: https://chatgpt.com/share/68766f42-1ec8-8006-8187-406ef452e0...
Initial prompt was:

    Build a web tool that displays the Hacker News homepage (fetched from the Algolia API) but filters out specific search terms, default to "llm, ai" in a box at the top but the user can change that list, it is stored in localStorage. Don't use React.

Then four follow-ups:

    Rename to "Hacker News, filtered" and add a clear label that shows that the terms will be excluded

    Turn the username into a link to https://news.ycombinator.com/user?id=xxx - include the comment count, which is in the num_comments key

    The text "392 comments" should be the link, do not have a separate thread link

    Add a tooltip to "1 day ago" that shows the full value from created_at
"Agents raid home of fired Florida data scientist who built Covid-19 dashboard"
"Confessions of an ex-TSA agent"
"Terrible real estate agent photographs"
etc etc
Llm maths? ;)
It's not hypocrisy or anything negative like that, but I do find it amusing for some reason.
Was it? I feel like it was clearly meant to be smug and inflammatory rather than useful in any meaningful way.
The prompt asks for "filters out specific search terms", not "intelligently filter out any AI-related keywords." So yes, a good example of the power of vibe coding: the LLM built a tool according to the prompt.
Eventually: using AI to build tools that use AI to escape AI using tools that use AI.
Few illustrations are so absurd yet feasible enough to depict as horrendous a reality as this.
No, you do not have to "stay up to date on AI stories"—if you see one, add the keyword to the list and move on. There are not as many buzzwords as you seem to be implying, anyways.
If you are dissatisfied, you are welcome to build your own intelligent version (but I am not sure this will be straightforward without the use of AI).
This submission we're commenting on could be about filtering out any data, not just AI stuff. Politics, crypto, AI etc. Or more minute like "Trump" "fracking" "bitcoin" etc.
In any of these scenarios, with a tool designed to filter out content based on limited context, when would you ever be perfectly satisfied?
would you like AI to help you build the perfect context-filter model?
Which is to say, filtering politics out is absurd, one person’s extreme politics is another’s default view of the universe.
It’s a similar kind of mindset.
Stop saying “literally”.
Not everyone has caught up.
It's another thing entirely when the way they're communicating is accurate and correct.
So if I want a front page free of LLM "agents" but also want to view stories about secret agents it will do that, right?
Wish it returned more unfiltered items tho.
OTOH, narrow solutions validate the broader solution, especially if there are a lot of them. Although in that case you invite a ton of "momentum" issues with ingrained user bases (and heated advocacy), hopelessly incompatible data models and/or UX models, and so on. It's an interesting world (in the Chinese curse sense) where such tools can be trivially created. It's not clear to me that fitness selection will work to clean up the landscape once it's made.
Even ones with detailed specs and the human agreed to them don't come back exactly as written.
That's at least 5 JIRA tickets.
I don't think I need a privacy policy since the app is designed so that nothing gets logged anywhere - it works by hitting the Algolia API directly from your browser, but the filtering happens locally and is stored in localStorage so nobody on earth has the ability to see what you filtered.
The API it uses is https://hn.algolia.com/api/v1/search?tags=front_page - which is presumably logged somewhere (covered by Algolia's privacy policy) but doesn't serve any cookies.
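For reference, the client-side part can be sketched in a few lines (the endpoint and the hits/title fields are from the Algolia HN API; the substring-matching choice here is my assumption, not necessarily exactly what the tool does):

```javascript
// Client-side exclusion over Algolia HN API hits, all in the browser.
function excludeByTerms(hits, terms) {
  const lowered = terms.map(t => t.toLowerCase());
  return hits.filter(hit =>
    !lowered.some(term => (hit.title || '').toLowerCase().includes(term)));
}

// Browser-side usage (network call, so guarded):
if (typeof fetch !== 'undefined' && typeof document !== 'undefined') {
  fetch('https://hn.algolia.com/api/v1/search?tags=front_page')
    .then(r => r.json())
    .then(data => {
      const kept = excludeByTerms(data.hits, ['llm', 'ai']);
      console.log(kept.map(hit => hit.title));
    });
}
```

Since the filtering runs entirely in the browser, the excluded terms never leave localStorage.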
> Why do you do these demos if you aren't upfront about all the things the LLMs didn't do?
What do you mean by that?
One decision I had to make was whether the site should update in real time or be curated only. Eventually, I chose the latter because my personal goal is not to read every new link, but to read a few and understand them well.
(The fact that I wrote it using AI doesn't really matter, but I personally found it amusing so I included the prompts.)
Given that it is a poorly implemented solution that doesn't really do what the OP asked, yes it is.
https://github.com/simonw/tools/commit/ccde4586a1d95ce9f5615...
I don’t think it’s wrong, but I also don’t think we can really “AI generate” our way into better communities.
Love it. :D
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
The problem is, most of it really is (as it boils down to "I am / $COMPANY is using an LLM to do something; here's how you can do it too, and / or some pundit's opinion of the implications for the industry"). And the stuff that wouldn't be (like how they work, or statistics and benchmarks), often requires relatively specific domain knowledge to really appreciate.
There's always a flavor of the month. Go back 3-5 years and every third post was crypto or NFT related. AI/LLM too will pass.
I've never really understood this desire of people to effectively hide content that doesn't interest them. Just... ignore it. Like there are enough people on HN who really care about academia and research. I don't. But that's fine. Let them be.
But here's the interesting part: so many on HN rail against the newsfeed concept. You will hear a significant number of HNers say they just want everything in chronological order. Well, except for the subjects that don't interest them.
If HN submissions were tagged and a recommendation algorithm decided what to show you, you'd get exactly what you want: fewer AI/LLM posts if that doesn't interest you. But somehow newsfeeds are still bad?
It's not supposed to be zero-sum — posting volume isn't limited, or at least I assume we're nowhere near what the servers can technically handle — but attention span is limited. Seeing a front page full of things you aren't interested in makes it harder to find the things you are interested in, and feels discouraging if you want to post one of those things (an unfortunate feedback loop).
HN is probably the best source of informed, critical takes on AI/LLM content and that is super valuable to me. I don't think it should fork; I want the same audience to keep doing its work and having the debates :P.
then install violentmonkey
then install https://salamisushi.go-here.nl
browse around as usual and it will collect all discoverable feeds.
then export the feeds as opml
then install a robust RSS aggregator
then load the opml into the aggregator
then sort the news items by pubDate
then remove the obnoxious subscriptions
this is the way
AI is the largest technology advancement of the last 2 decades…it’s going to show up.
There was a ton of work and howling and news about them for years, decades.
Now they’re so boring and standard that they’re just table stakes. Nobody cares about them enough to get into long discussions about them.
The same in a best case will happen with LLMs - the things they can do will become boring and assumed, and people will eventually stop trying to make them do things they can’t.
As with any Major Ongoing Topic on HN (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), the goal is to reserve frontpage space for the higher-quality stories and try to downweight the follow-ups and low-quality ones. We can't do this perfectly, of course, but we try.
I guess people still use HN to discover things that they never otherwise would have come across, just that it now also includes AI, for better or worse.
The UI for said system, on the other hand, is something I can't even imagine.
Think about it. You can go into whichever pre-AI booming period you desire.
Today I think I'm gonna check out what was hot in May 2009.
https://news.ycombinator.com/front?day=2009-05-14
"Obama proposes no capital gains tax on qualified small business stock"
Sounds steamy.
https://news.ycombinator.com/item?id=608202
See you there!
Different than prior hype cycles.
Frankly, this one seems to be dying out more from everyone just flat out refusing to pay attention to online stuff or things on their phone long enough to starve the beast. If that is even possible.
Ruby on Rails, Postgres, SQLite, Rust, etc. They all have their moments and I don't think LLMs right now are as overwhelming as any other hyped moment. Certainly not Erlang.
With that said, I don’t find the AI posts nearly as bad as the Blockchain era.
Where do I sign up?
A forum that is exclusionary-by-design has already failed.
(And I could very much do without the content that revolves around US politics. Even if it draws me in sometimes.)
I'm not fighting for a split/fork, just stating the fact that it's nothing compared to Erlang.
"We've had a huge spike in traffic lately, from roughly 24k daily uniques to 33k. This is a result of being mentioned on more mainstream sites [...] You can help the spike subside by making HN look extra boring. For the next couple days it would be better to have posts about the innards of Erlang [...]"
"Ok, ok, enough Erlang submissions. You guys are like the crowdsourced version of one of those troublesome overliteral genies. I meant more that it would be better not to submit and upvote the fluffier type of link. Without those we'll be fine."
Also some fun comments here: https://news.ycombinator.com/item?id=512178
It's also exceedingly generic such that AI isn't really a topic, it's an entire classification or maybe domain to steal from the animal kingdom hierarchy.
I would like to see more nuanced and interesting articles about AI though. Right now it's all about VCs measuring the size of their investments and the politics of alleged superstar programmers.
In general we're thinking about how you can have a transparent profile that stands in place of an opaque algo, or in this case a dominance of a community by something you're not so into. It allows you to still engage with HN, but through the lens of a profile you have control over.
Ironically it is built with AI, but it's pretty straightforward, no magic stuff. Keen to hear if it is useful, or could be; we're really early stages exploring where to go with it.
Oh, sorry, wrong hype cycle.
Currently, for me, 10/30 stories on the front page are AI/LLM related. That means 20/30 are not about AI/LLM. 1 of them is blockchain, btw.
Typical HN, 1/3 hype, 1/3 less hype tech, 1/3 other. AI is the current hype.
But ultimately, your browser should have a local, open-source, user-loyal LLM that's able to accept human-language descriptions of how you'd like your view of some or all sites to change, and just like old Greasemonkey scripts or special-purpose extensions, it'd just do it, in the DOM.
Then instead of needing to raise this issue via an "Ask HN", you'd just tell your browser: "when I visit HN, hide all the AI/LLM posts".
The tricky part is having that act across all sites in a light and seamless way. I've been working on an HN reskin, and it's only fast/transparent/cheap enough because HN has an API (no scraping needed), and the titles are descriptive enough that you can filter based on them, as simonw's demo does. But it's still HN-specific.
I don't know if LLMs are fast enough at the moment to do this on the fly for arbitrary sites, but steps in that direction are interesting!
But of course local GPU processing power, and optimizations for LLM-like tools, are all advancing rapidly. And these local agents could potentially even outsource tough decisions to heavier-weight remote services. Essentially, they'd maintain/reauthor your "custom extension", themselves using other models, as necessary.
And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.
It would also need to be able to "Recognize tasteless, ad-ridden, or other difficult-to-read pages, silently dismiss cookie popups and signup solicitations, undo any attempts to reinvent scrolling, remove all ads except for those on topics X, Y, and Z, and present the page using something like Firefox's reader mode."
Other requirements would include "Fill in these fields that are marked as autocomplete=off," "Use this financial site to display exactly the charts and tables that I want, in this order," "Clean up broken, irrelevant and repetitive search listings on Amazon and eBay," and so on.
For extra credit: "Maintain this persona on Facebook, this one on Bluesky, this one on Slashdot, and this one on HN. Synthesize documents needed to establish proof of age and other aspects of personal identity."
If it went through that this changed, I would not be opposed, though I would read both.
news.ycombinator.com##tr.submission:has(:has-text(/LLM|agentic/)) + tr + tr
news.ycombinator.com##tr.submission:has(:has-text(/LLM|agentic/)) + tr
news.ycombinator.com##tr.submission:has(*:has-text(/LLM|agentic/))
Threads that are “my feed isn’t what I want” are exhausting. Sure, cool, but unless someone is breaking some rule, you’re looking for an algorithm to feed you content, which is all well and good, but it’s a different type of site.
Reddit (and HN) are designed exactly so that you can share something interesting you found.
[...document.querySelectorAll('.titleline > a')]
  .filter(link => link.innerText
    .split(' ')
    .some(word => ['llm', 'ai'].includes(word.toLowerCase())))
  .forEach(link => {
    const sub = link.closest('.submission');
    sub.nextElementSibling.remove();
    sub.remove();
  });
I wrote this in 2 minutes so I'm sure someone is going to reply with something better.
Besides, it's already starting to slow as people realize AI isn't as great as the influencers want you to believe.
If anything it needs less politics, I have other sites for that bs.
So what does this mean exactly? Nothing LLM/AI related on hacker news is new to you, or you would easily have come across it without HN? Really? Where exactly are you finding your AI/LLM news?
This too shall pass, Joe.
-> But still better than a highly-personalised algo that you don't get to control?
But I have no idea how to separate topics on HN. Is it even possible to do so while keeping the community intact?
Not sure what that means about the community, but must mean something.
So in practice, "AI" content ends up revolving around people bandying about opinions about whether or not we're all doomed, or whether or not we're all on the edge of a utopia, or how much productivity programmers (and which ones) have lost or gained, or what kinds of tasks the LLMs are or are not currently or still good at, or whether anyone still cares about the fact that the term "AI" is supposed to mean something broader than LLMs + tool use.
The emergence of the "vibe coding" concept has made things worse because people will just share their blog posts about personal experiences with trying to write code that way, or flood the Show HN section with things that are basically just "I personally found this specific thing to be 'the boring stuff' that's actually relevant to me, so now I'm automating it" with a few dozen lines of AI-generated code that perhaps invokes some API to ask another AI to do something useful.
To me it feels like the golden age of hackers in the '60s-'80s (which was before my time, but I've heard stories about it), where everybody is doing their own home-grown research to the best of their abilities and sharing insights of varying quality.
But somehow these days if it's not all polished, HN "hackers" aren't interested.
The fun part is that these days, typically the READMEs (especially) and licensing and documentation and maybe even the packaging setup are "polished"; the actual code (and perhaps the tests), not so much. It's quite backwards from what you expect from humans writing new code based on personal intrinsic motivation.
1. This is a great time to get your hands dirty with LLM tech and explore workflows and tooling that bring you joy.
2. The writing around this exploration is often low-quality insight or low-quality engagement bait that leads to flamewars. The bait usually takes one of two forms: a novella on how surely this time the human race is doomed due to singularity/capture by the rich/fascism/etc., or a claim that we're one cm away from utopia because of automation/a flourishing of creativity/etc.
I am enjoying playing around with the tech a lot, but the presence of (2) is just annoying. I do think that's an HN problem and not a problem with tech writing as a whole. There are subreddits that, while they have their own problems, are a lot less flamey when discussing these topics.
More generally: You could think about creating "sub HNs" for AI, politics, functional programming, startups, and several other categories. You could think about having something in your settings which specified which sub-HNs would put stories on your front page, with the default being "all".
I just whack “hide” on those and never think of them again.
Because of the way it was.
And, because of the way it is,
We have it the way we have.
And so it is.
There are certainly periods where one concept is "viral" and appears quite often; that's normal.
toomuchtodo•7h ago
https://news.ycombinator.com/item?id=44261825
I suppose an extension is the answer, classifying and customizing the user’s view accordingly with a pluggable LLM config.
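A minimal sketch of that pluggable config, assuming any OpenAI-compatible chat endpoint: the endpoint URL, model name, and one-word-verdict prompt below are all placeholder assumptions the user would supply, not part of any real extension:

```javascript
// Placeholder user config -- any OpenAI-compatible chat endpoint would do,
// e.g. a local llama.cpp server or a hosted API.
const config = {
  endpoint: 'http://localhost:8080/v1/chat/completions', // assumed
  model: 'local-model',                                   // assumed
};

// Build a chat-completion request asking for a one-word classification.
function buildClassifyRequest(title) {
  return {
    model: config.model,
    messages: [
      { role: 'system', content: 'Answer with exactly one word: AI or OTHER.' },
      { role: 'user', content: `Topic of this headline: "${title}"` },
    ],
    temperature: 0,
  };
}

// Pull the one-word verdict out of a chat-completion-shaped response.
function parseVerdict(response) {
  const text = response.choices?.[0]?.message?.content ?? '';
  return /\bAI\b/.test(text.trim()) ? 'AI' : 'OTHER';
}

// The extension would call this per story title and hide 'AI' rows.
async function classifyTitle(title) {
  const res = await fetch(config.endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildClassifyRequest(title)),
  });
  return parseVerdict(await res.json());
}
```

Keeping the request/response plumbing this thin is what makes the LLM "pluggable": swapping providers only means changing the two config values.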
azath92•4h ago
An extension could be a powerful way to apply it without having to leave HN, but I wonder if that (and our website prototype) is a short-term solution. I can imagine having an extension per news/content site, or an "alt site" for each that takes your preferences into account, but it feels clunky.
OTOH having a generic llm in browser that does this for all sites feels quite far off, so maybe the narrow solutions where you really care about it are the way to go?
mdaniel•3h ago
Which they could solve by having a less dumb invite system. They can very easily confirm I am not a bot nor a spammer based on any number of objective metrics I can provide to them. But instead the answer is "idle in IRC, hope for the best" and thus they end up with the audience who is willing to jump through those hoops
toomuchtodo•7h ago
If someone wants to add LLM pluggable support (API endpoint target) and it’ll work on Firefox, I’m willing to kick in some fiat. “HN Copilot.”