I have been finding myself feeling very bitter about AI lately. I'm angry about how it's seeping into every aspect of life: not just my work and my hobbies, but also many online communities (including this one!).
I have been thinking a lot about how we could possibly rebuild any of the trust we used to have online. Yes, bots have been a problem for a long time, but this goes far beyond spam posting. LLMs have poisoned the online commons at scale, and there's likely no going back. It has made me very bitter, I won't lie.
However, that doesn't mean we can't find a way forward with something new that is somehow resistant to LLMs. I'm not sure exactly what that might look like, but I'm curious what ideas others have had.
My wish list would be something that:
* Is resistant to LLM "infiltration," for lack of a better word. We should be able to be relatively confident that the people on the other end are real humans.
* Does not require giving up all anonymity. It would likely require some identity authority, but interactions between users could at least remain pseudonymous.
* Ideally is also resistant to LLM scraping. I personally find the thought of sharing work publicly now, only for LLMs to ingest it, demoralizing.
I know it's a big ask and maybe not realistic, but I'm curious what HN thinks about the possibility.
Edit: This was partially inspired by the recent mod post discussed here: https://news.ycombinator.com/item?id=47340079
I respect that HN's mod team is willing to leave this up to the honor system for now, but I think we are going to need some serious ideas to strictly prevent this unwanted behavior in the future, not just hope that people will play nice.