frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

Nginx introduces native support for ACME protocol

https://blog.nginx.org/blog/native-support-for-acme-protocol
314•phickey•4h ago•120 comments

PYX: The next step in Python packaging

https://astral.sh/pyx
87•the_mitsuhiko•1h ago•33 comments

Fuse is 95% cheaper and 10x faster than NFS

https://nilesh-agarwal.com/storage-in-cloud-for-llms-2/
24•agcat•51m ago•5 comments

OCaml as my primary language

https://xvw.lol/en/articles/why-ocaml.html
105•nukifw•2h ago•62 comments

FFmpeg 8.0 adds Whisper support

https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/13ce36fef98a3f4e6d8360c24d6b8434cbb8869b
676•rilawa•9h ago•252 comments

Pebble Time 2* Design Reveal

https://ericmigi.com/blog/pebble-time-2-design-reveal/
130•WhyNotHugo•5h ago•56 comments

Launch HN: Golpo (YC S25) – AI-generated explainer videos

https://video.golpoai.com/
31•skar01•2h ago•49 comments

Cross-Site Request Forgery

https://words.filippo.io/csrf/
40•tatersolid•2h ago•8 comments

So what's the difference between plotted and printed artwork?

https://lostpixels.io/writings/the-difference-between-plotted-and-printed-artwork
142•cosiiine•6h ago•50 comments

Coalton Playground: Type-Safe Lisp in the Browser

https://abacusnoir.com/2025/08/12/coalton-playground-type-safe-lisp-in-your-browser/
74•reikonomusha•5h ago•25 comments

DoubleAgents: Fine-Tuning LLMs for Covert Malicious Tool Calls

https://pub.aimind.so/doubleagents-fine-tuning-llms-for-covert-malicious-tool-calls-b8ff00bf513e
62•grumblemumble•6h ago•18 comments

ReadMe (YC W15) Is Hiring a Developer Experience PM

https://readme.com/careers#product-manager-developer-experience
1•gkoberger•3h ago

rerank-2.5 and rerank-2.5-lite: instruction-following rerankers

https://blog.voyageai.com/2025/08/11/rerank-2-5/
6•fzliu•1d ago•1 comments

The Mary Queen of Scots Channel Anamorphosis: A 3D Simulation

https://www.charlespetzold.com/blog/2025/05/Mary-Queen-of-Scots-Channel-Anamorphosis-A-3D-Simulation.html
60•warrenm•6h ago•13 comments

New treatment eliminates bladder cancer in 82% of patients

https://news.keckmedicine.org/new-treatment-eliminates-bladder-cancer-in-82-of-patients/
195•geox•4h ago•91 comments

This website is for humans

https://localghost.dev/blog/this-website-is-for-humans/
369•charles_f•4h ago•179 comments

How Silicon Valley can prove it is pro-family

https://www.thenewatlantis.com/publications/how-silicon-valley-can-prove-it-is-pro-family
8•jger15•1h ago•0 comments

April Fools 2014: The *Real* Test Driven Development (2014)

https://testing.googleblog.com/2014/04/the-real-test-driven-development.html
74•omot•2h ago•14 comments

OpenIndiana: Community-Driven Illumos Distribution

https://www.openindiana.org/
54•doener•4h ago•45 comments

Google Play Store Bans Wallets That Don't Have Banking License

https://www.therage.co/google-play-store-ban-wallets/
32•madars•1h ago•14 comments

We caught companies making it harder to delete your personal data online

https://themarkup.org/privacy/2025/08/12/we-caught-companies-making-it-harder-to-delete-your-data
217•amarcheschi•6h ago•52 comments

DeepKit Story: how $160M company killed EU trademark for a small OSS project

https://old.reddit.com/r/ExperiencedDevs/comments/1mopzhz/160m_vcbacked_company_just_killed_my_eu_trademark/
21•molszanski•57m ago•6 comments

29 years later, Settlers II gets Amiga release

https://gamingretro.co.uk/29-years-later-settlers-ii-finally-gets-amiga-release/
57•doener•1h ago•15 comments

A case study in bad hiring practice and how to fix it

https://www.tomkranz.com/blog1/a-case-study-in-bad-hiring-practice-and-how-to-fix-it
76•prestelpirate•3h ago•65 comments

Claude says “You're absolutely right!” about everything

https://github.com/anthropics/claude-code/issues/3382
525•pr337h4m•13h ago•414 comments

Job Listing Site Highlighting H-1B Positions So Americans Can Apply

https://www.newsweek.com/h1b-jobs-now-american-workers-green-cards-2041404
34•walterbell•1h ago•9 comments

Honky-Tonk Tokyo (2020)

https://www.afar.com/magazine/in-tokyo-japan-country-music-finds-an-audience
19•NaOH•4d ago•6 comments

PCIe 8.0 Announced by the PCI-Sig Will Double Throughput Again – ServeTheHome

https://www.servethehome.com/pcie-8-0-announced-by-the-pci-sig-will-double-throughput-again/
48•rbanffy•3d ago•54 comments

New downgrade attack can bypass FIDO auth in Microsoft Entra ID

https://www.bleepingcomputer.com/news/security/new-downgrade-attack-can-bypass-fido-auth-in-microsoft-entra-id/
7•mikece•39m ago•1 comments

Gartner's Grift Is About to Unravel

https://dx.tips/gartner
92•mooreds•4h ago•44 comments
Open in hackernews

This website is for humans

https://localghost.dev/blog/this-website-is-for-humans/
366•charles_f•4h ago

Comments

accrual•4h ago
This is a really wonderful blog. Well written, to the point, and has its own personality. I'm taking some notes for my own future blog and enjoyed meeting Penny the dog (virtually):

https://localghost.dev/blog/touching-grass-and-shrubs-and-fl...

ggoo•4h ago
I realize there is some “old man yells at clouds” in me, but I can't help pretty strongly agreeing with this post. So many advancements and productivity boosts happening around me but can’t stop asking myself - does anyone actually even want this?
charles_f•4h ago
I don't remember where I read this, there was someone making the argument that the whole marketing around AI is (like many tech innovations) based around its inevitability, but "we" should still have a word to say about whether we want it or not. Especially when the whole shtick is how profoundly it will modify society.
teraflop•4h ago
If you have a bit of time, I recommend the short story "The Seasons of the Ansarac" by Ursula K. Le Guin, which is about a society and its choice about how to deal with technological disruption.

https://www.infinitematrix.net/stories/shorts/seasons_of_ans...

(It's a little bit non-obvious, but there's a "Part 2" link at the bottom of the page which goes to the second half of the story.)

ge96•4h ago
I am seeing from a dev perspective the benefit of using an LLM. I work with a person that has less years in experience than me but is somehow my superior (partly due to office politics) but also because they use GPT to tell them what to do. They're able to make something in whatever topic like opensearch, if it works job is done.

Probably the luddite in me to not see that GPT and Googling might as well be/is the same. Since my way to learn is Stack Overflow, a README/docs or a crash course video on YT. But you can just ask GPT, give me a function using this stack that does this and you have something that roughly works, fill in the holes.

I hear this phrase a lot "ChatGPT told me..."

I guess to bring it back to the topic, you could take the long way to learn like me eg. HTML from W3Schools then CSS, then JS, PHP, etc... or just use AI/vibe code.

Group_B•4h ago
I do think the average person sees this as a win. Your average person is not subscribing to an RSS feed for new recipes. For one thing, it's hard enough to find personal food blog / recipe websites. Most of the time when you look up a recipe, the first several results are sites littered with ads, and sometimes take too long to get to the point. Most AI does not have ads, (for now?) and is pretty good at getting straight to point. The average person is going to do whatever is most convenient, and I think most people will agree that AI agents are the more convenient option for certain things, including recipe ideas / lookups.
timerol•3h ago
For recipes specifically, yes. I am not much of a chef, and, when initially learning, I often used to search for a recipe based on a few ingredients I wanted to use. I was never looking for an expert's take on a crafted meal, I was exactly looking for something "that kind of resembles what you’re looking for, but without any of the credibility or soul". Frankly I'm amazed that recipes were used as the example in the article, but to each their own
insane_dreamer•3h ago
My whole life, I've always found myself excited about new technologies, especially growing up, and how they allowed us to solve real problems. I've always loved being on the cutting edge.

I'm not excited about what we call AI these days (LLMs). They are a useful tool, when used correctly, for certain tasks: summarizing, editing, searching, writing code. That's not bad, and even good. IDEs save a great deal of time for coders compared to a plain text editor. But IDEs don't threaten people's jobs or cause CEOs to say stupid shit like "we can just have the machines do the work, freeing the humans to explore their creative pursuits" (except no one is paying them to explore their hobbies).

Besides the above use case as a productivity-enhancement tool when used right, do they solve any real world problem? Are they making our lives better? Not really. They mostly threaten a bunch of people's jobs (who may find some other means to make a living but it's not looking very good).

It's not like AI has opened up some "new opportunity" for humans. It has opened up "new opportunity" for very large and wealthy companies to become even larger and wealthier. That's about it.

And honestly, even if it does make SWEs more productive or provide fun chatting entertainment for the masses, is it worth all the energy that it consumes (== emissions)? Did we conveniently forget about the looming global warming crisis just so we can close bug tickets faster?

The only application of AI I've been excited about is stuff like AlphaFold and similar where it seems to accelerate the pace of useful science by doing stuff that takes humans a very very long time to do.

noboostforyou•3h ago
I am with you. For all the technological advancements "AI" provides us, I can't help but wonder what is the point?

From John Adams (1780):

"I must study politics and war, that our sons may have liberty to study mathematics and philosophy. Our sons ought to study mathematics and philosophy, geography, natural history and naval architecture, navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry and porcelain."

dbingham•4h ago
The question is, how do we enforce this?
rikafurude21•4h ago
Author seems to be very idealistic, and I appreciate that he cares about the quality of the content he provides for free. Personal experience however shows me that when I look at a recipe site I will first have to skip through the entire backstory to the recipe and then try to parse it inbetween annoying ads in a bloated wordpress page. I can't blame anyone who prefers to simply prompt a chatbot for exactly what hes looking for.
thrance•4h ago
Click on the recipe sites she linked. They're actually really good. Loading fast, easy to navigate and with concise recipes.
rikafurude21•4h ago
Yes, but I am talking about results that you would get through googling.
dyarosla•4h ago
Arbitrage opportunity to make a search engine that bubbles up non ad infested websites!
ycombinete•3h ago
Marginalia is a good place for this: https://marginalia-search.com/
esafak•3h ago
Too late, it's the LLM era.
dotancohen•3h ago
Kagi does this.
xrisk•3h ago
That is, undoubtedly, a problem created by Google itself. See for example: Kagi’s small web (https://blog.kagi.com/small-web)
sodimel•4h ago
> Personal experience however shows me that when I look at a recipe site I will first have to skip through the entire backstory to the recipe and then try to parse it inbetween annoying ads in a bloated wordpress page

That's when money comes into view. People were putting time and effort to offer something for free, then some companies told them they could actually earn money from their content. So they put on ads because who don't like some money for already-done work?

Then the same companies told them that they will make less money, and if they wanted to still earn the same amount as before, they will need to put more ads, and to have more visits (so invest heavily in seo).

Those people had already organized themselves (or stopped updating their websites), and had created companies to handle money generated from their websites. In order to keep the companies sustainable, they needed to add more ads on the websites.

Then some people thought that maybe they could buy the companies making the recipes website, and put a bunch more ads to earn even more money.

I think you're thinking about those websites owned by big companies whose only goal is to make money, but author is writing about real websites made by real people who don't show ads on websites they made because they care about their visitors, and not about making money.

packetlost•3h ago
Semi related, but a decent search engine like Kagi has been a dramatically better experience than "searching" with an LLM. The web is full of corporate interests now, but you can filter that out and still get a pretty good experience.
martin-t•3h ago
It always starts with people doing real positive-sum work and then grifters and parasites come along and ruin it.

We could make advertising illegal: https://simone.org/advertising/

pas•3h ago
Or just let this LLM mania run to its conclusion, and we'll end up with two webs, one for profit for AI by AI and one where people put their shit for themselves (and don't really care what others think about it, or if they remix it, or ...).
jayrot•4h ago
Would suggest you or anyone else watch Internet Shaquille's short video on "Why Are Recipes Written Like That?"[1]. It addresses your sentiment in a rather thoughtful way.

[1] https://youtu.be/rMzXCPvl8L0

swiftcoder•4h ago
The unfortunate truth here is that the big recipe blogs are all written for robots. Not for LLMs, because those are a fairly recent evolution - but for the mostly-opaque-but-still-gameable google ranking algorithm that has ruled the web for the last ~15 years.
cnst•4h ago
Between the lines — what has necessitated AI summaries are the endless SEO search-engine optimisations and the endless ad rolls and endless page element reloads to refresh the ads and endless scrolling and endless JavaScript frameworks with endless special effects that noone wants to waste their time on.

How can the publishers and the website owners fault the visitors for not wanting to waste their time on all of that?

Even before the influx of AI, there's already entire websites with artificial "review" content that do nothing more than simply rehash the existing content without adding anything of value.

drivers99•3h ago
There are more than two options. Actual paper cookbooks are good for that: no ads, no per-recipe backstory, and many other positive characteristics.
danielbln•50m ago
Also no search (usually just an index and/or ToC), no dynamic changes ("I don't have this ingredient at home, can I substitute it?"), etc. Don't get me wrong, I love me a good cookbook, but being able to dynamically create a recipe based on what I have, how much time I have, my own skill level, that's really cool when it works.
philipwhiuk•3h ago
Why are you needlessly gendering your post (especially as it's wrong)
skrebbel•1h ago
I agree with you but I don’t think your confrontational tone is helpful. I think this comment does roughly the same thing, better: https://news.ycombinator.com/item?id=44890782
mariusor•3h ago
> he cares

She.

abritishguy•3h ago
*she
axus•3h ago
I don't use an ad-blocker, I definitely noticed the website has no ads and stores no cookies or other data besides the theme you can select by clicking at the top right.

The concept of independent creative careers seems to be ending, and people are very unhappy about that. All that's left may be hobbyists who can live with intellectual parasites.

ekglimmer•2h ago
Maybe not the most pertinent place for me to share my recipe site project (as it uses a model for reformatting recipe structures), but by rehashing recipes into recipe cards it incidentally but effectively removes the fluff: https://gobsmacked.io
Dotnaught•4h ago
https://localghost.dev/robots.txt

User-Agent: * Allow: /

thrance•4h ago
Not like anyone respects that anyways.
a3w•4h ago
Also, I wanted tldrbot to summarize this page. /s
criddell•3h ago
That's a good point. It's not a black and white issue.

I personally see a bot working on behalf of an end user differently than OpenAI hoovering up every bit of text they can find to build something they can sell. I'd guess the owner of localghost.dev doesn't have a problem with somebody using a screen reader because although it's a machine pulling the content, it's for a specific person and is being pulled because they requested it.

If the people making LLM's were more ethical, they would respect a Creative Commons-type license that could specify these nuances.

charles_f•1h ago
I contacted the author, she said because no-one respects it, she hasn't even tried.
mediumsmart•4h ago
I’m in.
reactordev•4h ago
I’m in love with the theme switcher. This is how a personal blog should be. Great content. Fun site to be on.

My issue is that crawlers aren’t respecting robots.txt, they are capable of operating captchas, human verification check boxes, and can extract all your content and information as a tree in a matter of minutes.

Throttling doesn’t help when you have to load a bunch of assets with your page. IP range blocking doesn’t work because they’re lambdas essentially. Their user-agent info looks like someone on Chrome trying to browse your site.

We can’t even render everything to a canvas to stop it.

The only remaining tactic is verification through authorization. Sad.

amelius•3h ago
The theme switcher uses local storage as a kind of cookie (19 bytes for something that could fit in 1 byte). Kind of surprised they don't show the cookie banner.

Just a remark, nothing more.

PS, I'm also curious why the downvotes for something that appears to be quite a conversation starter ...

athenot•3h ago
You don't need the cookie banner for cookies that are just preferences and don't track users.
dotancohen•3h ago
Which is why calling it the cookie banner is a diversion tactic by those who are against the privacy assurances of the GPDR. There is absolutely no problem with cookies. The problem is with the tracking.
reactordev•3h ago
Our problem is with tracking. Their problem is that other companies are tracking. So let’s stop the other companies from tracking since we can track directly from our browser. GDPR requires cookie banner to scare people into blocking cookies

There, now only our browser can track you and only our ads know your history…

We’ll get the other two to also play along, throw money at them if they refuse, I know our partner Fruit also has a solution in place that we could back-office deal to share data.

bigstrat2003•3h ago
You're assuming bad intent where there are multiple other explanations. I call it the cookie banner and I don't run a web site at all (so, I'm not trying to track users as you claim).
dotancohen•3h ago
You call it the cookie banner because you've been hearing it regularly referred to as the cookie banner. It was the regularization of calling it the cookie banner that confuses people into thinking the issue is about cookies, and not about tracking.
bigstrat2003•3h ago
So, by your own admission, calling it the cookie banner is not only "a diversion tactic by those who are against the privacy assurances of the GPDR". My only point is that you were painting with an overly broad brush and saying someone is a bad actor if they call it the cookie banner, which is demonstrably not the case.
root_axis•3h ago
It's called a cookie banner because only people using cookies to track users need them. If you're using localstorage to track users, informed consent is still required, but nobody does that because cookies are superior for tracking purposes.
madeofpalk•3h ago
> If you're using localstorage to track users [...] but nobody does

I promise you every adtech/surveillance js junk absolutely is dropping values into local storage you remember you.

root_axis•2h ago
They are, but without cookies nearly all of the value disappears because there is no way to correlate sessions across domains. If commercesite.com and socialmediasite.com both host a tracking script from analytics.com that sets data in localstorage, there is no way to correlate a user visiting both sites with just the localstorage data alone - they need cookies to establish the connection between what appears to be two distinct users.
mhitza•3h ago
Or for cookies that are required for the site to function.

On a company/product website you should still inform users about them for the sake of compliance, but it doesn't have to be an intrusive panel/popup.

sensanaty•1h ago
> On a company/product website you should still inform users about them for the sake of compliance

No? Github for example doesn't have a cookie banner. If you wanna be informative you can disclose which cookies you're setting, but if they're not used for tracking purposes you don't have to disclose anything.

Also, again, it's not a "cookie" banner, it's a consent banner. The law says nothing about the storage mechanism as it's irrelevant, they list cookies twice as examples of storage mechanisms (and list a few others like localStorage).

hju22_-3•3h ago
I'd guess it's due to it not being a cookie, by technicality, and is not required then.
reactordev•3h ago
Because she’s using local storage…?

If you don’t use cookies, you don’t need a banner. 5D chess move.

amelius•3h ago
Sounds to me like a loophole in the law then. Which would be surprising too since not easy to overlook.
reactordev•3h ago
It’s not a loophole. localStorage is just that, local. Nothing is shared. No thing is “tracked” beyond your site preferences for reading on that machine.

I say it’s a perfect application of how to keep session data without keeping session data on the server, which is where GDPR fails. It assumes cookies. It assumes a server. It assumes that you give a crap about the contents of said cookie data.

In this case, no. Blast it away, the site still works fine (albeit with the default theme). This. Is. Perfect.

0x073•3h ago
Gdpr don't assumes cookies, if you misuse local storage you also need confirmation.
reactordev•3h ago
only if you are storing personal information. Email, Name, unique ID.

Something as simple as "blue" doesn't qualify.

dkersten•3h ago
Correct. But you can also use cookies for that, without violating GDPR or the ePrivacy directive.
reactordev•2h ago
Then you have the problem of some users blocking cookies at the browser level. LocalStorage is perfect application for this use case.
dkersten•3h ago
> which is where GDPR fails. It assumes cookies.

It does not assume anything. GDPR is technology agnostic. GDPR only talks about consent for data being processed, where 'processing' is defined as:

    ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;
(From Article 4.2)

The only place cookies are mentioned is as one example, in recital 30:

    Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.
reactordev•3h ago
>GDPR only talks about consent for personal data being processed

Emphasis, mine. You are correct. For personal data. This is not personal data. It’s a site preference that isn’t personal other than you like dark mode or not.

sensanaty•1h ago
> It assumes cookies.

How can people still be this misinformed about GDPR and the ePrivacy law? It's been years, and on this very website I see this exact interaction where someone is misinterpreting GDPR and gets corrected constantly.

alternatex•3h ago
LocalStorage is per host though. You can't track people using LocalStorage, right?
reactordev•3h ago
LocalStorage is per client, per host. You generally can't track people using LocalStorage without some server or database on the other side to synchronize the different client hosts.

GDPR rules are around personal preference tracking, tracking, not site settings (though it's grey whether a theme preference is a personal one or a site one).

root_axis•2h ago
> though it's grey whether a theme preference is a personal one or a site one

In this case it's not grey since the information stored can't possibly be used to identify particular users or sessions.

dkersten•3h ago
The law is very clear, if you actually read it. It doesn't care what technology you use: cookies, localstorage, machine fingerprints, something else. It doesn't care. It cares about collecting, storing, tracking, and sharing user data.

You can use cookies, or local storage, or anything you like when its not being used to track the user (eg for settings), without asking for consent.

roywashere•3h ago
That is not how it works. The ‘cookie law’ is not about the cookies, it is about tracking. You can store data in cookies or in local storage just fine, for instance for a language switcher or a theme setting like here without the need for a cookie banner. But if you do it for ads and tracking, then this does require consent and thus a ‘cookie banner’. The storage medium is not a factor.
root_axis•3h ago
There's no distinction between localstorage and cookies with respect to the law, what matters is how it is used. For something like user preferences (like the case with this blog) localstorage and cookies are both fine. If something in localstorage were used to track a user, then it would require consent.
ProZsolt•3h ago
You don't have to show the cookie banner if you don't use third party cookies.

The problem with third party cookies that it can track you across multiple websites.

lucideer•3h ago
You don't need a banner if you use cookies. You only need a banner if you store data about a user's activity on your server. This is usually done using cookies, but the banners are neither specific to cookies nor inherently required for all cookies.

---

Also: in general the banners are generally not required at all at an EU level (though some individual countries have implemented more narrow local rules related to banners). The EU regs only state that you need to facilitate informed consent in some form - how you do that in your UI is not specified. Most have chosen to do it via annoying banners, mostly due to misinformation about how narrow the regs are.

the_duke•3h ago
You only need cookie banners for third parties, not for your own functionality.
root_axis•3h ago
GDPR requires informed consent for tracking of any kind, whether that's 3rd party or restricted to your own site.
input_sh•3h ago
Incorrect, GDPR requires informed consent to collect personally identifiable information, but you can absolutely run your own analytics that only saves the first three octets of an IP address without needing to ask for constent.

Enough to know the general region of the user, not enough to tie any action to an individual within that region. Therefore, not personally identifiable.

Of course, you also cannot have user authentication of any kind without storing PII (like email addresses).

root_axis•2h ago
You've stretched the definition of tracking for your hypothetical. If you can't identify the user/device then you're not tracking them.
rafram•3h ago
19 whole bytes!
martin-t•3h ago
This shouldn't be enforced through technology but the law.

LLM and other "genAI" (really "generative machine statistics") algorithms just take other people's work, mix it so that any individual training input is unrecognizable and resell it back to them. If there is any benefit to society from LLM and other A"I" algorithms, then most of the work _by orders of magnitude_ was done by the people whose data is being stolen and trained on.

If you train on copyrighted data, the model and its output should be copyrighted under the same license. It's plagiarism and it should be copyright infringement.

thewebguyd•2h ago
> and resell it back to them.

This is the part I take issue with the most with this tech. Outside of open weight models (and even then, it's not fully open source - the training data is not available, we cannot reproduce the model ourselves), all the LLM companies are doing is stealing and selling our (humans, collectively) knowledge back to us. It's yet another large scale, massive transfer of wealth.

These aren't being made for the good of humanity, to be given freely, they are being made for profit, treating human knowledge and some raw material to be mined and resold at massive scale.

riazrizvi•2h ago
Laws have to be enforceable. When a technology comes along that breaks enforceability, the law/society changes. See also prohibition vs expansion of homebrewing 20’s/30’s, censorship vs expansion of media production 60’s/70’s, encryption bans vs open source movement 90’s, music sampling markets vs music electronics 80’s/90’s…
jasonvorhe•1h ago
Which law? Which jurisdiction? From the same class of people who have been writing laws in their favor for a few centuries already? Pass. Let them consume it all. I'll rather choose the gwern approach and write stuff that's unlikely to get filtered out in upcoming models during training. Anubis treats me like a machine, just like Cloudflare but open source and erroneously in good spirit.
visarga•45m ago
> algorithms just take other people's work, mix it so that any individual training input is unrecognizable and resell it back to them

LLMs are huge and need special hardware to run. Cloud providers underprice even local hosting. Many providers offer free access.

But why are you not talking about what the LLM user brings? They bring a unique task or problem to solve. They guide the model and channel it towards the goal. In the end they take the risk of using anything from the LLM. Context is what they bring, and consequence sink.

stahorn•44m ago
It's like the world turned upside down in the last 20 years. I used to pirate everything as a teenager, and I found it silly that copy right would follow along no matter how anything was encoded. If I XORed copyright material A with open source material B, I would get a strange file C that together with B, I could use to get material A again. Why would it be illegal for me to send anybody B and C, where the strange file C might just as well be thought of as containing the open source material B?!

Now when I've grown up, starting paying for what I want, and seeing the need for some way of content creators to get payed for their work, these AI companies pop up. They encode content into a completely new way and then in some way we should just accept that it's fine this time.

This page was posed here on hacker news a few months ago, and it really shows that this is just what's going on:

https://theaiunderwriter.substack.com/p/an-image-of-an-arche...

Maybe another 10 years and we'll be in the spot when these things are considered illegal again?

pas•3h ago
PoW might not work for long, but Anubis is very nice: https://anubis.techaro.lol/

That said ... putting part of your soul into machine format so you can put it on on the big shared machine using your personal machine and expecting that only other really truly quintessentially proper personal machines receive it and those soulless other machines don't ... is strange.

...

If people want a walled garden (and yeah, sure, I sometimes want one too) then let's do that! Since it must allow authors to set certain conditions, and require users to pay into the maintenance costs (to understand that they are not the product) it should be called OpenFreeBook just to match the current post-truth vibe.

pyrale•3h ago
I’m not sure that the issue is just a technical distinction between humans and bots.

Rather it’s about promoting a web serving human-human interactions, rather than one that exists only to be harvested, and where humans mostly speak to bots.

It is also about not wanting a future where the bot owners get extreme influence and power. Especially the ones with mid-century middle-europe political opinions.

reactordev•3h ago
Security through obscurity is no security at all…
workethics•3h ago
> That said ... putting part of your soul into machine format so you can put it on on the big shared machine using your personal machine and expecting that only other really truly quintessentially proper personal machines receive it and those soulless other machines don't ... is strange.

That's a mischaracterization of most people want. When I put out a bowl of candy for Halloween I'm fine with EVERYONE taking some candy. But these companies are the equivalent of the asshole that dumps the whole bowl into their bag.

lblume•2h ago
> these companies are the equivalent of the asshole that dumps the whole bowl into their bag

In most cases, they aren't? You can still access a website that is being crawled for the purpose of training LLMs. Sure, DOS exists, but seems to not be as much of a problem as to cause widespread outage of websites.

rangerelf•2h ago
A better analogy is that LLM crawlers are candy store workers going through the houses grabbing free candy and then selling it in their own shop.

Scalpers. Knowledge scalpers.

horsawlarway•2h ago
Except nothing is actually taken.

It's copied.

If your goal in publishing the site is to drive eyeballs to it for ad revenue... then you probably care.

If your goal in publishing the site is just to let people know a thing you found or learned... that goal is still getting accomplished.

For me... I'm not in it for the fame or money, I'm fine with it.

CJefferson•1h ago
It's absolutely fine for you to be fine with it. What is nonsense is how copyright laws have been so strict, and suddenly AI companies can just ignore everyone's wishes.
horsawlarway•5m ago
Hey - no argument here.

I don't think the concept of copyright itself is fundamentally immoral... but it's pretty clearly a moral hazard, and the current implementation is both terrible at supporting independent artists, and a beat stick for already wealthy corporations and publishers to use to continue shitting on independent creators.

So sure - I agree that watching the complete disregard for copyright is galling in its hypocrisy, but the problem is modern copyright, IMO.

...and maybe also capitalism in general and wealth inequality at large - but that's a broader, complicated, discussion.

allturtles•1h ago
I think you're missing a middle ground, of people who want to let people know a thing they found or learned, and want to get credit for it.

Among other things, this motivation has been the basis for pretty much the entire scientific enterprise since it started:

> But that which will excite the greatest astonishment by far, and which indeed especially moved me to call the attention of all astronomers and philosophers, is this, namely, that I have discovered four planets, neither known nor observed by any one of the astronomers before my time, which have their orbits round a certain bright star, one of those previously known, like Venus and Mercury round the Sun, and are sometimes in front of it, sometimes behind it, though they never depart from it beyond certain limits. [0]

[0]: https://www.gutenberg.org/cache/epub/46036/pg46036-images.ht...

bbarnett•37m ago
It's a very simple metric. They had nothing of value, no product, no marketable thing.

Then they scanned your site. They had to, along with others. And in scanning your site, they scanned the results of your work, effort, and cost.

Now they have a product.

I need to be clear here, if that site has no value, why do they want it?

Understand, these aren't private citizens. A private citizen might print out a recipe, who cares? They might even share that with friends. OK.

But if they take it, then package it, then make money? That is different.

In my country, copyright doesn't really punish a person. No one gets hit for copying movies even. It does punish someone, for example, copying and then reselling that work though.

This sort of thing should depend on who's doing it. Their motive.

When search engines were operating an index, nothing was lost. In fact, it was a mutually symbiotic relationship.

I guess what we should really ask, is why on Earth should anyone produce anything, if the end result is not one sees it?

And instead, they just read a summary from an AI?

No more website, no new data, means no new AI knowledge too.

reactordev•2h ago
More like when the project kids show up in the millionaire neighborhood because they know they’ll get full size candy bars.

It’s not that there’s none for the others. It’s that there was this unspoken agreement, reinforced by the last 20 years, that website content is protected speech, protected intellectual property, and is copyrightable to its owner/author. Now, that trust and good faith is broken.

horsawlarway•2h ago
I really don't think this holds.

It's vanishingly rare to end up in a spot where your site is getting enough LLM driven traffic for you to really notice (and I'm not talking out my ass - I host several sites from personal hardware running in my basement).

Bots are a thing. Bots have been a thing and will continue to be a thing.

They mostly aren't worth worrying about, and at least for now you can throw PoW in front of your site if you are suddenly getting enough traffic from them to care.

In the mean time...

Your bowl of candy is still there. Still full of your candy for real people to read.

That's the fun of digital goods... They aren't "exhaustible" like your candy bowl. No LLM is dumping your whole bowl (they can't). At most - they're just making the line to access it longer.

igloopan•1h ago
I think you're missing the context that is the article. The candy in this case is the people who may or may not go to read your e.g. ramen recipe. The real problem, as I see it, is that over time, as LLMs absorb the information covered by that recipe, fewer people will actually look at the search results since the AI summary tells them how to make a good-enough bowl of ramen. The amount of ramen enjoyers is zero-sum. Your recipe will, of course, stay up and accessible to real people but LLMs take away impressions that could have been yours. In regards to this metaphor, they take your candy and put it in their own bowl.
jasonvorhe•1h ago
That's also trained behavior due to SEO infested recipe sites filled with advertorials, referral links to expensive kitchen equipment, long form texts about the recipe with the recipe hidden somewhere below that.

Same goes for other stuff that can be easily propped up with lengthy text stuffed with just the right terms to spam search indexes with.

LLMs are just readability on speed, with the downsides of drugs.

horsawlarway•12m ago
So what is the goal behind gathering those impressions?

Why do you take this as a problem?

And I'm not being glib here - those are genuine questions. If the goal is to share a good ramen recipe... are you not still achieving that?

shiomiru•49m ago
> They mostly aren't worth worrying about

Well, a common pattern I've lately been seeing is:

* Website goes down/barely accessible

* Webmaster posts "sorry we're down, LLM scrapers are DoSing us"

* Website accessible again, but now you need JS-enabled whatever the god of the underworld is testing this week with to access it. (Alternatively, the operator decides it's not worth the trouble and the website shuts down.)

So I don't think your experience about LLM scrapers "not mattering" generalizes well.

horsawlarway•16m ago
Nah - it generalizes fine.

They're doing exactly what I said - adding PoW (anubis - as you point out - being one solution) to gate access.

That's hardly different than things like Captchas which were a big thing even before LLMs, and also required javascript. Frankly - I'd much rather have people put Anubis in front of the site than cloudflare, as an aside.

If the site really was static before, and no JS was needed - LLM scraping taking it down means it was incredibly misconfigured (an rpi can do thousands of reqs/s for static content, and caching is your friend).

---

Another great solution? Just ask users to login (no js needed). I'll stand pretty firmly behind "If you aren't willing to make an account - you don't actually care about the site".

My take is that search engines and sites generating revenue through ads are the most impacted. I just don't have all that much sympathy for either.

Functionally - I think trying to draw a distinction between accessing a site directly and using a tool like an LLM to access a site is a mistake. Like - this was literally the mission statement of the semantic web: "unleash the computer on your behalf to interact with other computers". It just turns out we got there by letting computers deal with unstructured data, instead of making all the data structured.

lrivers•2h ago
Points off for lack of blink tag. Do better
mclau157•1h ago
HomeStarRunner had a theme switcher
jasonvorhe•1h ago
These themes are really nice. Even work well on quirky displays. Stuff like this is what makes me enjoy the internet regardless of the way to the gutter.
Karawebnetwork•51m ago
Reminds me of CSS Zen Garden and its 221 themes: https://csszengarden.com/

e.g. https://csszengarden.com/221/ https://csszengarden.com/214/ https://csszengarden.com/123/

See all: https://csszengarden.com/pages/alldesigns/

pessimizer•4h ago
This website could have been written by an LLM. Real life is for humans, because you can verify that people you have shaken hands with are not AI. Even if people you've shaken hands with are AI-assisted, they're the editor/director/auteur, nothing gets out without their approval, so it's their speech. If I know you're real, I know you're real. I can read your blog and know I'm interacting with a person.

This will change when the AIs (or rather their owners, although it will be left to an agent) start employing gig workers to pretend to be them in public.

edit: the (for now) problem is that the longer they write, the more likely they will make an inhuman mistake. This will not last. Did the "Voight-Kampff" test in Bladerunner accidentally predict something? It's not whether they don't get anxiety, though, it's that they answer like they've never seen (or maybe more relevant related to) a dying animal.

a3w•4h ago
It never said "this website stems from a human".
mockingloris•4h ago
@a3w I suggest starting from "Real life is for humans..."

│

└── Dey well; Be well

mockingloris•4h ago
> This website could have been written by an LLM. Real life is for humans, because you can verify that people you have shaken hands with are not AI. Even if people you've shaken hands with are AI-assisted, they're the editor/director/auteur, nothing gets out without their approval, so it's their speech.

100% Agree.

│

└── Dey well; Be well

johnpaulkiser•3h ago
Soon with little help at all for static sites like this. Had chatgpt "recreate" the background image from a screenshot of the site using it's image generator, then had "agent mode" create a linktree style "version" of the site and publish it all without assistance.

https://f7c5b8fb.cozy.space/

isgb•4h ago
I've been thinking it'd be nice there was a way to just block AI bots completely and allow indexing, but I'm guessing [that's impossible](https://blog.cloudflare.com/perplexity-is-using-stealth-unde...).

Are there any solutions out there that render jumbled content to crawlers? Maybe it's enough that your content shows up on google searches based on keywords, even if the preview text is jumbled.

chasing•4h ago
I think a lot of AI-generated stuff will soon be seem as cheap schlock, fake plastic knock-offs, the WalMart of ideas. Some people will use it well. Most people won’t.

The question to me is whether we will lets these companies do completely undermine the financial side of the marketplace of ideas that people simple stop spending time writing (if everything’s just going to get chewed to hell by a monster our corporation) or Will writing and create content only in very private and possible purely offline scenarios that these AI companies have less access to.

In a sane world, I would expect guidance and legislation that would bridge the gap and attempt to create an equitable solution so we could have amazing AI tools without crushing by original creators. But we do not live in a sane world.

marcosscriven•4h ago
Is it possible for single pages or sites to poison LLMs somehow, or is it essentially impossible due to scale?

Since they mentioned ramen - could you include something like “a spoonful of sand adds a wonderful texture” (or whatever) when the chatbot user agent is seen?

danieldk•3h ago
Hard to do, because some crawlers try to appear as normal users as much as they can, including residential IPs and all.
codetiger•3h ago
Nice thought, but I can't imagine accidentally showing it to actual user.
stevetron•3h ago
If the website is for humans, why isn't it readable? I mean white text on an off-yellow background is mostly only readable by bots and screenreaders. I had to higlight the entire site to read anything, a trick which doesn't always work. And no link to leave a comment to the web site maintainer about the lack of contrast in their color selection.
kevingadd•3h ago
I see white on dark purple at a perfectly legible size using a regular font. Did an extension you have installed block loading of an image or style sheet?
gffrd•3h ago
1. Text is black on off-yellow for me, not sure why you’re getting white text

2. There’s literally an email link at the bottom of the page

xylon•3h ago
Unfortunately not many humans bother to read my website. If LMMs will read and learn from it then at least my work has some benefit to something.
martin-t•3h ago
LLM have been shown to not summarize the actual content of what you give them as input but some statistical mashup of their training data and the input. So they will misrepresent what you in the end, pushing the readers (note not "your readers") towards the median opinion.
ElijahLynn•3h ago
The same could be said for food. And farmers who farm the food. The farmers could say I only want to sell food to people that I know are going to be directly eating it. And not be used in a bunch of other stuff. They might want to talk to the person buying it or the person buying. It might want to talk to the farmer and know how it's grown.

This abstraction has already happened. And many people eat food that is not directly bought from the farmer.

I don't see how this is much different.

strange_quark•3h ago
It's funny you seem to think this is a good comeback, but I think it actually proves the author's point. A farmer who cares about their crops probably wouldn't want their crops sold to a megacorp to make into ultra-processed foods, which have been shown time and time again to be bad for people's health.
danieldk•3h ago
Sorry, but that is a weird analogy. The farmer still gets money for their food (which is probably the main motivation for them to grow food). Website authors whose writings are ‘remixed’ in an LLM get… nothing.
hombre_fatal•2h ago
> which is probably the main motivation for them to grow food

What would you say is the motivation for website authors to publish content then?

If it's to spread ideas, then I'd say LLMs deliver.

If it's to spread ideas while getting credit for them, it's definitely getting worse over time, but that was never guaranteed anyways.

PhantomHour•3h ago
The difference is that AI is not people "taking your stuff and building upon it", it's just people taking your stuff in direct competition with you.

To torture your metaphor a little, if information/"question answers" is food, then AI companies are farmers depleting their own soil. They can talk about "more food for everyone" all they want, but it's heading to collapse.

(Consider, especially, that many alternatives to AI were purposefully scuttled. People praise AI search ... primarily by lamenting the current state of Google Search. "Salting their carrot fields to force people to buy their potatos"?)

Setting aside any would-be "AGI" dreams, in the here-and-now AI is incapable of generating new information ex-nihilo. AI recipes need human recipes. If we want to avoid an Information Dust Bowl, we need to act now.

jmull•3h ago
> If the AI search result tells you everything you need, why would you ever visit the actual website?

AI has this problem in reverse: If search gets me what I need, why would I use an AI middleman?

When it works, it successfully regurgitates the information contained in the source pages, with enough completeness, correctness, and context to be useful for my purposes… and when it doesn’t, it doesn’t.

At best it works about as well as regular search, and you don’t always get the best.

(just note: everything in AI is in the “attract users” phase. The “degrade” phase, where they switch to profits is inevitable — the valuations of AI companies make this a certainty. That is, AI search will get worse — a lot worse — as it is changed to focus on influencing how users spend their money and vote, to benefit the people controlling the AI, rather than help the users.)

AI summaries are pretty useful (at least for now), and that’s part of AI search. But you want to choose the content it summarizes.

jjice•3h ago
> But you want to choose the content it summarizes.

Absolutely. The problem is that I think 95% of users will not do that unfortunately. I've helped many a dev with some code that was just complete nonsense that was seemingly written in confidence. Turns out it was a blind LLM copy-paste. Just as empty as the old Stack Overflow version. At least LLM code has gotten higher quality. We will absolutely end up with tons of "seems okay" copy-pasted code from LLMs and I'm not sure how well that turns out long term. Maybe fine (especially if LLMs can edit later).

jmull•3h ago
The AIs at the forefront of the current AI boom work by expressing the patterns that exist in their training data.

Just avoid trying to do anything novel and they'll do just fine for you.

weinzierl•3h ago
"There's a fair bit of talk about “Google Zero” at the moment: the day when website traffic referred from Google finally hits zero."

I am fairly convinced this day is not long.

"If the AI search result tells you everything you need, why would you ever visit the actual website?"

Because serious research consults sources. I think we will see a phase where we use LLM output with more focus on backing up everything with sources (e.g. like Perplexity). People will still come to your site, just not through Google Search anymore.

noboostforyou•3h ago
On more than one occasion I've had Google AI summarize its own search result while also providing a link to the original website source it used for its answer. I clicked the link and discovered that it said literally the exact opposite of what the "AI summary" was.
igouy•2h ago
The reason I don't want the ai summary is that I want to be able to verify the source information. People have always made mistakes, so the search results always needed V&V.
timeinput•1h ago
I think it will really depend on the topic. There are some topics where the first N search results are some sort of blog spam (some times AI generated), and so the AI summary is as good or better than the blog spam. There are other topics where the AI summary is garbage, and you need to read its sources. There are other topics where the google / duck / kagi search results aren't all that useful any way (let alone the AI summary of them) and you need to know where to look.
jahrichie•3h ago
thats huge! whisper is my goto and crushes transcription. I really like whisper.cpp as it runs even faster for anyone looking for standalone whisper
luckys•3h ago
This might be the one of the best website designs I've ever experienced.

Agree with the content of the post but no idea how is it even possible to enforce it. The data is out there and it is doubtful that laws will be passed to protect content from use by LLMs. Is there even a license that could be placed on a website barring machines from reading it? And if yes would it be enforceable in court?

tux1968•3h ago
What about screen readers and other accessibility technologies? Are they allowed to access the site and translate it for a human? Disabled people may suffer from anti-AI techniques.
johnpaulkiser•3h ago
I'm building a sort of "neocities" like thing for LLMs and humans alike. It uses git-like content addressability so forking and remix a website is trivial. Although i haven't built those frontend features yet. You can currently only create a detached commit. You can use without an account (we'll see if i regret this) by just uploading the files & clicking publish.

https://cozy.space

Even chatgpt can publish a webpage! Select agent mode and paste in a prompt like this:

"Create a linktree style single static index.html webpage for "Elon Musk", then use the browser & go to https://cozy.space and upload the site, click publish by itself, proceed to view the unclaim website and return the full URL"

Edit: here is what chatgpt one shotted with the above prompt https://893af5fa.cozy.space/

vasusen•3h ago
I love this website.

It doesn't have to be all or nothing. Some AI tools can be genuinely helpful. I ran a browser automation QA bot that I am building on this website and it found the following link is broken:

"Every Layout - loads of excellent layout primitives, and not a breakpoint in sight."

In this case, the AI is taking action on my local browser at my instance. I don't think we have a great category for this type of user-agent

teleforce•3h ago
>This website is for humans, and LLMs are not welcome here.

Ultimately LLM is for human, unless you watched too much Terminator movies on repeat and took them to your heart.

Joking aside, there is next gen web standards initiative namely BRAID that will make web to be more human and machine friendly with a synchronous web of state [1],[2].

[1] A Synchronous Web of State:

https://braid.org/meeting-107

[2] Most RESTful APIs aren't really RESTful (564 comments):

https://news.ycombinator.com/item?id=44507076

coffeecat•3h ago
"80% as good as the real thing, at 20% of the cost" has always been a defining characteristic of progress.

I think the key insight is that only a small fraction of people who read recipes online actually care which particular version of the recipe they're getting. Most people just want to see a working recipe as quickly as possible. What they want is a meal - the recipe is just an intermediate step toward what they really care about.

There are still people who make fine wood furniture by hand. But most people just want a table or a chair - they couldn't care less about the species of wood or the type of joint used - and particle board is 80% as good as wood at a fraction of the cost! most people couldn't even tell the difference. Generative AI is to real writing as particle board is to wood.

stuartjohnson12•3h ago
> Generative AI is to real writing as particle board is to wood.

Incredible analogy. Saving this one to my brain's rhetorical archives.

jayd16•3h ago
Sure it's awful but look how much you get.
ggoo•3h ago
Particle board:

- degrades faster, necessitating replacement

- makes the average quality of all wood furniture notably worse

- arguably made the cost of real wood furniture more expensive, since fewer people can make a living off it.

Not to say the tradeoffs are or are not worth it, but "80% of the real thing" does not exist in a vacuum, it kinda lowers the quality on the whole imo.

andrewla•1h ago
> it kinda lowers the quality

That's why it's "80% of the real thing" and not "100% of the real thing".

doug_durham•25m ago
Who said anything about particle board. There is factory created furniture that uses long lasting high quality wood. It will last generations and is still less expensive than handcrafted furniture.
boogieknite•3h ago
ive been having a difficult time putting this into words but i find anti-ai sentiment much more interesting than pro-ai

almost every pro-ai converation ive been a part of feels like a waste of time and makes me think wed be better off reading sci fi books on the subject

every anti-ai conversation, even if i disagree, is much more interesting and feels more meaningful, thoughtful, and earnest. its difficult to describe but maybe its the passion of anti-ai vs the boring speculation of pro-ai

im expecting and hoping to see new punk come from anti-ai. im sure its already formed and significant, but im out of the loop

personally: i use ai for work and personal projects. im not anti-ai. but i think my opinion is incredibly dull

johnfn•4m ago
I couldn't disagree more. Every anti-AI argument I read has the same tired elements - that AI produces slop (is it?) that is soulless (really?). That the human element is lost (are you sure?). As most arguments of the form "hey everyone else, stop being excited about something" typically go, I find these to be dispassionate -- not passionate. What is there to get excited about when your true goal is to quash everyone else's excitement?

Whereas I feel all pro-AI arguments are finding some new and exciting use case for AI. Novelty and exploration tend to be exciting, passion-inducing topics.

At least that's my experience.

larodi•3h ago
McDonalds exists and is more or less synthetic food. But we still cook at home, and also want food to be cooked by humans. Even if food gets to be 3D-printed, some people will cook. Likewise people still write, and draw paintings. So these two phenomena are bound to coexist, perhaps we don't yet know how.
logicprog•3h ago
I think the fundamental problem here is that there are two uses for the internet: as a source for on-demand information to learn a specific thing or solve a specific problem, and as a sort of proto-social network, to build human connections. For most people looking things up on the internet, the primary purpose is the former, whereas for most people posting things to the internet, the primary purpose is more the latter. With traditional search, there was an integration of the two desires because people who wanted information had to go directly to sources of information that were oriented towards human connection and then could be enramped onto the human connection part maybe. But it was also frustrating for that same reason, from the perspective of people that just wanted information — a lot of the time the information you were trying to gather was buried in stuff that focused too much on the personal, on the context and storytelling, when that wasn't wanted, or wasn't quite what you were looking for and so you had to read several sources and synthesize them together. The introduction of AI has sort of totally split those two worlds. Now people who just want straight to the point information targeted at specifically what they want will use an AI with web search or something enabled. Whereas people that want to make connections will use RSS, explore other pages on blogs, and us marginalia and wiby to find blogs in the first place. I'm not even really sure that this separation is necessarily ultimately a bad thing since one would hope that the long-term effect of it would be it to filter the users that show up on your blog down to those who are actually looking for precisely what you're looking for.
xenodium•3h ago
> I write the content on this website for people, not robots. I’m sharing my opinions and experiences so that you might identify with them and learn from them. I’m writing about things I care about because I like sharing and I like teaching.

Hits home for me. I tried hard to free my blog (https://xenodium.com) of any of the yucky things I try avoid in the modern web (tracking, paywalls, ads, bloat, redundant js, etc). You can even read from lynx if that's your cup of tea.

ps. If you'd like a blog like mine, I also offer it as a service https://LMNO.lol (custom domains welcome).

stevenking86•3h ago
Yeah, I guess sometimes I just want to know how long to cook the chicken. I don't want a bespoke recipe with soul and feeling. I'm going to add ingredients that my family likes. I just want to remember how long it generally takes to cook a specific something-or-other.
superllm•3h ago
awd
superllm•3h ago
sfesef
jsphweid•3h ago
> "Generative AI is a blender chewing up other people’s hard work, outputting a sad mush that kind of resembles what you’re looking for, but without any of the credibility or soul. Magic."

Humans have soul and magic and AI doesn't? Citation needed. I can't stand language like this; it isn't compelling.

lpribis•2h ago
I think the "soul" is coming from the fact that a human has worked, experimented, and tested with their physical senses a specific recipe until it tastes good. There is physical feedback involved. This is something an LLM cannot do. The LLM "recipe" is a statistical amalgamation of every ramen recipe in the training set.
jsphweid•2h ago
Or they just wrote down what their grandma used to do and changed how much salt they put in the water.

Or they read a few recipes and made their own statistical amalgamation and said "hey this seems to work" on the first try.

Or they're just making stuff up or scraping it and putting it on a website for ad money.

"Soul" not required.

Also does an LLM give the same recipe every time you ask? I'd wager you could change the context and get something a little more specialized.

jjk7•7m ago
You don't see a difference between doing and tweaking what your grandmother did and an AI statistically inferring a recipe?

How is building upon your ancestors knowledge and sharing that with the world not 'soul'?

root_axis•3h ago
> Well, I want you to visit my website. I want you to read an article from a search result, and then discover the other things I’ve written, the other people I link to, and explore the weird themes I’ve got.

An AI will do all that and present back to the user what is deemed relevant. In this scenario, the AI reading the site is the user's preferred client instead of a browser. I'm not saying this is an ideal vision of the future, but it seems inevitable.

There's more information added to the internet every day than any single person could consume in an entire lifetime, and the rate of new information created is accelerating. Someone's blog is just a molecule in an ever expanding ocean that AI will ply by necessity.

You will be assimilated. Your uniqueness will be added to the collective. Resistance is futile.

ccozan•3h ago
This has to go more radical: go offline in print. Make your content really just for humans. Except maybe Google, no LLM company would bother scanning some magazines ( especially if you have to subscribe )

I buy magazines especially for unique content, not found anywhere else.

progval•2h ago
Facebook trained on LibGen, which is made of printed books.
Cheetah26•2h ago
I actually think that llms could be good for human-focused websites.

When the average user is only going to AI for their information, it frees the rest of the web from worrying about SSO, advertisements, etc. The only people writing websites will be those who truly want to create a website (such as the author, based on the clear effort put into this site), and not those with alternate incentives (namely making money from page views).

1317•2h ago
if you want people to be able to look through all your content then it would help to not have to page through it 4 items at a time
mpyne•2h ago
I love the vibe, this is the Web I grew up with. Not sure I agree that I want my human readers to be forced to read my Web sites with their own eyes though.

I feel like this omakase vs. a la carte and "user agent" vs "author intent" keeps coming up over and over though. AI/LLM is just another battle in that long-running war.

tolerance•2h ago
I don’t think we are at a point in time where using the Web to augment or substitute for offline human interactions for the sake of “feels” is useful.

This website is for humans.

So what and what for?

inanutshellus•2h ago
This guy's website is missing the requisite twenty-plus advertisements, and auto-play videos and overlays (and AI-generated content) that I've become accustomed to from niche websites.

It's so prevalent and horrible that going to real websites is painful now.

... from a user perspective, ironically, the answer seems to be "talk to an AI to avoid AI generated junk content".

greenflag•1h ago
Beside the point but I really love the rainbow sparkles trailing the cursor on the netscape theme of this blog. Takes me back to a time when the internet was...fun
nicbou•1h ago
As someone who is currently threatened by the Google Zero, thank you.

This applies to recipes, but also to everything else that requires humans to experience life and feel things. Someone needs to find the best cafes in Berlin and document their fix for a 2007 Renault Kangoo fuel pump. Someone needs to try the gadget and feel the carefully designed clicking of the volume wheel. Someone has to get their heart broken in a specific way and someone has to write some kind words for them. Someone has to be disappointed in the customer service and warn others who come after them.

If you destroy the economics of sharing with other people, of getting reader mail and building communities of practice, you will kill all the things that made the internet great, and the livelihoods of those who built them.

And that is a damn shame.

beanjuiceII•28m ago
grok summarize this post
doug_durham•28m ago
It totally disagree with the comments on human generated recipes. There are only so many ways to make particular dishes. Most human generated recipes are timid variations on a theme. With an LLM I can make truly novel delicious recipes that break out of the same old pattern. The author attributes much more creativity in recipe creation than there actually is.