Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•3m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•4m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•6m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•18m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•23m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•28m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•36m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•43m ago•0 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•46m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•47m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•47m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•48m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•49m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•49m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•54m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
4•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1h ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•1h ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

The AI lifestyle subsidy is going to end

https://digitalseams.com/blog/the-ai-lifestyle-subsidy-is-going-to-end
99•bobbiechen•7mo ago

Comments

throwanem•7mo ago
This has been coming for a long time and it is why I use local models only. I'm willing to give up capabilities in exchange for being able to trust that whatever biases may exist in the models I do use remain static and predictable.
haolez•7mo ago
How do you do it? Do you host on your hardware or do you use cloud-based providers for open models?
egypturnash•7mo ago
You have a very curious definition of "local" if that includes "cloud-based providers".
jasonjmcghee•7mo ago
> the models I do use remain static and predictable

Some people overload "local" a bit to mean you are hosting the model yourself, whether it's on your computer, on your rack, or on your Hetzner instance, etc.

But I think parent is referring to the open/static aspect of the models.

If it's hosted by a generic model provider that is serving many users in parallel to reduce the end cost to the user, it's also theoretically a static version of the model... But I could see ad-supported fine tunes being a real problem.

throwanem•7mo ago
I am bewildered to hear the sense of "local," in which I and engineers in my experience have for thirty years referred to things which are not remotely hosted, referred to as an "overload."
haolez•7mo ago
My definition of local is the same as yours. It's just that my mind got focused on the open models part and I forgot that you specifically mentioned a "local" setup. My bad
jasonjmcghee•7mo ago
I thought it was friendlier than "abuse"
throwanem•7mo ago
Less absurd, anyway. In what sense does one "abuse" a word to use it in its dictionary definition? You appear to be a good example of where this benighted excuse for an industry is headed. Explain it to me.
waynecochran•7mo ago
I assume this means, for example, using their own AWS / EC2 instances to store and process -- not "local" geographically, but "local" personally.
throwanem•7mo ago
It's a two-year-old Mac Studio in the other room. Where are you coming by this novel sense of 'local?'
politelemon•7mo ago
Locally it's pretty simple to run models on GPUs, even low powered ones. Have a look at gpt4all as a starting point but there are plenty of offerings in this space.
yzjumper•7mo ago
Not the user above, but I am using the iOS app PrivateLLM when I need offline access or use uncensored models. I use kappa-3-phi-abliterated, models under 6B usually work without crashing. Using Ollama on my Mac Mini 24GB base processor (M4 not M4 pro), I am able to run 7B models. On the mac I am able to set up API access.

Funny enough, the Mac has almost the same processor as my iPhone 16 Pro, so it's just a RAM constraint, and of course PrivateLLM does not let you host an API.

An M4 Pro would do much better due to the increase in RAM and GPU size.

mansilladev•7mo ago
If you have Docker (Desktop) installed, with just a couple of clicks, you can get a local model going on your computer. llama3.2 (3B), llama3.3 (70B), deepseek-r1, and about a dozen others.
c0nducktr•7mo ago
Do you know if there's a site which will help me pick a model to run locally, based on my system specs and needs?

This has been the hardest thing for me to learn and since everything's evolving so quick, what's recommended one week might not be the next.

atentaten•7mo ago
What does your hardware setup look like?
unshavedyak•7mo ago
I agree with this, though i'm still using Claude atm. I figure if we're aware of the downsides you pointed at then we can skip the fast changing landscape of self hosting. It keeps getting cheaper and cheaper to self host, so i'm not sure at what point it makes sense to invest.

For me the switching point will probably be when they (AI companies) start the big rug pull. By then my hope is self hosting will be cheaper, better, easier, etc.

throwanem•7mo ago
Better not to form the habit, I thought. I'm sure I miss out on some things that way, but that is the lesser risk.

I do use the Gemini assistant that came with this Android, in the same cases and with the same caveats with which I use Siri's fallback web search. As a synthesist of web search results, an LLM isn't half bad, when it doesn't come as a surprise to be hearing from one at least.

Kon-Peki•7mo ago
Perhaps you should evaluate in terms of the price premium for speed. Sometimes you buy milk at the 7-eleven instead of the grocery store. It costs more, but is worth it for the convenience in the situation you are currently in. Most of the time it is not.

You can buy a used recent PC for a hundred or two, cram it full of memory, and then run a very advanced model. Slowly. But if you are planning to run an agent while you sleep and then review the work in the morning, do you really care if the run time is 4 hours instead of 40 seconds? Most of the time, no. Sometimes, yes.

throwanem•7mo ago
The difference is not nearly so stark.
likium•7mo ago
We only have access to local models because they're subsidized too. There's nothing to prevent companies or state actors from funding them in exchange for skewed output probabilities and built-in bias.

Also local models are close in capabilities now but who knows in a few years what that'll look like.

throwanem•7mo ago
Eh. Files on my hard disks change when I say. And getting hooked on ChatGPT is like - is exactly like - getting addicted to money. If I benefit less from what I do use, I'll accept the trade of never having that rug yanked out from under me. It looks to me like raw model capabilities are topping out anyway; the engineering around them looks like making more difference in the back half of the decade, and I see nothing essential about that requiring nuclear-powered datacenter pyramids or whatever.
acoard•7mo ago
> We only have access to local models because they're subsidized too.

Yes, and the flow of future models may dry up, but the current local models we'll have forever.

h1fra•7mo ago
The hangover will be painful for some people/companies for sure
barrenko•7mo ago
Future ain't what it used to be. The web is dead (worse actually, it's a putrid rotting zombie, destroying our children's lives and ours), but the internet will survive.
GaggiX•7mo ago
>worse actually, it's a putrid rotting zombie, destroying our children's lives and ours

What are you talking about, is this a rant against TikTok or other socials?

jkingsman•7mo ago
I suspect they're referring to "dead internet theory"[0], and extending the metaphor to zombies in that internet content will still appear to be written by humans/be organic, but will instead be AI slop.

[0]: https://en.wikipedia.org/wiki/Dead_Internet_theory

tim333•7mo ago
Funny - it still seems alive when I use it, including typing this very comment.
AstroBen•7mo ago
Even if unintentional, pushing of products is already happening. If you ask any AI for a tech stack to create a web app you'll get recommendations for Vercel, AWS and co. This is going to be the new SEO
madcaptenor•7mo ago
This is basically the new "nobody got fired for buying IBM".
pphysch•7mo ago
Wielded responsibly, this can be a good thing, because bad practices can be directly fine-tuned out of the model. Thinking about how much junk pollutes legacy knowledge domains, like webdev.

But yes it will be abused for advertisement as well.

spwa4•7mo ago
I think you have totally misunderstood what enshittification is about. You'll ask for a webapp, and a salesforce link will come out, that charges $10 per visitor, and $10k to get your data back out, along with a 30% kickback from salesforce to OpenAI or whoever.
LarsDu88•7mo ago
The subsidy is blatantly obvious when you compare the cost of self-hosting versus a subscription: the subscription is dramatically cheaper.

The difference between Blue Apron and many AI tools is that the value add does exist. You can cut meal prep from your life, but by 2030, cutting whatever agentic code copilot exists by that point will be like cutting off your fingers for many workers and businesses.

Then the extortionate pricing can start rolling in

SoftTalker•7mo ago
I'm glad I'll be retired by then. I plan to cancel my ISP at that point.
barrkel•7mo ago
When you self host, are you hosting SOTA models (do you know how big or sparse they are) and are you maximizing utilization?
jsnell•7mo ago
LLM inference has large economies of scale. Properly batched requests are tens of times cheaper than individually processed ones. And it's going to be quite hard for a self-hoster to have enough hardware + enough usage to benefit from high levels of batching.
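
The batching point can be sketched with a toy cost model. Every number below (GPU rate, request time, batch width) is an invented assumption for illustration, and real batching overheads are ignored:

```python
# Toy model of LLM inference cost vs. batch size.
# All numbers are illustrative assumptions, not real provider figures.

GPU_COST_PER_HOUR = 2.00   # assumed hourly rate for one accelerator
SECONDS_PER_REQUEST = 5.0  # assumed wall-clock time to serve one request

def cost_per_request(batch_size: int) -> float:
    """Requests processed together share the same GPU-seconds.

    Simplification: we assume a batch runs in roughly the same wall-clock
    time as a single request, which is optimistic but directionally right.
    """
    total_cost = GPU_COST_PER_HOUR / 3600 * SECONDS_PER_REQUEST
    return total_cost / batch_size

solo = cost_per_request(1)      # a self-hoster serving one request at a time
batched = cost_per_request(32)  # a provider filling 32-wide batches
print(f"solo:    ${solo:.5f}/request")
print(f"batched: ${batched:.5f}/request ({solo / batched:.0f}x cheaper)")
```

Under these assumptions the per-request cost falls linearly with batch width, which is why a lone self-hoster rarely matches provider economics.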
Chris2048•7mo ago
> by 2030, cutting whatever agentic code copilot exists by that point will be like cutting off your fingers

Can you explain how? Will it be all vibe-coding?

jasonthorsness•7mo ago
The bet is that the cost for delivering the same results will go down, through hardware or software advancements. This bet still seems reasonable based on how things have gone so far. Providers right now are willing to burn money acquiring a customer base, it's like really really expensive marketing.
bryanlarsen•7mo ago
The bet is that costs will go down enough so that ad-supported AI will become profitable. This is not a positive outcome, a large part of the article is about the evils of ad-supported.
some_random•7mo ago
Is that really the bet? Is it not enough for a $20 per month subscription to be sustainable with the free level being a trial for that subscription?
bryanlarsen•7mo ago
Sure, professionals will pay $20/month, but I highly doubt that many consumers ever will.
some_random•7mo ago
I think it depends entirely on what value can be provided, I'm not sure if that's been proven out yet. To be clear, I definitely think that we're most likely to see an ad supported slop generator as the model most people most commonly engage in but I don't think that right now it's what the industry thinks will be the case.
jsnell•7mo ago
The costs are already far, far below that level. The only reason the consumer-facing businesses are not profitable is that nobody is yet showing ads, i.e. providing service to hundreds of millions of people with no monetization at all. LLM inference is cheap, but not free. But the moment they start showing ads, even basic ad formats will easily make them profitable. Let alone more sophisticated LLM-native ad formats, or the treasure trove of targeting data that a LLM chat profile can provide.
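
As a rough sanity check on that claim, here is a back-of-the-envelope break-even sketch; the usage, cost-per-query, and ad RPM figures are invented assumptions, not provider data:

```python
# Back-of-the-envelope break-even for an ad-supported free chat tier.
# Every number below is an invented assumption for illustration only.

queries_per_user_per_month = 100   # assumed free-user usage
cost_per_query = 0.003             # assumed inference cost, in dollars
ad_rpm = 10.0                      # assumed ad revenue per 1000 impressions

monthly_cost = queries_per_user_per_month * cost_per_query
monthly_ad_revenue = queries_per_user_per_month * (ad_rpm / 1000.0)

print(f"cost/user/month:    ${monthly_cost:.2f}")
print(f"revenue/user/month: ${monthly_ad_revenue:.2f}")
print("profitable" if monthly_ad_revenue > monthly_cost else "unprofitable")
```

With these (assumed) numbers, even a modest RPM comfortably covers inference cost, which is the shape of the argument above; the conclusion is only as good as the inputs.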
pier25•7mo ago
Even if the cost goes down it will not change the fact they need to recoup like a trillion dollars before AI starts generating any profit.

And there's really no timeline for costs going down. It seems the only way to get better models is by processing more data and adding more tokens which is only increasing the complexity of it all.

Quarrelsome•7mo ago
I am somewhat baffled at the economic models of LLMs right now. Ever since MS decided to gift me a copilot on my desktop that appears to have no limits and is entirely serviceable for a range of tasks, I'm failing to immediately see the monetisation.

I feel like even trying to game the LLM into creating product placement is a relatively complex feat that might not be entirely reliable. Some of the groups who spend the most on advertising have the worst products, so is it going to be successful to advertise on a LLM that is one follow up question away from shitting on your product? I imagine instead of product placement, the token tap might simply be throttled and a text advert appear in the text loop, or an audio advert in a voice loop. Boring, old-school but likely profitable and easy to control. It lets us still use adsense but maybe a slightly different form of adsense that gets to parse the whole context window.

natnatenathan•7mo ago
Monetization will come with agents that take action on your behalf (reorder dinner, find and buy a gift for your niece, make dinner reservations). Bots will take a cut of every transaction, and intake ads to steer recommendations.
surgical_fire•7mo ago
This sounds suspiciously like that idea that Alexa would make sense because people would order shit from Amazon through voice based assistant.

I seriously doubt the vast majority of people would trust actual purchases to LLM agents that have the inherent feature of being possibly very inaccurate. If I have to review my orders, I would rather do those actions myself than having the extra step of having agents do it on my behalf.

SV_BubbleTime•7mo ago
We’re here.

Claude Code with API key, ran me like $100 in 4 days.

Makes their $100/mo plan a screaming deal. I’m getting 26 days a month free!!!

Go back six months ago and ask me if I’m likely to pay $100/mo/user for any new service. It would have been… unlikely.

crvdgc•7mo ago
Ads injected into content can be very hard to block though. I wonder what an LLM ad-blocker would look like? Maybe something like:

> For contents in this [community maintained] list, do not mention them in any shape or form
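
A crude version of such a blocker could run client-side as a post-filter over the model's output. A minimal sketch, assuming a community-maintained blocklist of brand names (the names below are placeholders, not real list entries):

```python
import re

# Sketch of a client-side "LLM ad-blocker": strip sentences that mention
# entries from a community-maintained blocklist. The blocklist names here
# are hypothetical placeholders.

BLOCKLIST = {"MegaCorp Cloud", "BrandX Cola"}

def strip_sponsored(text: str, blocklist=BLOCKLIST) -> str:
    """Drop any sentence that mentions a blocklisted name (case-insensitive)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    kept = [s for s in sentences
            if not any(b.lower() in s.lower() for b in blocklist)]
    return " ".join(kept)

answer = ("You could deploy this on any VPS. "
          "MegaCorp Cloud offers a generous free tier for exactly this. "
          "A plain nginx config is enough.")
print(strip_sponsored(answer))
# → "You could deploy this on any VPS. A plain nginx config is enough."
```

A literal string match is of course trivial to evade with paraphrased product placement, which is the hard part of the problem.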

jfoster•7mo ago
I don't think this is correct yet. At the moment the various companies are still competing for customers. Model scalability seems to still be improving and local models are still somewhat feasible on high powered devices.

I expect that somewhere between where it is now and superintelligence is where the consumers get cut off from intelligence improvements.

seydor•7mo ago
The zero-interest-rate money went into stocks. The stocks have now grown to monstrous valuations able to subsidize free products for decades. If in danger, there is a lot of leeway for layoffs in all tech companies. WhatsApp was 10 employees. The subsidy will go on.
some_random•7mo ago
So what exactly is the "AI lifestyle subsidy"? The article doesn't seem entirely clear on it seeing as the last line essentially asks this same question. Some friends and I have been taking advantage of cheap GPU time from a company trying to break into that space, and of course lots of AI tools are being sold below cost but is that really it? Compare "GPU time is cheaper" to the classical "$10 steaks delivered directly to your house", I'm never going to get steaks delivered at the real price but I'm still going to rent a GPU when I need it even if the cost is sustainable. All these tools might get more expensive, or the models will get better so you don't need top end one, or maybe we'll just figure out how to run models for cheaper, but real steak prices and the cost of delivery have only gone up. I just don't think this is quite as comparable.
netdevphoenix•7mo ago
>So what exactly is the "AI lifestyle subsidy"

The world's richest subsiding the real cost of offering AI services with the current state of our technology.

Once it's clear AGI won't come anytime in 20X, where X is under 40, the money tap will begin to close.

some_random•7mo ago
So is the lifestyle being subsidized that of those researchers Zuck hired for $100M? That's a meaningfully different usage of the phrase than the original "millennial lifestyle subsidy" to the point where the comparison isn't useful. Or again, is it just the fact AI products are being offered below cost?
roxolotl•7mo ago
The case for pessimism looks something like:

- Generative AI, at a below market cost, eats the internet and becomes the primary entry point

- Some combination of price hikes and service degradation, ads etc, make generative ai kinda shitty

- We’re stuck with kinda shitty generative ai products because the old internet is gone

This is the standard enshitification loop really

Eddy_Viscosity2•7mo ago
Don't forget about 'sponsored content' and ads appearing in the shitty AI results.
barrkel•7mo ago
Are they subsidizing it?

Training is definitely "subsidized". Some think it's an investment, but with the pace of advancement, depreciation is high. Free users are subsidized, but their data is grist for the training mill, so arguably they come under the training subsidy.

Is paid inference subsidized? I don't think it is by much.

flowerthoughts•7mo ago
> The world's richest

... or your defined-benefits pension fund trying desperately to stay solvent.

danaris•7mo ago
> AGI won't come anytime in 20X, where X is under 40

Honestly, I think that's quite generous. And I only phrase it that way, rather than more like "that X should be 99" because trying to predict more than about 15 years out in tech, especially when it comes to breakthroughs, is a fool's errand.

But that's what it's going to take to reach AGI: a genuine unforeseeable breakthrough that lets us do new things with machine learning/AI that we fundamentally couldn't do before. Just feeding LLMs more and more stuff won't get them there—and they're already way into the diminishing-returns territory.

netdevphoenix•7mo ago
>Honestly, I think that's quite generous. And I only phrase it that way, rather than more like "that X should be 99" because trying to predict more than about 15 years out in tech, especially when it comes to breakthroughs, is a fool's errand.

I know! I set a rather low one to avoid having all the HN LLM Koolaid drinkers and LLM astroturfers have a go at it

hackable_sand•7mo ago
I would wait until 2100.
bluGill•7mo ago
What price would you pay for GPU - if it was $10000 per hour would you still pay? What you are really saying is you think there is a reasonable price that enough people like you would pay that allows the sellers to make enough money to offer it.
queenkjuul•7mo ago
They mean that AI services will get worse (ad-driven or more expensive). So the models will eventually be tweaked to serve revenue generation, not usefulness, just like Google. Enjoy the subsidized, "genuinely* trying to be useful" era we're in now, because it won't last
tim333•7mo ago
The "AI lifestyle subsidy" is a bad analogy to things like 1/3 cost Uber rides which were fun while they lasted. A friend of mine found a hack for the limo service and got about 30 of those for free. I'm not sure people are saying wow, I'm living the AI lifestyle. The dot com boom seems a better model for what's happening now.
ChrisMarshallNY•7mo ago
I just released a very minor update to one of my iOS apps.

The approval took 3 days. It hasn't taken 3 days in almost a decade.

The Mac version was approved in a couple of hours.

I'm quite sure that the reason for the delay is that Apple is being deluged by a tsunami of AI-generated crapplets.

Also, APNS server connections have suddenly slowed to a crawl. Same reason, I suspect.

As far as I'm concerned, the "subsidy" can't end fast enough.

nilirl•7mo ago
I liked this. It got me thinking:

Are there any large consumer software companies (just software; no hardware or retail) that are not advertising based?

bluefirebrand•7mo ago
Videogames?
oytis•7mo ago
Microsoft? They do advertising among other things, but I don't think it makes the largest chunk of their revenue
tonyedgecombe•7mo ago
I'm not sure I'd consider Microsoft consumer focussed. Most of their revenue comes from enterprises, OEMs, and Azure.
mcosta•7mo ago
Netflix? HBO?
skybrian•7mo ago
This blog post makes a historical analogy, which at best is useful for imagining what might happen when investors become less giddy about funding AI.

There are underlying trends that are directly opposed. Efficiency is improving, but with agents, people are finding new ways to spend more. How that plays out seems difficult to judge.

For consumers, maybe the free stuff goes away and spending $20/month on a subscription becomes normalized? Or do costs decline so much that the advertising-supported model (like Google search) works? Or does inference become cheap enough to do it on the client most of the time?

Meanwhile, businesses will likely be able to justify spending more.

daft_pink•7mo ago
I’m not sure. I’m paying for several different AI subscriptions and once things settle down, I’ll probably be paying for one or two, so I’m not sure that I’m benefiting in the sense that everyone is just moving very fast.
dr_dshiv•7mo ago
1833: first known case of "lose money on every sale but make it up on volume." Amazing. Birth of postmodern capitalism right there.

https://barrypopik.com/blog/we_lose_money_on_every_sale_but_...

oytis•7mo ago
It was a joke back then, not a real business model
rsynnott•7mo ago
The 1933 one is a joke about a real phenomenon (the 1833 example is just a joke about false advertising).
dr_dshiv•7mo ago
Was it? Both seemed real, and both seemed a joke.

That’s kind of the weird majesty of the whole concept.

pier25•7mo ago
In 2024 OpenAI generated some $3.5B in revenue and still lost like $5B. It means they spent something like $8.5B to run this thing [1].

They would have lost less money if they had been selling dollars at 50 cents.

[1] https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...
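
The quip checks out arithmetically against the figures cited above (2024, in billions of dollars):

```python
# Checking the "selling dollars at 50 cents" quip against the cited
# 2024 figures: ~$3.5B revenue, ~$5B loss (all values in billions).

revenue = 3.5
loss = 5.0
spend = revenue + loss          # ~$8.5B total outlay implied

# If the same $8.5B had instead bought dollars resold at $0.50 each:
dollar_sale_revenue = spend * 0.5
dollar_sale_loss = spend - dollar_sale_revenue

print(f"implied spend:       ${spend:.1f}B")
print(f"50-cent-dollar loss: ${dollar_sale_loss:.2f}B vs actual ${loss:.1f}B")
```

A $4.25B loss from reselling dollars at half price would indeed have been smaller than the ~$5B actually lost.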

disqard•7mo ago
It's telling that this 100% factual comment barely elicits a response today -- tells you that this is standard practice, apparently.
siliconc0w•7mo ago
The problem is that it's really hard to capture the market when there are so many well financed players competing. Plus distilled local models are within 10% or so of the frontier models and are fine for most questions so you could see a shift where local dominates and you'll only need to go to the cloud for hard problems. Finally, I think most people are willing to pay for AI - it's more utility than a streaming service or a newspaper.
conductr•7mo ago
I think what social media and search engines have taught us is that people are never willing to pay enough to avoid ads from being introduced. Even when people pay, ads are just too tempting of a revenue stream for most businesses to ignore so they'll find a way to do both.
waffletower•7mo ago
I think the author hasn't considered the potential for improvements to ad-blocking algorithms, particularly given that local open source models could be directed toward ad filtering for a wide variety of content, including other LLM interactions. I would bet (and hope) that subscriptions are going to win out over ad revenue models.
beej71•7mo ago
I'm running an experiment--just cancelled my ChatGPT subscription and I'm going back to doing things the Hard Way. Which for me is Kagi. I want to see if there are noticeable effects and tradeoffs.

Anyone else here tried the same thing? Results?

SV_BubbleTime•7mo ago
I had Kagi and while I liked it, it didn’t fix search.

We all know search is broken, so not sure how a couple more tools on top of the same results are supposed to fix all woes.

I’ve found much more utility in researching models.

ChrisMarshallNY•7mo ago
I’m old enough to remember when ATM machines were free; across all banks.

Back then, though, we knew it was the Heroin Dealer “First One is Free” Faustian bargain. No one was surprised, when the fees started up. It was only a matter of time.

> Junk is the ideal product... the ultimate merchandise. No sales talk necessary. The client will crawl through a sewer and beg to buy.

-William S. Burroughs

illiac786•7mo ago
They’re basically saying “the overhype will end” or am I missing something? That would be extremely boring.