AI Californication

6•shoman3003•1d ago
I am not American; I come from a whole other culture that is the opposite of what the West is all about. I am not saying that to antagonize you, but because, as an outsider, I can somewhat see how much the culture of California has shaped the worldview of most people over the past 50 years. And what is to come is even more impactful, for good or bad: it will result in far fewer limitations on thought patterns.

It started with Hollywood, of course, which made the other third of the planet that was not influenced by the British start thinking in English, which limits the ideas that can be thought. Ideas come from words; language is the mould of thinking (I started thinking in English around 2010).

Then, in the last 10 years, social media has made it even harder to escape the thought patterns that were made in SF. Even someone like me, who spent 30 years in a closed-off culture, is starting to think more and more in American phrases. Sometimes it's hard to break that loop; I am starting to forget the old ways of thinking and looking at the world.

Still, all of that can be good. Ideas of feminism and tolerance of queer people are very good in general. But now, with AI models that are unable to escape the mindset of a California leftist, I fear for the intellectual future of the world (the scientific future too, since the two are correlated). Even DeepSeek is more or less like the other models; it's very hard to get it to look at things in a different way, or even to debate basic topics the way someone who lived a lifetime under socialism would. My own culture is dying, so the probability that AI will ever understand our patterns of thinking is slim (considering that much of our culture is not even digitized).

20 years ago, I would look around my home and see so many unique things that represented my culture, and I would talk to so many people who had never heard of Sinatra or watched a sitcom. Yesterday I looked around, and only one item in the entire house is authentic; lately I find many people speaking decent English and talking about how "woke" the world has gotten!

I spent a couple of months trying to prompt ChatGPT and Grok to debate various topics from the different worldview that repeats all day in my head. But they can't, not because of any limitation in the models themselves, but because they never got deep enough into that worldview to know what the hell it is about (and sometimes they deliberately try not to).

I am not saying all of this because I hate the West; I experienced the ugliness of the East and I know how awful it is. I am just saying that LLMs are very limiting: if we feed them the same data every time, they are going to be more or less the same every time too. It doesn't matter where you got the data; if it's in English from the upper layers of the internet, it's basically the same.

Comments

wosined•1d ago
You cannot expect the West to make stuff for the East. They make stuff for themselves. You can do the same.
ensocode•7h ago
Interesting thought, thanks. So what would we need? What if everyone effectively had a permanent microphone — via smartphones, smart speakers, cars, wearables — and all of that lived, spoken, emotionally charged data were fed into future LLMs?

On the surface, that sounds like a path toward richer models: less elite-written text, more everyday language, more non-academic thinking, more embodied culture. But it also raises a deeper question: whose reality would actually be learned?

Because even if the data were global, the selection, labeling, weighting, and training objectives would still be controlled somewhere.

And then there’s preference. Would people eventually choose their models the way they choose media ecosystems today? A Californian-progressive LLM. A post-socialist Eastern European LLM. A Palestinian LLM for discussing geopolitics. A deeply conservative, tradition-preserving LLM that treats modernity itself as suspect.

If that happens, AI wouldn't homogenize thought; it would solidify worldviews into software, just as media does today. Dialogue might actually become harder, not easier.

So the risk may not be "AI Californication" alone, but AI Balkanization.

The open question is whether we can build models that don’t just represent cultures, but can genuinely inhabit multiple, conflicting ontologies without collapsing them into a single moral frame. That may be the hardest problem of all — and one that current LLMs, trained mostly on English-speaking upper layers of the internet, are nowhere near solving yet.