
Are We Becoming Distilled Versions of AI?

2•3chinproblem•1h ago
I’ve been thinking about a possibility that seems right to me but that I don’t see discussed directly. As people use AI for more decisions, our cognition may start to shift through normal learning processes. The brain absorbs repeated patterns. If AI becomes part of everyday decision-making, some of its reasoning habits may get reflected in ours. This would be a kind of “cognitive distillation,” similar to how small AI models learn from large ones.

Most AI use today involves medium decisions: planning a trip, organizing a project, or writing an algorithm. These carry low emotional pressure and low friction, so it’s easy to ask an AI for help. Small and large decisions, by contrast, are not yet widely influenced.

Small decisions are things like where to put an item, which door to use at a gas station (an AI that can see the broken-door sign you missed), or the order of miscellaneous tasks. We make thousands of these each day without thinking. AI doesn’t influence these yet because the interface friction is too high. It’s not convenient to open a device for choices that happen in seconds.

Large decisions are major life choices: lying to get out of a family event, complex interpersonal situations (which even psychology professionals struggle to influence), or who inherits a sentimental item desired by multiple family members. People ask AI about these already, but the barrier isn’t the interface. It’s that these choices have deep personal weight and are heavily influenced by emotion.

Right now AI lives in the middle, but both edges are shifting.

On the small-decision side, friction is dropping fast. Glasses, earbuds, smart environments, and real-time overlays will bring AI into the same sensory space we use. Instead of being something you consult, AI will simply be present and able to offer a suggestion at the moment a decision happens. That doesn’t require control. Even small cues can shape many tiny choices per day. These small decisions matter because they are frequent and form habits.

On the large-decision side, AI systems are becoming better at recognizing behavioral patterns and presenting structured analysis. And as people interact with them more often, they may feel a kind of narrative familiarity with the system, similar to how characters in books become mentally “predictable.” Over time this could give AI regular influence over complex situations without needing emotional depth.

Once AI informs both rapid small decisions and major long-term ones, it stops being a tool used only for specific tasks and becomes part of the whole decision-making pipeline.

This returns to the idea of distillation. In machine learning, a small model can learn from a large one by observing its outputs. The small model ends up with a compressed version of the large model’s behavior.
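For readers unfamiliar with the ML term, the mechanism can be sketched in a few lines. This is a minimal NumPy illustration of the standard soft-label distillation idea (the student is trained to match the teacher's softened output distribution), not any particular system's implementation; the logit values are made up for demonstration:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softened probability distribution; a higher temperature exposes
    more of the teacher's relative preferences between classes."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.
    Minimizing this trains the student to compress the teacher's behavior."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * np.log(p / q)))

# The closer the student's outputs track the teacher's, the lower the loss.
teacher = np.array([4.0, 1.0, 0.5])
aligned = distillation_loss(teacher, np.array([3.9, 1.1, 0.4]))
diverged = distillation_loss(teacher, np.array([0.5, 4.0, 1.0]))
assert aligned < diverged
```

The analogy in the post maps the teacher to the AI system, the student to the human, and "observing outputs" to everyday exposure to the AI's suggestions.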

Humans learn similarly. Repeated exposure leads to internal shortcuts. When you interact with AI regularly, you start to pick up its patterns, and eventually you structure your own thoughts in similar ways without intending to, much as we absorb writing styles, heuristics, or professional habits simply by frequent exposure.

If AI becomes heavily involved in daily decisions, especially rapid ones, it becomes a dense pattern source. Over time this could shift how people naturally break down problems or frame choices. It doesn’t require AI to be humanlike, only consistent.

If large numbers of people rely on the same families of AI systems over long periods, their thinking may converge in certain ways, and given enough time fully interfaced, the changes could be dramatic. The effect may be most pronounced with early exposure. As this distillation begins, you may find yourself wondering whether a given thought is entirely your own. And what does it even mean for a thought to be mine when my own neural pathways are a ChatGPT distillation?

I’m posting this because I’m curious whether you find this framing reasonable and if there’s existing research along these lines.

Comments

anonymouskimmer•35m ago
Yes, I think it's reasonable. We humans adjust to our environments, whether physical, social, or informational.

Spoiler for DC's Legends of Tomorrow season 5.

I don't know enough to look for existing research, but what you wrote reminded me of a DC's Legends of Tomorrow episode (Swan Thong). In it the three fates of Greek mythology have established effective control over the world through a smartwatch app that people ask for decisions from (earlier they had tried direct totalitarianism, but the Legends had foiled that). https://youtu.be/aJZlJcmPUnc?t=75

In the episode, people adjust mentally somewhat, but I don't think it gets quite to the detail you ask about.

The Outer Limits episode Stream of Consciousness also deals with this topic a bit: https://theouterlimits.fandom.com/wiki/Stream_of_Consciousne...

And I just participated in a conversation here on HN somewhat along those lines: https://news.ycombinator.com/item?id=46070610

3chinproblem•8m ago
Interesting. It does seem like technology, even apart from AI, was already standardizing communication in some ways. I imagine that real universal languages may just naturally emerge.

Apply this idea of distillation to language. You are speaking with someone and neither of you speaks the other's language; the AI is translating. With enough exposure to this, you might start picking up some of their words, and vice versa.

Over enough time, words from different languages will begin to merge into a mix. Take this far enough and we might all speak the same hybrid language.

Air pollution may reduce health benefits of exercise

https://medicalxpress.com/news/2025-11-air-pollution-health-benefits.html
1•fuzzythinker•1m ago•0 comments

Show HN: Swatchify – CLI to get a color palette from an image

https://james-see.github.io/swatchify/
1•jamescampbell•2m ago•0 comments

One-fifth of the jobs at your company could disappear as AI automation takes off

https://www.theregister.com/2025/11/27/ai_employee_overcapacity_report/
1•pjmlp•5m ago•0 comments

slbounce: DRTM Secure-Launch implementation for Qualcomm devices

https://github.com/TravMurav/slbounce
1•transpute•5m ago•0 comments

How a Data Model Dependency Nearly Derailed My Project

https://medium.com/@HobokenDays/the-fate-of-shared-data-model-cf8a3dc88ac9
1•steven-123•8m ago•0 comments

Show HN: Codex Swarm – Local ChatGPT swarm for coding with Git-tracked agents

https://github.com/basilisk-labs/codex-swarm
1•densmirnov•12m ago•0 comments

Show HN: I made a shell with AI suggestions – Caroushell

https://github.com/ubershmekel/caroushell
2•ubershmekel•14m ago•0 comments

Brendan Gregg on being copied as an 'AI Brendan'

https://www.brendangregg.com/blog//2025-11-28/ai-virtual-brendans.html
2•anitil•15m ago•1 comments

Show HN: Ray-BANNED, Glasses to detect smart-glasses that have cameras

https://github.com/NullPxl/banrays
1•nullpxl•18m ago•0 comments

Git-reabsorb: Reorganize Git commits with new structure using an LLM

https://github.com/AllyMarthaJ/git-reabsorb
1•benno128•18m ago•0 comments

Mission Critical Advanced Scheduling (ALAP/ASAP) System

https://github.com/rodmena-limited/scriptplan
1•rodmena•20m ago•0 comments

The China That the World Sees Is Not the One I Live In

https://www.nytimes.com/2025/11/13/opinion/china-politics-social-public-mood.html
1•kaycebasques•21m ago•1 comments

Generator Website

1•generatorsite•23m ago•0 comments

CSS inspiration is on the rise. These are awesome tbh

https://twitter.com/BalintFerenczy/status/1946198804694245486
1•iamA_Austin•23m ago•0 comments

What's cooking on Sourcehut? Q4 2025

https://sourcehut.org/blog/2025-11-20-whats-cooking-q4-2025/
1•Kerrick•29m ago•0 comments

Around 500M PCs are holding off upgrading to Windows 11, says Dell

https://www.theverge.com/news/831364/dell-windows-11-upgrade-numbers-earnings-call-q3-2025
4•Fiveplus•31m ago•1 comments

More than 93% discount for Free Software on Black Friday;)

https://mastodon.social/@fsfe/115625326159147544
2•kirschner•34m ago•0 comments

The tech-debt death spiral

https://lindbakk.com/blog/the-tech-debt-death-spiral
3•Seb-C•34m ago•0 comments

Awesome Version Managers

https://github.com/bernardoduarte/awesome-version-managers
1•saikatsg•44m ago•0 comments

How to use Linux vsock for fast VM communication

https://popovicu.com/posts/how-to-use-linux-vsock-for-fast-vm-communication/
1•mfrw•51m ago•0 comments

Black Friday Deals for Developers and Tech Teams

https://github.com/Pimjo/black-friday-deals
1•vinishbhaskar•1h ago•1 comments

WhisperThunder – A New Fast, High-Quality Text-to-Video Model

https://www.whisperthunder.top/
2•RyanMu•1h ago•1 comments

Show HN: AI Agents for Customer Support

https://www.sparrowdesk.com/ref=hn
1•jgm22•1h ago•0 comments

Ask HN: As CTO, do you pick JavaScript/TS as the default stack?

2•sawirricardo•1h ago•5 comments

World War AI

https://www.epsilontheory.com/world-war-ai/
3•koolhead17•1h ago•1 comments


Show HN: I Am Building an Intuitive Database GUI for ClickHouse and Postgres

https://www.datacia.app
3•rwiteshbera•1h ago•1 comments

Lot Is Back

https://lot-systems.com
1•vadikmarmeladov•1h ago•0 comments

Billiard Fractals: The Infinite Patterns Hidden in a Rectangle

https://xcontcom.github.io/billiard-fractals/docs/article.html
3•grandpanda•1h ago•2 comments

TigerStyle: Coding philosophy focused on safety, performance, dev experience

https://tigerstyle.dev/
17•nateb2022•1h ago•3 comments