
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
55•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•30 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•11 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
373•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
278•eljojo•16h ago•165 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
57•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
26•romes•4d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
16•jesperordrup•3h ago•10 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•64 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•124 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Language is primarily a tool for communication rather than thought (2024) [pdf]

https://gwern.net/doc/psychology/linguistics/2024-fedorenko.pdf
138•netfortius•2mo ago

Comments

netfortius•2mo ago
Excellent, comprehensive, extremely thorough work behind all this. Maturana would love it!
krackers•2mo ago
Doesn't Helen Keller provide a counterexample? She seemed to imply pretty strongly that before acquiring language she operated on stimulus and bodily perception rather than higher-level thought.
brianush1•2mo ago
One could make the argument that higher-level thought is not the same as awareness of higher-level thought; perhaps language only affords the latter.
yyyk•2mo ago
It's clear humans have several networks working together. Some mathematicians report that they 'see' the solution; these rely on a visual network*. Others report that they prefer to do math symbolically (relying on the language network?).

Perhaps there are also multiple human paths to higher-level thought, with Keller (who lost her sight) using the language facility while others don't have to.

* Given the contents of Box 1, the article authors seem unaware of the research on this? E.g.:

https://www.youcubed.org/resource/visual-mathematics/

https://www.hilarispublisher.com/open-access/seeing-as-under...

uoaei•2mo ago
She learned "language" later than most. The primary function for her was as communication with the outside world, not for cognition, which she was already doing from birth.
lunar-whitey•2mo ago
Keller's early experience of the world differed from typical in dimensions beyond language recognition.
BanditDefender•2mo ago
Those aren't mutually exclusive; stimulus and bodily perception enable higher-level thoughts about the physical world. Once I was driving a big cheap pickup with a heavy load on an interstate, and a rear tire blew out, causing the truck to sway violently. I operated entirely by feel plus my 3D mental model of a moving truck to discern what went wrong, where, and how to safely pull over. It was too fast and too difficult for any stupid words to get in the way.

I am glad humans are meaningfully smarter than chimps, and not merely more vocal. Helen Keller herself seemed to think that learning language finally helped her understand what this weird language thing was:

  I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!
It is not like she was constantly dehydrated because she didn't understand what water was. She realized even a somewhat open-ended concept like "water" could be given a name by virtue of being recognizable via stimulus and bodily perception. That in and of itself is quite a high-level thought!
Grimblewald•2mo ago
No, if I recall the section in her autobiography correctly, it was specifically being taught the concept of "I"/"me" that did it.

Up until that point, language was just an extension of what she already knew; it was the learning of being other that did the trick. Being blind and deaf would certainly make it hard to draw a distinction between the self and the world, and while language helped her get that concept under wraps, I don't think it's, strictly speaking, required. Just one of many avenues toward it.

notarobot123•2mo ago
But language is also the only way to communicate this. As far as I can tell my cat has a complex consciousness, but there is no way for me to tell if she has this capacity for introspection and self-reflexivity.

If there are avenues other than language, how would we know?

I think language is a medium that enables this kind of structured thought. Without it, I cannot imagine reaching this level of abstraction (understanding being a "self").

balamatom•2mo ago
As far as the cat herself is concerned, there is no reason to make that known, either. "Introspection" and "self-reflexivity" are notions, language items. Best used by a human for explaining to other humans why that human should be fed, you know?

What ontological difference does it make whether a being contains "introspection" and "self-reflexivity" but not "nuclear physics" or "interpretive dance"? It's still hungry with or without them. And what good is any of those to a cat, when "meow" fills the bowl just fine?

>If there are avenues other than language, how would we know?

Well, if you knew, you'd certainly know, tautology extremely intended.

You would just be unable to communicate it, because language would forbid it.

Not "not support it", you see, explicitly forbid it: it would not only be impossible for you to communicate it, you would be exposing yourself to danger by attempting to communicate it.

Because the arbitrary limitation of expressible complexity is what holds language in power. (Hint: if people keep responding to you in confusing ways, you may be doing extralinguistic cognition; keep it up!)

>I think language is a medium that enables this kind of structured thought. Without it, I cannot imagine reaching this level of abstraction (understanding being a "self").

Language does a bait and switch here: first it sets a normative upper bound on the efficiency of knowledge transfer, then points at the limitation and names it "knowledge".

That's stupid.

Example: "the Self", oh that pesky Self, what is its true nature o wise ones? It's just another fucking linguistic artifact, that's what it is; "self-referentiality" is like the least abstract thing there is. You just got a bunch of extra unrelated stuff tacked onto that. And of course, you have an obligation to mistake that stuff for some mysterious ineffable nature and/or for yourself: if you did not learn to perform these miscognitions, the apes would very quickly begin to deny you sustenance, shelter, and/or bodily integrity.

Sincerely, your cat

Grimblewald•2mo ago
Plus, I'd argue that without a concept of self there is no concept of territory, and cats are territorial.
balamatom•2mo ago
I don't know. I can be territorial just fine without all those concepty thingies. Hissss!
andai•2mo ago
When I was a kid a friend asked me, "Hey, you speak three languages. Which one do you think in?"

I was bemused, and thought... "people think in words?"

Apparently people with ADHD or autism can develop an inner voice later in life.

In my 20s, language colonized my brain. Took me years of meditation to get some peace and quiet back...

tarsinge•2mo ago
Meditation is interesting because it made me able to not only separate thoughts from words, but also consciousness from thoughts.

It's also consistent with our intuition that toddlers have consciousness and thoughts, and that other mammals have at least consciousness (and emotions), without language.

wobfan•2mo ago
I have never not thought in words. How does it work? Like, how can I, for example, think about plans or something if not in words?

I do meditate now and then, but sooner or later the constant stream of words will 100% set in again, usually during or immediately after meditation. These words, for example, tell me or debate whether I should go shower, go to the gym, do the dishes, or whatever, and in the end I'll decide based on that discussion and do it. It's weird how defined I am by this inner voice.

kranner•2mo ago
What about a-ha moments when you're solving a tricky problem? For me they come in a flash and I know I've solved the problem even before I've narrated the solution to myself.
CjHuber•2mo ago
For me such moments come in the form of knowing that I can verbalize it, but I have to verbalize it as quickly as possible, otherwise I might lose it.
j4coh•2mo ago
I tend to think in images without an internal dialog running. If I think about an upcoming trip, I will imagine a series of images related to the trip: possible places to go, or just generally the place. After a bit, a potential conclusion appears fully formed in my mind. If I think about a work problem, I might imagine the document, a coworker's face, or something like that while ruminating on it. Basically it feels like the subconscious is handling the details while the conscious self directs it overall.

Occasionally there is some snippet of a sentence I imagine, but it’s almost always cut off prior to finishing the sentence. If I imagine writing something, though, I’ll speak it to myself in my head.

Funnily enough, I'm a pretty weak mental visualiser too. I don't have aphantasia, but my mental images are very transparent and dark.

MangoToupe•2mo ago
Interesting. I do the same but would never refer to this as thinking. Probably something more like "visualizing" or "feeling".
j4coh•2mo ago
It works for coding or system architecture and things like that as well. For you, when you start thinking, does a narrative voice appear? Is it debating yourself?
MangoToupe•2mo ago
No, I have plenty of non-linguistic mental processes, I just tend to define thoughts as linguistic to distinguish them from the other mental processes.
heavymemory•2mo ago
I think primarily in structures, spaces, and transformations. Language tags along afterward.
tryfinally•2mo ago
I do have an inner monologue, but I do make many decisions non-verbally. I often visualize actions and their consequences, in the context of my internal state. When I’m thirsty I consider the drinks available nearby and imagine their taste. In the morning coffee feels most tempting, unless I’ve already had a few cups - in that case drinking more would leave me feeling worse, not better. After a workout, a glass of water is the most expedient way to quench the thirst. It is similar when I write a piece of code or design a graphic. I look at the code and consider various possible transformations and additions, and prefer ones that move me closer to my goal, or at least make any sort of improvement. It’s basically a weighing of imagined possible world-states (and self-states), not a discussion.

I struggle to imagine how people can find the time to consider all of these trivial choices verbally - in my case it all happens almost instantaneously and the whole process is easy to miss. I also don’t see what the monologue adds to the process - just skip this part and make the decision!

That said, I do use an inner voice when writing, preparing what to say to someone, etc. and I feel like I struggle with this way of thinking much more.

HPsquared•2mo ago
I had this for the longest time. Very imbalanced academic performance because I could get the answer and understood a lot of things, but had huge trouble with written work. That is, converting the thought process into a linear stream of words and sentences. I suppose it's like serialization of objects in memory.

Edit: maybe this is like the difference between a diffusion model and a "next token" model. I always feel a need to jump around and iteratively refine the whole picture at once. Hard to maintain focus.
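To stretch the serialization analogy with a rough Python sketch (an illustration only; the "idea" structure is made up, and this is obviously not a claim about how brains work): the in-memory object has no inherent linear order, and serializing it forces one, the way writing forces a thought into a linear stream of words.

  import json

  # An in-memory "thought": a structure of linked ideas with no inherent word order.
  idea = {
      "topic": "essay",
      "links": [
          {"topic": "main argument", "links": []},
          {"topic": "counterexample", "links": []},
      ],
  }

  # Serialization flattens the structure into one linear stream of characters.
  linear = json.dumps(idea)
  print(linear)

  # Deserializing rebuilds a structure, but only what the format could carry.
  assert json.loads(linear) == idea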

necovek•2mo ago
But taking a step back: this process of converting reasoning tied to experienced consequences into words that keep a relatively stable meaning and interpretation across generations is what is "academic".

Without that, one does not learn quickly what another human already thought and tried out in the past (2 hours or 2 years or 2 millennia ago, it does not matter), civilization never progresses to the point it has, and we reinvent the same things repeatedly ("look ma, I strapped a rock to a stick and now I can bash a lion's head in").

So really, if you struggle with this part of the process, you need to rely on somebody else who understands your "invention" as well as you do and can do a good job of putting it into words.

Really, this is what makes the academic process, well, academic.

necovek•2mo ago
The top-level comment tried to treat symbolic processing — verbal and non-verbal — as really being "thought", and other cognition/reasoning as not.

I believe many of the things you bring up still involve symbolic reasoning (e.g. how do you decide how much coffee is too much if you do not think in a representation like "I had N" or "too many"? How do you consider code transformations unless you think in terms of the structure you have and the one you want to get to?).

It's no surprise that one is good with one language and sucks at another, though: otherwise, we'd pick up new languages much faster, and we wouldn't struggle as much with different types of languages (both spoken — think tonal vs. not, or Hungarian vs. anything else ;) — and programming — think procedural vs. functional).

So spoken/written languages are one symbolic way to express our internal cognition, but even visual reasoning can be symbolic (think non-formal and formal flowcharts, graphs, diagrams... e.g. things like UML or algorithm boxes use precisely defined symbols, but they don't have to be that precise for the reasoning to be happening).

It is a question whether it is useful to make a distinction between all reasoning and that particular type of reasoning, and whether to reuse a common, related word ("thinking", "thought") or not.

nosianu•2mo ago
> I have never not thought in words.

You don't notice it, but that inner voice is only the surface. It is generated from what's going on deeper, and you may not notice how good it is at occupying your attention. Your "real" thoughts are deeper; processes then generate speech based on those deeper structures.

Communication through language is not a true representation of what you know. Externalizing what we know in words is a messy, iterative process, and we end up with people who share the same words but don't understand one another.

An instance of that is the often-used (at least on reddit) bell curve meme - https://i.imgur.com/cUOiP2d.jpeg

It is not that the person on the right has the same understanding as the one on the left; their understanding is far deeper, but they end up using the same words. The knowledge behind the words is hard to express; when you try, you will not truly convey your internal state. The words are iteratively and messily derived from exploring your inner state, with varying success.

For better or worse, language has people's attention. We end up with magical tales about "true names", where knowing an entity's true name gives you full control; or with magic invoked by speaking certain phrases, which the universe obliges; or with heated discussions about arbitrary definitions when it rarely matters, and when you really shouldn't argue, because once you reach the inevitably fuzzy edges of the actual concepts behind words you should just switch to other words and metaphors that put the subject you actually want to discuss in the middle instead of at the edge. In reality, our internal models and thinking are hidden in our not-that-well-understood (except in the minute details, of which we know a great deal) neural networks.

aquariusDue•2mo ago
Ah yes, language is the guise the rationally irrational wears. /s

I mostly agree with you, but I always find it a bit funny how we are the only beings that seem to be aware of their own (meta)cognition, yet I can't actually pop the hood, so to speak, to understand what actually goes on. It gets funnier when we generally can't agree on what goes on in our heads just by talking about it with each other. I don't suppose the fox thinks about why it entered the hen house after the meal, or what led it to such an act.

More to the point: even as I wrote this comment, I still can't tell if I engaged my inner monologue and wrote by dictation, as it were, or if I let my fingers do the thinking and read back what they wrote.

Discussions about the mind's eye and inner monologue and so on are always fun, but most of the time I never get that much out of them other than satisfying my curiosity.

As an aside, I remember reading somewhere that some speed-reading techniques involve not speaking in your mind the words you're reading (forgoing your inner monologue) and just internalizing their form and the associated meaning you already know, or something like that.

thaumasiotes•2mo ago
> I have never not thought in words. How does it work? Like, how can I, for example, think about plans or something if not in words?

This is just a mistake on your part. Your thoughts are already not in words.

alfiedotwtf•2mo ago
This feels like last year when I found out I have ADHD and aphantasia...

What do you mean "think in words"? Is it like a narrator, or a discussion like Herman's Head? Are you hearing these words all the time or only when making decisions?

teunispeters•2mo ago
I can summon up a voice if needed, but yeah, normally I'm not thinking in words. Aphantasia means I don't think in pictures either ;) What I think in is mostly patterns, connections, and flows.
JoelMcCracken•2mo ago
Ditto. I have a hard time thinking in pictures. When I do, there can only be one detailed part at a time, a very small area.

I don't really think in language either. To me, thought is much more a kind of abstract process.

roncesvalles•2mo ago
I still don't "buy" that some people don't have an inner voice. In my opinion it's either a misunderstanding of what it means to have an inner voice (it's not the schizophrenic "other person" voice), or people simply lying to appear quirky and special.

If people don't have an inner voice, it also must be the case the some people (these people?) don't have consciousness. It isn't obvious that consciousness is essential to fitness, especially of an inner voice isn't. Some people may be operating as automatons.

helpfulclippy•2mo ago
> If people don't have an inner voice, it also must be the case that some people (these people?) don't have consciousness.

Don’t see how you got to that.

roncesvalles•2mo ago
If something as (ostensibly) fundamental as an inner voice is "optional", chances are that consciousness is also optional.
helpfulclippy•2mo ago
The obvious error here is that an inner voice is not fundamental, and the fact that many people describe their consciousness in such different terms makes it much more likely that consciousness is just something that manifests in a variety of subjective experiences.
bolangi•2mo ago
Not sure how well this dovetails with the research presented in the article, but Grinder and Bandler's work -- which they named Neuro-Linguistic Programming (derived, I understand, from analyzing the brief-therapy and hypnotherapy techniques of Milton Erickson) -- postulated that people have dominant modes of thought: visual, auditory, and kinesthetic. They correlated these modes with eye movements they observed in subjects asked to recall certain events.

In my personal experience, my mind became much less busy as a result of several steps. One was abandoning the theory of mind -- in contrast to spiritual practices such as Zen and forms of Hinduism, where controlling the mind, preventing its misbehavior, or somehow getting rid of it is frequently described as a goal, the mind's activity being to blame for the loss of a person's ability to be present in the here and now.

As a teenager, I can remember trying to plan in advance what I would say to a person when faced with a situation of conflict, or maybe desire toward the opposite sex, doubting that language would reliably sprout from my feelings when facing a person whose facial reactions (and my dependence on their good will) pull me out of my mental, emotional, kinesthetic grounding.

As humans we use language; however, it seems possible to live in our experience. Some people who are alienated from their experience, or overwhelmed by others, seek refuge in language.

There is obviously a gap between research such as this, and how someone can make sense of their agency in life, finding their way forward when confronted with conflict, uncertainty, etc.

wobfan•2mo ago
I have no clue, have not read the PDF, and am naive and dumb on this topic. But my naive thought recently was how important language must be for our thought, or how it might even be our thought, based on how well LLMs work. Needless to say, I'm no expert on either topic. But given that LLMs work on nothing more than words and predictors, the fact that they almost feel like a real human makes me think that our thoughts are heavily influenced by language, massively defined by it, or even purely based on it.
wahnfrieden•2mo ago
It mimics the outputs of our thought. Good and useful mimicry doesn’t mean the mechanism must be the same
lll-o-lll•2mo ago
Seeing as there are people with no internal monologue (no inner voice), language is clearly not required for thought.
alfiedotwtf•2mo ago
How loud and clear are these internal monologues?
ACCount37•2mo ago
Can you replicate an algorithm just by looking at its inputs and outputs? Yes, sometimes.

Will it be a full copy of the original algorithm - the same exact implementation? Often not.

Will it be close enough to be useful? Maybe.

LLMs use human language data as inputs and outputs, and they learn (mostly) from human language. But they have non-language internals. It's those internal algorithms, trained by relations seen in language data, that give LLMs their power.
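A toy version of the same point (my own sketch; the "hidden" function just stands in for some original algorithm): you can match observed inputs and outputs exactly without recovering the implementation, and the copy breaks off-distribution.

  # Sketch: "replicating" an algorithm from its inputs and outputs alone.
  # The hidden implementation computes triangular numbers iteratively.
  def hidden(x: int) -> int:
      total = 0
      for i in range(1, x + 1):
          total += i
      return total

  # All we get to observe are input/output pairs.
  observations = {x: hidden(x) for x in range(20)}

  # A "replica" that memorizes the mapping: identical I/O behavior on the
  # data seen, completely different internals.
  def replica(x: int) -> int:
      return observations[x]

  assert all(replica(x) == hidden(x) for x in range(20))
  # replica(1000) raises KeyError: close enough to be useful, sometimes.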

phforms•2mo ago
Maybe the structure and operation of LLMs is a somewhat accurate model of the structure and operation of our brains, even if the actual representation of "thought" differs between the human brain and LLMs. Then it might be the case that what makes an LLM "feel human" depends not so much on the actual thinking stuff as on how that stuff is related and how the process of thought unfolds.

I personally believe that our thinking is fundamentally grounded/embodied in abstract/generalized representations of our actions and experiences. These representations are diagrammatic in nature, because only diagrams allow us to act on general objects in (almost) the same way as we act on real-world objects. By "diagrams" I mean not necessarily visual or static artefacts; they can be much more elusive, kinaesthetic, and dynamic. Sometimes I am conscious of them when I think; sometimes they are more "hidden" underneath a symbolic/language layer.

suddenlybananas•2mo ago
I don't know how Fedorenko squares this view with her own work, which directly contradicts it [1]. In that work, they find that the language network is activated by "meaningful" non-linguistic stimuli such as the sounds of someone getting ready in the morning (e.g. yawning, brushing teeth, etc.). It seems entirely contrary to her arguments in this article, and she doesn't even acknowledge it.

[1] https://direct.mit.edu/nol/article/5/2/385/119141

Peteragain•2mo ago
A beautifully written paper, but I do feel it missed a major point. Vygotsky pointed out that "in ontogenesis one can discern a pre intellectual stage in the development of speech, and a pre linguistic stage in the development of thought" [Kozulin 1990, p. 153]. The pre-intellectual nature of language can be interpreted as "performative" language (e.g. "ouch!" or "I pronounce you man and wife"), but what does pre-linguistic thinking look like? The contemporary answer I'd propose is that it looks like situated action / radical enactivism / behaviour-based robotics (see for example Gallagher's 2020 "Action and Interaction"). In terms of LLMs, the idea is that rather than using "distributed representations", LLMs are indeed using "glorified auto complete" to predict the future, and hence look like they are thinking symbolically to us humans because that is how we (think we) think. Paper plug: see https://arxiv.org/abs/2402.08403
grumbel•2mo ago
Might be correct for reasonably narrow definitions of language and thought, but it falls a bit short in considering the extended-mind thesis. A whole lot of our thinking happens with pen & paper, their digital successors, or other items out there in the world. We don't solve complex problems in our head alone; we solve them by interacting iteratively with the real world, and that in turn often involves some kind of language, even if it's just us reading our own scribbles.

Another issue is that a lot of tasks in the modern world are rooted in language: law and philosophy are in large part just word games, and you won't be able to get far thinking about them without language, as those concepts don't have any direct correlate that you could experience by other means.

Overall I do agree that there are plenty of problems we can solve without language, but the types of problems that can and can't be solved without language would need some further delineation.

necovek•2mo ago
While I agree we do a lot of our problem solving with symbolic languages (streams of images), even if we define "thought" as symbolic language processing, I believe many great experts in philosophy and law internalize the relationships between concepts and operate on them at a more subconscious level to get there faster, going back to the symbolic language to validate their reasoning.

I wouldn't call those underlying processes "thinking", but it is a matter of definition.

This is also why those who just use LLMs to write those court submissions we've read about fail: there was no non-"thinking" reasoning happening, just a stream of words coming out, and then you need to validate everything, which is hard, time-consuming and... boring.

James_K•2mo ago
I think it depends what you mean by language. There is a kind of symbolic logic that happens in the brain, and as a programmer I might liken it to a programming language, but the biological term is defined differently. Language, as far as it is unique to humans, is the serialisation of those internal logical structures, in the same way a text file is the serialisation of the logical objects within a programming language. What throws most people here is that the internal structures can develop in response to language and mirror it in some ways. As a concrete example, there is certainly a part of my brain that has developed to process algebraic equations. I can clearly see this as distinct from the part that would serialise them and allow me to write out the equation stored internally. In that way, the language of mathematics has precipitated the creation of an internal pattern of thought which one could easily confuse for its serialisation. It seems reasonable to assume that natural language could have similar interactions with the logical parts of the mind. Constructs such as “if/then” and “before/after” may be acquired through language, but exist separately from it.

Language is, therefore, instrumental to human thought as distinct from animal thought, because it allows us to more easily acquire and develop new patterns of thinking.
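To make the serialisation analogy concrete, a minimal Python sketch (the programming side only; the tree and names are made up for illustration, and nothing here is a claim about neurons): the internal structure can be built and operated on without ever being written out, and the textual form is a separate step.

  from dataclasses import dataclass

  # Internal structure: an expression tree that can be built and manipulated
  # without ever being written out as text.
  @dataclass
  class Num:
      value: int

  @dataclass
  class Add:
      left: object
      right: object

  def evaluate(node) -> int:
      # Operating directly on the structure: no serialisation involved.
      if isinstance(node, Num):
          return node.value
      return evaluate(node.left) + evaluate(node.right)

  def serialise(node) -> str:
      # The "language" step: flattening the structure into a linear string.
      if isinstance(node, Num):
          return str(node.value)
      return f"({serialise(node.left)} + {serialise(node.right)})"

  expr = Add(Num(2), Add(Num(3), Num(4)))
  print(evaluate(expr))   # 9 -- operating on the structure without serialising
  print(serialise(expr))  # (2 + (3 + 4)) -- serialising to communicate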

NonHyloMorph•2mo ago
I think the terminology here isn't sharp. One of the first headlines is: "Language is not necessary nor sufficient for thought". I disagree. Language is not necessary for cognitive processes in individuals/organisms, but it is absolutely necessary for what we commonly refer to as thought (a bit of a pretentious "we": it places you in the group of people who have some idea about philosophy (e.g. baseline Heidegger), the humanities, psychoanalysis, etc.). Thought can be a decentralised process happening "between" individuals (Heidegger's "Die Sprache spricht", "language speaks", points in that direction). Thought is also, imho, a symbolic process (which involves sign systems, mathematics, languages, images). Not everything going on as a cognitive process therefore constitutes thought. That's why one can act thoughtlessly but not "cognitionlessly".
Lionga•2mo ago
Based on your definition, a child that cannot yet speak or understand language cannot think? Hint: it clearly can.

There are a lot of things I can think about that I do not have words for. I can only communicate these things in an unclear way, as language is clearly a subset of thought, not a superset.

Only if your definition of thought is that it is language-based, which is just typical philosophical circular logic.

pessimizer•2mo ago
I've started to believe that language is often anti-thought. When we are doing what LLMs do, we aren't really thinking, we're just imitating sounds based on a sound stimulus.

Learning a second language let me notice how much of language has no content. When you're listening to meaningless things in your second language, you think you're misunderstanding what they're saying. When you listen to meaningless things in your first language, you've been taught to let the right texture of words slip right in. That you can reproduce an original and passable variation of this emptiness on command makes it seem like it's really cells indicating that they're from the same organism, not "thought." Not being able to do it triggers an immune response.

The fact that we can use it to encode thoughts for later review confuses us about what it is. The reason why it can be used to encode thoughts is because it was used to train us from birth, paired with actual simultaneous physical stimulus. But the physical stimulus is the important part, language is just a spurious association. A spurious association that ultimately is used to carry messages from the dead and the absent, so is essential to how human evolution has proceeded, but it's still an abused, repurposed protocol.

I'm an epiphenomenalist, though.

suddenlybananas•2mo ago
>Learning a second language let me notice how much of language has no content.

What on earth do you mean?

MarkusQ•2mo ago
I see what you did there. :)
Peteragain•2mo ago
Okay, so rephrasing the question: how should we characterise the type of thinking we do without language? And the more interesting question, IMO: what thinking can an agent do without symbolic representation?

The original Vygotsky claim was that learning a language introduces the human mind to thinking in terms of symbols. Cats don't do it; infants don't either.

balamatom•2mo ago
Neither do, necessarily, language users.
Peteragain•2mo ago
One can certainly use language to _do_ things without thinking. Polly was a robot that gave a tour of the MIT labs, but it used pre-recorded descriptions at various locations. The HUMANS gave meaning to the sounds.
balamatom•2mo ago
I think we should expand this spectrum of simulacra to include Geesesee (which once used to mean Generalized Cargo Cult).

Imagine a post-apocalyptic scenario where people keep the tradition of following Polly the Robot on a ritual tour of the Labs of Eemaeetee, but none remember what the sounds made by Polly used to refer to, or indeed that they referred to anything. That wouldn't preclude humans from learning to reproduce Polly's liturgy, or even from burning at the stake curious folk who try to decode its ancient meaning.

Well, I think we've already been there for a while.

naasking•2mo ago
I think there are other sorts of reasoning, like spatial reasoning. If you're trying to sort a set of physical items in front of you in order of size, are you thinking about the items linguistically, or is your mind working on some different internal representation?

It's more the latter for me. I don't think there's necessarily one type of internal thought; there's likely a multimodal landscape of thought. Maybe spatial reasoning modes are more geometric, and linguistic modes more sequential.

I think the human brain builds predictive models for all of its abilities for planning and control, and I think all of these likely have a type of thought for planning future "moves".

graemefawcett•2mo ago
The nice thing about the transformer architecture is that it can cross these domains, to an extent. I have a very spatial way of reasoning through problems, and using an LLM, especially an agentic one like Claude Code with access to my local file system as a research assistant, is a great aid.

I just have to remember how I built something and where the code is. We can take a quick dive into the code base, and I don't have to yet again attempt to serialize my mental model of my system into something someone else may understand.

It can be difficult to explain why using the path on the underlying mount volume's EBS volume to carry metadata through filebeat, logstash, redis and kinesis to that little log stream processor was in fact the cleanest solution, and how SMS was invented. It's easier when you can get the LLM to do it ;)

Isamu•2mo ago
>what thinking can an agent do without symbolic representation?

The language model is exclusively built upon the symbols present in the training set, but various layers can capture higher level patterns of symbols and patterns of patterns. Depending on how you define symbolic representation, the manipulation of the more abstract patterns of patterns may be what you are getting at.

Peteragain•2mo ago
I think the argument is that yes, LLMs find patterns in token sequences. Assign tokens to moves in a chess game, and the tokens are predictive of what happened in the past and of what chess players will do in the future. The LLM is not doing semantics; the humans who generated the corpus are doing the thinking. The LLM has no representation of goals or plans, rooks or bishops; it's just glorified auto complete over a corpus of tokens that we humans understand as referring to things in the world.
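For concreteness, this is what "glorified auto complete" looks like at toy scale: a bigram counter over a made-up corpus of move tokens (a sketch, nothing like a real LLM). It predicts plausible continuations with no board, no pieces, and no goals.

  from collections import Counter, defaultdict

  # A made-up corpus of chess games as bare token sequences.
  games = [
      ["e4", "e5", "Nf3", "Nc6", "Bb5"],
      ["e4", "e5", "Nf3", "Nc6", "Bc4"],
      ["e4", "c5", "Nf3", "d6"],
  ]

  # Count which token follows which: pure co-occurrence statistics.
  follows = defaultdict(Counter)
  for game in games:
      for prev, nxt in zip(game, game[1:]):
          follows[prev][nxt] += 1

  def predict(token: str) -> str:
      # Most frequent continuation in the corpus. Any "meaning" of the move
      # lives entirely with the humans who played the games.
      return follows[token].most_common(1)[0][0]

  print(predict("e4"))   # e5
  print(predict("Nc6"))  # Bb5 (ties broken by first occurrence)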
Isamu•2mo ago
>The LLM is not doing semantics; the humans who generated the corpus are doing the thinking.

Agreed, and this bears repeating. The point is not obvious to someone interacting with the LLM. The fact that it is able to mash up custom responses doesn't make it a thinking machine; the thinking was done ahead of time, as is the case when you read a book. What passes for intelligence here is the mash-up: a smooth blending of digested text, selected by statistical relevance.

graemefawcett•2mo ago
They're "repeat after me" machines, not "think for me" machines.

For the former task they're brilliant, but everyone seems to have fallen for the branding and forgotten the technology behind it. Given an input, they set off a chain reaction of probability that results in structured language, in the form of tokens, as the output. The structure of that language is easier to predict: you ask it for an app that's your next business idea, and it'll give you an app that looks like your next business idea. And that's it.

Because that's all you've given it. It's not going to fill in the blanks for you. It can't. Not its job.

If you were building a workflow, would you put something called "Generative" in one of those diamond shaped boxes that normally controls flow? That sounds more like a source to me, something to be filtered and gated and shaped before use.

That's what context is supposed to be for. Not "here's a series of instructions, now go do them."

They'll be lost before they get to number three; they have no sense of time, you know. Cause and effect is a simulation at best. They have TodoWrites now, which are brilliant for best approximation, which is really all we need at the moment, but procedural prompting is still why everyone thinks "AI" (/Generative/AI) is broken.

They're going to give you the same structured text regardless; you asked for a program, after all. Give them more context (you call it RAG, I call it a nice chat); whatever it is, you are responsible for the thinking in the partnership. They're the hyperactive neurodivergent kid that can type 180wps and remembers all of StackOverflow; you're the patient parent that needs to remind them to clean their room before they go out (or to completely remove all traces of the legacy version of feature X that you just upgraded, so you don't end up with 4 overlapping graph representations). You're responsible for the remembering, you're responsible for the thinking; they're just responsible for amplifying your thoughts and letting you explore solution spaces you might not have had the time for otherwise.

Or you can build something to help you do that: structured memory (mine's spatial; the direction of the edges itself encodes meaning) with computational markdown as the storage mechanism, so we can embed code, data, and prose in the same node.

I demoed a thing on here the other day that shows how I set up RSpec tests that execute when you read the spec describing the feature you're building. A logical evolution of Obie's keynote. Now they just do it automatically (mostly; if they're new, with fresh context, I have to reference the tag that explains the pattern so they pick it up first).

It's still not thinking in the traditional sense of the word, where some level of conscious rationality is responsible for the breakthrough. Consider, however, how much of human progress has come through accident (Fleming, Spencer, Goodyear, Fahlberg, Röntgen, Hofmann) or misunderstanding (Penzias and Wilson, Post-its, Viagra).

Most human breakthroughs have come through pattern recognition, conscious or unconscious. For that, language is all that is needed, and language is sufficient. If an idea can be described by language, and if we suppose that the grammar of a language allows its atoms (and therefore its ideas) to be composed and decomposed, then does it not follow that a consciousness (machine or otherwise) trained in the use of that language can form new ideas through the mere act of synthesis?

Peteragain•2mo ago
That's how it works, but what an LLM does is the real question. I'm working on the idea that this statistical model can be used for control, and that this is enough for the evolution of agency. The claim from Vygotsky is that thinking with symbols is given to us by our learning of language: "cultural linguistics".
mpascale00•2mo ago
I think you make a good point that much of what we call thinking is really discourse, either with another^[0], with media, or with one's own self. These are largely mediated by language, but still there are other forms of communicative _art_ which externalize thought.

The other comments here largely provide within-individual examples: others noted Helen Keller, and that some folks do not experience an internal monologue. These tell us about the sort of thinking that happens within a person, but I think that there are many forms of communication which are not linguistic, and therefore there is also external thinking which is non-linguistic.

The observation that not all thought utilizes linguistic representations (see particularly the annotated references in the bibliography) tells us something about the representations that may be useful for reasoning, thought, etc.: though language _can_ represent the world, it is not the only way, and certainly not the only way used by biological beings.

^[0]: It Takes Two to Think https://www.nature.com/articles/s41587-023-02074-2

trueismywork•2mo ago
I disagree. There can be thought without any way to express it in language yet. Only with a lot of communication can we get to an approximation of what it means, and hence it can mean a slightly different thing to everyone. Koans can be a good example of this.
habbekrats•2mo ago
I think you are right, but it's hard to explain, as people can interpret your words in many ways depending on their context.

I think this: you don't need language to have an idea or to be creative.

But to think beyond that, to ask critical questions and hold an inner dialogue _about_ the ideas and the creativity: that, I think, is what 'thought' is, and it requires language as a sort of inner communication...

DrierCycle•2mo ago
Language may ultimately be maladaptive, as it is arbitrary and disconnected from thought. Who cares about the gibberish of logic/philosophy when survival is at stake in ecological balance? The key idea is: there are events. They are real. The words we use are false/inaccurate externalizations of those events. Words and symbols are bottlenecks that place the events out of analog reach but, through our own simulation processes, fool us into thinking they are accurate.

Words are essentially very poor forms of interoception or metacognition. They "explain" our thoughts to us by fooling us. And how much of the senses/perceptions is accessible in consciousness? Not very much. The computer furthers the maladaptation by both accelerating the symbols and automating them, which puts the initial real events even further from reach. The only game is how much we can fool the species through the low-res inputs the PFC demands. This appears to be a sizable value center for Silicon Valley, and it seems to require coders to ignore the whole of experience and rely solely on the bottlenecked simulation centers of the PFC, which are themselves disconnected from direct sensory access. Computers, "social" media, AI, code, and VR essentially "play" the PFC.

How these basic thought experiments, tested in cognitive neuroscience since the '90s in the overthrow of the cog-sci models of the '40s-'80s, were not taught as primer classes in AI and computer science is beyond me. It now takes third-generation neurobiology crossed with linguistics to set the record straight.

These are not controversial ideas now.

drdeca•2mo ago
What does "PFC" stand for?
DrierCycle•2mo ago
Sorry, prefrontal cortex.
fsckboy•2mo ago
>(e.g. baseline Heidegger), the humanities, psychoanalysis, etc.)

And pre-Heidegger, pre-psychoanalysis, what then? How did somebody, e.g. Heidegger, think those thoughts without the vocabulary to do so? Ahhh, apparently, they didn't need to. Turns out, language is not required for thought; thought can invent language.

heavymemory•2mo ago
If thought needed words, you'd be unable to think of anything you can't yet describe.
heavymemory•2mo ago
6th time in the last year that this was posted, apparently
iainctduncan•2mo ago
Any improvising musician, or athlete in a complex sport, knows with absolute certainty that language is not necessary for thought. In fact, we spend years learning to turn off all linguistic thought; it degrades performance.
DrierCycle•2mo ago
The key is that there is no content to thought; it's all nested oscillations. It can't be extracted as symbols, so there is no direct connection between the two. Words play the role of a sportscaster reading the minds of the players by observing their behavior. How accurate are they, or are we, about ourselves? Not very.
ineedasername•2mo ago
There are a few things here.

First) This is correct in trivial ways and incorrect in profound ways.

Trivially correct: Clearly language is, at best, a lossy channel for thought. It isn't thought compressed; it is thought where the map would be too complex for language, and so we draw a kindergarten scribble we all agree on, which covers a lot of ground as an imperfect pointer. This description is itself imperfect, but as a rough sketch it is not too controversial.

Profoundly incorrect: As a system of pointers, language facilitates thought in complex ways that would be incredibly difficult otherwise. It gives you abstractions you can assemble like building blocks, and so long as you're careful about understanding where the word ends and doesn't encompass the full thing, you reduce the risk of reifying the word overmuch. It's not thought, but it isn't thought in some, not all, of the ways in which a building's walls are not its interior spaces. Of course it isn't. The space would be there either way, but keeping it all arranged so nicely, and making it easy to reference different elements of it, is more than just convenience, and it is inextricable from language, or at least from some representational system for doing this sort of thing.

Second) It is so strange to see this sort of thing written about, in this way, as if it were a new conception, a new view of language. But then I look at the researchers involved: nearly always backgrounds outside the formal study of linguistics and language itself, focused instead on adjacent or related areas. Even computational linguistics -- perhaps especially computational linguistics: the educational pathway there much more commonly runs from computation to applications to language, rather than vice versa. This is much less the case with bioinformatics and computational biology, where traditional biology is much more often part of a student's foundation. (This is not anecdotal; analysis of student pathways through academic studies is a past area of my own professional career.)

Through the lens of the history of academia over the past few decades, this is not all that surprising. Chomsky's fault (my opinion) for trying to wall off the discipline from other areas of study or perspectives other than his own.

nivertech•2mo ago
Thinking is communicating with yourself