frontpage.

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•43s ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•3m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
1•petethomas•6m ago•1 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•11m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•27m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•33m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•33m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•36m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•38m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•49m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•49m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•54m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•58m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•59m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•1h ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•1h ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•1h ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
4•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•2h ago•0 comments

Ask HN: Can anybody clarify why OpenAI reasoning now shows non-English thoughts?

22•johnnyApplePRNG•7mo ago
People have noticed for a while now that Google's Bard/Gemini often inserts random Hindi/Bengali words. [0]

I just caught this in an o3-pro thought process: "and customizing for low difficulty. কাজ করছে!"

That last set of characters is apparently Bengali for "working!".

I just find it curious that similar "errors" are appearing across multiple different models. What is it about the training method or the reasoning process that lets these alternate languages creep in? Does anyone know?

[0] https://www.reddit.com/r/Bard/comments/18zk2tb/bard_speaking_random_languages/
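
One way to pin down sightings like this is to count the Unicode script of each character in a trace. A minimal Python sketch (nothing OpenAI-specific; the script is inferred from each character's Unicode name):

    import unicodedata

    def scripts_used(text: str) -> dict[str, int]:
        # Count alphabetic characters per Unicode script, taking the first
        # word of each character's Unicode name (e.g. LATIN, BENGALI) as the script.
        counts: dict[str, int] = {}
        for ch in text:
            if not ch.isalpha():
                continue
            name = unicodedata.name(ch, "")
            script = name.split(" ")[0] if name else "UNKNOWN"
            counts[script] = counts.get(script, 0) + 1
        return counts

    print(scripts_used("and customizing for low difficulty. কাজ করছে!"))
    # -> {'LATIN': 30, 'BENGALI': 5}  (Bengali vowel signs are combining marks, not letters)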

Comments

yen223•7mo ago
I have no idea what's going on with ChatGPT, but I can say it's pretty common for multilingual people to think about things in a language different from the one they're currently speaking.
johnnyApplePRNG•7mo ago
Interesting, thanks. Yeah, I forgot that even I used to be able to think in another language, long ago!
latentsea•7mo ago
Language itself structures how you think about things, too. Some thoughts are easier to have in one language than another, because the language naturally expresses an idea in a particular way that is possible, but less natural, to express in the other.
puttycat•7mo ago
Multilingual LLMs don't have a clear boundary between languages. They appear to have one because they maximize likelihood, so asking something in English will most likely produce an English continuation, etc.

In other circumstances they might take a different path (in terms of output probability decoding) through other character sets, if the probabilities justify this.
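
To make that decoding point concrete, here is a toy sketch with invented logits for a few candidate next tokens after an English prompt: greedy decoding stays in English, but sampling occasionally takes the lower-probability path into another script. The numbers are made up; only the shape of the argument matters.

    import math, random

    # Invented logits for candidate next tokens after an English prompt.
    candidates = {"working": 3.2, "done": 2.1, "কাজ": 1.4, "trabajando": 0.9}

    def softmax(logits: dict[str, float]) -> dict[str, float]:
        exps = {tok: math.exp(v) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: v / total for tok, v in exps.items()}

    probs = softmax(candidates)
    print(max(probs, key=probs.get))   # greedy decoding always picks "working"

    random.seed(0)
    draws = random.choices(list(probs), weights=list(probs.values()), k=1000)
    print(draws.count("কাজ") / 1000)   # sampling picks the Bengali token ~10% of the time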

johnnyApplePRNG•7mo ago
I understand that, but how common could it possibly be to mix a single Bengali word or phrase like that into a larger English passage?

Perhaps it's more common in the parts of the world where Bengali and English are commonly spoken side by side?

Why so much Bengali/Hindi then, and why not other languages?

epa•7mo ago
There are many users in India training these models. There is also a lot more content out there that the models are consuming.
groby_b•7mo ago
And not to forget: many (most?) Indians are bilingual. Multilingual speakers tend to switch languages within a conversation if both parties are fluent -> training material includes those switches.
daeken•7mo ago
This has been really interesting to me. I've been learning Spanish for a while and will mix un poco español en my sentences with ChatGPT all the time, and it's cool to see the same thing reflected back to me. It's not uncommon for a response to be 75% English and 25% Spanish, especially at the beginnings and ends. All of my conversation titles are in Spanish because I always start with "Hola", so whatever model sets the title just assumes Spanish, regardless of what the rest of the message is.
outside1234•7mo ago
I’m so glad I am not the only one that does this!
pixl97•7mo ago
I was on vacation last week and waiting at a restaurant for a while. The lady behind me was switching between English and Spanish every few sentences, which caught my attention. I can only assume the person on the other end was also bilingual. Someone in their family had a medical emergency and was in the hospital. What's interesting is that the sentences about medical matters seemed to be in very good English, while the Spanish sentences were about other things (I can't interpret very fast at all). Given the speed and fluency of their conversation, there seemed to be no cost to them in using either language.
tstrimple•7mo ago
Language is incredibly interesting to me. Especially when it’s blended or becomes its own pidgin dialect. Multilingual societies are fascinating.
tehlike•7mo ago
I am bilingual.

My phrases switch to the language I learned them in very easily.

Computer terms are almost always English.

A lot of idioms I learned in my adult life are going to stay English, even if a Turkish equivalent exists and I later learned about it.

BrandoElFollito•7mo ago
I am bilingual as well, my children are trilingual.

I find that it is way easier for me to translate to or from English (I'm not a native speaker) and either of the languages I am bilingual in than to translate between those two languages. It is very hard for me to listen to one and speak the other.

tehlike•7mo ago
I struggle to fluently translate children's books from English to Turkish, but Turkish to English wouldn't be too hard, I imagine. There would be some loss of nuance, but the general meaning would transfer.
BrandoElFollito•7mo ago
In my case this is more some kind of compartmentalization. When I speak one of my two languages, I can translate to English, because English is a learned one. Same if I want to translate to German (which I speak just a tiny bit). It doesn't work that well between the two.

I can of course translate fluently, but the word choices are not going to be great. I can't really explain this.

BrandoElFollito•7mo ago
I spent a good amount of time in the Middle East and loved listening to my friends arguing in Arabic.

To my French ear it sounded like they were sentencing me to terrible things (and they were always surprised that they sounded like that :)), up until the random "router" or "framework" that was the core of the fight.

I love listening to languages I do not understand (a great source is Radio Green) and trying to work out from the words what they are talking about.

Another example is one of my closest friends, a German, who speaks a very soft English. That is, until he described to me how to drive somewhere (pre-GPS era) and the place names he was using were like lashes.

Speaking various languages is a blessing.

ASalazarMX•7mo ago
I usually interact with LLMs in English. A few weeks ago I made a Gemini gem that tries to consider two opposite sides, moderator included. Somehow it started including bits of Spanish in some of its answers, which I actually don't mind because that's my primary language.

I assumed it knew I speak Spanish from other conversations, my Google profile, geolocation, etc. Maybe my English has enough hints that it was learned by a native Spanish speaker?

hiAndrewQuinn•7mo ago
There have been a nonzero number of times when asking Gemini in Finnish about the demoscene or early-1990s tech has returned much more... colorful answers than equivalent questioning in English.
Vilian•7mo ago
Colorful answers?
ipsum2•7mo ago
Models like o3 are rewarded for the final output, not the intermediate thinking steps. So whatever it generates as "thoughts" that leads to a better answer gets a higher score.

The DeepSeek-R1 paper has a section on this, where they 'punish' the model if it thinks in a different language, to make the thinking tokens more readable. Anthropic probably does this too.
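
For a concrete sense of what that 'punishment' looks like: the R1 paper describes the language-consistency reward as the proportion of target-language words in the CoT, summed with the task reward. A rough sketch of that shape, where the character-level proxy and the weighting are illustrative, not DeepSeek's actual code:

    import unicodedata

    def latin_fraction(cot: str) -> float:
        # Character-level stand-in for "proportion of target-language words".
        letters = [ch for ch in cot if ch.isalpha()]
        if not letters:
            return 1.0
        latin = sum(unicodedata.name(ch, "").startswith("LATIN") for ch in letters)
        return latin / len(letters)

    def reward(answer_correct: bool, cot: str, lam: float = 0.1) -> float:
        # The outcome reward dominates; the consistency term nudges the model
        # toward one script, at some cost to raw reasoning performance.
        return float(answer_correct) + lam * latin_fraction(cot)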

jmward01•7mo ago
It would be interesting to study when this type of behavior emerges to see what the patterns are. It could give insights into language or culture specific reasoning patterns and subjects that are easier to convey in one language or another. Is it easier to understand math word problems in XXX or YYY? What about relationships?
atlex2•7mo ago
Definitely curious what circuits light up from a Neuralese perspective. We want reasoning traces that are both faithful to the thought process and interpretable. If the other-language segments are lighting up meanings much different from their translations, that would raise questions for me.
tough•7mo ago
I've also seen Russian and Chinese, which I have certainly never spoken to it in, nor understand.
janalsncm•7mo ago
Others have mentioned that the DeepSeek-R1 team also noticed this "problem". I believe there are two things going on here.

One, the model is no longer being trained to output likely tokens or tokens likely to satisfy pairwise preferences. So the model doesn’t care. You have to explicitly punish the model for language switching, which dilutes the reasoning reward.

Two, I believe there has been some research showing that models represent similar ideas in multiple languages in similar regions of the network. Sparse autoencoders have shown this. So if the translated text makes sense, I think this is why. If not, I have no idea.

NooneAtAll3•7mo ago
I remember watching a video that mentioned this (https://www.youtube.com/shorts/Vv5Ia6C5vYk).

The main suspicion is that it's more compact?

neilv•7mo ago
If the reasoning didn't need to be exposed to a user, are there any ways in which you get better performance or effect by using the same LLM methods, but using a language better suited to that? (Existing language or bespoke.)

(Inspired by movies and TV shows, where characters switch from English to a different language, such as French or Mandarin, to better express something. Maybe there's a compound word in German for that.)

Bjorkbat•7mo ago
I don't actually think this is the case, but nonetheless I think it would be kind of funny if LLMs somehow "discovered" linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity).
mindcrime•7mo ago
LLMs aren't humans, and there's no reason to expect their "thinking"[1] to behave exactly - or even much - like human thinking. In particular, they don't need to "think" in one language. More concretely, in the DeepSeek-R1 paper[2] they observed this "thought language mixing" and ran some experiments on suppressing it... and the model's results got worse. So I wouldn't personally think of it as an "error", but rather as just an artifact of how these things work.

[1]: By this I mean "whatever it is they do that can be thought of as sorta kinda roughly analogous to what we generally call thinking." I'm not interested in getting into a debate (here) about the exact nature of thinking and whether or not it's "correct" to refer to LLMs as "thinking". It's a colloquialism that I find useful in this context, nothing more.

[2]: https://arxiv.org/pdf/2501.12948

learningstud•7mo ago
I really don't get why people would want AI to think like humans even remotely, especially when we don't even know how humans think. Most people simply cannot provide justification for whatever comes out of their mouths, e.g. try explaining how planimeters work for contours that are not differentiable at every point. This question will bring out the LLM-like behavior in people.
diwank•7mo ago
This isn’t entirely surprising. Language-model “reasoning” is basically the model internally exploring possibilities in token-space. These models are trained on enormous multilingual datasets and optimized purely for next-token prediction, not language purity. When reasoning traces or scratchpads are revealed directly (as OpenAI occasionally does with o-series models or DeepSeek-R1-zero), it’s common to see models slip into code-switching or even random language fragments, simply because it’s more token-efficient in their latent space.

For example, the DeepSeek team explicitly reported this behavior in their R1-zero paper, noting that purely unsupervised reasoning emerges naturally but brings some “language mixing” along. Interestingly, they found a small supervised fine-tuning (SFT) step with language-consistency rewards slightly improved readability, though it came with trade-offs (DeepSeek blog post).

My guess is OpenAI has typically used a smaller summarizer model to sanitize reasoning outputs before display (they mentioned summarization/filtering briefly at Dev Day), but perhaps lately they’ve started relaxing that step, causing more multilingual slips to leak through. It’d be great to get clarity from them directly on whether this is intentional experimentation or just a side-effect.

[1] DeepSeek-R1 paper that talks about poor readability and language mixing in R1-zero’s raw reasoning https://arxiv.org/abs/2501.12948

[2] OpenAI “Detecting misbehavior in frontier reasoning models” — explains use of a separate CoT “summarizer or sanitizer” before showing traces to end-users https://openai.com/index/chain-of-thought-monitoring/
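
If that guess is right, the display path looks something like the sketch below: the raw trace is never shown, a smaller model rewrites it, and a cheap script check could catch leaks before display. Everything here is hypothetical; summarize stands in for whatever model call actually does the rewriting:

    import unicodedata
    from typing import Callable

    def foreign_fraction(text: str) -> float:
        # Share of alphabetic characters outside the Latin script.
        letters = [ch for ch in text if ch.isalpha()]
        if not letters:
            return 0.0
        return sum(not unicodedata.name(ch, "").startswith("LATIN")
                   for ch in letters) / len(letters)

    def display_reasoning(raw_cot: str, summarize: Callable[[str], str],
                          retries: int = 2, threshold: float = 0.02) -> str:
        # Summarize the hidden trace for display; resample if too much
        # non-target script leaks through. A slip like the OP's "কাজ করছে!"
        # is what you'd see when a check like this is absent or relaxed.
        summary = summarize(raw_cot)
        for _ in range(retries):
            if foreign_fraction(summary) <= threshold:
                break
            summary = summarize(raw_cot)
        return summary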

rerdavies•7mo ago
Reminds me of the son of a friend of mine, who was raised bilingually (English and French). When he was 3, he would sometimes ask "is this English, or the other language?"
CMCDragonkai•7mo ago
Multilingual humans do this too, so not surprising that AI does this.
CMCDragonkai•7mo ago
In fact monolingual humans have quite a limited understanding of the world.
nsonha•7mo ago
No such thing as a monolingual human. Any language can be broken down into subsets that are associated with different ways of thinking. Another factor is globalization and cultural export.
Incipient•7mo ago
I know plenty of bilingual people who have a very limited understanding of the world, and conversely monolinguals with a very broad view.

One could even say assuming someone's level of worldly understanding based on how many languages they speak shows a fairly limited world view.

ta20240528•7mo ago
As a speaker of five languages, all but one fluently: why does my understanding of the world magically increase when I learn a new noun, say "sparrow", in the fifth language I'm learning?

Is it linear (25% more understanding for the fifth) or asymptotic? Does it increase across all domains equally (geology, poetry, ethics) or asymmetrically?

Seriously, explain it to me?

dpiers•7mo ago
Languages are thought encodings.

Most people can only encode/decode a single language, but an LLM can move between them fluidly.

muzani•7mo ago
I do some AI training as a side gig, and there have been a few updates on code-switching (i.e., speaking two languages at the same time) in the last few months. It's possible that these changes caused this behavior recently.
drivingmenuts•7mo ago
I see this as a problem. You can't make an LLM "unlearn" something; once it's in there, it's in there. If I have a huge database, I can easily delete swathes of useless data, but I cannot do the same with an LLM. It's not a living, thinking being - it's a program running on a computer; a device that we, in other circumstances, can add information to or remove it from. We can suppress certain things, but that information is still in there, taking up space and can still possibly be accessed.

We are intentionally undoing one of the things that makes computers useful.

throwpoaster•7mo ago
Multilingual humans do this too. Sometimes a concept is easier to shorthand in one language versus another. It’s somehow “closer”.
NoahZuniga•7mo ago
I feel like most of the other comments are missing something important: the o3-pro thought process you see in the ChatGPT UI is a summary. So although the model might think in different languages, the summary (presumably produced by a different model) will translate it into your UI language. It seems like this summarization AI messed up and gave you some text in a different language.