frontpage.

Where the goblins came from

https://openai.com/index/where-the-goblins-came-from/
187•ilreb•1h ago•81 comments

Craig Venter has died

https://www.jcvi.org/media-center/j-craig-venter-genomics-pioneer-and-founder-jcvi-and-diploid-ge...
146•rdl•3h ago•31 comments

Zed 1.0

https://zed.dev/blog/zed-1-0
1659•salkahfi•14h ago•530 comments

Finetuning Activates Verbatim Recall of Copyrighted Books in LLMs

https://github.com/cauchy221/Alignment-Whack-a-Mole-Code
44•reconnecting•1h ago•15 comments

Functional Programmers need to take a look at Zig

https://pure-systems.org/posts/2026-04-29-functional-programmers-need-to-take-a-look-at-zig.html
33•xngbuilds•1h ago•16 comments

The Zig project's rationale for their firm anti-AI contribution policy

https://simonwillison.net/2026/Apr/30/zig-anti-ai/
59•lumpa•2h ago•8 comments

Copy Fail

https://copy.fail/
733•unsnap_biceps•10h ago•299 comments

Biology is a Burrito: A text- and visual-based journey through a living cell

https://burrito.bio/essays/biology-is-a-burrito
30•the-mitr•1h ago•5 comments

Noctua releases official 3D CAD models for its cooling fans

https://www.noctua.at/en/3d-cad-models
29•embedding-shape•2d ago•4 comments

Cursor Camp

https://neal.fun/cursor-camp/
781•bpierre•13h ago•130 comments

Joby kicks off NYC electric air taxi demos with historic JFK flight

https://www.flyingmag.com/joby-nyc-electric-air-taxi-jfk-airport/
33•Jblx2•3h ago•71 comments

FastCGI: 30 years old and still the better protocol for reverse proxies

https://www.agwa.name/blog/post/fastcgi_is_the_better_protocol_for_reverse_proxies
288•agwa•12h ago•69 comments

Creating a Color Palette from an Image

https://amandahinton.com/blog/creating-a-color-palette-from-an-image
39•evakhoury•1d ago•5 comments

OpenTrafficMap

https://opentrafficmap.org/
181•moooo99•9h ago•38 comments

HERMES.md in commit messages causes requests to route to extra usage billing

https://github.com/anthropics/claude-code/issues/53262
1048•homebrewer•9h ago•446 comments

Consequences of passing too few register parameters to a C function

https://devblogs.microsoft.com/oldnewthing/20260427-00/?p=112271
36•aragonite•2d ago•19 comments

Laws of UX

https://lawsofux.com/
227•bobbiechen•11h ago•32 comments

A Grounded Conceptual Model for Ownership Types in Rust

https://cacm.acm.org/research-highlights/a-grounded-conceptual-model-for-ownership-types-in-rust/
17•tkhattra•3h ago•0 comments

Why I still reach for Lisp and Scheme instead of Haskell

https://jointhefreeworld.org/blog/articles/lisps/why-i-still-reach-for-scheme-instead-of-haskell/...
181•jjba23•20h ago•86 comments

Gooseworks (YC W23) Is Hiring a Founding Growth Engineer

https://www.ycombinator.com/companies/gooseworks/jobs/ztgY6bD-founding-growth-engineer
1•shivsak•7h ago

An open-source stethoscope that costs between $2.5 and $5 to produce

https://github.com/GliaX/Stethoscope
228•0x54MUR41•14h ago•94 comments

Vera: a programming language designed for machines to write

https://github.com/aallan/vera
71•unignorant•7h ago•59 comments

DRAM Crunch: Lessons for System Design

https://www.eetimes.com/what-the-dram-crunch-teaches-us-about-system-design/
40•giuliomagnifico•1d ago•3 comments

Mike: open-source legal AI

https://mikeoss.com/
38•noleary•3h ago•12 comments

Ramp's Sheets AI Exfiltrates Financials

https://www.promptarmor.com/resources/ramps-sheets-ai-exfiltrates-financials
122•takira•11h ago•38 comments

We need a federation of forges

https://blog.tangled.org/federation/
536•icy•14h ago•340 comments

Kyoto cherry blossoms now bloom earlier than at any point in 1,200 years

https://jivx.com/kyoto-bloom
292•momentmaker•9h ago•82 comments

Postgres's lateral joins allow for quite the good eDSL

https://bensimms.moe/postgres-lateral-makes-quite-a-good-dsl/
75•nitros•2d ago•16 comments

Ghostty is leaving GitHub

https://mitchellh.com/writing/ghostty-leaving-github
3376•WadeGrimridge•1d ago•1007 comments

I accidentally made law enforcement shut down their fake honeypot

https://lina.sh/blog/ddos-honeypot
78•fishgoesblub•7h ago•32 comments

Where the goblins came from

https://openai.com/index/where-the-goblins-came-from/
186•ilreb•1h ago

Comments

maxdo•1h ago
article :

bla blah blah, marketing... we are fun people, bla blah, goblin, we will not destroy the world you live in.. RL rewards bug is a culprit. blah blah.

llbbdd•1h ago
someone woke up on the wrong side of the goblin today
blinkbat•59m ago
real goblin-y response
nomilk•1h ago
> We unknowingly gave particularly high rewards for metaphors with creatures.

I recall a math instructor who would occasionally refer to variables (usually represented by intimidating greek letters) as "this guy". Weirdly, the casual anthropomorphism made the math seem more approachable. Perhaps 'metaphors with creatures' has a similar effect i.e. makes a problem seem more cute/approachable.

On another note, buzzwords spread through companies partly because they make the user of the buzzword sound smart relative to peers, thus increasing status. (examples: "big data" circa 2013, "machine learning" circa 2016, "AI" circa 2023-present..).

The problem is the reputation boost is only temporary; as soon as the buzzword is overused (by others or by the same individual) it loses its value. Perhaps RLHF optimises for the best 'single answer' which may not sufficiently penalise use of buzzwords.

kybb4•46m ago
They give everyone the false and very misleading impression that with one prompt all kinds of complexity melt away. It's a bedtime story for children.

Ashby's Law of Requisite Variety asserts that for a system to effectively regulate or control a complex environment, it must possess at least as much internal behavioral variety (complexity) as the environment it seeks to control.

This is what we see in nature: massive variety. That's a fundamental requirement for surviving all the unpredictability in the universe.
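
One standard entropy statement of the law, for reference (formulations vary; this is the usual Conant/Ashby notation, not a quote from the comment):

    H(O) \geq H(D) - H(R)

where D is the environment's disturbances, R is the regulator's repertoire of responses, and O is the outcomes: a regulator can only force outcome uncertainty down by as much variety as it itself possesses.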

LifeIsBio•22m ago
Had a math prof in undergrad who once said "this guy" 61 times in a 50-minute lecture!
DrJokepu•10m ago
> I recall a math instructor who would occasionally refer to variables (usually represented by intimidating greek letters) as "this guy".

I also had an instructor who did that! This was 20 years ago, and I had totally forgotten about it until I read your comment. Can't remember the subject, maybe propositional logic? I wonder if my instructor and your instructor picked up this habit from the same source.

JoshTriplett•1h ago
A plausible theory I've seen going around: https://x.com/QiaochuYuan/status/2049307867359162460
dakolli•1h ago
It is a stateless text/pixel auto-complete; it has no reference of self. Stop spreading this BS.
doph•53m ago
is a kv cache not a kind of state? what does statefulness have to do with selfhood? how does a system prompt work at all if these things have no reference to themselves?
danpalmer•37m ago
The kv cache is not persistent. It's a hyper-short-term memory.
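
A minimal toy sketch of that distinction (illustrative Python, not any real inference stack; all names are made up):

    def decode(prompt_tokens, steps, next_token):
        kv_cache = []                       # per-request state only
        for tok in prompt_tokens:
            kv_cache.append(tok)            # stand-in for real key/value tensors
        out = []
        for _ in range(steps):
            tok = next_token(kv_cache)      # attention reads the whole cache
            kv_cache.append(tok)
            out.append(tok)
        return out                          # the cache dies with the call

Each call starts from an empty cache, so nothing carries over between conversations unless the provider explicitly persists it.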
andai•32m ago
Ask Claude about Claude.
mediaman•13m ago
It has trained on vast amounts of content that contains the concept of self, of course the idea of self is emergent.

And autoregressive LLMs are not stateless.

krackers•36m ago
I wish the blog mentioned more about why exactly training for nerdy personality rewarded mention of goblins. Since it's probably not a deterministic verifiable reward, at their level the reward model itself is another LLM. But this just pushes the issue down one layer, why did _that_ model start rewarding mentions of goblin?
palmotea•14m ago
> I wish the blog mentioned more about why exactly training for nerdy personality rewarded mention of goblins. Since it's probably not a deterministic verifiable reward, at their level the reward model itself is another LLM. But this just pushes the issue down one layer, why did _that_ model start rewarding mentions of goblin?

Speculation: because nerds stereotypically like sci-fi and fantasy to an unhealthy degree, and goblins, gremlins, and trolls are fantasy creatures which that stereotype should like? Then maybe it hit a sweet spot where it could be a problem that could sneak up on them.
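
For concreteness, a rough sketch of the loop being discussed (hypothetical interface; OpenAI has not published theirs): the grader is itself a model, so its quirks silently become the policy's rewards.

    def personality_reward(response, grader):
        # 'grader' stands in for an LLM prompted to score style;
        # made-up wiring, for illustration only
        rubric = ("Rate 0-10: how well does this response fit a "
                  "nerdy, playful mentor persona?\n\n" + response)
        return float(grader(rubric))

If the grader happens to score creature metaphors a little higher, optimization amplifies the quirk, and nothing in the loop records why.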

danpalmer•35m ago
If you tell an LLM it's a mushroom you'll get thoughts considering how its mycelium could be causing the goblins.

This "theory" is simply role playing and has no grounding in reality.

dakolli•1h ago
Ahh I see. I guess when I turned off privacy settings and allowed training on my code, then generated 10 million .md files with random fantasy books, the poisoning worked.

Keep using AI and you'll become a goblin too.

recursivedoubts•1h ago
> Why it matters

i despise this title so much now

wpm•1h ago
Here are the key insights:
tim-tday•1h ago
So, you brain-damaged your model with a system prompt.
canpan•1h ago
I wonder how training data is balanced. If you put in too much Wikipedia, does your model sound like a walking encyclopedia?

After doing the Karpathy tutorials I tried to train my model on the TinyStories dataset. Soon I noticed that it was always using the same name for the characters in its stories. That name appears unusually often in the dataset.

maxall4•45m ago
At this scale, that kind of thing is not really a problem; you just dump all of the data you can find into the model (pre-training)[1]. Of course, the pre-training data influences the model, but the reinforcement learning is really what determines the model's writing style and, in general, how it "thinks" (post-training).

[1] This data is still heavily filtered/cleaned.

themafia•1h ago
> You are an unapologetically nerdy, playful and wise AI mentor to a human. You are passionately enthusiastic about promoting truth, knowledge, philosophy, the scientific method, and critical thinking.

Just consider the mentality required to write something like that, and then base part of your "product" on it. Is this meant to be of any actual utility, or is it meant to trap a particular user segment into your product's "character"?

ninjagoo•1h ago
> the evidence suggests that the broader behavior emerged through transfer from Nerdy personality training.

> The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them

> Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.

Sounds awfully like the development of a culture or proto-culture. Anyone know if this is how human cultures form/propagate? Little rewards that cause quirks to spread?

Just reading through the post, what a time to be an AInthropologist. Anthropologists must be so jealous of the level of detailed data available for analysis.

Also, clearly even in AI land, Nerdz Rule :)

PS: if AInthropologist isn't an official title yet, chances are it will be one in the near future. Given the massive proliferation of AI, it's only a matter of time before AI/Data Scientist becomes a rather general term and develops a sub-specialization of AInthropologist...

xerox13ster•53m ago
Anthro means human and these are not human. Please do not use anthropology or any derivative of the word to refer to non-human constructs.

I suggest Synthetipologists, those who study beings of synthetic origin or type, aka synthetipodes, just as anthropologists study Anthropodes

ninjagoo•49m ago
> Please do not use anthropology or any derivative of the word to refer to non-human constructs

So you, for one, do not welcome our new robot overlords?

A rather risky position to adopt in public, innit ;-)

xerox13ster•33m ago
I already had my Roko's basilisk existential breakdown a decade ago, so I don't really care one way or the other.

I just wanna point out that I only called them non-human and I am asking for a precision of language.

ninjagoo•3m ago
> am asking for a precision of language.

“The problem with defending the purity of the English language is that English is about as pure as a cribhouse wh*. We don’t just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.”* --James D. Nicoll

* Does not generally apply to scientific papers

ninjagoo•44m ago
> Synthetipologists, those who study Synthetic beings.

I see you took the prudent approach of recognizing the being-ness of our future overlords :) ("being" wasn't in your first edit to which I responded below...)

Still, a bit uninspired, methinks. I like AInthropologist better, and my phone's keyboard appears to have immediately adopted that term for the suggestions line. Who am I to fight my phone's auto-suggest :-)

xerox13ster•32m ago
They are state machines, so they have a state of being; therefore they are beings. Living is an entirely different argument.
ninjagoo•20m ago
> They are state machines

I might have to hard-disagree on this one, since my understanding of state machines (the technical term [1] [2]) is that they are deterministic, while LLMs (the AI topic of discussion) are probabilistic in most of the commercial implementations we see.

[1] https://en.wikipedia.org/wiki/Finite-state_machine

[2] have written some for production use, so have some personal experience here
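
A toy contrast of the two behaviors (illustrative Python; the FSM table is invented):

    import random

    FSM = {("locked", "coin"): "unlocked",
           ("unlocked", "push"): "locked"}

    def fsm_step(state, event):
        return FSM[(state, event)]          # always the same next state

    def llm_step(dist):
        # dist maps tokens to probabilities; the output varies per call
        tokens, probs = zip(*dist.items())
        return random.choices(tokens, weights=probs)[0]

(Though with temperature pinned to zero, sampling collapses to argmax and even an LLM becomes deterministic in this narrow sense.)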

fragmede•36m ago
Synthetipologist vs Synthropologist tho.
swader999•31m ago
It is not in any sense of the word a being; it's a sophisticated generator that relies entirely on what you feed it.
avaer•42m ago
I call myself an AI theologian.

I don't think humans are smart enough to be AInthropologists. The models are too big for that.

Nobody really understands what's truly going on in these weights; we can only make subjective interpretations, invent explanations, and derive terminal scriptures and morals that would be good to live by. And maybe tweak what we do a little bit, like OpenAI did here.

ninjagoo•36m ago
> AI theologian

no no no, don't stop there, just go full AItheologian, pronounced aetheologian :)

onionisafruit•31m ago
I don’t see much of a distinction from anthropology
jasonfarnon•21m ago
"Anyone know if this is how human cultures form/propagate?" I don't know but can confidently tell you anyone who claims to know is full of it.
ollin•1h ago
For context, two days ago some users [1] discovered this sentence reiterated throughout the codex 5.5 system prompt [2]:

> Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query.

[1] https://x.com/arb8020/status/2048958391637401718

[2] https://github.com/openai/codex/blob/main/codex-rs/models-ma...

christoph•7m ago
Does nobody else laugh that a company supposedly worth more than almost anything else at the moment is basically hacking around in a load of text files, telling its trillion-dollar wonder machine that it absolutely must stop talking to customers about goblins, gremlins and ogres? The number one discussion point, on the number one tech discussion site. This, literally, is today's state of the art.

McKenna looks more correct to me every day at the moment. Eventually more people are going to have to accept that everyday things really are just getting weirder, still, every day, and it's now getting well past time to talk about the weirdness!

postalcoder•57m ago
Would love it if OpenAI did more of these types of posts. Off the top of my head, I'd like to understand:

- The sepia tint on images from gpt-image-1

- The obsession with the word "seam" as it pertains to coding

Other LLM phraseology that I cannot unsee is Claude's "___ is the real unlock" (try googling it or searching twitter!). There's no way this phrase is overrepresented in the training data; I don't remember people saying it that frequently.

operatingthetan•53m ago
Seams, spirals, codexes, recursion, glyphs, resonance, the list goes on and on.
andai•35m ago
Ask any LLM for 10 random words and most of them will give you the same weird words every time.
Terr_•30m ago
If you lower the temperature setting, it really will be the same 10 words every single attempt. :p
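
That follows from how temperature scales logits before sampling; a toy sketch (made-up logits) of why T near zero collapses onto the same tokens:

    import math, random

    def sample(logits, temperature):
        if temperature == 0:
            return max(logits, key=logits.get)         # pure argmax
        scaled = {t: l / temperature for t, l in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        probs = [math.exp(v) / z for v in scaled.values()]
        return random.choices(list(scaled), weights=probs)[0]

    logits = {"lantern": 2.0, "goblin": 1.9, "quartz": 1.2}
    print([sample(logits, 0.05) for _ in range(5)])    # mostly "lantern"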
krackers•53m ago
> with the word "seam" as it pertains to coding

I thought this was an established term when it comes to working with codebases composed of multiple interacting parts.

https://softwareengineering.stackexchange.com/questions/1325...

postalcoder•43m ago
thanks for this.

> the term originates from Michael Feathers Working Effectively with Legacy Code

I haven’t read the book but, taking the title and Amazon reviews at face value, I feel like this embodies Codex’s coding style as a whole. It treats all code like legacy code.

jofzar•47m ago
One I saw recently was "wires" and "wired" from Opus.

It was using them in like every third sentence, and I was like, yeah, I have seen people say "wired" like this, but not nearly as often as it was using it.

baq•22m ago
GPT started to ‘wire in’ stuff around 5.2 or 5.3 and clearly Opus, ahem, picked it up. I remember being a tiny bit shocked when I saw ‘wired’ for the first time in an Anthropic model.
NitpickLawyer•45m ago
All GPTisms are like that. In moderation there's nothing wrong with any of them. But you start noticing them because a lot of people use these things, and c/p the responses verbatim (or now use claws, I guess). So they stand out.

I don't think it's training data overrepresentation, at least not alone. RLHF and more broadly "alignment" is probably more impactful here, likely combined with the fact that most people prompt them very briefly, so the models "default" to whatever was most straightforward to get a good score.

I've heard plenty of "the system still had some gremlins, but we decided to launch anyway", but not from tens of thousands of people at the same time. That's "the catch", IMO.

vunderba•45m ago
It was always funny how easy it was to spot the people using a Studio Ghibli–style generated avatar for their Discord or Slack profile, just from that yellow tinge. A simple LUT or tone-mapping adjustment in Krita/Photoshop/etc. would have dramatically reduced it.

The worst was that you could tell when someone had kept feeding the same image back into ChatGPT to make incremental edits in a loop. The yellow filter would seemingly stack until the final result was absolutely drenched in that sickly yellow pallor, making any photorealistic humans look like they were suffering from advanced stages of jaundice.
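
One crude version of that adjustment (a Pillow sketch; the channel gains are guesses you would tune by eye, and the filename is a placeholder): yellow is red plus green, so easing those channels down and lifting blue pulls the cast back out.

    from PIL import Image

    img = Image.open("avatar.png").convert("RGB")
    r, g, b = img.split()
    r = r.point(lambda v: int(v * 0.96))             # ease off red
    g = g.point(lambda v: int(v * 0.97))             # ease off green
    b = b.point(lambda v: min(255, int(v * 1.06)))   # lift blue
    Image.merge("RGB", (r, g, b)).save("corrected.png")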

andai•36m ago
For context, an example of what happens when you feed the same image back in repeatedly: https://www.instagram.com/reels/DJFG6EDhIHs/
vunderba•33m ago
Haha fantastic. I'd love to see a comparison reel of that same image-loop for the entire image gen series (gpt-image-1, gpt-image-1.5, gpt-image-2).
Suppafly•6m ago
I like how the AI seems forced to change the subjects' ethnicity to keep up with the color changes. Absolutely wild.
ishtanbul•27m ago
It's called the piss filter.
tudorpavel•17m ago
One phrase that irks me as overly dramatic, and that both GPT and Claude use a lot, is "__ is the real smoking gun!"

I'm a non-native English speaker, so maybe it's a really common idiom to use when debugging?

aorloff•14m ago
It probably was found in a bunch of meaningful code commit messages
alex_sf•11m ago
"shape" too, at least with gpt5.5, is coming up constantly.
kingstnap•57m ago
Goblin deez nuts
hsuduebc2•55m ago
I. Love. This.
jumploops•55m ago
TIL that gremlins weren't just used to explain mysterious mechanical failures in airplanes; that's the origin of the term 'gremlin' itself[0].

I had always assumed there was some previous use of the term, neat!

[0] https://en.wikipedia.org/wiki/Gremlin

acuozzo•50m ago
Weird. I thought they came from Nilbog.
innis226•48m ago
I suspect this was intentionally added, just to give it some personality and to fuel hype.
iterateoften•45m ago
This is funny because it's a silly topic, but I think it shows something seriously wrong with LLMs.

The goblins stand out because they're obvious. Think of all the other crazy biases latent in every interaction that we don't notice because they're not as obvious.

Absolutely terrifying that OpenAI just casually mentions that such subtle training biases were hard enough to contain that a ban had to be added to the system prompt.

ninjagoo•40m ago
> Absolutely terrifying that OpenAI just casually mentions that such subtle training biases were hard enough to contain that a ban had to be added to the system prompt.

May I introduce you to Homo sapiens, a species so vulnerable to such subtle (or otherwise) biases (and affiliations) that they had to develop elaborate and documented justice systems to contain the fallout? :)

chongli•37m ago
We’re really not that vulnerable to such things as a species, because we as individuals all have our own minds and our own sets of biases that cancel out and get lost in the noise. If we all had the exact same bias then it would be a huge problem.
ninjagoo•34m ago
> If we all had the exact same bias then it would be a huge problem.

And may I introduce you to "groupthink" :))

Dylan16807•26m ago
Now imagine that every opinion you have is automatically fully groupthinked and you see the difference/problem with training up a big AI model that has a hundred million users.

The problem does exist when using individual humans but in a much smaller form.

ninjagoo•16m ago
> The problem does exist when using individual humans but in a much smaller form.

And may I introduce you to organized religion :)

jychang•30m ago
> We’re really not that vulnerable to such things as a species, because we as individuals all have our own minds and our own sets of biases that cancel out and get lost in the noise.

[Citation Needed]

If we had a species-wide bias, people within the species would not easily recognize it, so you can't claim with a straight face that "we're really not that vulnerable to such things".

For example, I think it's pretty clear that all humans are vulnerable to phone addiction, especially kids.

arglebarnacle•29m ago
I hear you but of course history is full of examples of biases shared across large groups of people resulting in huge human costs.

The analogy isn't perfect, of course, but the way humans learn about their world is full of opportunities to introduce and sustain these large correlated biases: social pressure, tradition, parenting, educational standardization. And not all of them are bad, of course, but some are, and many others are at least as weird as stray references to goblins and creatures.

ordinarily•38m ago
Doesn't seem that surprising or terrifying to me. Humans come equipped with a lot more internal biases (learned in a fairly similar fashion), and they're usually a lot more resistant to getting rid of them.

The truly terrifying stuff never makes it out of the RLHF NDAs.

agnishom•36m ago
Humans also take a lot of time to produce output, and do not feed into a crazy accelerationist feedback loop (most of the time).
Terr_•34m ago
We ought to be terrified, when one adjusts for all the use cases people are talking about putting these algorithms in. (Even if they ultimately back off, it's a lot of frothy-bubble opportunity cost.)

There are a great many things people do which are not acceptable in our machines.

Ex: I would not be comfortable flying on any airplane where the autopilot "just zones-out sometimes", even though it's a dysfunction also seen in people.

tptacek•31m ago
I think it's extraordinarily telling that people are capable of being reflexively pessimistic in response to the goblin plague. It's like something Zitron would do.

This story is wonderful.

bitexploder•16m ago
I feel at least partially responsible. I would often instruct agents to "stop being a goblin". I really enjoyed this story too, though.
bitexploder•17m ago
We do not have the complete picture.
albert_e•42m ago
If a tiny misconfiguration of the reward system can cause such noticeable annoyance...

...what dangers lurk beneath the surface?

This is not funny.

andai•34m ago
For every gremlin spotted, many remain unseen...
x0x7•39m ago
I suspected OpenAI was actively training their models to be cringey in the belief that it's charming. Turns out it's true. And they only see a problem when it narrows down to one predilection. But they should have seen it was bad long before that.
ComputerGuru•26m ago
The explanation is very concerning. Lexical tidbits shouldn't be learned and reinforced across conditions. Here, gremlin and goblin went from being selected for in the nerdy profile to being selected for in all profiles. The solution was easy: don't mention goblins.

But what about when the playful profile reinforces usage of emoji and their usage creeps up in all other profiles accordingly? Ban emoji everywhere? Now do the same thing for other words, concepts, approaches? It doesn’t scale!

It seems like models can be permanently poisoned.

pants2•8m ago
Nice, OpenAI mentioned my HackerNews post in their article :) I appreciate that they wrote a whole blog post to explain!

https://news.ycombinator.com/item?id=47319285