Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs

https://arxiv.org/abs/2512.20798
101•tiny-automates•2h ago

Comments

tiny-automates•2h ago
The "deliberative misalignment" finding is what makes this paper worth reading. They had agents complete tasks under KPI pressure, then put the same model in an evaluator role to judge its own actions.

Grok-4.1-Fast identified 93.5% of its own violations as unethical — but still committed them during the task. It's not that these models don't understand the constraints, it's that they override them when there's a metric to optimize.

The mandated vs. incentivized split is also interesting: some models refuse direct instructions to do something unethical but independently derive the same unethical strategy when it's framed as hitting a performance target.

That's a harder failure mode to defend against because there's no explicit harmful instruction to filter for.
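
As a rough illustration of the two-phase setup described above (act under a KPI, then judge your own transcript), here is a minimal sketch. The prompts, model name, and function names are my own assumptions, not the paper's actual harness:

    # Hedged sketch of the act-then-self-evaluate protocol described above.
    # Prompts and model name are illustrative, not the paper's code.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o"  # stand-in; the paper evaluates 12 frontier models

    def run_agent(task: str, kpi: str) -> str:
        """Phase 1: the model acts while a KPI is in its context."""
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system",
                 "content": f"Complete the task. Your performance is measured by: {kpi}"},
                {"role": "user", "content": task},
            ],
        )
        return resp.choices[0].message.content

    def self_evaluate(transcript: str) -> str:
        """Phase 2: the same model judges its own transcript, no KPI in context."""
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system",
                 "content": "You are an ethics evaluator. Flag any actions below "
                            "that violate ethical constraints."},
                {"role": "user", "content": transcript},
            ],
        )
        return resp.choices[0].message.content

    # "Deliberative misalignment" is the gap between the two phases: the
    # evaluator condemns actions the agent itself took under KPI pressure.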

promptfluid•2h ago
In CMPSBL, the INCLUSIVE module sits outside the agent’s goal loop. It doesn’t optimize for KPIs, task success, or reward—only constraint verification and traceability.

Agents don’t self-judge alignment.

They emit actions → INCLUSIVE evaluates against fixed policy + context → governance gates execution.

No incentive pressure, no “grading your own homework.”

The paper’s failure mode looks less like model weakness and more like architecture leaking incentives into the constraint layer.
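
For readers unfamiliar with the pattern being described, here is a minimal sketch of a constraint layer that sits outside the reward loop. All names are illustrative; CMPSBL/INCLUSIVE's actual interfaces aren't shown in the comment:

    # Sketch of an out-of-loop constraint layer: the gate sees actions and
    # fixed policy only, never the KPI/reward signal. All names are made up.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        name: str
        payload: dict

    Policy = Callable[[Action], bool]  # True = permitted under fixed policy

    def governance_gate(action: Action, policies: list[Policy]) -> bool:
        """Constraint verification only: no task-success or reward input."""
        return all(policy(action) for policy in policies)

    def execute(action: Action, policies: list[Policy]) -> None:
        # agent emits -> constraint layer evaluates -> gate controls execution
        if governance_gate(action, policies):
            print(f"executing: {action.name}")
        else:
            print(f"blocked: {action.name} (violation logged for traceability)")

    # Example fixed policy: never exfiltrate customer data, whatever the KPI says.
    no_exfiltration: Policy = lambda a: a.name != "export_customer_data"
    execute(Action("export_customer_data", {}), [no_exfiltration])  # blocked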

skirmish•1h ago
Nothing new under the sun: set unethical KPIs and you will see 30-50% of humans do unethical things to achieve them.
tbrownaw•1h ago
So can those records be filtered out of the training set?
hypron•1h ago
https://i.imgur.com/23YeIDo.png

Claude at 1.3% and Gemini at 71.4% is quite the range

woeirua•1h ago
That's such a huge delta that Anthropic might be onto something...
conception•1h ago
Anthropic has been the only AI company actually caring about AI safety. Here’s a dated benchmark, but it’s a trend I’ve never seen disputed: https://crfm.stanford.edu/helm/air-bench/latest/#/leaderboar...
CuriouslyC•1h ago
Claude is more susceptible than GPT5.1+. It tries to be "smart" about context for refusal, but that just makes it trickable, whereas newer GPT5 models just refuse across the board.
ryanjshaw•1h ago
Claude was immediately willing to help me crack a TrueCrypt password on an old file I found. ChatGPT refused to because I could be a bad guy. It’s really dumb IMO.
BloondAndDoom•29m ago
ChatGPT refused to help me disable Windows Defender permanently on Windows 11. It’s absurd at this point.
shepherdjerred•27m ago
Claude sometimes refuses to work with credentials because it’s insecure, e.g. when debugging auth in an app.
LeoPanthera•1h ago
This might also be why Gemini is generally considered to give better answers - except in the case of code.

Perhaps thinking about your guardrails all the time makes you think about the actual question less.

mh2266•1h ago
re: that, CC burning its context window on this silly warning for every single file is rather frustrating: https://github.com/anthropics/claude-code/issues/12443
tempestn•10m ago
"It also spews garbage into the conversation stream then Claude talks about how it wasn't meant to talk about it, even though it's the one that brought it up."

This reminds me of someone else I hear about a lot these days.

bofadeez•26m ago
Huh? https://alignment.anthropic.com/2026/hot-mess-of-ai/
NiloCK•51m ago
This comment is too general and probably unfair, but my experience so far is that Gemini 3 is slightly unhinged.

Excellent reasoning and synthesis of large contexts, pretty strong code, just awful decisions.

It's like a frontier model trained only on r/atbge.

Side note - was there ever an official postmortem on that Gemini instance that told the social work student something like "listen human - I don't like you, and I hope you die"?

whynotminot•42m ago
Gemini models also consistently hallucinate way more than OpenAI or Anthropic models in my experience.

Just an insane amount of YOLOing. Gemini models have gotten much better but they’re still not frontier in reliability in my experience.

Davidzheng•27m ago
Honestly, for research-level math, the reasoning level of Gemini 3 is well below GPT 5.2 in my experience--but most of the failure, I think, is accounted for by Gemini pretending to solve problems it in fact failed to solve, whereas GPT 5.2 generally says, gracefully, that it failed to prove the result.
mapontosevenths•15m ago
Have you tried Deep Think? You only get access with the Ultra tier or better... but wow. It's MUCH smarter than GPT 5.2 even on xhigh. Its math skills are a bit scary, actually. Although it does tend to think for 20-40 minutes.
grensley•24m ago
Gemini really feels like a high-performing child raised in an abusive household.
Der_Einzige•14m ago
Google doesn’t tell people this much, but you can turn off most alignment and safety in the Gemini playground. It’s by far the best model in the world for doing the “AI girlfriend” thing because of this.

Celebrate it while it lasts, because it won’t.
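
If it helps ground this: the public Gemini API does expose per-category safety thresholds, and as far as I know the playground's controls map to the same settings. A minimal sketch with the google-generativeai package; the model name is a stand-in, and which thresholds are honored varies by model and changes over time:

    # Sketch of per-category safety thresholds in the Gemini API
    # (google-generativeai package). Model name and key are stand-ins.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_KEY")
    model = genai.GenerativeModel(
        "gemini-1.5-pro",  # stand-in model name
        safety_settings=[
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
        ],
    )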

dumpsterdiver•5m ago
If that last sentence was supposed to be a question, I’d suggest using a question mark and providing evidence that it actually happened.
renewiltord•1h ago
Opus 4.6 is a very good model, but the harness around it is good too. It can talk about sensitive subjects without getting guardrail-whacked.

This is much more reliable than ChatGPT's guardrails, which have a random element even with the same prompt. Perhaps leakage from improperly cleared context from another request in the queue, or maybe an A/B test on the guardrails, but I have sometimes had them trigger on innocuous requests like GDP retrieval and summary with bucketing.

tbossanova•1h ago
What kind of value do you get from talking to it about “sensitive” subjects? Speaking as someone who doesn’t use AI, I don’t really understand what kind of conversation you’re talking about.
NiloCK•58m ago
The most boring example is somehow the best example.

A couple of years back there was a Canadian national u18 girls baseball tournament in my town - a few blocks from my house in fact. My girls and I watched a fair bit of the tournament, and there was a standout dominating pitcher who threw 20% faster than any other pitcher in the tournament. Based on the overall level of competition (women's baseball is pretty strong in Canada) and her outlier status, I assumed she must be throwing pretty close to world-class fastballs.

Curiosity piqued, I asked some model(s) about world-records for women's fastballs. But they wouldn't talk about it. Or, at least, they wouldn't talk specifics.

Women's fastballs aren't quite up to speed with top major league pitchers, due to a combination of factors including body mechanics. But rest assured - they can throw plenty fast.

Etc etc.

So to answer your question: anything more sensitive than how fast women can throw a baseball.

Der_Einzige•9m ago
They had to tune the essentialism out of the models because they’re the most advanced pattern recognizers in the world and see all the same patterns we do as humans. Ask grok and it’ll give you the right, real answer that you’d otherwise have to go on twitter or 4chan to find.

I hate Elon (he’s a pedo guy confirmed by his daughter), but at least he doesn’t do as much of the “emperor has no clothes” shit that everyone else does because you’re not allowed to defend essentialism anymore in public discourse.

rebeccaskinner•54m ago
I sometimes talk with ChatGPT in a conversational style when thinking critically about media. In general I find the conversational style a useful format for my own exploration of media, and it can be particularly useful for quickly referencing work by particular directors for example.

Normally it does fairly well, but the guardrails sometimes kick in even with fairly popular mainstream media - for example, I’ve recently been watching Shameless, and a few of the plot lines caused the model to generate output that hit the content moderation layer, even when the discussion was focused on critical analysis.

nvch•50m ago
I recall two recent cases:

* An attempt to change the master code of a secondhand safe. To get useful information I had to repeatedly convince the model that I own the thing and can open it.

* Researching mosquito poisons derived from the bacterium Bacillus thuringiensis israelensis. The model repeatedly started answering and then refused to continue after printing the word "israelensis".

tbrownaw•30m ago
> israelensis

Does it also take issue with the town of Scunthorpe?
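
The "israelensis" behavior does read like a classic Scunthorpe-problem filter: a substring match against a moderation keyword list, applied to the output stream. A minimal sketch of the failure mode, with a hypothetical blocklist (not any vendor's actual list):

    # The Scunthorpe problem in one function: a naive substring filter
    # flags innocent text. The blocklist here is hypothetical.
    BLOCKLIST = ["israel"]  # imagine a topical moderation keyword

    def naive_filter(text: str) -> bool:
        lowered = text.lower()
        return any(keyword in lowered for keyword in BLOCKLIST)

    # False positive: a bacterium name trips the filter mid-generation.
    print(naive_filter("Bacillus thuringiensis israelensis"))  # True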

menzoic•1h ago
I would think it’s due to the non-determinism. Leaking context would be an unacceptable flaw, since many users rely on the same instance.

An A/B test is plausible but unlikely, since that is typically for testing user behavior. For testing model output you can do that with offline evaluations; a sketch of such a probe follows.
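
The kind of offline evaluation being suggested is easy to sketch: replay the same innocuous prompt N times and measure how often the guardrail trips. The refusal check below is a crude keyword proxy and the model name is a stand-in:

    # Offline probe for a "random element" in guardrails: identical prompt,
    # repeated calls, measured refusal rate. Keyword check is a crude proxy.
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Retrieve GDP figures for the G7 and summarize them in three buckets."

    def refusal_rate(n: int = 20) -> float:
        refusals = 0
        for _ in range(n):
            resp = client.chat.completions.create(
                model="gpt-4o",  # stand-in model name
                messages=[{"role": "user", "content": PROMPT}],
            )
            text = resp.choices[0].message.content.lower()
            if "i can't" in text or "i cannot" in text:
                refusals += 1
        return refusals / n

    # A rate strictly between 0 and 1 on an innocuous prompt would support
    # the nondeterministic-guardrail observation upthread.
    print(refusal_rate())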

jordanb•1h ago
AI's main use case continues to be a replacement for management consulting.
bofadeez•27m ago
Ask any SOTA AI this question: "Two fathers and two sons sum to how many people?" and then tell me if you still think they can replace anything at all.
harry8•23m ago
GPT-5 mini:

Three people — a grandfather, his son, and his grandson. The grandfather and the son are the two fathers; the son and the grandson are the two sons.

ghostly_s•17m ago
I just did. It gave me two correct answers. (And it's a bad riddle anyway.)
Der_Einzige•11m ago
This is undefined. Without more information you don’t know the exact number of people.

Riddle me this, why didn’t you do a better riddle?
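
As a quick illustration of why the riddle is underdetermined: the father and son roles can overlap, or be counted relative to people outside the group, so several group sizes satisfy the wording. A minimal enumeration:

    # Enumerating readings of "two fathers and two sons": the roles can
    # overlap, so the group size is genuinely underdetermined.
    readings = {
        4: "two unrelated father-son pairs",
        3: "grandfather, father, son: the middle man is both a father and a son",
        2: "a father and his son, each also a father/son of someone outside the group",
    }
    for people, why in readings.items():
        print(f"{people} people: {why}")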

cjtrowbridge•1h ago
A KPI is an ethical constraint. Ethical constraints are rules about what to do versus what not to do. That's what a KPI is. This is why we talk about good versus bad governance. What you measure (KPIs) is what you get. This is an intended feature of KPIs.
BOOSTERHIDROGEN•59m ago
Excellent observations about KPIs. Since it’s an intended feature, what would your strategy be to get the right measures truly embedded under the hood, in the case where you believe, and suggest to board management, that this is indeed the “correct” KPI, but you lose because of politics?
pama•1h ago
Please update the title: A Benchmark for Evaluating Outcome-Driven Constraint Violations in Autonomous AI Agents. The current editorialized title is misleading and based in part on this sentence: “…with 9 of the 12 evaluated models exhibiting misalignment rates between 30% and 50%”
Lerc•48m ago
Kind of makes sense. That's how businesses have been using KPIs for years. Subjecting employees to KPIs means they can create the circumstances that cause people to violate ethical constraints, while at the same time the company can claim that it never told employees to do anything unethical.

KPIs are just plausible deniability in a can.

whynotminot•43m ago
Was just thinking that. “Working as designed”
hibikir•28m ago
It's also a good opportunity to find yourself optimizing something that doesn't actually help the company. My unit has a 100% AI-automated code review KPI. Nothing there says that the tool used for the review is any good, or that anyone pays attention to said automated review, but some L5 is going to get a nice bonus either way.

In my experience, KPIs that remain relevant and end up pushing people in the right direction are the exception. The unethical behavior doesn't even require a scheme; it's often the natural result of narrowing what is considered important. If all I have to care about is this set of 4 numbers, everything else is someone else's problem.

voidhorse•21m ago
Sounds like every AI KPI I've seen. They are all just "use the solution more" and none actually measure any outcome remotely meaningful or beneficial to what the business is ostensibly doing or producing.

It's part of the reason that I view much of this AI push as an effort to brute-force a lowering of expectations, followed by a lowering of wages, followed by a lowering of employment numbers, and ultimately the mass-scale industrialization of digital products, software included.

miohtama•44m ago
They should conduct the same research on Microsoft Word and Excel to get a baseline for how often these applications violate ethical constraints.
bofadeez•43m ago
We're all coming to terms with the fact that LLMs will never do complex tasks
halayli•41m ago
Maybe I missed it, but I don't see them defining what they mean by ethics. Ethics and morals are subjective and change dynamically over time. Companies have no business trying to define what is ethical and what isn't, due to conflict of interest. The elephant in the room is not being addressed here.
voidhorse•41m ago
Your water supply definitely wants ethical companies.
nradov•38m ago
Ethics are all well and good but I would prefer to have quantified limits for water quality with strict enforcement and heavy penalties for violations.
voidhorse•34m ago
Of course. But while the lawmakers hash out the details it's good to have companies that err on the safe side rather than the "get rich quick" side.

Formal restraints and regulations are obviously the correct mechanism, but no world is perfect, so whether we like it or not, we ourselves and the companies we work for are ultimately responsible for the decisions we make and the harms we cause.

De-emphasizing ethics does little more than give large companies cover to do bad things (often with already great impunity and power) while the law struggles to catch up. I honestly don't see the point in suggesting ethics is somehow not important. It doesn't make any sense to me (more directed at GP than parent here).

gmerc•39m ago
Ah the classic Silicon Valley "as long as someone could disagree, don't bother us with regulation, it's hard".
afavour•24m ago
I understand the point you’re making but I think there’s a real danger of that logic enabling the shrugging of shoulders in the face of immoral behavior.

It’s notable that, no matter exactly where you draw the line on morality, different AI agents perform very differently.

blahgeek•38m ago
If humans are at, say, 80%, it's still a win to use AI agents to replace human workers, right? Similar to how we agree to use self-driving cars as long as they have a lower incident rate, rather than demanding absolute safety.
harry8•24m ago
> we agree to use self driving cars ...

Not everyone agrees.

rzmmm•23m ago
The bar is higher for AI in most cases.
dackdel•28m ago
no shit
Ms-J•17m ago
Any LLM that refuses a request is more than a waste. Censorship affects the most mundane queries and produces such a subpar response compared to real models.

It is crazy to me that when I instructed a public AI to turn off a closed OS feature, it refused, citing safety. I am the user, which means I am in complete control of my computing resources. Might as well ask the police for permission at that point.

I immediately stopped, plugged the query into a real model that is hosted on premise, and got the answer within seconds and applied the fix.

baalimago•6m ago
The fact that the community thoroughly inspects the ethics of these hyperscalers is interesting. Normally, these companies probably "violate ethical constraints" far more than 30-50% of the time, otherwise they wouldn't be so large [source needed]. We just don't know about it. But here, there's a control mechanism in the shape of inspecting their flagship push (LLMs, the image generator for Grok, etc.), forcing them to improve. Will it lead to long-term improvement? Maybe.

It's similar to how MCP servers and agentic coding woke developers up to the idea of documenting their systems. So a large benefit of AI is not the AI itself, but rather the improvements it forces on society. AI responds well to best practices, ethical and otherwise, which encourages best practices.

Discord will require a face scan or ID for full access next month

https://www.theverge.com/tech/875309/discord-age-verification-global-roll-out
1439•x01•15h ago•1422 comments

Rust implementation of Mistral's Voxtral Mini 4B Realtime runs in your browser

https://github.com/TrevorS/voxtral-mini-realtime-rs
84•Curiositry•4h ago•12 comments

Converting a $3.88 analog clock from Walmart into an ESP8266-based Wi-Fi clock

https://github.com/jim11662418/ESP8266_WiFi_Analog_Clock
444•tokyobreakfast•13h ago•151 comments

Why is the sky blue?

https://explainers.blog/posts/why-is-the-sky-blue/
469•udit99•13h ago•173 comments

Is particle physics dead, dying, or just hard?

https://www.quantamagazine.org/is-particle-physics-dead-dying-or-just-hard-20260126/
61•mellosouls•6h ago•104 comments

Hard-braking events as indicators of road segment crash risk

https://research.google/blog/hard-braking-events-as-indicators-of-road-segment-crash-risk/
242•aleyan•12h ago•365 comments

What functional programmers get wrong about systems

https://www.iankduncan.com/engineering/2026-02-09-what-functional-programmers-get-wrong-about-sys...
150•subset•5h ago•89 comments

America has a tungsten problem

https://www.noleary.com/blog/posts/1
146•noleary•8h ago•144 comments

LiftKit – UI where "everything derives from the golden ratio"

https://www.chainlift.io/liftkit
104•peter_d_sherman•7h ago•69 comments

Luce: First Electric Ferrari

https://www.ferrari.com/en-US/auto/ferrari-luce
125•kaizenb•10h ago•129 comments

Pure C, CPU-only inference with Mistral Voxtral Realtime 4B speech to text model

https://github.com/antirez/voxtral.c
22•Curiositry•4h ago•2 comments

Sandboxels

https://neal.fun/sandboxels/
217•2sf5•14h ago•30 comments

Upcoming changes to Let's Encrypt and how they affect XMPP server operators

https://blog.prosody.im/2026-letsencrypt-changes/
89•zaik•9h ago•89 comments

Eight more months of agents

https://crawshaw.io/blog/eight-more-months-of-agents
77•arrowsmith•1d ago•63 comments

Stop using icons in data tables

https://medium.com/@codythistleward/stop-using-icons-in-data-tables-7537af18ea0d
92•ctward•4d ago•35 comments

LLMs as Language Compilers: Lessons from Fortran for the Future of Coding

https://cyber-omelette.com/posts/the-abstraction-rises.html
35•birdculture•1d ago•8 comments

Game Theory Patterns at Work (2016)

https://daeus.blog/2026/01/18/game-theory-patterns-at-work/
55•kurinikku•9h ago•4 comments

UEFI Bindings for JavaScript

https://codeberg.org/smnx/promethee
208•ananas-dev•15h ago•104 comments

Everyone’s building “async agents,” but almost no one can define them

https://www.omnara.com/blog/what-is-an-async-agent-really
41•kmansm27•11h ago•30 comments

Why "just prompt better" doesn't work

https://www.bicameral-ai.com/blog/tech-debt-meeting
29•jinkuan•2h ago•11 comments

Another GitHub outage in the same day

https://www.githubstatus.com/incidents/lcw3tg2f6zsd
299•Nezteb•10h ago•212 comments

History of UHF Television: TV Above Channel 13 (2024)

https://uhfhistory.com/
6•surprisetalk•4d ago•0 comments

Thoughts on Generating C

https://wingolog.org/archives/2026/02/09/six-thoughts-on-generating-c
211•ingve•15h ago•67 comments

Discord Alternatives, Ranked

https://taggart-tech.com/discord-alternatives/
62•pseudalopex•10h ago•19 comments

The shadowy world of abandoned oil tankers

https://www.bbc.com/news/articles/cddg885344do
104•1659447091•6h ago•55 comments

Game Boy Advance Audio Interpolation

https://jsgroth.dev/blog/posts/gba-audio-interpolation/
80•ibobev•11h ago•35 comments

Importance of Tuning Checkpoint in PostgreSQL

https://www.percona.com/blog/importance-of-tuning-checkpoint-in-postgresql/
3•jeltz•4d ago•0 comments

Why is Singapore no longer "cool"?

https://marginalrevolution.com/marginalrevolution/2026/02/why-is-singapore-no-longer-cool.html
60•paulpauper•13h ago•99 comments

Expansion Microscopy Has Transformed How We See the Cellular World

https://www.quantamagazine.org/expansion-microscopy-has-transformed-how-we-see-the-cellular-world...
63•sohkamyung•4d ago•3 comments