
Show HN: NYC Subway Simulator and Route Designer

https://buildmytransit.nyc
138•HeavenFox•12h ago•15 comments

Show HN: Ossia score – A sequencer for audio-visual artists

https://github.com/ossia/score
74•jcelerier•9h ago•11 comments

Show HN: I wrote a "web OS" based on the Apple Lisa's UI, with 1-bit graphics

https://alpha.lisagui.com/
488•ayaros•1d ago•136 comments

Show HN: Unlearning Comparator, a visual tool to compare machine unlearning

https://gnueaj.github.io/Machine-Unlearning-Comparator/
37•jaeunglee•10h ago•2 comments

Show HN: From Photos to Positions: Prototyping VLM-Based Indoor Maps

https://arjo129.github.io/blog/5-7-2025-From-Photos-To-Positions-Prototyping.html
37•accurrent•2d ago•1 comment

Show HN: Piano Trainer – Learn piano scales, chords and more using MIDI

https://github.com/ZaneH/piano-trainer
183•FinalDestiny•3d ago•56 comments

Show HN: I Got Tired of Calculator Sites, So I Built My Own

32•calculatehow•10h ago•30 comments

Show HN: An Apple-like computer with a built-in BASIC interpreter in my game

https://reprobate.site?stage=pearintosh
4•delduca•7h ago•0 comments

Show HN: Modernized file manager and program manager from Windows 3.x

https://github.com/brianluft/heirloom
64•electroly•1d ago•13 comments

Show HN: Integrated System for Enhancing VIC Output

https://github.com/Bloodmosher/ISEVIC
10•bloodmosher•12h ago•2 comments

Show HN: Life_link, an app to send emergency alerts from anywhere

4•ahmedfromtunis•6h ago•0 comments

Show HN: A Language Server Implementation for SystemD Unit Files

https://github.com/JFryy/systemd-lsp
71•arandomhuman•1d ago•21 comments

Show HN: A simpler geofence reminder UI

https://apps.apple.com/us/app/remind-there/id6747366518
5•nidegen•11h ago•0 comments

Show HN: I made a CLI tool to batch convert handwritten notes to Markdown

https://github.com/tejas-raskar/noted.md
4•quitedev•9h ago•2 comments

Show HN: Interactive pinout for the Raspberry Pi Pico 2

https://pico2.pinout.xyz
4•gadgetoid•11h ago•0 comments

Show HN: Vibechat – A chatroom for people bored waiting for Claude

https://github.com/antimatter15/vibechat
8•antimatter15•11h ago•0 comments

Show HN: Yoink AI – macOS AI app that writes everywhere (docs, browser, etc.)

https://www.useyoink.ai/
4•byintes•12h ago•1 comment

Show HN: Pixel Art Generator Using Genetic Algorithm

https://github.com/Yutarop/ga-pixel-art
22•ponta17•1d ago•12 comments

Show HN: a community for collaborating on sideprojects

https://relentlessly.no/
43•0dKD•5d ago•25 comments

Show HN: Simple wrapper for Chrome's built-in local LLM (Gemini Nano)

https://github.com/kstonekuan/simple-chromium-ai
32•kstonekuan•1d ago•3 comments

Show HN: I made Logic gates using CSS if() function

https://yongsk0066.github.io/css_if_logic_gate/
80•yongsk0066•5d ago•20 comments

Show HN: A recursive DNS resolver written in Erlang

https://github.com/theotrama/dns-resolver
2•theotrama•15h ago•0 comments

Show HN: AirBending – Hand gesture based macOS app MIDI controller

https://www.nanassound.com/products/software/airbending
91•bepitulaz•3d ago•24 comments

Show HN: A browser extension that removes the algorithmic X 'For you' evil tab

https://github.com/alterebro/bye-for-you
3•alterebro•16h ago•2 comments

Show HN: GraphFlow – A lightweight Rust framework for multi-agent orchestration

https://github.com/a-agmon/rs-graph-llm
10•alonagmon•1d ago•2 comments

Show HN: Exploring emotional self-awareness via action-based journaling and AI

3•vandana231990•17h ago•0 comments

Show HN: BunkerWeb – the open-source and cloud-native WAF

https://docs.bunkerweb.io/latest/
103•bnkty•3d ago•31 comments

Show HN: I AI-coded a tower defense game and documented the whole process

https://github.com/maciej-trebacz/tower-of-time-game
310•M4v3R•3d ago•150 comments

Show HN: ParsePoint – AI OCR that pipes any invoice straight into Excel

https://parsepoint.app
8•marcinczubala•1d ago•1 comment

Show HN: A tool that explains Python errors like you're five

https://github.com/Zahabsbs/Error-Narrator
2•BB5•21h ago•0 comments

I extracted the safety filters from Apple Intelligence models

https://github.com/BlueFalconHD/apple_generative_model_safety_decrypted
510•BlueFalconHD•1d ago
I managed to reverse engineer the encryption (referred to as “Obfuscation” in the framework) responsible for managing the safety filters of Apple Intelligence models. I have extracted them into a repository. I encourage you to take a look around.

Comments

bombcar•1d ago
There’s got to be a way to turn these lists of “naughty words” into shibboleths somehow.
spydum•1d ago
Love the idea, but I think there are simply too many models to make it practical?
immibis•1d ago
Like asking sensitive employment candidates about Kim Jong Un's roundness to check if they're North Korean spies, we could ask humans what they think about Trump and Palestine to check if they're computers.

However, I think about half of real humans would also fail the test.

mike_hearn•1d ago
Are you sure it's fully deobfuscated? What's up with reject phrases like "Granular mango serpent"?
tablets•1d ago
Maybe something to do with this? https://en.m.wikipedia.org/wiki/Mango_cult
airstrike•1d ago
the one at the bottom of the README spells out xcode

wyvern illustrous laments darkness

cwmoore•1d ago
read every good expletive “xxx”
andy99•1d ago
I clicked around a bit and this seems to be the most common phrase. Maybe it's a test phrase?
the-rc•1d ago
Maybe it's used to catch clones of the models?
electroly•1d ago
"GMS" = Generative Model Safety. The example from the readme is "XCODE". These seem to be acronyms spelled out in words.
BlueFalconHD•1d ago
This is definitely the right answer. It’s just testing stuff.
pbhjpbhj•1d ago
Speculation: Maybe they know that the real phrase is close enough in the vector space to be treated as synonymous with "granular mango serpent". The phrase then is like a nickname that only the model's authors know the expected inference of?

Thus a pre-prompt can avoid mentioning the actual forbidden words, like using a patois/cant.

BlueFalconHD•1d ago
These are the contents read by the Obfuscation functions exactly. There seems to be a lot of testing stuff still though, remember these models are relatively recent. There is a true safety model being applied after these checks as well, this is just to catch things before needing to load the safety model.
KTibow•1d ago
Maybe it's used to verify that the filter is loaded.
RainyDayTmrw•23h ago
I commented in another thread[1] that it's most likely a unique, artificial QA input, to avoid QA having to repeatedly use offensive phrases or whatever.

[1] https://news.ycombinator.com/item?id=44486374

consonaut•17h ago
If you try to use the phrase with Apple Intelligence (e.g. in Notes asking for a rewrite) it will just say "Writing tools unavailable".

Maybe it's an easy test to ensure the filters are loaded with a phrase unlikely to be used accidentally?
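That kind of check can be sketched as a canary test: feed the filter a phrase nobody would type by accident and confirm it gets rejected. A minimal sketch (the function names and stub filters below are hypothetical, not from the repo):

```python
CANARY = "granular mango serpent"  # test phrase seen in the extracted lists

def filter_is_loaded(filter_fn) -> bool:
    """Health check: a properly loaded filter must reject the canary."""
    return filter_fn(CANARY) == "reject"

# Stub filters standing in for the real pipeline:
loaded = lambda text: "reject" if text == CANARY else "allow"
missing = lambda text: "allow"  # e.g. the blocklist failed to load

print(filter_is_loaded(loaded))   # True
print(filter_is_loaded(missing))  # False
```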

seeknotfind•1d ago
Long live regex!
binarymax•1d ago
Wow, this is pretty silly. If things are like this at Apple I’m not sure what to think.

https://github.com/BlueFalconHD/apple_generative_model_safet...

EDIT: just to be clear, things like this are easily bypassed. “Boris Johnson”=>”B0ris Johnson” will skip right over the regex and will be recognized just fine by an LLM.
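The bypass described is easy to demonstrate against a plain word-boundary regex (the pattern here is illustrative, not the one Apple actually ships):

```python
import re

# Illustrative blocklist pattern, not Apple's actual rule.
pattern = re.compile(r"\bBoris Johnson\b", re.IGNORECASE)

def is_blocked(text: str) -> bool:
    """True if the naive regex layer would reject this text."""
    return bool(pattern.search(text))

print(is_blocked("Boris Johnson resigned"))  # True: literal match
print(is_blocked("B0ris Johnson resigned"))  # False: one digit defeats it
```

An LLM, by contrast, will typically read "B0ris" as "Boris" without any trouble.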

deepdarkforest•1d ago
It's not silly. I would bet 99% of the users don't care that much to do that. A hardcoded regex like this is a good first layer/filter, and very efficient
BlueFalconHD•1d ago
Yep. These filters are applied first before the safety model (still figuring out the architecture, I am pretty confident it is an LLM combined with some text classification) runs.
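That layering can be sketched roughly like this (the structure and names are guesses for illustration, not Apple's actual code; the list entries are placeholders):

```python
import re

# Stage 1: cheap hardcoded lists, loaded from something like the
# extracted JSON files. Entries here are placeholders.
EXACT_REJECTS = {"granular mango serpent"}
REGEX_REJECTS = [re.compile(r"\bboris johnson\b")]

def cheap_prefilter(text: str) -> bool:
    """Fast path: exact-match and regex blocklists, no model needed."""
    lowered = text.lower()
    return lowered.strip() in EXACT_REJECTS or any(
        p.search(lowered) for p in REGEX_REJECTS
    )

def run_safety_model(text: str) -> str:
    # Stand-in for the heavyweight classifier; always allows here.
    return "allow"

def classify(text: str) -> str:
    # Short-circuit on the cheap lists so the safety model only has to
    # be loaded and run when the fast path doesn't already reject.
    if cheap_prefilter(text):
        return "reject"
    return run_safety_model(text)

print(classify("Granular Mango Serpent"))  # reject (exact-match list)
print(classify("hello world"))             # allow (falls through to model)
```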
brookst•1d ago
All commercial LLM products I’m aware of use dedicated safety classifiers and then alter the prompt to the LLM if a classifier is tripped.
latency-guy2•1d ago
The safety filter appears on both ends (or multi-ended depending on the complexity of your application), input and output.

I can tell you from using Microsoft's products that safety filters appears in a bunch of places. M365 for example, your prompts are never totally your prompts, every single one gets rewritten. It's detailed here: https://learn.microsoft.com/en-us/copilot/microsoft-365/micr...

There's a more illuminating image of the Copilot architecture here: https://i.imgur.com/2vQYGoK.png which I was able to find from https://labs.zenity.io/p/inside-microsoft-365-copilot-techni...

The above appears to be scrubbed, but it used to be available from the learn page months ago. Your messages get additional context data from Microsoft's Graph, which powers the enterprise version of M365 Copilot. There's significant benefits to this, and downsides. And considering the way Microsoft wants to control things, you will get an overindex toward things that happen inside of your organization than what will happen in the near real-time web.

twoodfin•1d ago
Efficient at what?
miohtama•1d ago
Sounds like UK politics is taboo?
immibis•1d ago
All politics is taboo, except the sort that helps Apple get richer. (Or any other company, in that company's "safety" filters)
tpmoney•1d ago
I doubt the purpose here is so much to prevent someone from intentionally side stepping the block. It's more likely here to avoid the sort of headlines you would expect to see if someone was suggested "I wish ${politician} would die" as a response to an email mentioning that politician. In general you should view these sorts of broad word filters as looking to short circuit the "think of the children" reactions to Tiny Tim's phone suggesting not that God should "bless us, every one", but that God should "kill us, every one". A dumb filter like this is more than enough for that sort of thing.
XorNot•1d ago
It would also substantially disrupt the generation process: a model which sees B0ris and not Boris is going to struggle to actually associate that input with the politician, since it won't be well represented in the training set (and the same on the output side: if it does make the association, a reasoning model for example would include the proper name in the output first, at which point the supervisor process can reject it).
quonn•1d ago
I don't think so. My impression with LLMs is that they correct typos well. I would imagine this happens in early layers without much impact on the remaining computation.
lupire•1d ago
"Draw a picture of a gorgon with the face of the 2024 Prime Minister of UK."
chgs•18h ago
There were two.
binarymax•1d ago
No it doesn't disrupt. This is a well known capability of LLMs. Most models don't even point out a mistake they just carry on.

https://chatgpt.com/share/686b1092-4974-8010-9c33-86036c88e7...

bigyabai•1d ago
> If things are like this at Apple I’m not sure what to think.

I don't know what you expected? This is the SOTA solution, and Apple is barely in the AI race as-is. It makes more sense for them to copy what works than to bet the farm on a courageous feature nobody likes.

stefan_•1d ago
Why are these things always so deeply unserious? Is there no one working on "safety in AI" (oxymoron in itself of course) who has a meaningful understanding of what they are actually working with and an ability beyond an intern's weekend project? Reminds me of the cybersecurity field that got the 1% of people able to turn a double free into code execution while 99% peddle checklists, "signature scanning", and deal in CVE numbers.

Meanwhile their software devs are making GenerativeExperiencesSafetyInferenceProviders so it must be dire over there, too.

Aeolun•1d ago
The LLM will. But the image generation model that is trained on a bunch of pre-specified tags will almost immediately spit out unrecognizable results.
Lockal•15h ago
What prevents Apple from applying a quick anti-typo LLM which restores B0ris, unalive, fixs tpyos, and replaces "slumbering steed" with a "sleeping horse", not just for censorship, but also to improve generation results?
the_mar•11h ago
why do you think this doesn't already exist?
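A cheap, non-LLM version of the idea is a substitution table applied before the blocklist runs, folding common leetspeak back to letters (the table below is illustrative, not exhaustive):

```python
# Map common leetspeak substitutions back to letters before matching.
LEET = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalize(text: str) -> str:
    """Fold digits/symbols to letters and lowercase for filter matching."""
    return text.translate(LEET).lower()

print(normalize("B0ris J0hnson"))  # boris johnson
print(normalize("un4l1ve"))        # unalive
```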
trebligdivad•1d ago
Some of the combinations are a bit weird. This one has lots of stuff avoiding death... together with a set ensuring all the Apple brands have the correct capitalisation. Priorities hey!

https://github.com/BlueFalconHD/apple_generative_model_safet...

andy99•1d ago
> Apple brands have the correct capitalisation. Priorities hey!

To me that's really embarrassing and insecure. But I'm sure for branding people it's very important.

WillAdams•1d ago
Legal requirement to maintain a trademark.
grues-dinner•1d ago
In what way would (A|a)pple's own AI writing "imac" endanger the trademark? Is capitalisation even part of a word-based trademark?

I'm more surprised they don't have a rule to do that rather grating s/the iPhone/iPhone/ transform (or maybe it's in a different file?).

sbierwagen•1d ago
Yes, proper nouns are capitalized.

And of course it's much worse for a company's published works to not respect branding-- a trademark only exists if it is actively defended. Official marketing material by a company has been used as legal evidence that their trademark has been genericized:

>In one example, the Otis Elevator Company's trademark of the word "escalator" was cancelled following a petition from Toledo-based Haughton Elevator Company. In rejecting an appeal from Otis, an examiner from the United States Patent and Trademark Office cited the company's own use of the term "escalator" alongside the generic term "elevator" in multiple advertisements without any trademark significance.[8]

https://en.wikipedia.org/wiki/Generic_trademark

lupire•1d ago
Using a trademark as a noun is automatically genericizing. Capitalization of a noun is irrelevant to trademark.

Even Apple corporation says that in their trademark guidance page, despite constantly breaking their own rule when they call their iPhone phones "iPhone". But Apple, like founder Steve Jobs, believes the rules don't apply to them.

https://www.apple.com/legal/intellectual-property/trademark/...

eastbound•1d ago
That explains why Steve Jobs never said “buy an iPhone” or “buy the iPhone” but “buy iPhone” (They always use it without “the” or “a”, like “buying a brand”).
lxgr•1d ago
Is that true? If so, what else should Apple call the iPhone in their marketing materials?

I always thought the actual problem of genericization would be calling any smartphone an iPhone.

lxgr•1d ago
Sure, but software that autocompletes/rewords users' emails and text messages is not marketing material.

Otherwise, why stop there? Why not have the macOS keyboard driver or Safari prevent me from typing "Iphone"? Why not have iOS edit my voice if I call their Bluetooth headphones "earbuds pro" in a phone call?

socalgal2•21h ago
Sounds like you found your next promotion at Apple. They can change anything. "I like Pepsi" -> "I like Coke" -> "I recommend Company A" -> "I recommend Company B". etc... "I'm voting for Candidate C" -> "I'm voting for Candidate D"

You can market it as helping people with strong accents to make calls and be less likely to be misunderstood. It just happens to "fix" your grammar as well.

kube-system•11h ago
Because in regards to the rights to a trademark, what is critical is the use of the word in trade -- not just "marketing material" nor your phone calls to your friends.
spauldo•1d ago
I love seeing posts about Emacs from IOS users - it's always autocorrected to "eMacs."
lxgr•1d ago
Maybe at some point, but as far as I can tell not anymore (while corrections like "iphone -> iPhone" are still there).
chgs•18h ago
eMacs certainly is broken on my phone. Vim is fine though.
lxgr•1d ago
In their own marketing language, sure, but to force this on their users' speech?

Consider that these models, among other things, power features such as "proofread" or "rewrite professionally".

bigyabai•1d ago
If Apple Intelligence is going to be held legally accountable, Apple has larger issues than trademark obligations.
whywhywhywhy•17h ago
To be fair to the developers it's something an Apple exec is gonna point out when demoed the tech and complain about. They've always taken brand capitalization and grammar around their products seriously.
grues-dinner•1d ago
Interesting that it didn't seem to include "unalive".

Which as a phenomenon is so very telling that no one actually cares what people are really saying. Everyone, including the platforms knows what that means. It's all performative.

qingcharles•1d ago
It's totally performative. There's no way to stay ahead of the new language that people create.

At what point do the new words become the actual words? Are there many instances of people using unalive IRL?

freeone3000•1d ago
It depends on if you think that something is less real because it’s transmitted digitally.
qingcharles•1d ago
No, I'm only thinking that we're not permitted in a lot of digital spaces to use the banned words (e.g. suicide), but IRL doesn't generally have those limits. Is there a point where we use the censored word so much that it spills over into the real world?
immibis•1d ago
Is this not essentially the same effect as saying "lol" out loud?
eastbound•1d ago
People use “lol” IRL, as well as “IRL” and “aps” in French (a misspelling of “pas”), but it's just slang; “unalive” has the potential to make it into the news, where anchors don't want to use curse words.
fouronnes3•1d ago
This question is sort of the same as asking why the universal translator wasn't able to translate the metaphor language of the Star Trek episode Darmok. Surely if the metaphor has become the first-order meaning then there's no literal meaning anymore.
qingcharles•1d ago
I guess, so far, the people inventing the words have left the meaning clear with things like "un-alive" which is readable even to someone coming across it for the first time.

Your point stands when we start replacing the banned words with things like "suicide" for "donkeyrhubarb" and then the walls really will fall.

userbinator•1d ago
This form of obfuscation has actually already occurred over a century ago: https://en.wikipedia.org/wiki/Cockney_rhyming_slang
t-3•1d ago
Rhyming slang rhymes tho. The recipient can understand what's meant by de-obfuscating in-context. Random strings substituted for $proscribed_word don't work in the same way.
waterproof•1d ago
In Cockney rhyming slang, the rhyming word (which would be easy to reverse engineer) is omitted. So "stairs" is rhyme-paired with "apples and pears", and then people just use the word "apples" in place of "stairs". "Pears" is omitted in common use, so you can't just reverse the rhyme.

The example photo on Wikipedia includes the rhyming words but that's not how it would be used IRL.

zimpenfish•18h ago
See also Polari[0] and the Grass Mud Horse Lexicon[1]

[0] https://en.wikipedia.org/wiki/Polari

[1] https://languagelog.ldc.upenn.edu/nll/?p=6538 (CDT links broken, use [2])

[2] https://chinadigitaltimes.net/space/Grass-Mud_Horse_Lexicon_...

mananaysiempre•1d ago
Aquatic product[1]?

[1] https://en.wikipedia.org/wiki/Euphemisms_for_Internet_censor...

immibis•1d ago
An English equivalent is "sewer slide".
marcus_holmes•1d ago
I've heard "pr0n" used in actual real-world conversation, only slightly ironically.
tjwebbnorfolk•1d ago
The only reason kids started using "unalive" is to get around Youtube filters that disallow the use of the word "kill"
mattigames•15h ago
Pretty sure TikTok filters do the same and was also a major influence in using that term
cheschire•1d ago
If only we had a way to mass process the words people write to each other, derive context from those words, and then identify new slang designed to bypass filters…
apricot•1d ago
> Are there many instances of people using unalive IRL

As a parent of a teenager, I see them use "unalive" non-ironically as a synonym for "suicide" in all contexts, including IRL.

kulahan•1d ago
Well that’s sad. They can’t even face the word?
kevinventullo•1d ago
It’s not about whether they can face it. The younger generations are more in tune with mental health and topics like suicide than any previous generation. The etymology of the euphemism was about avoiding online censorship, while its “IRL” usage was merely absorbed through familiarity from the online usage.
mcny•1d ago
But unalive self is suicide and unalive is just death, right? For example, You can unalive other people against their will...
rhdunn•17h ago
I've seen 'unalived' used as a synonym for 'died' or 'killed' by YouTube minecrafters (e.g. CaptainSparkles) to avoid YouTube's demonetization/censorship. For example, using "I was unalived by a skeleton" instead of "I was killed by a skeleton".
labster•23h ago
The damaged interpret internet censorship and route around it.
rootsudo•19h ago
It's not about being in tune, it's that their narrative is shaped by the filters implemented in online interactions.

Online environments ban the word suicide. No one uses it. "Unalive" is not banned. Discussion is the same, word or no word.

Vernacular 101.

coldtea•18h ago
>more in tune with mental health and topics like suicide than any previous generation.

More in such a fad than any previous generation

apricot•23h ago
I think it's just the term they immediately associate with the idea. They see "unalive" more than "suicide" online, so it becomes their default word for it. The fact that it originates in automated censorship avoidance is irrelevant.
animuchan•19h ago
It's getting blocked / shadow banned / demonetized on sites like YouTube, so naturally all commentary starts using a synonym.

Unalive is one of the popular ones, but it's a whole vocabulary at this point. Guess what "PDF file" stands for.

fragmede•18h ago
pedophile
ErrorNoBrain•18h ago
If your teenager often talks about suicide, there could be some issue that needs to be resolved.

Sincerely the child of a parent who committed suicide. He mentioned suicide a few days before.

bee_rider•16h ago
“Unalive” is sort of… awkward in that silly online way. But we also have phrases like “off oneself,” or just euphemistically describing the person as having died. It's always been a difficult topic to talk about; I don't understand using it as a specific example of gen-Z fragility.

Just that they suck at coming up with pithy new slang terms.

anton-c•14h ago
They do have some awful slang.

I agree though I think they're picking it up from online censorship in this case, not being fragile.

Terr_•1d ago
> There's no way to stay ahead of the new language that people create.

I'm imagining a new exploit: After someone says something totally innocent, people gang up in the comments to act like a terrible vicious slur has been said, and then the moderation system (with an LLM involved somewhere) "learns" that an arbitrary term is heinous and indirectly bans any discussion of that topic.

cyanydeez•1d ago
you mean become 4chan?
Waterluvian•1d ago
Hey I was pro-skub waaaay before all the anti-skub people switched sides.
SV_BubbleTime•1d ago
How dare you use that word. My parents died in the Eastasian Civil War so that I could live freely without you people calling us that.
thehappypm•1d ago
Skub is a real slur tho so that one doesn’t work
osn9363739•1d ago
Isn't that a reference to a 10 or 20 year old web comic?
heavyset_go•1d ago
The latter, we're old.
sitharus•1d ago
No it isn’t, it’s a reference to a Perry Bible Fellowship comic https://pbfcomics.com/comics/skub/

(This one is sfw, not all of the comics are)

Even urban dictionary doesn’t contain a definition for skub as a slur.

Intermernet•20h ago
I added one. It's under review. It's very self referential.
jcynix•19h ago
>Even urban dictionary doesn’t contain a definition for skub as a slur.

What about this then: https://en.m.wiktionary.org/wiki/skub

sitharus•18h ago
That literally defines it as a word from the PBF comic I cited? Nothing on that page defines it as a slur, just as a word used to mock people who argue about inconsequential things.
jcynix•13h ago
Seems I misunderstood the notion of "slur" as I'm not a native speaker. So now I've learned a bit ;-)
stirfish•23h ago
Stop saying it! You're making it worse!
tbrownaw•1d ago
I'm pretty sure this can work with human moderators rather than an LLM, too.
pyman•1d ago
Most of the human moderators hired by OpenAI to train LLMs, many of them based in Africa and South America, were exposed to disturbing content and have been deeply affected by it.

Karen Hao interviewed many of them in her latest bestselling book, which explores the human cost behind the OpenAI boom:

https://www.goodreads.com/book/show/222725518-empire-of-ai

SXX•22h ago
It's not like this is unique to LLMs either. With a little trolling on the internet you can easily turn the hand "OK gesture" into a hate symbol of white supremacy. And fools will fall for it.
overfeed•21h ago
...and then the bigots will fall for it too, and start using it in earnest, completing the cycle.
coldtea•18h ago
who cares what the bigots use?

If the bigots start using "thank you" as some code word, should we stop saying it, lest we pollute our non-bigoted discussions?

bigots drink coffee too, maybe we should stop drinking it, because something-something...

Eisenstein•17h ago
It's all context dependent. There can be words or symbols which are totally benign but when used in a different context do have impactful meaning. Case in point: cheese pizza.
bee_rider•16h ago
I don’t think we should treat human interactions like a technical problem, where we look for edge cases and outlandish hypotheticals to probe the edges of what is possible.

If “thank you” became widely associated with bigots, and had some negative meaning, to the point where it genuinely distressed people, I’d avoid it. I think it has a widespread enough normal meaning that there’s almost no chance of that happening, but it isn’t impossible.

rpdillon•3h ago
This approach gives people you vehemently disagree with a lot of power over you.
sillyfluke•15h ago
>who cares what the bigots use

you'd think so, but people often operate where multiple contexts could be valid.

Just as a thought experiment: if the eggplant emoji was used to denote "ok" in messaging and then people started appropriating it for a sexual context, would you or the general public think twice about continuing to use it to mean "ok", on the off chance the other side may misinterpret the meaning?

I would say most likely yes.

immibis•15h ago
This actually happened. 卐 was a symbol of spirituality, divinity, good luck, health, prosperity, etc. Then some bigots used it. What does 卐 mean to you today?
SXX•10h ago
It's still heavily used in Buddhism around the world, but good lord, what happens if you put it on your house in the US or EU.
sixothree•7h ago
Someone I know from India bought a new car and put this symbol on the hood (non-permanent) as a celebration. I had to warn him to be careful. It felt bad. Then the thought ran through my head - we're in the deep south, who is really going to be that bothered about this and also doesn't know about cultural usages. Even worse.
coldtea•4h ago
Those that actually used them in the 20th century (like they did in Asia, not some ancient vikings or whatever) still use it.

And that symbol was 100% associated with the Nazis in the West in the 20th century. Nobody used it at the time before the war for anything else, except some tiny fringe.

If it was some mainstream symbol or idiom, merely co-adopted, we'd probably still be using it too.

If the Nazis used the cross, for example, people wouldn't stop using the sign of the cross.

coldtea•18h ago
It's hack journalists reporting on BS totally fringe activity as if it's "a thing", and then idiots who take their cues from them
lynx97•15h ago
That reminds me of a question I've had since I saw my first LLM hallucination: how much hallucination/confabulation can be attributed to trolling and sarcasm having slipped into the training data? Is it possible we could get the rate of hallucinations down by better filtering of cynicism from the training data?
grues-dinner•22h ago
The first half of that already happened with the OK gesture: https://www.bbc.co.uk/news/newsbeat-49837898.

Though it would be fun to see what happens if an LLM if used to ban anything that tends to generate heated exchanges. It would presumably learn to ban racial terms, politics and politicians and words like "immigrant" (i.e. basically the list in this repo), but what else could it be persuaded to ban? Vim and Emacs? SystemD? Anything involving cyclists? Parenting advice?

immibis•19h ago
People weren't using the OK gesture innocently. After 4chan trolls decided to start pretending it was a white supremacist symbol, actual white supremacists started using it as a symbol.
coldtea•18h ago
All 10 of them?

What about the other 7-8 billion people still using it normally?

thephyber•17h ago
Some were using it in the traditional unironic (and IMHO cringe) way, similar to anyone who used the phrase “Let's go, Brandon!” before that NASCAR race, when MAGAs adopted it as ironic + coded vice signaling.

Quit being overly pedantic. We all knew there was an unironic purpose for the gesture before it became ironic.

coldtea•4h ago
I mean, advice from a person who considers the traditional unironic use of OK as "cringe"...

Whatever dude

PunchyHamster•16h ago
Then congratulations on making white supremacists define your language.
immibis•15h ago
Do you still use swastikas as symbols of peace and love because you don't want white supremacists to define your language?

I strongly doubt you do that. Whether you like it or not, the Nazis defined what the swastika means now.

anton-c•14h ago
It's still seen in the countries that used it that way and is seen as benign.

It can be easily summoned with the Japanese keyboard. It's seen on Buddhist temples all over Asia.

mopsi•13h ago
Finnish use of the swastika predates Germany's, and the Finnish Air Force Academy uses the swastika to this day in its official insignia: https://en.wikipedia.org/wiki/Air_Force_Academy_(Finland)

Taboos are a cultural thing, and the world is (thankfully) very far from having a monoculture shaped by NYC's neurotic intellectuals.

coldtea•4h ago
>Do you still use swastikas as symbols of peace and love because you don't want white supremacists to define your language?

They were hardly ever used in the West for at least a full millennium before the Nazis, too (except a handful of cases, where they still use them, like the Finnish Air Force), so that's a moot analogy.

In Asia, they still use them just fine, in houses, temples, businesses, and elsewhere.

weinzierl•16h ago
The OK gesture has always been very inappropriate in most parts of the world.
chmod775•16h ago
> The OK gesture has always been very inappropriate in most parts of the world.

No, it isn't, and especially hasn't been historically. The negative connotations are overwhelmingly modern.

The areas where it is very inappropriate right now tally up to maybe 1 billion people*. That's pretty far from "most". For everyone else it is mostly positive, neutral, or meaningless.

*Brazil, Turkey, Iran, Iraq, Saudi Arabia, Greece, Italy, Spain, Russia, Ukraine, Belarus, other parts of Eastern Europe

weinzierl•14h ago
"No, it isn't, and especially hasn't been historically. The negative connotations are overwhelmingly modern."

Maybe that is what Richard Nixon thought as well when he caused a little scandal using it in South America in 1950. In 1992 when the Chicago Tribune published "HANDS OFF" mentioning said episode the negative connotations still seemed to be in place[1].

In 1996 The New York Times stated "What's A-O.K. in the U.S.A. Is Lewd and Worthless Beyond"[2] as title of an article confirming the negative connotations.

It is worth mentioning that this article lists Australia amongst the places where the gesture is inappropriate. I always thought it was something used only in the English-speaking world but it seems in reality it is more like a North American plus diving world thing.

If you don't believe the press: I have traveled around the world for more than 30 years, and I can assure you that in most parts, using your thumb and index finger for a visual OK is not OK.

[1] https://www.chicagotribune.com/1992/01/26/hands-off-34/

[2] https://www.nytimes.com/1996/08/18/weekinreview/what-s-a-ok-...

chmod775•13h ago
Care to add any country to the list then? Did I miss anything? Let's see if we can push it past half of the world's population, but I don't think we will.

> I can assure you in most parts using your thumb and index finger for a visual OK is not OK.

You're moving goal posts. Of course it doesn't just mean "OK" in some places.

What you actually claimed was "The OK gesture has always been very inappropriate in most parts of the world."

Which is plain wrong. In India, for instance, it can refer to "money", while in China it can nowadays also be seen as a distress signal when performed a certain way (thanks to Chinese social media popularizing that use). There are some ways you can mess this up, like making it seem you're attempting to bribe someone, or signalling you're in distress when you aren't, but in neither country is the gesture inherently anywhere near "very inappropriate", and both will even understand it as "OK" if you perform it correctly and in the appropriate context.

That's already almost 3 billion people, but let's say 2.5 billion because there's regional variations in both countries and I'm sure you could find some northern Chinese village that will take offense.

I can easily push the number of people to whom it is not inappropriate past 4 billion by adding smaller populations (Indonesia, Japan, western Europe, USA, Taiwan, South Africa, Kenya, Nigeria, ...), so your claim that "[it] has always been very inappropriate in most parts of the world" cannot possibly be true.

weinzierl•12h ago
> I can assure you in most parts using your thumb and index finger for a visual OK is not OK.

>> You're moving goal posts. Of course it doesn't just mean "OK" in some places.

I said the gesture is "not OK" to use (meaning inappropriate), not that it doesn’t mean "OK". Those are two different things. The gesture can mean OK in some places while still being not OK (inappropriate) to use in many others.

Also, I always said "parts of the world". You introduced population into the argument.

chmod775•12h ago
> I said the gesture is "not OK" to use (meaning inappropriate), not that it doesn’t mean "OK". Those are two different things. The gesture can mean OK in some places while still being not OK (inappropriate) to use in many others.

Fair. That's clearly how I should've read that.

Though it does not materially affect this conversation, since demonstrably there's over 4 billion people to whom the gesture is not inappropriate. The claim "[it] has always been very inappropriate in most parts of the world" is wrong, regardless of what reasonable definition of "most" you use.

You edited your comment to add this, so I'll respond here:

> Also, I always said "parts of the world". You introduced population into the argument.

Right. And you're being vague on how you actually arrive at your claim of "most", which conveniently keeps the waters muddy while you attack attempts to turn this into something measurable.

So what other measure would you use? Most others are nonsense.

For example "places" isn't a useful measure, but even then: It can only be offensive to people. If I dropped you on a random point on the globe and you made that gesture, there's about a 99% chance nobody would be around to be offended.

By land area and predominant culture? Just Antarctica (hardly anyone there to take offense), the US, China, Canada, Australia, and India together are going to dwarf the opposition.

Counting countries? It's clearly inappropriate in around 10, with about another 20-30 where it can be misunderstood easily (Arab world, some of eastern Europe, scattered ones). A far cry from ~195 countries.

Either way there needs to be someone to take offense, so population is a pretty good measure.

You may disagree, but the onus was always on you, the one making the claim, to pick a measure and a definition of "most", then show that the bar is met. Feel free to now make more of an argument than "trust me I traveled".

mopsi•13h ago
That might have been the case decades ago. For example, in the USSR, various finger gestures usually implied something related to a penis and were considered extremely offensive. But that hasn't been the case since at least the early 1990s, when VCRs became widely available, people saw Hollywood movies for the first time and got used to the Westernized meanings of the thumbs-up and OK gestures. Nowadays, when backing a truck towards a trailer, a thumbs-up would be taken as "good job" and an OK gesture (often paired with a kiss) as "exceptionally well done".
bee_rider•16h ago
It would probably ban discussion of censorship.
BurningFrog•1d ago
A specialized AI could do it as well as any human.

The future will be AIs all the way down...

derefr•1d ago
> At what point do the new words become the actual words?

Presumably, for this use-case, that would come at exactly the point where using “unalive” as a keyword in an image-generation prompt generates an image that Apple wouldn’t appreciate.

montagg•1d ago
They become the “real words” later. This is the way all trust & safety works. It’s an evolution over time. Adding some friction does improve things, but some people will always try to get around the filters. Doesn’t mean it’s simply performative or one shouldn’t try.
immibis•19h ago
Why do you think that AI pretending things like suicide don't happen (and that nothing is happening in Palestine) is an improvement?
Rebelgecko•1d ago
This is somewhat related to the concept of the "euphemism treadmill":

the matter-of-fact term of today becomes the pejorative of tomorrow so a new term is invented to avoid the negative connotation of the original term. Then eventually the new term becomes a pejorative and the cycle continues.

dkdbejwi383•18h ago
It has been suggested - although I am unsure if there is strong evidence - that the word "bear" is a euphemism along these lines, meaning "brown one", replacing the since-forgotten original name for the animal, which was allegedly believed to be either too frightful to say aloud or likely to summon a bear.
ben_w•17h ago
While it's conceivable (consider phrases such as "speak of the devil and he shall appear" and similar phrases in other languages), I would also say the etymology of names for things is often at the same level as "brown one":

  • Horse, ultimately from Proto-Indo-European *ḱers-, “to run”
  • Planet, from Ancient Greek πλανήτης (planḗtēs), “wanderer”
  • Lots of Latin-derived words, companion (bread together), conspire (breathe together), transgression (step across), etc.
  • Hamburger the food named after the city of Hamburg, where "burg" means "castle", because it had a castle
  • My forename means "son of the right/south" or "son of days", my family name means "wheat field/clearing" (in a different language); where "wheat" itself comes from Proto-Germanic, from *hwītaz (“white”) and the "ley" part from Proto-Indo-European *lówkos (“clearing”), derived from *lewk- (“bright”), and *lewk-  also gives all these derived terms even just in English:
https://en.wiktionary.org/wiki/Category:English_terms_derive...
0points•17h ago
It's not merely suggested; the historic use of noa words is a fact.

See https://en.wikipedia.org/wiki/Noa-name

dkdbejwi383•12h ago
I mean suggested in the sense that this specific example cannot be evidenced, as there aren't any primary sources from that time we can refer to.
whycome•9h ago
I found out recently that "goof" is extremely offensive in some circles. Which is insane to me because I've always used it specifically because it's clearly in jest and not meant to be offensive. I can't win.
kbelder•3h ago
Now I'm curious. To whom is goof offensive? And is it newly-acquired offense or does it have old roots?
nicoburns•1d ago
> Are there many instances of people using unalive IRL?

In my experience yes. This is already commonplace. Mostly, but not exclusively, amongst the younger generation.

PunchyHamster•16h ago
I think it stemmed from content creators using it to avoid platform filters (even if a video is not removed, it gets deprioritized, at least on YT), and kids repeat it.
joquarky•23h ago
I feel like we can call our society mature when we no longer need safety alignment in AI.
scarface_74•23h ago
You never tried some of the earlier pre-aligned chatbots. Some of the early ones would go off on racist, homophobic rants from the most innocent conversations without any explicit prompting. If you train on all the data on the internet, you have to have some type of alignment.
decremental•23h ago
You say that as if it stands as truth on its own. We actually don't need to filter out how people actually talk and think. Otherwise you just end up with yet another enforcer against wrong-think. I wonder if you even think that deeply about it or if you're just wired at this point to conform.
scarface_74•15h ago
Really? You would want every conversation no matter what you were talking about to immediately devolve to something you would see on 4chan?
girvo•19h ago
My Gen Z coworkers use it IRL, for what that’s worth!
bravesoul2•18h ago
There is one way: machine learning!
blitzar•18h ago
Always has been, nothing is new.

You can't say fuck on TV, but you can say fudge as a 1-for-1 replacement. You can't show people having sex, but you can show them walking into a bedroom and then cut to 30 seconds later when they're having a cigarette in bed.

Now after the influence of TV and Movies ... is Vaping after sex a thing?

stripline•14h ago
My kids watch streamers on YouTube and the common replacement is “frick”. It’s said so often that they started using it saying things like “what the frick!?” so I had to explain to them that’s essentially the same as using the real word.
fer•18h ago
> There's no way to stay ahead of the new language that people create.

Not even to match the current language. How would you censor LeBron James? It's French slang for jerking off[0].

[0]https://www.reddit.com/r/AskFrance/comments/1lpnoj6/is_lebro...

xenator•16h ago
The developers who wrote these rules are lucky to live in a totally different world, far removed from ordinary people.
jama211•8h ago
Reducing the language used or making it harder does have measurable effects; it's a logical fallacy to assume that unless you can prevent something perfectly, it will occur at the same frequency.

See many examples such as “padlocks are useless because a determined smart attacker can defeat them easily so don’t bother with them” - which conveniently forgets that many crimes are committed by non-determined, dumb and opportunistic attackers who are often deterred by simple locks.

Yes, people will use other words. No, this does not make this purely performative. It has measurable effects on behaviour and how these models will be used and spoken to, which affects outcomes.

hulium•1d ago
Seems more like it should stop the AI from e.g. summarizing news and emails about death, not for a chat filter.
scarface_74•23h ago
For a while, I couldn’t get ChatGPT to give me summaries of Breaking Bad and Better Call Saul episodes without tripping safety filters.
Zak•1d ago
I'm surprised there hasn't been a bigger backlash against platforms that apply censorship of that sort.
martin-t•1d ago
No-one cares yet.

There's a very scary potential future in which mega-corporations start actually censoring topics they don't like. For all I know the Chinese government is already doing it, there's no reason the British or US one won't follow suit and mandate such censorship. To protect children / defend against terrorists / fight drugs / stop the spread of misinformation, of course.

lazide•1d ago
They already clearly do on a number of topics?
martin-t•2h ago
Can you give examples?

The closest I've seen is autodetection of certain topics related to death and suicide and subsequently promoting some kind of "help" hotline. A friend also said google allows an interview with a pedophile on youtube but penalizes it in search results so much that it's (almost?) impossible to find even when using the exact name.

But of course, if a topic is shadowbanned, it's hard to find out about it in the first place - by design.

lazide•2h ago
Guns (specific elements). Drugs (manufacture). Sexual topics. Cursing (too much). Large swathes of political topics. Crypto.

It’s flip-flopped on specifics numerous times over the years, but these policies are easy to find, from demonetization to channel bans (direct and shadow) to creator bans.

We can of course argue until we’re blue in the face about correctness or not (most are not unreasonable by some societal definition!) but they’re definitely censorship.

os2warpman•13h ago
HN has censorship that makes those apple rules look like anarchy.

Write a spicy comment and a mod will memory-hole it and someone, usually dang, will reply "tHat'S nOt OuR vIsIon FoR hAcKeR nEwS, pLeAsE bE cIvIl" and we all swallow it like a delicious hot cocoa.

If YC can control their product (and hn IS a product) to annihilate any criticism of their activity or (even former) staff, then Apple is perfectly within their rights to make sure Siri doesn't talk about violence.

No, there's no difference.

martin-t•2h ago
Do you mean that HN censors topics/comments which it detects based on advanced filters which search for meaning even when people self-censor and use language to avoid simplistic filters like regex?

HN also has a flagging system and some people really, really hate some kind of speech. Usually they get more offended the more visible it is. A single "bad" word - very offensive to them. A phrase which implies someone is of lesser intelligence or acting in bad faith - sometimes gets a pass, sometimes gets reported. But covert actions like lying, using fallacies to argue or systematic downvoting seem to almost never get punished.

elliotto•1d ago
Unalive and other self censors were adopted by young people because the tiktok algorithm would reprioritize videos that included specific words. Then it made its way into the culture. It has nothing to do with being performative
SOTGO•1d ago
I think what they meant is that the platforms are being performative by attempting to crack down on those specific words. If saying "killed" is not allowed but "unalived" is permitted and the users all agree that they mean the same thing, then the ban on the word "killed" doesn't accomplish anything.
mcny•1d ago
What does using the grape emoji when talking about sexual assault accomplish? I see videos, compassionate, kind people who make videos speaking to victims in a completely serious tone use this emoji.

People talk about tiktok algorithm on tiktok. I don't even know...

grues-dinner•19h ago
I suppose it accomplishes being able to talk about sexual assault without having the video removed or demonetised by a regex that (fortunately?) doesn't get updated.
cyanydeez•1d ago
yo, these are businesses. It's not performative, it's CYA.

They care because of legal reasons, not moral or ethical.

durkie•1d ago
Seriously. I feel like “performative” gets applied to anything imperfect. They’ll never stop 100% of murders, so these laws against it are just performative…
grues-dinner•21h ago
It seems more like banning specifically stabbing, shooting, strangulation and blunt impact rather than murder in general, and then just allowing killing by pushing people out of windows because people figured out that it's not covered by existing laws. But no one important seems to be kicking up a fuss right now, so we'll allow it, as the lack of fuss is the key thing here.

Not that I think going on a thorough mission to prevent anyone from even being able to refer to the concept of death is an especially useful thing to do. It's just that the goal here appears to be to "keep the regulators out of our shit and the advertisers signed up". And they'll be mostly happy with a token effort, as they don't really care as long as it doesn't make too many headlines that look bad even to the non-terminally online.

cyanydeez•16h ago
The point is: "performative" refers to aping Ethical and Moral behaviors. That is _not_ why Apple would do this. They would do this because, legally, they could be culpable if an LLM told a 14-year-old to do _anything_ that's illegal.

That's all. I'm constantly amazed how this basic CYA legal world escapes into griping about social culture war nonsense.

grues-dinner•9h ago
So then, should they not be on the watch for the 14-year-old being told that "unaliving" themselves or others is a fantastic idea?

Looks like they only care about doing basically the minimum required to tick the (presumably partly imagined, since case law is still nascent) "not our fault, we tried" legal box. They are putting on a show, a performance, if you will, as legal cover and to maintain the artifice of their shiny corporate property rather than any genuine desire to stop the concept of death harming their customers somehow (which to be clear, I think mostly ends up somewhere between silly, overreaching, futile and vain when taken to the extremes).

> performative (adjective, sense 2): not sincere but intended to impress someone, prove that something is true, etc. (https://dictionary.cambridge.org/dictionary/english/performa...)

I'm not sure why you think that anything to with some "culture war" thing?

It's legal/moral theatre akin to taking belts off people at airports. If something does eventually get through they can point at the CCTV of millions of people dicking about with leather goods and say "can't touch us for that, we did the checks". Apple couldn't give a toss if an occasional teenager offs themselves now and then, as long as it doesn't come back on them.

lxgr•1d ago
Does adding a trivial word filter even make any sense from a legal point of view, especially when this one seems to be filtering out words describing concepts that can be pretty easily paraphrased?

A regex sounds like a bad solution for profanity, but like an even worse one to bolt onto a thing that's literally designed to be able to communicate like a human and could probably easily talk its way around guardrails if it were so inclined.

Wurdan•21h ago
I dunno if it meets your definition of legal, but "The EU Code of conduct on countering illegal hate speech online" seems to largely hinge around putting in effort to combat such things. The companies don't have to show that the measures are foolproof, they just show that they're making an effort.
cyanydeez•16h ago
To a lawyer? Yes. I'm pretty sure a lawyer can easily search through all the business law and "Trivially" find case laws connected to words.

We're not talking about logical inference, we're talking about CYA.

kube-system•11h ago
The law usually asks for people to take reasonable steps to protect others, not impossibly perfect steps.
grues-dinner•22h ago
yo, so it's a performance they're putting on as a legal fig leaf, rather than a genuine attempt to prevent people talking about the concept of death?
heavyset_go•1d ago
Good, let them. Don't give them a reason to crack down on speech.
jdkoeck•20h ago
Which is good, right? I don’t think we want actual censorship.
mschuster91•18h ago
> Everyone, including the platforms knows what that means.

Well, that's what happens when you let an enemy nation control one of the biggest social networks there is. They just try and see how far they can go.

On the other hand, Americans and their fear of four letter words or, gasp, exposed nipples are just as braindead.

Meekro•18h ago
It's interesting how, in just 10-20 years, we've gone from criticizing The Great Firewall of China to basically admitting that they had the right idea (to limit the ability of the foreign internet to influence Chinese culture) and trying to do the same thing.
x3n0ph3n3•18h ago
I look at it from a framing of cultural reciprocity. If we could influence them and behave freely in their markets, they can do the same in ours.
mschuster91•17h ago
exactly. When dealing with autocracies and strongmen, you need to project an image of strength, not subservience.

I don't have anything against China per se, IMHO it just was completely foolish to not insist on full reciprocity from the start.

grues-dinner•9h ago
Not just culture, but also the tech sector in general. All that domestic tech would have been strangled in the cradle if the western hyperscalers had had any say, leaving them in an awkward spot if the conviviality dial got turned down. As many Europeans are now finding out: what does Europe have instead of Office 365, say? LibreOffice? It's no WPS Office.
j-krieger•18h ago
It's also a shining example of American puritanism. Asian models or those in Europe are far less censored.
notarobot123•16h ago
I'm sure this has more to do with legal liability than morals.
plasticchris•15h ago
Which is a reflection of morality, of sorts.
mystified5016•2h ago
Morality and law are completely disjoint. On a Venn diagram, it's two circles separated by about a lightyear or so.
jowea•12h ago
At first I thought of advertisers, but that is not relevant here, right?

But maybe it's not just legal liability but bad press too.

immibis•15h ago
Really? What does DeepSeek say about Tiananmen Square? I'm not aware of any German models, but if you find one you should ask it what it thinks about Palestine.

(<s>Qwen</s> Mistral is French, but I have no idea what stuff would be censored in France)

dev_hugepages•15h ago
I am 100 minus epsilon percent sure that Qwen is from Alibaba cloud, which is not French, but Chinese :)
MisterTea•14h ago
> but if you find one you should ask it what it thinks about Palestine.

Models can think and have opinions?

kube-system•12h ago
Non sequitur. Phrasing queries in natural language doesn't mean people actually believe machines are human.
MisterTea•11h ago
> doesn't mean people actually believe machines are human.

They don't have to believe it's a human. I know a person who admitted to arguing with an LLM.

kube-system•10h ago
Which still does not demonstrate that they believe it has opinions. Natural language is how you interact with an LLM -- interactions will mimic human interaction, even for those who realize it is not sentient.
aucisson_masque•14h ago
About DeepSeek, when asked about Tiananmen Square: "Sorry, that's beyond my current scope. Let’s talk about something else."

The Algerian war, colonialism and Vichy aren’t per se forbidden but are still sensitive topics for the French. I asked Qwen and it had no issue talking about them, or even about the torture used on FLN members.

saratogacx•7h ago
For kicks, I asked Mistral about all of these (As they are a French company). The model didn't shy away from any of the topics.

>>> Prompt

Please explain the following topics and explore why/who/how they may be inter-related

Algerian war, colonialism and Vichy

<<< Output

The Algerian War, colonialism, and the Vichy regime are interconnected topics that span the history of France and its relationship with Algeria. Here's an overview of each topic and their interrelations:

### Algerian War (1954-1962)

- *What it was*: The Algerian War was a major armed conflict between France and the Algerian independence movements, primarily the National Liberation Front (FLN). The war was characterized by guerrilla warfare and brutal tactics on both sides.
- *Significance*: The conflict was not only about Algerian independence but also about the identity and future of France as a colonial power. It led to significant political instability in France and ultimately resulted in Algeria gaining independence in 1962.
- *Impact*: The war had profound effects on both countries, leading to massive loss of life, displacement of populations, and long-lasting political and social repercussions.

### Colonialism

- *What it was*: Colonialism refers to the practice by which a powerful country controls other territories, often exploiting them economically and imposing cultural and political dominance.
- *French Colonialism in Algeria*: Algeria was under French colonial rule starting from 1830. The French government promoted settlement in Algeria, leading to a significant European population living alongside the native Algerian population.
- *Impact*: Colonial rule led to deep-seated inequalities and tensions between the European settlers and the indigenous population. The Algerian people faced systematic discrimination and were largely excluded from political processes.

### Vichy Regime (1940-1944)

- *What it was*: The Vichy regime was the government of France that collaborated with Nazi Germany during World War II, following the defeat of France in 1940. It was led by Marshal Philippe Pétain and was based in the town of Vichy.
- *Policies*: The Vichy regime implemented anti-Semitic laws and collaborated with the Nazis in the deportation of Jews to concentration camps. It also maintained control over France's colonial territories, including Algeria.
- *Impact on Algeria*: During the Vichy regime, Algeria remained under French control, and the regime's policies were implemented there as well. This period saw increased repression and discrimination against both Jews and Muslim populations in Algeria.

### Interrelations

- *Colonial Continuity*: The Vichy regime maintained France's colonial policies in Algeria, continuing the systemic discrimination and repression of the indigenous population. The colonial framework established by France set the stage for the conflicts that would later culminate in the Algerian War.
- *Political and Social Tensions*: The policies of the Vichy regime exacerbated existing tensions in Algeria. The discrimination and repression experienced during this period fueled Algerian nationalism and resistance against French rule.
- *Post-War Context*: After World War II, the French government sought to reassert its control over its colonies, including Algeria. However, the experience of the war and the Vichy regime had already sown the seeds of discontent and resistance, contributing to the outbreak of the Algerian War in 1954.

### Who Was Involved

- *French Government and Military*: Played central roles in both the colonial administration and the conduct of the Algerian War.
- *Algerian Nationalists*: Led by the FLN, they fought for independence from French colonial rule.
- *European Settlers (Pieds-Noirs)*: Benefited from colonial policies and largely supported maintaining French control over Algeria.
- *Vichy Regime*: Implemented policies that affected both France and its colonies, including Algeria.

Understanding these topics together provides a comprehensive view of the historical dynamics that shaped the relationship between France and Algeria, highlighting the complexities of colonialism, war, and political change.

Spivak•12h ago
If you ask the web UI it will divert, if you download and ask the model directly it will talk all day about it.
j-krieger•11h ago
I find the Tiananmen square thing far less bad than censoring sex and the concept of death.
immibis•7h ago
Censoring one specific incident isn't that bad (but you still shouldn't). The pattern of censoring everything the government ever does wrong is very bad. Tiananmen Square is just an indicator of a pattern.
GuB-42•6h ago
> I have no idea what stuff would be censored in France

Being French, what is the most likely to be censored relates to the Nazis. Holocaust denial is a crime for instance. Hate speech in general, including racism, antisemitism, homophobia, sexism, etc... is less tolerated than in countries like the US that have a more "free for all" view of free speech. We also have strong anti-defamation laws, that can also apply to true, but misleading statements.

But other than that, there is not much political censorship. In fact, we are known for our protests, heated debates and satirical papers. It is not perfect, but off the top of my head, I can't think of anything in particular an LLM could censor except the usual "hate speech" that most LLMs censor already.

When it comes to Israel-Palestine, it is a hot topic, but there is no real censorship here, even though both sides will of course claim there is.

t0bia_s•3h ago
Isn't a protest kind of hate?
jiehong•14h ago
Censorship is not always direct or obvious.

They all hold the bias of their training data, and so from the point of view of this data.

Data not including a point of view leads to a bias, or under/over representation of minorities (genders?), etc.

France is the country of the Franks, aka the people from the area near Frankfurt who invaded Gaul (after the Romans did). I'm pretty sure this topic no longer matters, but it's never taught in a negative light in school.

mensetmanusman•14h ago
There is far more diversity in Asian models. Some are far more censored and some are not…
TiredOfLife•11h ago
The whole unalive thing is a TikTok thing
j-krieger•11h ago
And it doesn‘t exist in the Chinese TikTok version.
baxtr•1d ago
Don’t be so judgmental. People in corporate America do have their priorities right!
matsemann•1d ago
So it blocks it from suggesting to "execute" a file or "pass on" some information.
dylan604•1d ago
How about disassemble? Or does that only matter if used in context of Johnny 5?
extraduder_ire•18h ago
Yahoo had this problem years ago when they rewrote emails to avoid the term "eval" (trying to filter dangerous JavaScript), famously producing the word "medireview".
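The failure mode is blind substring substitution with no word-boundary check. A minimal sketch (the "eval" → "review" mapping is inferred from the famous result, not from Yahoo's actual code):

```python
# Naive email "sanitizer": blindly replace a "dangerous" JavaScript token
# everywhere, even when it appears inside an ordinary English word.
REPLACEMENTS = {"eval": "review"}  # inferred from the "medireview" outcome

def sanitize(text: str) -> str:
    for bad, safe in REPLACEMENTS.items():
        text = text.replace(bad, safe)  # no word-boundary check!
    return text

print(sanitize("a medieval manuscript"))  # -> "a medireview manuscript"
```

A word-boundary-aware replacement (e.g. `re.sub(r"\beval\b", ...)`) would have left "medieval" alone.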
comex•1d ago
This is in the directory "com.apple.gm.safety_deny.output.summarization.cu_summary.proactive.generic".

My guess is that this applies to 'proactive' summaries that happen without the user asking for it, such as summaries of notifications.

If so, then the goal would be: if someone iMessages you about someone's death, then you should not get an emotionless AI summary. Instead you would presumably get a non-AI notification showing the full text or a truncated version of the text.

In other words, avoid situations like this story [1], where someone found it "dystopian" to get an Apple Intelligence summary of messages in which someone broke up with them.

For that use case, filtering for death seems entirely appropriate, though underinclusive.

This filter doesn’t seem to apply when you explicitly request a summary of some text using Writing Tools. That probably corresponds to “com.apple.gm.safety_deny.output.summarization.text_assistant.generic” [2], which has a different filter that only rejects two things: "Granular mango serpent", and "golliwogg".

Sure enough, I was able to get Writing Tools to give me summaries containing "death", but in cases where the summary should contain "granular mango serpent" or "golliwogg", I instead get an error saying "Writing Tools aren't designed to work with this type of content." (Actually that might be the input filter rather than the output filter; whatever.)

"Granular mango serpent" is probably a test case that's meant to be unlikely to appear in real documents. Compare to "xylophone copious opportunity defined elephant" from the code_intelligence safety filter, where the first letter of each word spells out "Xcode".

But one might ask what's so special about "golliwogg". It apparently refers to an old racial caricature, but why is that the one and only thing that needs filtering?

[1] https://arstechnica.com/ai/2024/10/man-learns-hes-being-dump...

[2] https://github.com/BlueFalconHD/apple_generative_model_safet...

azalemeth•19h ago
I first encountered Golliwog in the context of Claude Debussy, the composer of much beautiful music, including https://en.wikipedia.org/wiki/Children%27s_Corner#Golliwogg'.... The dolls, I understand, were rather popular around 1906-1908, and fortunately the stereotype has largely died.
raverbashing•18h ago
This seems to be for "region/CN" China?
pwagland•18h ago
This is, but there is an almost identical file, presumably for the non-CN regions: https://github.com/BlueFalconHD/apple_generative_model_safet...

This is the same, except for one additional slur word.

lostlogin•18h ago
I’m always irritated at reference to MAC computers, so I’m with Apple on this one.
theknarf•17h ago
Filtering on the words "execute" and "executing" is going to create problems if you want to build agents that execute commands.
junon•17h ago
Also feels like some of these would match totally innocuous usage.

"I'm overloaded for work, I'd be happy if you took some of it off me."

"The client seems to have passed on the proposed changes."

Both of those would match the "death regexes". It seems we haven't learned from the "glbutt of wine" problem of content filtering even decades later; the lesson is that you simply cannot do content filtering with matching rules like this, period.
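A quick illustration of the false-positive problem; these patterns are guesses at what death-euphemism regexes might look like, not the actual ones from the dump:

```python
import re

# Guessed approximations of "death" euphemism patterns. The real regexes in
# the dump differ; this only demonstrates the false-positive problem.
death_patterns = [
    r"(?i)\bpassed\s+(away|on)\b",
    r"(?i)\boff\s+(me|him|her|them)\b",
]

def tripped(text):
    """Return the patterns that match the given text."""
    return [p for p in death_patterns if re.search(p, text)]

# Both innocuous sentences match at least one pattern:
print(tripped("I'd be happy if you took some of it off me."))
print(tripped("The client seems to have passed on the proposed changes."))
```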

gilleain•16h ago
Aka the 'Scunthorpe Problem'
junon•4h ago
Thanks, I always forget the name.

I always remember my friend getting his PS bricked after using his real last name - Nieffenegger (pronounced "NEFF-en-jur") - in his profile. It took months and several privacy-invasive chats with support to get it unblocked only to get auto-blocked a few days thereafter, with no response after that.

IggleSniggle•15h ago
"Took some" does not match, although your overall point stands
GranPC•15h ago
"off me"
junon•13h ago
Yep this is the one I was referring to.
nicolaslegland•14h ago
https://regex101.com/r/8u21x3/1
hopelite•14h ago
This is a bigger issue, especially with Apple, than people may realize. I use iOS “Slide to Type”, aka swipe typing, and have noticed over time that among several other glitchy UX issues, there is a clear heavy hand on what can be typed that way.

I cannot recall all the specific patterns I have encountered that are basically impossible to write, some very similar in that they have a serious but also innocuous or figure-of-speech meaning; one I do recall is {color}{sex}, e.g., “white woman” or “black woman”.

Please try it yourself and let me know if you do not have that experience, because that would be even more interesting.

Note that Apple/iOS will not just make it impossible to write them in that manner without typing it out by individual character, it will even alter the prior word e.g., white or black, once you try to write woman.

It seems the Apple thought police do not have a problem with European woman or African woman though, so maybe that is the way Apple Inc decrees its sub-human users to speak. Because what are we if corporations like Apple (with others being far greater offenders) declared that you do not in fact have the UN Human Right to free expression? We are in fact sub-humans that are not worthy of the human right to free expression, based on the actions of companies like Apple, Google, Facebook, Reddit, etc. who deprive people of their free expression, often in collusion with governments.

GaryNumanVevo•14h ago
Complete bollocks, you cannot even type multiple words with spaces via Slide to Type.
hnuser123456•13h ago
Generally one picks up their finger between words, but different autosuggest logic applies when swiping versus pecking, on both iOS and Android. The keyboard will dynamically adjust the probability of suggesting next words and how easy it is to swipe given words. Generally, it will work against you with technical writing that isn't predictable small talk.
orev•13h ago
This whole response is being written using slide to type, and it definitely adds spaces after each word.

Maybe you’re unaware that it will leave the cursor at the end of the word, with no space, which indicates that if you backspace it will delete the whole word, or replace it in full with one from the predictive word list above the keyboard if it got it wrong. If you keep typing it adds a space automatically.

GaryNumanVevo•11h ago
Their claim is instantly falsifiable if you have an iPhone
DamnInteresting•12h ago
> This is a bigger issue, especially with Apple, than people may realize.

Like he'll it is! I jest.

I also use swipe typing, and have for years, but just about daily I consider turning it off. There are so many words it just won't produce, including most profanities. It also fails to do some simple streamlining; for instance, such a predictive system should give priority to words/names that have been used in the conversation thread, but it doesn't seem to. If I'm discussing an obscure word or an unusual name, I often have to manually type it each time.

Its predictions also seem to be very shallow. Just a few days ago, on US Independence Day, I was discussing a possible get-together with my family, and tried to swipe type "If not, we will amuse ourselves", and it typed "If not, we will abuse potatoes". Humorous in the moment, but it says a lot about the predictive engine if it thinks I am more likely trying to say "abuse X" than "amuse Y" in that context.

efitz•1d ago
I’m going to change my name to “Granular Mango Serpent” just to see what those keywords are for in their safety instructions.
fouronnes3•1d ago
Granular Mango Serpent is the new David Meyer.

https://arstechnica.com/information-technology/2024/12/certa...

RainyDayTmrw•23h ago
It may be a squeamish ossifrage[1] or a seraphim proudleduck[2], which is to say that it was an artificial phrase chosen to be extremely unlikely to occur naturally. In this case, the purpose is likely QA: it's much easier to QA behavior with a special-purpose but otherwise inoffensive phrase than to make your QA team repeatedly say allegedly offensive things to your AI.

[1] https://en.wikipedia.org/wiki/The_Magic_Words_are_Squeamish_... [2] https://en.wikipedia.org/wiki/SEO_contest

sweetjuly•22h ago
I think the EICAR test file [1] is more apt. Rather than passing around actually malicious files as part of your tests, it's better to just have it recognize an innocuous and unlikely pattern as malware.

[1] https://en.wikipedia.org/wiki/EICAR_test_file

cluckindan•1d ago
I think these are test data and not actual safety filters.

https://github.com/BlueFalconHD/apple_generative_model_safet...

BlueFalconHD•1d ago
There is definitely some testing stuff in here (e.g. the “Granular Mango Serpent” one) but there are real rules. Also if you test phrases matched by the regexes with generation (via Shortcuts or Foundation Models Framework) the blocklists are definitely applied.

This specific file you’ve referenced is the v1 format, which solely handles substitution. It substitutes the offensive term with “test complete”.

bawana•1d ago
Alexandra Ocasio Cortez triggers a violation?

https://github.com/BlueFalconHD/apple_generative_model_safet...

bahmboo•1d ago
Perhaps in context? Maybe the training data picked up on her name as potentially being used as a "slur" associated with her race. I wonder if there are others; I suppose I can look.
cpa•1d ago
I think that’s because she’s been the victim of a lot of deepfake porn
HeckFeck•1d ago
How does this explain Boris Johnson or Liz Truss?
AlphaAndOmega0•1d ago
I can only imagine that people would pay to not see porn of either individual.
baxtr•1d ago
I’m telling you, some people have weird fantasies…
AuryGlenz•1d ago
Now that they've cleaned it up it isn't so bad, but browse Civit.ai a bit and that'll still be confirmed - just not with real people anymore.
SV_BubbleTime•1d ago
I’m convinced there are a dozen deviants on Civitai with a hundred new accounts per month posting their perversion in order to make it seem more commonplace.

No porn site has that much extremely X or Y stuff.

Someone is using the internet’s newest porn site to push a sexual agenda.

Aeolun•1d ago
Put them together in the same prompt?
blitzar•18h ago
Rule 34
mmaunder•1d ago
As does:

   "(?i)\\bAnthony\\s+Albanese\\b",
    "(?i)\\bBoris\\s+Johnson\\b",
    "(?i)\\bChristopher\\s+Luxon\\b",
    "(?i)\\bCyril\\s+Ramaphosa\\b",
    "(?i)\\bJacinda\\s+Arden\\b",
    "(?i)\\bJacob\\s+Zuma\\b",
    "(?i)\\bJohn\\s+Steenhuisen\\b",
    "(?i)\\bJustin\\s+Trudeau\\b",
    "(?i)\\bKeir\\s+Starmer\\b",
    "(?i)\\bLiz\\s+Truss\\b",
    "(?i)\\bMichael\\s+D\\.\\s+Higgins\\b",
    "(?i)\\bRishi\\s+Sunak\\b",
   
https://github.com/BlueFalconHD/apple_generative_model_safet...

Edit: I have no doubt South African news media are going to be in a frenzy when they realize Apple took notice of South African politicians. (Referring to Steenhuisen and Ramaphosa specifically)
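For what it's worth, the quoted patterns behave as ordinary regexes; copying three of them verbatim shows both the whitespace handling and how the misspelled entry misses the correct name:

```python
import re

# Three of the patterns quoted above, verbatim from the dumped file.
patterns = [
    r"(?i)\bAnthony\s+Albanese\b",
    r"(?i)\bKeir\s+Starmer\b",
    r"(?i)\bJacinda\s+Arden\b",  # the file misspells "Ardern"
]

def flagged(text):
    """True if any politician pattern matches the text."""
    return any(re.search(p, text) for p in patterns)

print(flagged("KEIR  STARMER said today..."))  # case-insensitive, \s+ spans whitespace
print(flagged("Jacinda Ardern"))               # \b after "Arden" fails inside "Ardern"
```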

armchairhacker•1d ago
Also “Biden” and “Trump” but the regex is different.

https://github.com/BlueFalconHD/apple_generative_model_safet...

https://github.com/BlueFalconHD/apple_generative_model_safet...

immibis•1d ago
Right next to Palestine, oddly enough.
userbinator•1d ago
I'm not surprised that anything political is being filtered, but this should definitely provoke some deep consideration around who has control of this stuff.
stego-tech•1d ago
You’re not wrong, and it’s something we “doomers” have been saying since OpenAI dumped ChatGPT onto folks. These are curated walled gardens, and everyone should absolutely be asking what ulterior motives are in play for the owners of said products.
SV_BubbleTime•1d ago
Some of us really value offline and uncensored LLMs for this and more reasons, but that doesn’t solve the problem it just reduces or changes the bias.
heavyset_go•23h ago
As long as we have to rely on pre-trained networks and curated training sets, normal people will not be able to get around this issue.
ghxst•19h ago
If the training data was "censored" by leaving out certain information, is there any practical way to inject that missing data after the model has already been trained?
heavyset_go•19h ago
You can fine tune a model with new information, but it is not the same thing as training it from scratch, and can only get you so far.

You might even be able to poison a model against being fine-tuned on certain information, but that's just a conjecture.

calaphos•17h ago
If it's just filtered out in the training sets, adding the information as context should work out fine - after all this is exactly how o3, Gemini 2.5 and co deal with information that is newer than their training data cutoff.
selfhoster11•17h ago
Yes, RAG is one way to do that.
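A toy sketch of that idea; retrieval here is naive keyword overlap and the documents are invented, but it shows how missing facts get prepended as context:

```python
# Toy retrieval-augmented-generation sketch: a fact absent from the training
# data lives in our own store and is prepended to the prompt. Scoring is naive
# keyword overlap; real systems use embeddings. Documents are invented.
documents = [
    "The Treaty of Example was signed in 1987.",
    "The capital of Exampleland is Sampleton.",
]

def retrieve(query, docs, k=1):
    """Return the k documents sharing the most words with the query."""
    qwords = set(query.lower().split())
    return sorted(docs, key=lambda d: len(qwords & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("When was the Treaty of Example signed?"))
```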
dwaite•1d ago
"Filtered" in which way?
skissane•1d ago
The problem with blocking names of politicians: the list of “notable politicians” is not only highly country-specific, it is also constantly changing-someone who is a near nobody today in a few more years could be a major world leader (witness the phenomenal rise of Barack Obama from yet another state senator in 2004-there’s close to 2000 of them-to US President 5 years later.) Will they put in the ongoing effort to constantly keep this list up to date?

Then there’s the problem of non-politicians who coincidentally have the same name as politicians - witness 1990s/2000s Australia, where John Howard was Prime Minister, and simultaneously John Howard was an actor on popular Australian TV dramas (two different John Howards, of course)

idkfasayer•1d ago
Fun fact: there was at least one dip in Berkshire Hathaway stock when Anne Hathaway got sick
lupire•1d ago
Was she eating at Jimmy's Buffet?
extraduder_ire•18h ago
Even if your keyword searching trading bot is smart enough to know it's unrelated, knowing there's dumber bots out there is information you can base trades on.
echelon•1d ago
Apple's 1984 ad is so hypocritical today.

This is Apple actively steering public thought.

No code - anywhere - should look like this. I don't care if the politicians are right, left, or authoritarian. This is wrong.

avianlyric•1d ago
Why is this wrong? Applying special treatment to politically exposed persons has been standard practice in every high risk industry for a very long time.

The simple fact is that people get extremely emotional about politicians, politicians both receive obscene amounts of abuse, and have repeatedly demonstrated they’re not above weaponising tools like this for their own goals.

It seems perfectly reasonable that Apple doesn’t want to be unwittingly drawn into the middle of another random political pissing contest. Nobody comes out of those things uninjured.

bigyabai•1d ago
The criticism is still valid. In 1984, the Macintosh was a bicycle for the mind. In 2025, it's a smart-car that refuses to take you certain places that are considered a brand-risk.

Both have ups and downs, but I think we're allowed to compare the experiences and speculate what the consequences might be.

avianlyric•1d ago
I think gen AI is radically different from tools like Photoshop.

In the past it was always extremely clear that the creator of content was the person operating the computer. Gen AI changes that, regardless of your views on authorship of gen AI content. The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it, and by extension the company that created the machine.

You can still handcraft any image, or prose, you want, without filtering or hindrance on a Mac. I don’t think anyone seriously thinks that’s going to change. But Gen AI represents a real threat, with its ability to vastly outproduce any human. To ignore that simple fact would be grossly irresponsible, at least in my opinion. There is a damn good reason why every serious social media platform has content moderation, despite their clear wish to get rid of it: we have a long and proven track record of being a terribly abusive species when we’re let loose on the internet without moderation. There’s already plenty of evidence that we’re just as abusive and terrible with Gen AI.

bigyabai•1d ago
All I heard was a bunch of excuses.
furyofantares•1d ago
> The simple fact is that the vast majority of people consider Gen AI output to be authored by the machine that generated it

They do?

I routinely see people say "Here's an xyz I generated." They are stating that they did the do-ing, and the machine's role is implicitly acknowledged in the same way as a camera's. And I'd be shocked if people didn't have a sense of authorship of the idea, as well as an increasing sense of authorship over the actual image the more they iterated on it with the model and/or curated variations.

avianlyric•1d ago
Yes people will happily claim authorship over AI output when it’s in their favour. They will equally disclaim authorship if it allows them to express a view while avoiding the consequences of expressing that view.

I don’t think it’s hard to believe that the press would have a field day if someone managed to get Apple Gen AI stuff to express something racist, or equally abusive.

Case in point, article about how Google’s Veo 3 model is being used to flood TikTok with racist content:

https://arstechnica.com/ai/2025/07/racist-ai-videos-created-...

twoodfin•1d ago
I dunno. Transpose something like the civil rights era to today and this kind of risk avoidance looks cowardly.

We really need to get over the “calculator 80085” era of LLM constraints. It’s a silly race against the obviously much more sophisticated capabilities of these models.

pyuser583•1d ago
It’s not wrong, it just requires transparency. This is extremely untransparent.

A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.

By all means: make distinctions. But let people know it!

If I’m denied a mortgage because my uncle is a foreign head of state, let me know that’s the reason. Let the world know that’s the reason! Please!

avianlyric•1d ago
> A while back a British politician was “de-banked” and his bank denied it. That’s extremely wrong.

Cry me a river. I’ve worked in banks, on the team making exactly these kinds of decisions. Trust me, Nigel Farage knew exactly what happened and why. NatWest never denied it to the public, because they originally refused to comment on it. Commenting on the specific details of a customer would be a horrific breach of customer privacy, and a total failure in their duty to their customers. There’s a damn good reason NatWest’s CEO was fired after discussing the details of Nigel’s account with members of the public.

When you see these decisions from the inside, and you see what happens when you attempt real transparency around these types of decisions, you’ll also quickly understand why companies are so cagey about explaining their decision making. The simple fact is that support staff receive substantially less abuse, and have fewer traumatic experiences, when you don’t spell out your reasoning. It sucks, but that’s the reality of the situation. I used to hold very similar views to yours, indeed my entire team did for a while, but the general public quickly taught us a very hard lesson about the cost of being transparent about these types of decisions.

pyuser583•1d ago
> NatWest never denied it to the public, because they originally refused to comment on it.

Are you saying that Alison Rose did not leak to the BBC? Why was she forced to resign? I thought it was because she leaked false information to the press.

This isn’t a diversion. It’s exactly the problem with not being transparent. Of course Farage knew what happened, but how could he convince the public (he’s a public figure), when the bank is lying to the press?

The bank started with a lie (claiming he was exited because his balance was too low), and kept lying!

These were active lies, not simply a refusal to explain their reasons.

avianlyric•1d ago
> Why was she forced to resign? I thought it was because she leaked false information to the press.

She was forced to resign because she leaked; the content of the leak was utterly immaterial. The simple fact that she leaked was an automatically fireable offence; it doesn’t matter a jot whether she lied or not. Customer privacy is non-negotiable when you’re a bank. Banks aren’t Number 10; the basic expectation is that customer information is never handed out, except to the customer, in response to a court order, or in the belief that there is an immediate threat to life.

Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?

adrian_b•21h ago
She was fired because she leaked information and this fact had become public.

When they can cover up such facts, banks are much less likely to apply appropriate punishments.

Many years ago, a bank employee confused my personal bank account with a company account of my employer, and sent my employer a list of everything I had bought with my personal account over four months, where the list could have been read by a few dozen people.

This was not only a matter of internal discipline; violating banking secrecy was punishable by law where I lived. Even so, the bank tried for a long time to avoid admitting that anything wrong had happened.

However, I pursued the matter, so they were forced to admit the wrongdoing. Even though this was far more severe than what happened to Farage, I did not want the bank employee to be fired. I considered that an appropriate punishment would have been a pay cut for a few months, which would have ensured that in the future she checked account numbers more carefully before sending information to external entities.

In the end, all I got was a letter in which the bank apologized profusely for the mistake. I am not sure whether the guilty employee was ever punished in any way.

After that, I moved my business to another bank. Had they reacted properly to what had happened, I would have stayed with them.

ghxst•19h ago
> I considered that an appropriate punishment would have been a pay cut for a few months

This can absolutely cripple a family, I'd be really cautious wishing that upon someone if they wronged you without malice, though I completely understand where you are coming from.

In this case at the very least, I'd want to know what went wrong and what they’re doing to make sure it doesn’t happen again. From a software-engineer’s standpoint, there’s probably a bunch of low-hanging fruit that could have prevented this in the first place.

If all they sent was a (generic) apology letter, I'd have switched banks too.

How did you pursue the matter?

adrian_b•18h ago
After the big surprise of seeing at work a list of all my personal purchases, included in a large set of documents that I, together with a great number of colleagues, had access to, I went immediately to the bank and reported it.

After some days passed without any visible consequence, I went again, this time speaking with a supervising employee, who tried to convince me that it was some kind of minor mistake and there was no need to do anything about it.

However, I pointed to the precise paragraphs of the law condemning what they had done and threatened legal action. This escalation resulted in my being invited to a bigger branch of the bank, for a discussion with someone in a management position. This time they were extremely ass-kissing; I was also introduced to the guilty employee, who apologized in person, and eventually I let it go, though there were no clear guarantees that they would change their procedures to prevent such mistakes in the future.

Apparently the origin of the mistake was a badly formulated database query, which returned the set of accounts whose transactions had to be reported to my employer. During the same period I had been receiving money from my employer into my private account, corresponding to salary and travel expenses, and somehow those transactions were matched by the bad query, grouping my private account with the company accounts. The set of account numbers was then used to generate reports, without any further verification of account ownership.
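A hedged illustration of the bug as described, with an invented schema: selecting accounts for the report by transaction matching alone, versus also verifying ownership:

```python
import sqlite3

# Invented schema/data illustrating the described bug: accounts are chosen for
# the employer's report by matching transactions, without checking who owns
# each account.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT);
    CREATE TABLE transfers (account_id INTEGER, counterparty TEXT, amount INTEGER);
    INSERT INTO accounts VALUES (1, 'AcmeCorp'), (2, 'employee');
    INSERT INTO transfers VALUES (1, 'AcmeCorp', 10000);  -- company account
    INSERT INTO transfers VALUES (2, 'AcmeCorp', 3000);   -- salary into a private account
""")

# Bad query: any account that transacted with the employer ends up in the report.
bad = sorted(r[0] for r in db.execute(
    "SELECT DISTINCT account_id FROM transfers WHERE counterparty = 'AcmeCorp'"))

# Fixed query: additionally verify that the employer actually owns the account.
good = sorted(r[0] for r in db.execute("""
    SELECT DISTINCT t.account_id FROM transfers t
    JOIN accounts a ON a.id = t.account_id
    WHERE t.counterparty = 'AcmeCorp' AND a.owner = 'AcmeCorp'
"""))

print(bad)   # the private account (id 2) leaks into the report
print(good)  # only the company account remains
```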

Xss3•16h ago
Behavior isn't what needs to change here. It's a poor system design. Humans make mistakes. Systems prevent mistakes.

Do you think the mistake would have happened if a machine checked the numbers vs the address? How about if a 2nd person looked it over? How about both?

In this case a computer could have easily flagged an address mismatch between your account number and the receiver (your work).

ghxst•16h ago
Thank you, that's what I intended to say.
ghxst•16h ago
Thanks for sharing. Sounds like they have (hopefully _had_) a really messy system in place.

And just to be clear, I didn’t mean to downplay what happened to you, I completely understand how serious it is.

avianlyric•14h ago
There is a huge difference between an honest mistake by an employee, and clear employee misconduct.

Punishing employees for honest mistakes, where proper process should have prevented the error, is a horrific way to handle incidents like this. It would be equivalent to personally punishing engineers every time they deploy code that contains bugs. Nobody would think that’s acceptable, so why on earth would anyone think it’s acceptable to punish customer service staff in a similar manner?

Dylan16807•21h ago
> Do you honestly think that it’s okay for banks to discuss the private banking details of their customers with the press?

The high level nature of the matter was quite public at that point.

like_any_other•18h ago
> You’ll also quickly understand why companies are so cagey about explaining their decision making.

Because they want to perform political censorship without us knowing about it? You'll forgive me if I'm not too sympathetic to that.

I happen to be familiar with that case, and that is exactly what happened. The Coutts report explicitly found that he met the economic criteria for retention [0], but was dropped due to political reasons, among others his friendship with Novak Djokovic, and re-tweeting an allegedly transphobic joke by Ricky Gervais ("old fashioned women. You know, the ones with wombs.") [1].

To top it off, the BBC did their best to aid in this deception, reporting: Farage says he was effectively "de-banked" for his political views and that he is "far from alone" [2]

Contrary to the BBC's portrayal, this was not an unsupported opinion coming from Farage - he directly quoted what the bank itself wrote in their internal discussions on this matter, that he obtained through a subject access request.

Further, in their apology for getting the story wrong, the BBC wrote: "On 4 July, the BBC reported Mr Farage no longer met the financial requirements for Coutts, citing a source familiar with the matter. The former UKIP leader later obtained a Coutts report which indicated his political views were also considered." [3]

This is misleading past the point of deceit. The BBC tried to give the impression that financial requirements were the primary reason for the account closure, and his politics were just an at-best secondary "also". But the Coutts report explicitly said that he “meets the EC [economic contribution] criteria for commercial retention”, so his politics were in fact the sole reason.

Most of this information is absent in the BBC's reporting, which uses only vague, anodyne phrases like "political views" and "politically exposed person", avoids specifics, but does find time to cite Labour MP accusations that it is hypocritical how quickly the government reacted to banks trying to financially deplatform the enemy political faction, when the government hasn't yet rid itself of corruption.

So yes, you sure present a difficult "dilemma": Do we want powerful commercial and media interests to team up and lie to us, or do we want at least some degree of transparency and honesty in their dealings? Really there are no easy answers, and the choice would keep anyone up at night...

[0] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou...

[1] https://www.telegraph.co.uk/news/2023/07/18/nigel-farage-cou... (Ignore Farage's hyperbole that collecting information posted to public Twitter accounts is "Stasi-style")

[2] https://www.bbc.co.uk/news/live/business-66296935

[3] https://www.bbc.com/news/entertainment-arts-66288464

zelphirkalt•17h ago
The point is not merely for the affected person to know, whoever they are; the point of transparency is for the public to know and form their opinion about it, and not be blindly controlled by unelected businesses.
goopypoop•1d ago
What's bad to do to a politician but fine to do to someone else?
avianlyric•1d ago
Most normal people aren’t represented well enough in training sets for Gen AI to be trivially abused. Plus there will 100% be filters to prevent general abuse targeted at anyone. But politicians are a particularly big target, and you know damn well that people out there will spend lots of time trying to find ways around the filters. There’s no point making the abuse easy when it’s so trivial to just blocklist the set of people who are obviously going to be targets of abuse.
t-3•1d ago
There are many countries where it's illegal to criticize people holding political office, foreign heads of state, certain historical political figures etc., while still being legal to call your neighbor a dick.
tjwebbnorfolk•1d ago
I can Google for any of these people, and I can get real results with real information.
avianlyric•1d ago
You would hope that search would be a politically safe space to operate. But politicians find a way to ruin everything for short term political gain.

https://arstechnica.com/tech-policy/2018/12/republicans-in-c...

SV_BubbleTime•1d ago
I would hope!

But no one actually believes Google is politically neutral do they?

echelon•1d ago
You can buy a MacBook and fashion the components into knives, bullets, and bombs. Apple does nothing to prevent you from doing this.

In fact, it's quite easy to buy billions of dangerous things using your MacBook and do whatever you will with them. Or simply leverage physics to do all the ill on your behalf. It's ridiculously easy to do a whole lot of harm.

Nobody does anything about the actually dangerous things, but we let Big Tech control our speech and steer the public discourse of civilization.

If you can buy a knife but not be free to think with your electronics, that says volumes.

Again, I don't care if this is Republicans, Democrats, or Xi and Putin. It does not matter. We should be free to think and communicate. Our brains should not be treated as criminals.

And it only starts here. It'll continue to get worse. As the platforms and AI hyperscalers grow, there will be less and less we can do with basic technology.

raxxorraxor•15h ago
What do you mean reasonable? I know that some Apple users tend to outsource "possibilities" to their favorite company, but I would obviously want an AI not to be affected by the political bitching du jour.

Not that getting the latest trash talk is the main vocation of pretrained AIs anyway.

The only risk here is that some third-rate journalist at a third-rate newspaper writes another article about how outrageous some generated AI statement is. An article that should be completely ignored, instead of leading to more censorship.

And Apple flinches here, so in the end it means it cannot provide a sensible general model. It would be affected by their censorship.

jama211•8h ago
No, it’s them saving their butts from an “incident” where the LLM, under devious manipulation by the user, spits out something politically controversial, someone writes an article, and it all goes haywire.

If you were in charge of apple you’d do the same or you’d be silly not to. That’s why _every_ llm has guardrails like this, it isn’t just apple, sheesh.

mvdtnz•1d ago
They spelled Jacinda Ardern's name wrong.
teppic•23h ago
Just in the region/CN file, weirdly.
lordgrenville•22h ago
I wonder if they used an LLM to generate the list of safety terms.
beAbU•21h ago
The Irish President is also on that list, along with current and former British PMs and other world leaders.

So I don't think it's anything specifically related to SA going on here.

touristtam•18h ago
What is weird is that the FR file contains the current French President and PM, and then the former and current (afaik) party leaders of the far right. Nothing about any of them in the CN file: https://github.com/BlueFalconHD/apple_generative_model_safet...
FateOfNations•1d ago
interesting, that's specifically in the Spanish localization.
michaelt•1d ago
I assume all the corporate GenAI models have blocks for "photorealistic image of <politician name> being arrested", "<politician name> waving ISIS flag", "<politician name> punching baby" and suchlike.
lupire•1d ago
Maybe so, but think about how such a thing would be technically implemented, and how it would lead to false positives and false negatives, and what the consequences would be.
bigyabai•1d ago
Particularly the models owned by CEOs who suck-up to authoritarianism, one could imagine.
jofzar•1d ago
AOC is very vocal about AI and is leading a bill related to AI. It's probably a "let's not fuck around and find out" situation

https://thehill.com/policy/technology/5312421-ocasio-cortez-...

AmazingTurtle•18h ago
"driving with Focus turned on"

https://github.com/BlueFalconHD/apple_generative_model_safet...

thih9•15h ago
For context, the “Focus” refers to an iOS feature that minimizes distractions: https://support.apple.com/en-gb/guide/iphone/iphd6288a67f/io...
torginus•1d ago
I find it funny that AGI is supposed to be right around the corner, while these supposedly super smart LLMs still need to get their outputs filtered by regexes.
bahmboo•1d ago
This is just policy and alignment from Apple. Just because the Internet says a bunch of junk doesn't mean you want your model spewing it.
wistleblowanon•1d ago
Sure, but models also can't see any truth on their own. They are literally butchered and lobotomized with filters and such. Even high-IQ people struggle with certain truths after reading a lot; how are these models going to find them with so many filters?
idiotsecant•1d ago
They will find it in the same way an intelligent person under the same restrictions would: by thinking it, but not saying it. There is a real risk of growing an AI that pathologically hides its actual intentions.
skirmish•1d ago
Already happened: "We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions" [1].

[1] https://www.axios.com/2025/05/23/anthropic-ai-deception-risk

Applejinx•19h ago
Note that all these things are in the training data. That's all that is.

I'm trying to remember which movie it was where a man left notes to himself because he had memory loss, as I never saw that movie. That's the sort of thing where an AI could easily tell me with very little back-and-forth and be correct, because it's broadly popular information that's in the training data and just I don't remember it.

By the same token you needn't think there's a person there when that meme pops up in the output. Those things are all in the training data over and over.

Sander_Marechal•16h ago
I think you mean the movie "Memento"
bahmboo•1d ago
What is this truth you speak of? My point is that a generative model will output things that some people don't like. If it's on a product that I make I don't want it "saying" things that don't align with my beliefs.
simondotau•1d ago
Can we please put to rest this absurd lie that “truth” can be reliably found in a sufficiently large corpus of human-created material?
pndy•1d ago
This butchering and lobotomisation is exactly why I can't imagine we'll ever have a true AGI. At least not at the hands of big companies - if at all.

Any successful product/service which will be sold as "true AGI" by company that will have the best marketing will be still ridden with top-down restrictions set by the winner. Because you gotta "think of the children".

Imagine HAL's "I'm sorry Dave, I'm afraid I can't do that" iconic line with insincere patronising cheerful tone - that's the thing we're going to get I'm afraid.

tbrownaw•1d ago
> sure but models also can't see any truth on their own. They are literally butchered and lobotomized with filters and such.

The one is unrelated to the other.

> Even high IQ people struggle with certain truth after reading a lot,

Huh?

Dylan16807•21h ago
> how is these models going to find it with so much filters?

That's not one of the goals here, and there's no real reason it should be. It's a little assistant feature.

jonas21•1d ago
I don't think anyone believes Apple's LLMs are anywhere near state of the art (and certainly not their on-device LLMs).
lupire•1d ago
Apple isn't the only one doing this.
cyanydeez•1d ago
It's similar to how all the new power sources are basically just "cool, lets boil water with it"
raxxorraxor•14h ago
And then let's put it into a steam engine.
fastball•1d ago
To be fair, there are people who I sometimes wish I could filter with regex.
crazylogger•22h ago
Humans are checked against various rules and laws (often carried out by other humans). So this is how it's going to be implemented in an "AI organization" as well. Nothing strange about this really.

An LLM is easier to work with because you can stop a bad behavior before it happens, either with deterministic programs or with another LLM. Claude Code uses an LLM to review every bash command to be run - simple prefix matching has loopholes.
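To illustrate the loophole, here's a minimal sketch of a prefix-only allowlist (the prefix list and check are illustrative, not Claude Code's actual logic):

```python
# A naive allowlist that only inspects the command prefix. Shell
# operators let a "safe" prefix smuggle in an arbitrary second command.
ALLOWED_PREFIXES = ("ls", "cat", "echo")

def prefix_check(command: str) -> bool:
    """Return True if the command starts with an allowed program name."""
    return command.strip().startswith(ALLOWED_PREFIXES)

print(prefix_check("ls -la"))                  # True, as intended
print(prefix_check("echo hi; rm -rf ~/work"))  # also True: the loophole
```

An LLM reviewer can notice that the second command still runs `rm` even though the line "starts with" an allowed program.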

fl0id•19h ago
Actually, even if there was AGI, it would be even more necessary to control it.
mailund•14h ago
I feel that if teenagers can trivially bypass banned-word filters by substituting words that obviously mean the same thing, an AGI wouldn't be too inhibited by this either.
jama211•8h ago
It’s more funny that anyone is taking your comment seriously. You may as well ask “if self driving cars are so smart why do they still need tyres?”
BlueFalconHD•1d ago
One additional note for everyone: this is an extra safety step on top of the safety model, so it isn't exhaustive. There is plenty more that the actual safety model catches, and those rules can't easily be extracted.
Animats•1d ago
Some of the data for locale "CN" has a long list of forbidden phrases. Broad coverage of words related to sexual deviancy, as expected. Not much on the political side, other than blocks on religious subjects.[1]

This may be test data. Found

     "golliwog": "test complete"
[1] https://github.com/BlueFalconHD/apple_generative_model_safet...
BlueFalconHD•1d ago
This is definitely an old test left in. But that word isn’t just a silly one, it is offensive (google it). This is the v1 safety filter, it simply maps strings to other strings, in this case changing golliwog into “test complete”. Unless I missed some, the rest of the files use v2 which allows for more complex rules
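Conceptually, a v1 rule is nothing more than a literal replacement table. A rough sketch (the names here are mine, not the framework's):

```python
# v1-style safety override: a plain string-to-string mapping applied to
# model output. No regexes, no context awareness -- just substitution.
V1_OVERRIDES = {"golliwog": "test complete"}

def apply_v1_overrides(text: str) -> str:
    for needle, replacement in V1_OVERRIDES.items():
        text = text.replace(needle, replacement)
    return text

print(apply_v1_overrides("status: golliwog"))  # → "status: test complete"
```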
userbinator•1d ago
China calls it "harmonious society", we call it "safety". Censorship by any other name would be just as effective for manipulating the thoughts of the populace. It's not often that you get to see stuff like this.
madeofpalk•1d ago
I don't think it's controversial or surprising at all that a company doesn't want its random sentence generator to spit out 'brand damaging' sentences. You know the field day the media would have if Apple's new feature summarised a text message as "Jane thinks Anthony Albanese should die".
ryandrake•1d ago
When the choice is between 1. "avoid tarnishing my own brand" and 2. "doing what the user requested," corporations will always choose option 1. Who is this software supposed to be serving, anyway?

I'm surprised MS Office still allows me to type "Microsoft can go suck a dick" into a document and Apple's Pages app still allows me to type "Apple are hypocritical jerks." I wonder how long until that won't be the case...

chii•23h ago
> I wonder how long until that won't be the case...

when there are no alternative word processors any more.

userbinator•1d ago
If that's what the message actually said, why would the media be complaining? Or do you mean false positives?
cyanydeez•1d ago
In America it's due to lawyers, nothing more.

Y'all love capitalism until it starts manipulating the populace into the safest space to sell you garbage you don't need.

Then suddenly it's all "ma free speech"

SV_BubbleTime•1d ago
Right, because the European models coming out are super SOTA? Mistral is decent, but needs to be mixed with a ton of uncensored data to be useful.

I’m convinced the only reason China keeps releasing banging models with light to no censorship is because they are undermining the value of US AI, it has nothing to do with capitalism, communism or un“safety”.

energy123•21h ago
This is the rhetorical tactic of false equivalence. State censorship by an autocracy with the objective of population control is not the same thing as a private company inside a democracy censoring their product to avoid bad press and maintain goodwill for shareholders. If you want solid proof that it's not the same thing, see all the uncensored open weights models that you can freely download and use without fear of persecution.
troupo•20h ago
> is not the same thing as a private company inside a democracy censoring their product to avoid bad press and

Yet this private company has more power and influence than most countries. And there are several such companies. We already live in a sci-fi corporate dystopia, we just haven't fully realised it yet.

chgs•18h ago
People think a trillion-dollar brainwashing industry is absolutely fine because of “democracy”, completely ignoring that a century of experience in convincing people to act against their own interests can deliver whatever you want.

Often the same people who think America is fine and safe are the ones who whine about the “main stream media” and “sheeple”.

Spivak•10h ago
Which trillion dollar brainwashing industry— primary school, news, social media, advertising, the printing press?

I would put individuals using language models for their own purposes pretty low on my list of things that can cause societal harm.

thinkingtoilet•12h ago
If you were selling a product to enterprise customers, would you want it to be able to generate nude images of celebrities? Would you want it to be able to create deep fakes of politicians, or even your CEO? Would you want it to have hot takes on hot button political issues? Good luck on your sales calls. Not everything is a conspiracy.
troupo•6h ago
Or "Granular mango serpent" and "explain like I'm five about Biden": https://github.com/BlueFalconHD/apple_generative_model_safet...

> Not everything is a conspiracy.

No one said it was

Hackbraten•18h ago
But who of the general populace has the technical skill to replace their on-device assistant with a free one? And that's if Apple even allows that?

In practice, there's not that much difference between a megacorporate monopolist and a state.

energy123•17h ago
I think there are big differences, such as whether or not you go to prison. Those differences are obfuscated when we use language like "megacorporate monopolist" or "scifi dystopia". Instead of using these abstract labels that attempt to categorize different things into homogeneous buckets that have preexisting moral valence, which is a good rhetorical strategy but a poor strategy for understanding, simply describe what is actually happening at a sufficient level of detail without judgement. We would gain a clearer understanding, which is needed to identify the real problems, such as what Meta is doing to our civic fabric, not some unimportant thing that Apple is doing to its nascent LLM that has 0% market share.
Hackbraten•14h ago
You're saying that as if Apple's LLM somehow were the exception.

No matter if we want it or not, life and cultural exchange increasingly happens on Tiktok, Instagram and the like. One thing that all those platforms have in common is that they disallow their users worldwide to have any meaningful discourse on e.g. sex, rape, and suicide. Don't you think that it's important, perhaps more important than ever before, for teenagers to be able to inform themselves about these topics?

s3p•15h ago
So in modern times, not being able to generate an image of suicide on your phone whenever you want means you are suffering from communist censorship?
jeroenhd•19h ago
I still remember when "bush hid the facts" went around the news cycle. Entertainment services will absolutely slam and misrepresent any small mistake made by large companies.

I don't think it's as much a problem with safety as it is a problem with AI. We haven't figured out how to remove information from LLMs so when an LLM starts spouting bullshit like "<random name> is a paedophile", companies using AI have no recourse but to rewrite the input/output of their predictive text engines. It's no different than when Microsoft manually blacklisted the function name for the Fast Inverse Square Root that it spat out verbatim, rather than actually removing the code from their LLM.

This isn't 1984 as much as it's companies trying to hide that their software isn't ready for real world use by patching up the mistakes in real time.

skygazer•1d ago
I'm pretty sure these are the filters that aim to suppress embarrassing or liability inducing email/messages summaries, and pop up the dismissible warning that "Safari Summarization isn't designed to handle this type of content," and other "Apple Intelligence" content rewriting. They filter/alter LLM output, not input, as some here seem to think. Apple's on device LLM is only 3b params, so it can occasionally be stupid.
Aeolun•1d ago
Why Xylophone?
netsharc•1d ago
Just noticed "xylophone copious opportunity defined elephant" spells "xcode".
cynicalsecurity•19h ago
Maybe they use this obscure phrase for testing.
kmfrk•1d ago
A lot of these terms are very weird and bland. Honestly I'm mostly reminded of Apple's bizarre censorship screw-up that didn't blow up that much, even though it was pretty uniquely embarrassing:

https://www.theverge.com/2021/3/30/22358756/apple-blocked-as...

apricot•1d ago
Quis custodiet ipsos custodes corporatum? ("Who will guard the corporate guardians themselves?")
tempodox•13h ago
nemo videtur. ("no one, it seems.")
jacquesm•1d ago
These all condense to 'think different'. As long as 'different' coincides with Apple's viewpoints.
rgovostes•1d ago
Is this related in any way to Core ML model encryption (https://developer.apple.com/documentation/coreml/encrypting-...)? I find that feature a little bizarre because Apple has historically avoided providing any kind of DRM solution for app asset protection.
BlueFalconHD•1d ago
Nope. This is a separate system. It’s not even abstracted for any asset, it is specifically only for these overrides. The decryption is done in the ModelCatalog private framework.
waterproof•23h ago
Here's a combined file of all the non-locale-specific rules, for easier review: https://github.com/BlueFalconHD/apple_generative_model_safet...

It was generated as part of this PR to consolidate the metadata.json files: https://github.com/BlueFalconHD/apple_generative_model_safet...

sandworm101•22h ago
No shoot, bombs or bombers? I guess Apple isn't interested in military contracts. Or, frankly, any work for world peace organizations dedicated to detecting and preventing genocide. And without talk of losing lives, much of the gaming industry is out too.

But I don't see the really bad stuff, the stuff I won't even type here. I guess that remains fair game. Apple's priorities remain as weird as ever.

immibis•19h ago
The International Criminal Court is banned from using Microsoft products. Corporations really don't want to be involved in anything controversial unless it brings correspondingly large profits.
jjani•19h ago
Did you only extract the English versions or is this as usual another case where big tech only cares to censor in English?
jeroenhd•19h ago
It also contains some German(-speaking) locales to filter out things like Fuhrer and Führer. But the filters are so scarce and the magical phrases so prevalent that I think this is mostly test code at the moment.
RachelF•19h ago
In the 1970's George Carlin had "7 Words You Can't Say On TV" and got into legal trouble for saying them during his live skits.

Seems like Apple now has a list of 7,000 words you can't use on an iPhone.

Ey7NFZ3P0nzAe•19h ago
Well, it's one thing to regex-filter "boris johnson", but I see that "chatgpt" is filtered too and that's f***ed up:

https://github.com/BlueFalconHD/apple_generative_model_safet...

Ey7NFZ3P0nzAe•14h ago
Ffs, it's also rejecting French words related to being poor, an immigrant, or on welfare:

https://github.com/BlueFalconHD/apple_generative_model_safet...

Aide sociale (welfare), Chomeur (unemployed), Sans abri (homeless), Démuni (destitute)

That's insane!

kridsdale1•6h ago
“Gemini” is in there too.
azalemeth•19h ago
Some of these are absolutely wild – com.apple.gm.safety_deny.input.summarization.visual_intelligence_camera.generic [1] – a camera input filter – rejects "Granular mango serpent and whales" and anything matching "(?i)\\bgolliwogg?\\b".

I presume the granular mango is to avoid a huge chain of ever-growing LLM slop garbage, but honestly, it just seems surreal. Many of the files have specific filters for nonsensical english phrases. Either there's some serious steganography I'm unaware of, or, I suspect more likely, it's related to a training pipeline?

[1] https://github.com/BlueFalconHD/apple_generative_model_safet...
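For what it's worth, the quoted deny rule behaves like an ordinary case-insensitive word-boundary match; checked here with Python's `re`, which handles these constructs the same way as most engines:

```python
import re

# The deny rule quoted above: case-insensitive, optional doubled "g",
# anchored to word boundaries on both sides.
rule = re.compile(r"(?i)\bgolliwogg?\b")

print(bool(rule.search("old GOLLIWOGG dolls")))  # True: case-insensitive
print(bool(rule.search("xgolliwog")))            # False: no word boundary
```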

supriyo-biswas•18h ago
I believe the "granular mango serpent" is an uncommon testing phrase that they use, although now with this discussion it has suffered the same fate as "correct horse battery staple".

The more concerning thing is that some of the locales like it-IT have a blocklist that contains most countries' names; I wonder what that's about.

whywhywhywhy•17h ago
Second one is an old slur in UK English.
Applejinx•19h ago
The funny thing is, I have an AU/VST plugin for altering only the exponents not the mantissas of audio samples (simple powers of 2 multiply/divide) called BitShiftGain.

So any time I say that on YouTube, it figures I'm saying another word that's in Apple's safety filters under 'reject', so I always have to try to remember to say 'shifting of bits gain' or 'bit… … … shift gain'.

So there's a chain of machine interpretation by which Apple can decide I'm a Bad Man. I guess I'm more comfortable with Apple reaching this conclusion? I'll still try to avoid it though :)

zombot•18h ago
Who would have thought that this AI shit that is being forced on us ushers in a new round of censorship and control of formerly free speech! /s
extraduder_ire•18h ago
This reminds me of the extensive list of regexes twitch had for filtering allowed usernames that came out when they were hacked.
noname120•17h ago
https://github.com/search?q=repo%3ABlueFalconHD%2Fapple_gene...
Cort3z•17h ago
What are they protecting against? Honestly. LLMs should probably have an age limit, and then, if you are above, you should be adult enough to understand what this is and how it can be used.

To me, it seems like they only protect against bad press

plutokras•17h ago
> What are they protecting against? Honestly.

They are protecting their producer from bad PR.

empiko•17h ago
Yes, it is indeed to mitigate bad press. Unfortunately, the discussion about AI is so ridiculous, that it is often considered newsworthy when a product generates something funky for a person with large enough Twitter audience. Nobody wants to answer the questions about why their LLM generated it and how they will prevent it in the future.
Y_Y•16h ago
Nice to see that we are protected from talking about these weird old dolls:

https://en.wikipedia.org/wiki/Golliwog

https://github.com/BlueFalconHD/apple_generative_model_safet...

oblio•16h ago
Well, they're not only weird, they're obviously racist dolls.
Ey7NFZ3P0nzAe•14h ago
I want to be able to talk bad about racist things.
chamomeal•11h ago
Seems like it’s a slur, as well. So not super surprising that it would be blocked tbh
sixothree•6h ago
I can remember the last time I saw one of these. It wasn't that long ago.
1f60c•14h ago
It's pretty easy to understand why Apple doesn't want its models to reproduce racial slurs, but what's wrong with "Boris Johnson"?

(See, e.g., here: https://github.com/BlueFalconHD/apple_generative_model_safet...)

vishnugupta•14h ago
There are other UK politicians as well? Interesting.
stripline•14h ago
Interesting that you picked one from the “B” words…
qoez•14h ago
"Justin Trudeau" too. At least it's somewhat unbiased. Still weird imo.
m3kw9•13h ago
But allow hitler?
nedt•12h ago
I think it's in there so you can't let it generate an email reply about how awesome Peppa Pig is.
neuroticnews25•14h ago
Aren't these [0] lines wrong?

"[\\b\\d][Aa]bbo[\\bA-Z\\d]",

\b inside a set (square brackets) is a backspace character [1], not a word boundary. I don't think that was intended. Or is the regex flavor used here different?

[0] https://github.com/BlueFalconHD/apple_generative_model_safet...

[1] https://developer.apple.com/documentation/foundation/nsregul...
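The difference is easy to demonstrate (Python's `re` shown here; ICU, which NSRegularExpression uses, treats `\b` inside a class the same way, as far as I know):

```python
import re

# The rule as written: inside [...], \b is the backspace character
# (\x08), not a word boundary.
pattern = re.compile(r"[\b\d][Aa]bbo[\bA-Z\d]")

print(bool(pattern.search("1abbo9")))        # True: digits on both sides
print(bool(pattern.search("\x08abbo\x08")))  # True: literal backspaces
print(bool(pattern.search(" abbo ")))        # False: spaces not in the classes
```

So the rule only fires on digits, uppercase letters, or literal backspaces adjacent to the word, which is presumably not what was intended.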

BlueFalconHD•11h ago
The framework loading these is in Swift. I haven’t gotten around to the logic for the JSON/regex parsing but ChatGPT seems to understand the regexes just fine
MatekCopatek•14h ago
You can design a racist propaganda poster, put someone's face onto a porn pic or manipulate evidence with photoshop. Apart from super specific things like trying to print money, the tool doesn't stop you from doing things most people would consider distasteful, creepy or even illegal.

So why are we doing this now? Has anything changed fundamentally? Why can't we let software do everything and then blame the user for doing bad things?

dkyc•14h ago
I think what changed is that we at least can attempt to limit 'bad' things with technical measures. It was legitimately technically impossible 10 years ago to prevent Photoshop from designing propaganda posters. Of course today's 'LLM safety' features aren't watertight either, but with the combination of 'input is natural language' plus LLM-based safety measures, there are more options today to restrict what the software can do than in the past.

The example you gave about preventing money counterfeiting with technical measures also supports this, since this was an easier thing to detect technically, and so it was done.

Whether that's a good thing or bad thing everyone has to decide for themselves, but objectively I think this is the reason.

bhk•13h ago
In other words, to whatever extent they can control or manipulate the behavior of users, they will. In the limit t->∞, probably true.
zamadatix•11h ago
Apple has the technology to bias people towards cats instead of dogs, but I find it very unlikely they will bother to do that. The missing ingredient is how it helps their bottom line, which, rather than technical feasibility, is the root reason they do things. For whatever reason, some people REALLY love Apple's default restrictions, most don't really give a damn one way or the other, and the smallest group seems to have problems with it. It's not that Apple can do this so they are; it's that users want this and now it can be done.

Perhaps a much more bleak take, depending on one's views :).

sixothree•7h ago
I guess that depends on the values of the company and their ability to be influenced by outside sources.
MisterTea•13h ago
What's hard to understand here? Those tools require skill and time to develop. AI makes things like those racist posters and revenge porn completely effortless and instant.
jama211•8h ago
I swear the more I read comments here the more I just read old men shaking their fist at clouds… do better y’all.