frontpage.

What's up with all those equals signs anyway?

https://lars.ingebrigtsen.no/2026/02/02/whats-up-with-all-those-equals-signs-anyway/
228•todsacerdoti•3h ago•68 comments

Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API)

https://github.com/whispem/minikv
30•whispem•5h ago•5 comments

Floppinux – An Embedded Linux on a Single Floppy, 2025 Edition

https://krzysztofjankowski.com/floppinux/floppinux-2025.html
176•GalaxySnail•8h ago•110 comments

The Codex App

https://openai.com/index/introducing-the-codex-app/
714•meetpateltech•19h ago•529 comments

Show HN: Safe-now.live – Ultra-light emergency info site (<10KB)

https://safe-now.live
57•tinuviel•4h ago•10 comments

Anki ownership transferred to AnkiHub

https://forums.ankiweb.net/t/ankis-growing-up/68610
459•trms•16h ago•177 comments

LNAI – Define AI coding tool configs once, sync to Claude, Cursor, Codex, etc.

https://github.com/KrystianJonca/lnai
37•iamkrystian17•4h ago•17 comments

Todd C. Miller – Sudo maintainer for over 30 years

https://www.millert.dev/
510•wodniok•20h ago•258 comments

Emerge Career (YC S22) Is Hiring a Founding Product Designer

https://www.ycombinator.com/companies/emerge-career/jobs/omqT34S-founding-product-designer
1•gabesaruhashi•1h ago

How does misalignment scale with model intelligence and task complexity?

https://alignment.anthropic.com/2026/hot-mess-of-ai/
208•salkahfi•12h ago•63 comments

GitHub experiences various partial outages/degradations

https://www.githubstatus.com?todayis=2026-02-02
231•bhouston•15h ago•79 comments

See how many words you have written in Hacker News comments

https://serjaimelannister.github.io/hn-words/
97•Imustaskforhelp•3d ago•143 comments

Archive.today is directing a DDoS attack against my blog?

https://gyrovague.com/2026/02/01/archive-today-is-directing-a-ddos-attack-against-my-blog/
213•gyrovague-com•2d ago•88 comments

The Connection Machine CM-1 "Feynman" T-shirt

https://tamikothiel.com/cm/cm-tshirt.html
94•tosh•4d ago•19 comments

Ask HN: Who is hiring? (February 2026)

283•whoishiring•21h ago•357 comments

xAI joins SpaceX

https://www.spacex.com/updates#xai-joins-spacex
756•g-mork•15h ago•1686 comments

Carnegie Mellon University Computer Club FTP Server

http://128.237.157.9/pub/
103•1vuio0pswjnm7•5d ago•20 comments

Why The Jetsons still matters (2012)

https://www.smithsonianmag.com/history/50-years-of-the-jetsons-why-the-show-still-matters-43459669/
23•fortran77•4d ago•3 comments

Hacking Moltbook

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
349•galnagli•21h ago•201 comments

4x faster network file sync with rclone (vs rsync) (2025)

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/
332•indigodaddy•4d ago•148 comments

Linux From Scratch ends SysVinit support

https://lists.linuxfromscratch.org/sympa/arc/lfs-announce/2026-02/msg00000.html
194•cf100clunk•19h ago•262 comments

The TSA's New $45 Fee to Fly Without ID Is Illegal

https://www.frommers.com/tips/airfare/the-tsa-new-45-fee-to-fly-without-id-is-illegal-says-regula...
466•donohoe•14h ago•535 comments

Rentahuman – The Meatspace Layer for AI

https://rentahuman.ai
62•p0nce•3h ago•48 comments

Zig Libc

https://ziglang.org/devlog/2026/#2026-01-31
300•ingve•19h ago•119 comments

Pretty soon, heat pumps will be able to store and distribute heat as needed

https://www.sintef.no/en/latest-news/2026/pretty-soon-heat-pumps-will-be-able-to-store-and-distri...
226•PaulHoule•1d ago•192 comments

Phenakistoscopes (1833)

https://publicdomainreview.org/collection/phenakistoscopes-1833/
21•tobr•2d ago•0 comments

Court orders restart of all US offshore wind power construction

https://arstechnica.com/science/2026/02/court-orders-restart-of-all-us-offshore-wind-construction/
418•ck2•14h ago•286 comments

Nano-vLLM: How a vLLM-style inference engine works

https://neutree.ai/blog/nano-vllm-part-1
261•yz-yu•1d ago•26 comments

Julia

https://borretti.me/fiction/julia
132•ashergill•14h ago•22 comments

Joedb, the Journal-Only Embedded Database

https://www.joedb.org/index.html
77•mci•3d ago•9 comments

Paris prosecutors raid France offices of Elon Musk's X

https://www.bbc.com/news/articles/ce3ex92557jo
75•vikaveri•3h ago

Comments

pogue•3h ago
Finally, someone is taking action against the CSAM machine operating seemingly without penalty.
chrisjj•2h ago
I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/
secretsatan•2h ago
It doesn't mention grok?
chrisjj•1h ago
Sure does. Twice. E.g.

Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.

mortarion•2h ago
CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.

No abuse of a real minor is needed.

worthless-trash•1h ago
As good as Australia's little boobie laws.
chrisjj•58m ago
https://www.theregister.com/2010/01/28/australian_censors/
chrisjj•1h ago
> CSAM does not have a universal definition.

Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.

> In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.

No corroboration found on web. Quite the contrary, in fact:

"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

https://rm.coe.int/factsheet-sweden-the-protection-of-childr...

> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.

> No abuse of a real minor is needed.

Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."

Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.

lava_pidgeon•49m ago
> Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.

Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?

chrisjj•39m ago
> Are you from Sweden?

No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.

> Why do you think the definition was clear across the world and not changed "before AI"?

I didn't say it was clear. I said there was no disagreement.

And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.

tokai•36m ago
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

Because that is up to the courts to interpret. You can't use your common law experience to interpret the law in other countries.

fmbb•34m ago
> - in any current law.

It has been since at least 2012 here in Sweden. That case went to our highest court, and they decided a manga drawing was CSAM (maybe you are hung up on this term though; it is obviously not the same in Swedish).

The holder was not convicted, but that is beside the point about the material.

lawn•33m ago
In Swedish:

https://www.regeringen.se/contentassets/5f881006d4d346b199ca...

> Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.

Which, translated, means that the child does not have to have taken part in sexual acts, and indeed undressing a child using AI could be CSAM.

I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produced by Grok are CSAM by Swedish standards.

rented_mule•21m ago
> Even the Google "AI" knows better than that. CSAM "is [...]"

Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.

Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.

Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.

logicchains•25m ago
You don't see a huge difference between abusing a child (and recording it) vs drawing/creating an image of a child in a sexual situation? Do you believe they should have the same legal treatment? In Japan for instance the latter is legal.
robtherobber•3h ago
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.

I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

spacecadet•1h ago
This. What a joke. I'm still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.
valar_m•42m ago
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

Who decides what communication is in the interest of the public at large? The Trump administration?

Mordisquitos•7m ago
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged-in users. But at least Twitter was arguably a mostly open communication platform, and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
vessenes•1h ago
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.

I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation. So I think it's interesting, and probably to the overall good, to have a country pushing very hard on these matters, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.

LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.

derrida•1h ago
I wouldn't equate the two.

There's someone who was being held responsible for what was in encrypted chats.

Then there's someone who published depictions of sexual abuse of minors.

Worlds apart.

cbeach•1h ago
Unlike Clinton, Gates et al, there is ZERO evidence that Musk ever visited the island, although he was invited by Epstein.

If you're going to make serious accusations like that you're going to need to provide some evidence.

techblueberry•53m ago
In November 2012, Epstein sent Musk an email asking “how many people will you be for the heli to island”.

“Probably just Talulah and me. What day/night will be the wildest party on your island?” Musk replied, in an apparent reference to his former wife Talulah Riley.

https://www.theguardian.com/technology/2026/jan/30/elon-musk...

I think there's just as much evidence Clinton did as Musk. Gates on the other hand.

antonymoose•38m ago
To my knowledge Musk asked to go but never actually went. Clinton, however, went a dozen or so times with Epstein on his private jet?

Has the latest release changed that narrative?

lawn•11m ago
Musk did ask to go after Epstein was sentenced.
rsynnott•10m ago
... Eh? This isn't about Musk's association with Epstein, it's about his CSAM generating magic robot (and also some other alleged dodgy practices around the GDPR etc).
btreecat•1h ago
>but I do really like a heterogeneous cultural situation

Why isn't that a major red flag exactly?

tokai•38m ago
In what world is generating CSAM a speech issue? It's really doing a disservice to actual free speech issues to frame it as such.
logicchains•34m ago
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
tokai•32m ago
That's not what we are discussing here. Even less when a lot of the material here is edits of real pictures.
logicchains•36m ago
>but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard

Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.

StopDisinfo910•35m ago
Very different charges however.

Durov was held on suspicion that Telegram was willfully failing to moderate its platform, allowing drug trafficking and other illegal activities to take place.

X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.

Note that both relate directly to violations of data safety law or association with separate criminal activities; neither is about speech.

pu_pe•1h ago
I suppose those are SpaceX's offices now that they merged.
omnimus•51m ago
So France is raiding offices of US military contractor?
mkjs•32m ago
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?

The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.

afavour•1h ago
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non-consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France are the only ones doing this.

(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)

cbeach•1h ago
> when notified, doing nothing about it

When notified, he immediately:

  * "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo 

  * locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
derrida•56m ago
The other LLMs probably don't have the training data in the first place.
afavour•56m ago
You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:

https://www.bbc.com/news/articles/c98p1r4e6m8o

> Have the other AI companies followed suit? They were also allowing users to undress real people

No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.

bonesss•39m ago
The part of X’s reaction to their own publishing I’m most looking forward to seeing in slow-motion in the courts and press was their attempt at agency laundering by having their LLM generate an apology in first-person.

“Sorry I broke the law. Oops for reals tho.”

techblueberry•55m ago
So you're allowed to simulate child porn as long as you pay for it? How noble of them. No wonder he was in the Epstein files.
rsynnott•5m ago
> If anything it should be an embarrassment that France are the only ones doing this.

As mentioned in the article, the UK's ICO and the EC are also investigating.

France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.

techblueberry•50m ago
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
afavour•44m ago
It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, during that time.
Mordisquitos•29m ago
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.

What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'

rsynnott•13m ago
> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

You would be _amazed_ at the things that people commit to email and similar.

Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...