frontpage.

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•9m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•12m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•14m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•22m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•24m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•25m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•25m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•28m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•29m ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•33m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•35m ago•1 comment

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•35m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•36m ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•38m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•41m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•43m ago•1 comment

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•49m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•51m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•56m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•58m ago•0 comments

Study of 150 developers shows AI-generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•58m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•1h ago•1 comment

UK infants ill after drinking contaminated baby formula from Nestlé and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•1h ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•1h ago•2 comments

VaultGemma: The most capable differentially private LLM

https://research.google/blog/vaultgemma-the-worlds-most-capable-differentially-private-llm/
125•meetpateltech•4mo ago

Comments

ForHackernews•4mo ago
Can someone explain what this actually means? I assume this still runs on Google's cloud, so it's not 'private' in any meaningful sense.
stephantul•4mo ago
It does not run on Google’s cloud. You can download the model and host it yourself, locally or using a provider you trust.
ForHackernews•4mo ago
That's actually great. I didn't realize Google had any models that could be self-hosted.
pkaye•4mo ago
The Gemma models are available for self-hosting. I've used the ones on the Ollama website myself.

https://ollama.com/library/gemma3

porridgeraisin•4mo ago
Differentially private means that:

training_algorithm(training data including a row like "ForHackernews blood test report...") is hard to distinguish from training_algorithm(the same data without that row), up to a factor of e^epsilon. They explain this further in the article itself, with concrete values for epsilon.
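
In symbols, this is the standard (epsilon, delta)-DP guarantee (my notation, not quoted from the article): for a training algorithm M, any two datasets D and D' differing in one record, and any set S of possible outputs,

    \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon} \cdot \Pr[\,M(D') \in S\,] + \delta

The epsilon in the article bounds exactly this ratio; delta is the small probability with which the bound is allowed to fail.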

drdaeman•4mo ago
I got that from the article, but I'm not getting what it means in practice. What's the use case?
porridgeraisin•4mo ago
It is very difficult for someone to coax the model into regurgitating a sequence from the training data. So, as you can imagine, the first use case is going to be Google training on your Gmail inbox without me being able to prompt your emails out of it.

User-level DP, on the other hand, which the article alludes to near the end, would mean that it's very difficult to make the model regurgitate any particular user's data.

Since this is a theoretical guarantee, you can do whatever prompt engineering you like; it will be really difficult all the same.

How difficult it is depends on a bunch of quantitative factors. Mostly, the value of epsilon.
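
To get a feel for how epsilon translates into an odds multiplier, a quick back-of-the-envelope sketch (mine, not from the article):

    import math

    # e^epsilon bounds the factor by which one training record can
    # change the probability of any particular output
    for eps in (0.5, 1, 2, 8):
        print(f"epsilon={eps}: factor <= {math.exp(eps):.1f}")

epsilon = 2 (this model) gives a factor of about 7.4; epsilon = 8 already gives about 3000.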

You might think this would be useful for copyright protection as well, but there is a subtle difference. It's been a while and I'm hazy on the details, so I'll refer you to the Near Access Freeness paper, which discusses it in detail and proposes another framework for that.

Workaccount2•4mo ago
If I am understanding this correctly, this is pretty damn cool. I've only got 15 minutes of research on it, but there's no better way to get corrected than to be wrong on the internet.

Essentially, it seems that they can use statistical magic to "fuzz" the training set in such a way that it becomes very difficult for the model to leak information from the training set, while still providing roughly the same output whether or not that exact info was in the training set. So I suppose the goal would be something like the ability to train on medical data while making it so the model won't be able to complete the prompt "Workaccount2 has a serious medical condition called ______", and would give the same response regardless of whether or not I was present in the database.

porridgeraisin•4mo ago
Yes.

P[ training_process(data) completes "Workaccount2 has a serious medical condition called" with "anaemia" ] <= e^epsilon * P[ training_process(data without that record) completes it with "anaemia" ] + delta

Here epsilon = 2 and delta is small. Basically, there is a theoretical guarantee that if it had trained on that sentence, it would be no more than about 7x as likely (e^2 ≈ 7.4) to output it in response to any prompt, compared to if it hadn't trained on that sentence at all. A "sentence" here is defined to be 1024 tokens long [1].

You might think 7x is not that big of a deal, but note that this is a worst-case theoretical guarantee (and with some mathematics it's possible to get an even tighter bound; see Rényi DP). In practice, actually getting private data out of a DP-trained model is difficult even for epsilon = 8 (which corresponds to a factor of e^8 ≈ 3000x!).

Edit: [1] This can be problematic: if a piece of information longer than 1024 tokens gets split into two sequences, then there is no theoretical guarantee across sequences. However, this is an implementation detail of this model; I've yet to see the effect of increasing this number to a more reasonable value.
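
For the curious, the mechanism that typically produces guarantees like this is DP-SGD: clip every example's gradient so no single record can move the weights too much, then add calibrated Gaussian noise before the optimizer step. A minimal toy sketch (my own illustration with made-up shapes; real training also needs privacy accounting to arrive at the final epsilon):

    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
        # 1. Clip: bound each example's influence to at most clip_norm.
        clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
                   for g in per_example_grads]
        # 2. Sum and add Gaussian noise calibrated to the clipping norm.
        noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
            0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
        # 3. Average; this noisy gradient is what the optimizer sees.
        return noisy_sum / len(per_example_grads)

    toy_grads = [np.random.randn(4) for _ in range(8)]  # fake per-example grads
    print(dp_sgd_step(toy_grads))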

freedomben•4mo ago
Thanks, that's quite exciting, because personally the thing I'm most excited about with AI is the medical and scientific research capabilities. Exciting times!
diggan•4mo ago
The actual weights: https://huggingface.co/google/vaultgemma-1b

> VaultGemma is a variant of the Gemma family of lightweight, state-of-the-art open models from Google. It is pre-trained from the ground up using Differential Privacy (DP). This provides strong, mathematically-backed privacy guarantees for its training data, limiting the extent to which the model's outputs can reveal information about any single training example.

> VaultGemma was trained using Tensor Processing Unit (TPU) hardware TPUv6e. Training large language models with the significant computational overhead of differential privacy requires specialized hardware. TPUs are designed to handle the massive computations involved, offering the performance, memory, and scalability necessary to train models like VaultGemma efficiently and sustainably.

Seems like it requires TPUs to run, as DP has a huge performance impact, so we're unlikely to see this in homelabs and similar environments, as far as I understand.

Edit: On second read, the TPUs were only used for training, and there's no mention of any specific hardware being needed for inference, so I'm assuming it's fine with a regular GPU?
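
If it behaves like the rest of the Gemma family, loading it is the usual Hugging Face routine. A minimal sketch, assuming the standard transformers API works for this checkpoint (I haven't tested it):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/vaultgemma-1b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer("Differential privacy means", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))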

Mond_•4mo ago
So far, Gemma models have been capable of running on ordinary GPUs or CPUs, and I think it's safe to assume that trend continues here.
HenryMulligan•4mo ago
Ignoring what this model architecture could do and just considering what this model does do: why would I (or anyone) want to run this model (locally) for <insert use-case>? Is it entirely a proof of concept for future training on medical data? Or are they looking to use this to ethically justify training on free-tier users' personal data via the application of noise to the training data?
floridianfisher•4mo ago
The purpose is research
porridgeraisin•4mo ago
It's the last option.

The whole framing of DP is:

The probability that you reveal private info is the same whether or not you train on a particular user's data.

It is useful in many cases, but Google, the product company, is specifically going to use it for ads.

malfist•4mo ago
You can hide that you pirated content for training
astrange•4mo ago
You can't hide that. You can't use technical measures to hide from discovery.

I think an entire book is a little too large to mask with this method and still end up learning anything.

faangguyindia•4mo ago
You could avoid the book-publisher lawsuits that Anthropic is dealing with by using this approach.
adt•4mo ago
https://lifearchitect.ai/models-table/
woah•4mo ago
This could be very good for scaling data while avoiding copyright claims, since the copyright argument is a lot weaker (at least to the layman) if no memorization is happening. It may even open the door to Snow Crash-style distributed training, where people feed the model continuous streams of data from their computer use or even their daily lives, without worrying about PII leakage.
Ossi61•4mo ago
Yes
Testor007•4mo ago
Will it leak data if you fine-tune with DP logic?