
The Tulip Creative Computer

https://github.com/shorepine/tulipcc
126•apitman•3h ago•30 comments

AI Generated Music Barred from Bandcamp

https://old.reddit.com/r/BandCamp/comments/1qbw8ba/ai_generated_music_on_bandcamp/
289•cdrnsf•2h ago•206 comments

Instagram AI Influencers Are Defaming Celebrities with Sex Scandals

https://www.404media.co/instagram-ai-influencers-are-defaming-celebrities-with-sex-scandals/
66•cdrnsf•1h ago•35 comments

Show HN: Ayder – HTTP-native durable event log written in C (curl as client)

https://github.com/A1darbek/ayder
32•Aydarbek•3h ago•7 comments

Influencers and OnlyFans models are dominating U.S. O-1 visa requests

https://www.theguardian.com/us-news/2026/jan/11/onlyfans-influencers-us-o-1-visa
258•bookofjoe•4h ago•176 comments

How to make a damn website (2024)

https://lmnt.me/blog/how-to-make-a-damn-website.html
46•birdculture•3h ago•19 comments

Scott Adams has died

https://www.youtube.com/watch?v=Rs_JrOIo3SE
471•ekianjo•5h ago•833 comments

Apple Creator Studio

https://www.apple.com/newsroom/2026/01/introducing-apple-creator-studio-an-inspiring-collection-o...
422•lemonlime227•6h ago•346 comments

Signal leaders warn agentic AI is an insecure, unreliable surveillance risk

https://coywolf.com/news/productivity/signal-president-and-vp-warn-agentic-ai-is-insecure-unrelia...
262•speckx•2h ago•83 comments

Inlining – The Ultimate Optimisation

https://xania.org/202512/17-inlining-the-ultimate-optimisation
16•PaulHoule•4d ago•7 comments

Text-based web browsers

https://cssence.com/2026/text-based-web-browsers/
265•pabs3•15h ago•97 comments

Legion Health (YC S21) Hiring Cracked Founding Eng for AI-Native Ops

https://jobs.ashbyhq.com/legionhealth/ffdd2b52-eb21-489e-b124-3c0804231424
1•ympatel•3h ago

Everything you never wanted to know about file locking (2010)

https://apenwarr.ca/log/20101213
43•SmartHypercube•5d ago•9 comments

Git Rebase for the Terrified

https://www.brethorsting.com/blog/2026/01/git-rebase-for-the-terrified/
203•aaronbrethorst•6d ago•217 comments

Show HN: An iOS budget app I've been maintaining since 2011

https://primoco.me/en/
121•Priotecs•10h ago•55 comments

A university got itself banned from the Linux kernel (2021)

https://www.theverge.com/2021/4/30/22410164/linux-kernel-university-of-minnesota-banned-open-source
35•italophil•1h ago•12 comments

What a year of solar and batteries saved us in 2025

https://scotthelme.co.uk/what-a-year-of-solar-and-batteries-really-saved-us-in-2025/
215•MattSayar•5h ago•263 comments

Going for Gold: The Story of the Golden Lego RCX and NXT

https://bricknerd.com/home/going-for-gold-the-story-of-the-golden-lego-rcx-and-nxt-9-9-21
7•kotaKat•4d ago•0 comments

Show HN: Ever wanted to look at yourself in Braille?

https://github.com/NishantJoshi00/dith
16•cat-whisperer•4d ago•3 comments

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever

https://github.com/19-84/redd-archiver
110•19-84•5h ago•17 comments

Show HN: FastScheduler – Decorator-first Python task scheduler, async support

https://github.com/MichielMe/fastscheduler
27•michielme•6h ago•6 comments

Confer – End to end encrypted AI chat

https://confer.to/
53•vednig•7h ago•46 comments

Cowork: Claude Code for the rest of your work

https://claude.com/blog/cowork-research-preview
1222•adocomplete•1d ago•521 comments

Local Journalism Is How Democracy Shows Up Close to Home

https://buckscountybeacon.com/2026/01/opinion-local-journalism-is-how-democracy-shows-up-close-to...
346•mooreds•7h ago•229 comments

The Case for Blogging in the Ruins

https://www.joanwestenberg.com/the-case-for-blogging-in-the-ruins/
51•herbertl•2h ago•6 comments

Show HN: SnackBase – Open-source, GxP-compliant back end for Python teams

https://snackbase.dev
50•lalitgehani•8h ago•6 comments

Anthropic invests $1.5M in the Python Software Foundation

https://discuss.python.org/t/anthropic-has-made-a-large-contribution-to-the-python-software-found...
327•ayhanfuat•5h ago•151 comments

Mozilla's open source AI strategy

https://blog.mozilla.org/en/mozilla/mozilla-open-source-ai-strategy/
170•nalinidash•8h ago•138 comments

Robotopia: A 3D, first-person, talking simulator

https://elbowgreasegames.substack.com/p/introducing-robotopia-a-3d-first
98•psawaya•5d ago•46 comments

Chromium Has Merged JpegXL

https://chromium-review.googlesource.com/c/chromium/src/+/7184969
381•thunderbong•14h ago•125 comments

Confer – End to end encrypted AI chat

https://confer.to/
52•vednig•7h ago
Signal creator Moxie Marlinspike wants to do for AI what he did for messaging - https://arstechnica.com/security/2026/01/signal-creator-moxi...

Private Inference: https://confer.to/blog/2026/01/private-inference/

Comments

AdmiralAsshat•7h ago
Well, if anyone could do it properly, Moxie certainly has the track record.
f_allwein•6h ago
Interesting! I wonder a) how much of an issue this addresses, i.e. how worried people actually are about privacy when they use other LLMs, and b) how much of a disadvantage it is for Confer not to be able to read or train on user data.
JohnFen•6h ago
Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?

I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.

roughly•3h ago
Looks like Confer is hosting its own inference: https://confer.to/blog/2026/01/private-inference/

> LLMs are fundamentally stateless—input in, output out—which makes them ideal for this environment. For Confer, we run inference inside a confidential VM. Your prompts are encrypted from your device directly into the TEE using Noise Pipes, processed there, and responses are encrypted back. The host never sees plaintext.

I don’t know what model they’re using, but it looks like everything should be staying on their servers, not going back to, eg, OpenAI or Anthropic.
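
The quoted flow (encrypt on device, decrypt only inside the TEE, encrypt the response back) can be sketched with toy code. This is not Noise Pipes: a real Noise transport negotiates keys via a handshake bound to the TEE's attestation and uses a proper AEAD; here a hard-coded key and an HMAC-SHA256 keystream stand in for all of that, purely to show the shape:

```python
import hashlib
import hmac

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy stream cipher: HMAC-SHA256 in counter mode. XORing twice with
    # the same key/nonce round-trips. Do not use in production.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hmac.new(key, nonce + block.to_bytes(4, "big"),
                       hashlib.sha256).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

# Key the client shares with the enclave. In reality this would come out
# of a Noise handshake tied to the attested TEE, not be hard-coded.
session_key = hashlib.sha256(b"client<->enclave session").digest()

def client_send(prompt: str) -> bytes:
    return keystream_xor(session_key, b"c2s:", prompt.encode())

def enclave_handle(ciphertext: bytes) -> bytes:
    # Plaintext exists only here, inside the TEE. The "model" is a
    # placeholder; real inference runs on the enclave's attached GPU.
    prompt = keystream_xor(session_key, b"c2s:", ciphertext).decode()
    reply = f"echo: {prompt}"
    return keystream_xor(session_key, b"s2c:", reply.encode())

def client_receive(ciphertext: bytes) -> str:
    return keystream_xor(session_key, b"s2c:", ciphertext).decode()

wire_in = client_send("hello")
wire_out = enclave_handle(wire_in)
# The untrusted host only ever observes wire_in and wire_out.
print(client_receive(wire_out))  # → echo: hello
```

The stateless input-in/output-out property the post mentions is what makes this tractable: nothing has to persist outside the enclave between requests.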

dang•1h ago
We'll add that link to the toptext as well. Thanks!

(It got submitted a few times but did not get any comments - might as well consolidate these threads)

jeroadhd•1h ago
That is a highly misleading statement: the GPU runs with real weights and real unencrypted user plaintext, since it has to multiply matrices over the plaintext; the result is passed on to the supposedly "secure VM" (protected by Intel/Nvidia promises) and encrypted there. In no way is it e2e, unless you count the GPU as the "end".
Imustaskforhelp•1h ago
So what you are saying is that all the TEE and remote attestation machinery might work for CPU-based workflows, but it just doesn't work for the GPU, which is effectively unencrypted, so anyone can read the data from there?

Edit: https://news.ycombinator.com/item?id=46600839 This comment says the GPU has such capabilities as well, so I'm interested in what you were referring to in the first place?

AlanYx•1h ago
It is true that Nvidia's GPU-CC TEE is not secure against decapsulation attacks, but there is a lot of effort to minimize the attack surface. This recent paper gives a pretty good overview of the security architecture: https://arxiv.org/pdf/2507.02770
JohnFen•55m ago
> Looks like Confer is hosting its own inference

Even so, you're still exposing your data to Confer, and so you have to trust them that they'll behave as you want. That's a security problem that Confer doesn't help with.

I'm not saying Confer isn't useful, though. e2ee is very useful. But it isn't enough to make me feel comfortable.

internet_points•10m ago
> you're still exposing your data to Confer

They use a trusted execution environment (https://en.wikipedia.org/wiki/Trusted_execution_environment) and, if I understand correctly, claim that your client can confirm (attest) that the code they run doesn't leak your data; see https://confer.to/blog/2026/01/private-inference/

So you should be able to build https://github.com/conferlabs/confer-image yourself, get a hash of it, and then confer.to will send you that same hash, signed by Intel (I guess?), to tell you that not only did confer.to send you that hash, but that hash is indeed a hash of what's running inside the Trusted Execution Environment.

I feel like this needs diagrams.
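
Lacking diagrams, here is a toy sketch of the check being described. Everything cryptographic is a stand-in (a plain SHA-256 for the measurement, an HMAC under a made-up "vendor key" in place of Intel's real attestation signature); it only illustrates the shape of the protocol, not any actual implementation:

```python
import hashlib
import hmac

# Hypothetical stand-in for the hardware vendor's attestation signing key.
VENDOR_KEY = b"stand-in for Intel's attestation key"

def measure(image_bytes: bytes) -> bytes:
    # Reproducible-build step: hash the enclave image you built yourself.
    return hashlib.sha256(image_bytes).digest()

def quote(running_image: bytes) -> dict:
    # What the TEE/vendor would return: a measurement of what is actually
    # running, signed so the (untrusted) host can't forge it.
    m = measure(running_image)
    return {"measurement": m,
            "sig": hmac.new(VENDOR_KEY, m, hashlib.sha256).digest()}

def verify(q: dict, expected_measurement: bytes) -> bool:
    # Client side: check the signature, then compare against the hash of
    # the image you audited and built locally.
    want_sig = hmac.new(VENDOR_KEY, q["measurement"], hashlib.sha256).digest()
    return (hmac.compare_digest(q["sig"], want_sig)
            and hmac.compare_digest(q["measurement"], expected_measurement))

my_build = b"contents of the enclave image I built and audited"
expected = measure(my_build)

assert verify(quote(my_build), expected)               # server runs what I audited
assert not verify(quote(b"tampered image"), expected)  # mismatch is detected
```

Only if the signed measurement matches your own build does the client proceed to send encrypted prompts; that binding is what the "signed by Intel" step buys you.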

jeroenhd•5h ago
An interesting take on the AI model. I'm not sure what their business model is like, as collecting training data is the one thing that free AI users "pay" in return for services, but at least this chat model seems honest.

Using remote attestation in the browser to attest the server rather than the client is refreshing.

Using passkeys to encrypt data does limit browser/hardware combinations, though. My Firefox+Bitwarden setup doesn't work with this, unfortunately. Firefox on Android also seems to be broken, but Chrome on Android works well at least.

datadrivenangel•5h ago
Get a fun error message on debian 13 with firefox v140:

"This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features. Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.)."
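
For context on why the site insists on the PRF extension: it lets the client get a stable pseudo-random secret out of the passkey on each login, from which an encryption key can be derived without the server ever seeing it. A sketch of the derivation step (an assumption about the general technique, not Confer's actual scheme; the 32-byte PRF output is faked here, since the real one only exists inside the authenticator):

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal HKDF (RFC 5869) over SHA-256, stdlib only.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()  # expand
        okm += t
        counter += 1
    return okm[:length]

# In the real flow this comes back from navigator.credentials.get() with
# the "prf" extension enabled -- which is exactly what the error message
# says the browser/authenticator must support. Faked here:
prf_output = os.urandom(32)

# Same passkey + same salt/info => same key on every login, so the user's
# data can be encrypted client-side under a key the server never learns.
wrap_key = hkdf_sha256(prf_output, salt=b"demo-salt", info=b"data-encryption-key")
assert len(wrap_key) == 32
```

Without PRF support there is no authenticator-bound secret to derive from, which is why unsupported browser/hardware combinations are locked out entirely.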

crtasm•1h ago
That is funny; it won't even show us the homepage.

We are allowed into the blog though! https://confer.to/blog/

Marsymars•40m ago
I'm getting that on macOS with Firefox 139+, for whatever reason...
butz•14m ago
Great new way to lock out potential new users. I bet a large part of the users interested in privacy are using Linux and some fork of Firefox.
hiimkeks•1h ago
I am confused. I get E2EE chat with a TEE, but the TEEs I know of (admittedly not an expert) are not powerful enough to do the actual inference, at least not any useful kind. The blog posts published so far just gloss over that.
dfajgljsldkjag•1h ago
It seems like the H100 GPU itself has some kind of secure execution environment built in. Not sure of the details, but it appears that all data going to and from the GPU will be encrypted.

https://developer.nvidia.com/blog/confidential-computing-on-...

donaldihunter•1h ago
Yes, the TEE is CPU + H100 GPU.
shawnz•1h ago
I don't agree that this is end to end encrypted. For example, a compromise of the TEE would mean your data is exposed. In a truly end to end encrypted system, I wouldn't expect a server side compromise to be able to expose my data.

This is similar to the weaselly language Google is now using with the Magic Cue feature ever since Android 16 QPR 1. When it launched, it was local-only; now it's local and in the cloud "with attestation". I don't like this trend and I don't think I'll be using such products.

Stefan-H•1h ago
Just like your mobile device is one end of the end-to-end encryption, the TEE is the other end. If properly implemented, the TEE would measure all software and ensure that there are no side channels from which the sensitive data could be read.
paxys•58m ago
By that logic SSL/TLS is also end-to-end encryption, except it isn't
Stefan-H•52m ago
When the server is the final recipient of a message sent over TLS, then yes, that is end-to-end encryption (for instance if a load balancer is not decrypting traffic in the middle). If the message's final recipient is a third party, then you are correct, an additional layer of encryption would be necessary. The TEE is the execution environment that needs access to the decrypted data to process the AI operations, therefore it is one end of the end-to-end encryption.
paxys•51m ago
No need to make up hypotheticals. The server isn't the final destination for your LLM requests. The reply needs to come back to you.
charcircuit•24m ago
If Bob and Alice are in an E2EE chat Bob and Alice are the ends. Even if Bob asks Alice a question and she replies back to Bob, Alice is still an end.

Similarly with AI. The AI is one of the ends of the conversation.

shawnz•46m ago
This interpretation basically waters down the meaning of end-to-end encryption to the point of uselessness. You may as well just say "encryption".
Stefan-H•41m ago
E2EE is usually applied in contexts where the message's final recipient is NOT the server on the other end of a TLS connection, so yes, this scenario is a stretch. The point is that in the context of an AI chat app, you have to decide on the boundary that you draw around the server components that are processing the request and necessarily need access to decrypted data, and call that one "end" of the connection.
liuliu•56m ago
I agree it is more like e2teee, but I think there is really no alternative beyond TEE + anonymization. Privacy people want it run locally, but that is 5 to 10 years away (or never: if the current economics works, there is no need to reverse the trend).
shawnz•50m ago
There's FHE, but that's probably an even more difficult technical challenge than doing everything locally
gardnr•44m ago
FHE would be ideal. Relevant conversation from 6 months ago:

https://news.ycombinator.com/item?id=44601023

ignoramous•47m ago
> ... 5 to 10 years away (or never, if the current economics works...

I think that in 5 to 10 years, PCs that can run SoTA multi-modal LLMs (cf. Mac Pro) will cost as much as cars do, and I reckon folks will buy them.

binary132•12m ago
ISTM that most people would rather give away their privacy than pay even a single cent for most things.
2bitencryption•55m ago
If (big if) you trust the execution environment, which is apparently auditable; and if (big if) you trust that the TEE merkle hash used to sign the response is computed based on the TEE as claimed (and not a malicious actor spoofing a TEE that lives within an evil environment); and also if you trust the inference engine (vLLM / SGLang, what have you), then I guess you can be confident the system is private.

Lots of ifs there. I do trust Moxie in terms of execution, though. He doesn't seem like the type of person to take half measures.

derefr•37m ago
"Server-side" is a bit of a misnomer here.

Sure, for e.g. E2E email, the expectation is that all the computation occurs on the client, and the server is a dumb store of opaque encrypted stuff.

In a traditional E2E chat app, on the other hand, you've still got a backend service acting as a dumb pipe, that shouldn't have the keys to decrypt traffic flowing through it; but you've also got multiple clients — not just your own that share your keybag, but the clients of other users you're communicating with. "E2E" in the context of a chat app, means "messages are encrypted within your client; messages can then only be decrypted within the destination client(s) [i.e. the client(s) of the user(s) in the message thread with you.]"

"E2E AI chat" would be E2E chat, with an LLM. The LLM is the other user in the chat thread with you; and this other user has its own distinct set of devices that it must interact through (because those devices are within the security boundary of its inference infrastructure.) So messages must decrypt on the LLM's side for it to read and reply to, just as they must decrypt on another human user's side for them to read and reply to. The LLM isn't the backend here; the chat servers acting as a "pipe" are the backend, while the LLM is on the same level of the network diagram as the user is.

Let's consider the trivial version of an "E2E AI chat" design, where you physically control and possess the inference infrastructure. The LLM infra is e.g. your home workstation with some beefy GPUs in it. In this version, you can just run Signal on the same workstation, and connect it to the locally-running inference model as an MCP server. Then all your other devices gain the ability to "E2E AI chat" with the agent that resides in your workstation.

The design question, being addressed by Moxie here, is what happens in the non-trivial case, when you aren't in physical possession of any inference infrastructure.

Which is obviously the applicable case to solve for most people, 100% of the time, since most people don't own and won't ever own fancy GPU workstations.

But, perhaps more interesting for us tech-heads that do consider buying such hardware, and would like to solve problems by designing architectures that make use of it... the same design question still pertains, at least somewhat, even when you do "own" the infra; just as long as you aren't in 100% continuous physical possession of it.

You would still want attestation (and whatever else is required here) even for an agent installed on your home workstation, so long as you're planning to ever communicate with it through your little chat gateway when you're not at home. (Which, I mean... why else would you bother with setting up an "E2E AI chat" in the first place, if not to be able to do that?)

Consider: your local flavor of state spooks could wait for you to leave your house; slip in and install a rootkit that directly reads from the inference backend's memory; and then disappear into the night before you get home. And, no matter how highly you presume your abilities to detect that your home has been intruded into / your computer has been modified / etc once you have physical access to those things again... you'd still want to be able to detect a compromise of your machine even before you get home, so that you'll know to avoid speaking to your agent (and thereby the nearby wiretap van) until then.

jeroadhd•1h ago
Again with the confidential-VM and remote-attestation crypto theater? Moxie has a good track record in general, and yet he seems to have a huge blind spot in trusting Intel's broken "trusted VM" computing for some inexplicable reason. He designed Signal's user message backups to the server around similar crypto-secure "enclave" snake oil.
liuliu•1h ago
I think there is only so much you can do practically. Without a secure "enclave", there isn't really much you can do. What's your alternative?
tkz1312•50m ago
AFAIK the signal backups use symmetric encryption with user generated and controlled keys and anonymous credentials (https://signal.org/blog/introducing-secure-backups/). Do you have a link about the usage of sgx there?

Also fwiw I think tees and remote attestation are a pretty pragmatic solution here that meaningfully improves on the current state of the art for llm inference and I'm happy to see it.

paxys•59m ago
"trusted execution environment" != end-to-end encryption

The entire point of E2EE is that both "ends" need to be fully under your control.

Stefan-H•46m ago
The point of E2EE is that only the people/systems that need access to the data are able to do so. If the message is encrypted on the user's device and then is only decrypted in the TEE where the data is needed in order to process the request, and only lives there ephemerally, then in what way is it not end-to-end encrypted?
paxys•17m ago
Because anyone with access to the TEE also has access to the data. The owners can say they won't tamper with it, but those are promises, not guarantees.
optymizer•15m ago
This is false.

From Wikipedia: "End-to-end encryption (E2EE) is a method of implementing a secure communication system where only the sender and intended recipient can read the messages."

Both ends do not need to be under your control for E2EE.

letmetweakit•48m ago
How does inference work with a TEE? Isn't performance a lot more restricted?
LordDragonfang•46m ago
> Advanced Passkey Features Required

> This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features.

> Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.).

(Running Chrome 143)

So... does this just not support desktops without overpriced webcams, or am I missing something?

literalAardvark•14m ago
Windows Hello should work fine with just a PIN; it's the platform authentication part that's important, not the way you unlock it.
slipheen•36m ago
Does it say anywhere which model it’s using?

I see references to vLLM in the GitHub repo, but not which actual model (Llama, Mistral, etc.), whether they have a custom fine-tune, or whether you give your own Hugging Face link.

jdthedisciple•10m ago
The best private LLM is the one you host yourself.
orbital-decay•9m ago
At least Cocoon and similar services relying on TEEs don't call this end-to-end encryption. Hardware DRM is not E2EE; it's security by obscurity. Not to say it doesn't work, but it doesn't provide mathematically strong guarantees either.