
Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
1•rcarmo•33s ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•7m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•8m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•13m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
1•mooreds•14m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•15m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•16m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•21m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•23m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•23m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•23m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•25m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•26m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•26m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•27m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•32m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
4•dragandj•33m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•34m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•35m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•36m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•37m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•39m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•39m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•40m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•40m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•41m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•43m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•44m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•45m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•45m ago•1 comment

Building Private Processing for AI Tools on WhatsApp

https://engineering.fb.com/2025/04/29/security/whatsapp-private-processing-ai-tools/
27•3s•9mo ago

Comments

grugagag•9mo ago
> We’re sharing an early look into Private Processing, an optional capability that enables users to initiate a request to a confidential and secure environment and use AI for processing messages where no one — including Meta and WhatsApp — can access them.

What is this, and what is it supposed to mean? I have a hard time trusting these companies with any privacy. While this wording may be technically correct, they’ll likely extract all meaning from your communication, and probably even run some AI-enabled surveillance service.

ipsum2•9mo ago
Did you read the next paragraphs? It literally describes the details. I would quote the parts that respond to your question, but I would be quoting the entire post.

> This confidential computing infrastructure, built on top of a Trusted Execution Environment (TEE), will make it possible for people to direct AI to process their requests — like summarizing unread WhatsApp threads or getting writing suggestions — in our secure and private cloud environment.

brookst•9mo ago
We’re into “can’t prove a negative” territory here. Yes, the scheme is explained in detail, yes, it conforms to cryptographic norms, yes, real people work on it and some of us know some of them...

...but how can FB prove it isn’t all a smokescreen, and that requests aren’t printed out and faxed to evil people? They can’t, of course, and some people like to demand proof of the negative as a way of implying wrongdoing in a “just asking questions” manner.

heromal•9mo ago
Except the software running in TEEs, including the source code, is all verifiable at runtime, via a third party not controlled by Meta. And if you disagree, claim a bug bounty and become famous for exposing Meta as frauds. Or, more likely, stick with your reddit-tier zealotry and clown posting.
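
The remote-attestation idea behind "verifiable at runtime" can be sketched in a few lines. This is a toy illustration, not Meta's actual protocol: all names and values below are hypothetical, and a real TEE signs its measurement with a hardware-rooted key rather than just publishing a hash.

```python
import hashlib

# Hypothetical published digest of the enclave build that third-party
# auditors reviewed (illustrative value, not a real measurement).
TRUSTED_MEASUREMENTS = {
    hashlib.sha256(b"audited-enclave-build-v1").hexdigest(),
}

def attest(enclave_binary: bytes) -> str:
    """Stand-in for the TEE measuring (hashing) the code it actually loaded."""
    return hashlib.sha256(enclave_binary).hexdigest()

def client_should_send(measurement: str) -> bool:
    """The client releases plaintext only to a recognized, audited build."""
    return measurement in TRUSTED_MEASUREMENTS

print(client_should_send(attest(b"audited-enclave-build-v1")))  # True
print(client_should_send(attest(b"tampered-enclave-build")))    # False
```

The point of the mechanism is that a swapped-out or modified enclave produces a different measurement, so the client (or an auditor) can refuse it before any message leaves the device.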
ATechGuy•9mo ago
A few startups [1,2] also offer infra for private AI based on confidential computing from Nvidia and Intel/AMD.

1. https://tinfoil.sh
2. https://www.privatemode.ai

justanotheratom•9mo ago
I don't understand the knee-jerk skepticism. This is something they are doing to gain trust and encourage users to use AI on WhatsApp.

WhatsApp did not use to be end-to-end encrypted; then in 2021 it was - a step in the right direction. Similarly, AI interaction in WhatsApp today is not private, which is something they are trying to improve with this effort - another step in the right direction.

mhio•9mo ago
What's the motive "to gain trust and encourage users to use AI on WhatsApp"? Meta isn't a charity. You have to question their motives: their business model is to extract value from users who don't pay for the service, and I would say WhatsApp has proven to be a harder place to extract that value than their other ventures.

Btw, WhatsApp implemented the Signal protocol around 2016.

justanotheratom•9mo ago
> motive is to extract value out of their users who don't pay for a service

That is called a business.

If you find something deceitful in the business practice, that should certainly be called out and even prosecuted. I don't see why an effort to improve privacy has to get skeptical treatment just because "big business bad," blah blah.

echelon_musk•9mo ago
Privacy was reduced from where it already stood by the introduction of an AI assistant to an E2E messaging app.

Had they not included it in the first place, they would not now have to 'improve privacy' by reworking the AI.

I agree with OP and am highly sceptical of Meta's motives.

justanotheratom•9mo ago
You would be correct to be skeptical when they introduced AI into conversations, which btw is opt-in.
asadm•9mo ago
I mean you are not forced to?

If a company is trying to move their business to be more privacy focused, at least we can be non-dismissive.

nl•9mo ago
Broadly similar to what Apple is trying with their private compute work.

It's a great idea but the trust chains are so complex they are hard to reason about.

In "simple" public key encryption, reasonably technically literate people can reason about it ("not your key, not your X"). But with private compute there are many layers, each of which works in a fairly complex way, and AFAIK you always end up having to trust a root source of trust that certifies the trusted device.

It's good in the sense that it is trust minimization, but it's hard to explain, and the cynicism (see HN comments along the lines of "you can't trust it because big tech/gov interference etc.") means I am sadly pessimistic about the uptake.

I wish it wasn't so though. The cynicism in particular I find disappointing.
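
The "many layers" point above can be made concrete with a toy chain of trust. This is an illustrative sketch only (not Apple's or Meta's real scheme, and real systems use signatures, not bare hashes): a vendor root vouches for firmware, firmware for the OS, the OS for the enclave app, and verifying the leaf means every link must check out.

```python
import hashlib

# Hypothetical trust chain, root first (all names illustrative).
layers = [b"vendor-root-key", b"firmware-v7", b"os-image-v3", b"enclave-app-v1"]

def endorsement(parent: bytes, child: bytes) -> str:
    """Stand-in for a signature: the parent endorses a digest of the child."""
    return hashlib.sha256(parent + child).hexdigest()

def verify(layers: list[bytes], endorsements: list[str]) -> bool:
    """The leaf is trusted only if every parent-child endorsement matches."""
    return all(
        endorsement(layers[i], layers[i + 1]) == endorsements[i]
        for i in range(len(layers) - 1)
    )

good = [endorsement(layers[i], layers[i + 1]) for i in range(len(layers) - 1)]
print(verify(layers, good))  # True

tampered = list(layers)
tampered[2] = b"os-image-backdoored"   # compromise any middle layer...
print(verify(tampered, good))          # ...and the whole chain fails: False
```

This also shows why the reasoning is harder than "not your key, not your X": the user ultimately has to trust whoever holds the root of the chain, plus the correctness of every intermediate layer.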

squigz•9mo ago
Why do you find it disappointing? It seems quite appropriate to me.
brookst•9mo ago
Not GP, but to me it is also disappointing because it’s just the old “if seatbelts don’t prevent car accidents, why wear them?” argument.

On the one hand you have systems where anyone at any company in the value chain can inspect your data ad hoc, with no auditing or notification.

On the other hand, you have systems that prevent casual security / privacy violations but could still be subverted by a state actor or the company that has the root of trust.

Neither is perfect. But it’s cynical and nihilistic to profess to see no difference.

Risk reduction should be celebrated. Those who see no value in it come across as zealots.

hulitu•9mo ago
> It's a great idea

Maybe for you. My computer should not do what I didn't ask it to.

2Gkashmiri•9mo ago
So this is FB explaining how they move your content from E2EE to the cloud and back? So not even FB knows the content?

Simple question: what if CSAM is sent to the AI? Would it stop, report to authorities, or allow processing? Same for other bad stuff.

brookst•9mo ago
See: how Apple tried to solve this and generated massive outrage.
cutler•9mo ago
Love the Accept-only cookie notice. A real trust builder.