Every company building your AI assistant is now an ad company

https://juno-labs.com/blogs/every-company-building-your-ai-assistant-is-an-ad-company
46•ajuhasz•4h ago

Comments

FeteCommuniste•1h ago
The level of trust I have in a promise made by any existing AI company that such a device would never phone home: 0.
rimbo789•1h ago
Ads in AI should be banned right now. We need to learn from the mistakes of the internet (crypto, Facebook) and aggressively regulate early and often, before this gets too institutionalized to remove.
kalterdev•1h ago
Ads (at least in the classical pre-AI sense) are orders of magnitude better than preventive laws
nancyminusone•1h ago
They did learn. That's why they are adding ads.
doomslayer999•1h ago
Boomers in government would be clueless about how to properly regulate and create the right incentives. Hell, that is still a bold ask for tech and economics geniuses with the best of intentions.
irishcoffee•57m ago
Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don’t understand how to regulate a technology they don’t understand, but fucking by golly you’re going to be left behind if you don’t use it?

This is like a shitty Disney movie.

doomslayer999•50m ago
It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a Ponzi-like fashion and regulate away competition. Why do you think Trump has some no-name crypto firm, why Thiel has Vance as his whipping boy, and why Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
sxp•1h ago
The article is forgetting about Anthropic, which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.
ajuhasz•1h ago
True, we focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than those of OpenClaw, which you intentionally interact with.
iugtmkbdfil834•1h ago
This. Kids already have tons of those gadgets on. Previously, I only really had to worry about a cell phone, so even if someone was visiting it was a simple case of "plop all electronics here," but now with glasses I am not even sure how to reasonably approach this short of not allowing it, period. Eh, brave new world.
kleiba•1h ago
Always on is incompatible with data protection rights, such as the GDPR in Europe.
ajuhasz•1h ago
With cloud-based inference we agree; that's just one more benefit of doing everything with "edge" inference (on device, inside the home), as we do with Juno.
NickJLange•1h ago
This isn't a technology issue. Regulation is the only sane way to address the issue.

For once, we (as the technologists) have a free translator into layman's terms via the frontier LLMs, which can be an opportunity to educate the masses about exactly what world is on the horizon.

doomslayer999•1h ago
Who would buy OpenAI's spy device? I think a lot of the public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the Silicon Valley elite has gone mainstream and will hopefully course-correct the industry in time.
BoxFour•57m ago
This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there’s still a serious privacy issue: unwitting victims of something recording everything they say.

Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.

Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
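
To make "only opted-in voices are processed" concrete, here is a minimal sketch of what such a gate could look like, assuming a diarization step that returns a speaker label and a confidence score; every name and threshold below is hypothetical, not anyone's actual implementation:

  # Hypothetical opt-in gate: drop any utterance not confidently attributed
  # to an enrolled, opted-in speaker before it reaches transcription or memory.
  from dataclasses import dataclass

  @dataclass
  class Utterance:
      speaker_id: str    # label from an assumed diarization model
      confidence: float  # diarization confidence, 0.0-1.0
      text: str

  OPTED_IN = {"alice", "bob"}   # enrolled household members
  MIN_CONFIDENCE = 0.9          # below this, treat the speaker as unknown and drop

  def keep(utt: Utterance) -> bool:
      return utt.confidence >= MIN_CONFIDENCE and utt.speaker_id in OPTED_IN

  utterances = [
      Utterance("alice", 0.97, "add milk to the list"),
      Utterance("guest_1", 0.88, "anyway, between us..."),  # discarded
  ]
  kept = [u for u in utterances if keep(u)]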

ajuhasz•51m ago
We give an overview of the current memory architecture at https://juno-labs.com/blogs/building-memory-for-an-always-on...

This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.

> The shared household memory pool creates privacy situations we’re still working through. The current design has everyone in the family sharing the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping, because a kitchen device hears everyone equally. But “deliberately chose” doesn’t mean “solved.” We’re hoping our in-house STT will allow us to do per-person memory tagging, and then we can experiment with scoping memories to certain people or groups of people in the household.
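
As a purely illustrative sketch of what per-person tagging and scoping could look like once the STT emits speaker tags (the field names and scopes here are assumptions, not our actual schema):

  # Illustrative only: each memory carries a speaker tag from the STT, and
  # reads are filtered by who is asking and what scope the memory was given.
  from dataclasses import dataclass

  @dataclass
  class Memory:
      text: str
      speaker: str                   # tag from speaker-identifying STT (assumed)
      scope: set[str] | None = None  # None = household-wide, else allowed readers

  memories = [
      Memory("dentist appointment on Tuesday", speaker="mom"),               # household-wide
      Memory("surprise party plan for dad", speaker="mom", scope={"mom"}),   # scoped to mom
  ]

  def visible_to(person: str) -> list[Memory]:
      return [m for m in memories if m.scope is None or person in m.scope]

  visible_to("kid")  # returns only the household-wide memory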

paxys•55m ago
This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always-on, always-listening AI device that inserts itself into your and your family’s private lives.

“Oh but they only run on local hardware…”

Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.

Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

Have all your guests consented to this?

What happens when someone breaks in and steals the box?

What if the government wants to take a look at the data in there and serves a warrant?

What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?

zmmmmm•24m ago
The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for the information not to exist in the first place. It's a bit sad that, as the potential for what technology can do to assist us grows, this may actually be the limit on how much we can fully take advantage of it.

I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.

ajuhasz•11m ago
Agreed; while we've tried to think through this and build in protections, we can't pretend that there is a magical, perfect solution. We do have a strong conviction that doing this inside the walls of your home is much safer than doing it within any company's datacenter (I accept that some just don't want this to exist, period, and we won't be able to appease them).

Some of our decisions in this direction:

  - Minimize how long we have "raw data" in memory
  - Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
  - Encrypt storage with hardware protected keys (we're building on top of the Nvidia Jetson SOM)
We're always open to criticism on how to improve our implementation around this.
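
As one concrete illustration of the second point, here's a rough sketch of an "err on the side of forgetting" gate; the threshold and function names are made up for this example and aren't our actual code:

  # Sketch: nothing is stored unless extraction is high-confidence; the raw
  # transcript is dropped either way. Names and threshold are illustrative.
  KEEP_THRESHOLD = 0.95  # deliberately strict: ambiguity means forgetting

  def extract_memory(transcript: str) -> tuple[str | None, float]:
      # Stand-in for the real extraction model: returns (candidate, confidence).
      if "appointment" in transcript or "remind" in transcript:
          return (transcript.strip(), 0.97)
      return (None, 0.0)

  def process(transcript: str, durable_store: list[str]) -> None:
      candidate, confidence = extract_memory(transcript)
      if candidate is not None and confidence >= KEEP_THRESHOLD:
          durable_store.append(candidate)
      # No else branch: transcripts that don't clear the bar are simply dropped.

  store: list[str] = []
  process("remind me about the dentist appointment on Tuesday", store)
  process("just chatting about the weather", store)  # forgotten
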
ajuhasz•22m ago
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80ms of actual audio is in memory and about 5 minutes of transcribed audio (text) is in memory (this is to help the STT model know the context of the audio for higher transcription accuracy).

Of these 5-minute transcripts, those that don't become memories are forgotten. So only selected, extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users, to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
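
A back-of-the-envelope sketch of the two rolling windows described above (the ~80ms audio chunk and ~5 minute text window come from this comment; the rest of the names are illustrative assumptions):

  # Rough sketch: a tiny audio chunk is transcribed and released, transcribed
  # text lives in a ~5 minute rolling window, and only extracted memories
  # outlive that window. All names here are illustrative.
  from collections import deque
  import time

  SAMPLE_RATE = 16_000
  CHUNK_SAMPLES = int(0.080 * SAMPLE_RATE)  # ~80 ms of audio at a time
  CONTEXT_SECONDS = 5 * 60                  # ~5 minutes of text context

  transcript_window: deque[tuple[float, str]] = deque()

  def on_audio_chunk(samples, stt, extract, durable_store):
      text = stt(samples)                    # raw audio is released after this call
      now = time.time()
      transcript_window.append((now, text))
      while transcript_window and now - transcript_window[0][0] > CONTEXT_SECONDS:
          transcript_window.popleft()        # older text simply falls away
      memory = extract([t for _, t in transcript_window])
      if memory is not None:
          durable_store.append(memory)       # only this survives the window

  # example call with stand-in callables:
  on_audio_chunk([0] * CHUNK_SAMPLES, stt=lambda s: "(transcribed text)", extract=lambda w: None, durable_store=[])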

BoxFour•12m ago
It’s definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this period).

The non privacy-conscious will just use Google/etc.

zmmmmm•31m ago
It's interesting to me that there seems to be an implicit line being drawn around what's acceptable and what's not between video and audio.

If there's a camera in an AI device (like Meta Ray-Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper-resistant.

But audio - this seems to be on the other side of the line. Passively listening ambient audio is being treated as something that doesn't need active consent, flashing lights or other privacy preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.

I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere will be presumed recorded? Is that OK?

BoxFour•20m ago
> Passively listening ambient audio is being treated as something that doesn't need active consent

That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.

Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.

Keep Android Open

https://f-droid.org/2026/02/20/twif.html
873•LorenDB•5h ago•353 comments

Turn Dependabot Off

https://words.filippo.io/dependabot/
165•todsacerdoti•2h ago•47 comments

I found a Vulnerability. They found a Lawyer

https://dixken.de/blog/i-found-a-vulnerability-they-found-a-lawyer
243•toomuchtodo•4h ago•116 comments

Facebook is cooked

https://pilk.website/3/facebook-is-absolutely-cooked
564•npilk•5h ago•350 comments

Ggml.ai joins Hugging Face to ensure the long-term progress of Local AI

https://github.com/ggml-org/llama.cpp/discussions/19759
627•lairv•9h ago•151 comments

Wikipedia deprecates Archive.today, starts removing archive links

https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-archive-today-after-site-executed-ddos...
219•nobody9999•4h ago•123 comments

Show HN: Mines.fyi – all the mines in the US in a leaflet visualization

https://mines.fyi/
32•irasigman•2h ago•12 comments

OpenScan

https://openscan.eu/pages/scan-gallery
60•joebig•2h ago•1 comments

I hate AI side projects

https://dylancastillo.co/posts/ai-side-projects.html
33•dcastm•1h ago•22 comments

CERN rebuilt the original browser from 1989

https://worldwideweb.cern.ch
4•tylerdane•17m ago•1 comments

Uncovering insiders and alpha on Polymarket with AI

https://twitter.com/peterjliu/status/2024901585806225723
48•somerandomness•5h ago•24 comments

Blue light filters don't work – controlling total luminance is a better bet

https://www.neuroai.science/p/blue-light-filters-dont-work
95•pminimax•5h ago•130 comments

Every company building your AI assistant is now an ad company

https://juno-labs.com/blogs/every-company-building-your-ai-assistant-is-an-ad-company
46•ajuhasz•4h ago•23 comments

Making frontier cybersecurity capabilities available to defenders

https://www.anthropic.com/news/claude-code-security
89•surprisetalk•5h ago•38 comments

Trump's global tariffs struck down by US Supreme Court

https://www.bbc.com/news/live/c0l9r67drg7t
1151•blackguardx•8h ago•938 comments

Lil' Fun Langs

https://taylor.town/scrapscript-000
81•surprisetalk•6h ago•10 comments

The path to ubiquitous AI (17k tokens/sec)

https://taalas.com/the-path-to-ubiquitous-ai/
649•sidnarsipur•13h ago•372 comments

Show HN: A native macOS client for Hacker News, built with SwiftUI

https://github.com/IronsideXXVI/Hacker-News
164•IronsideXXVI•9h ago•120 comments

Legion Health (YC) Is Hiring Cracked SWEs for Autonomous Mental Health

https://jobs.ashbyhq.com/legionhealth/ffdd2b52-eb21-489e-b124-3c0804231424
1•ympatel•6h ago

How to Review an AUR Package

https://bertptrs.nl/2026/01/30/how-to-review-an-aur-package.html
41•exploraz•3d ago•3 comments

I found a useful Git one liner buried in leaked CIA developer docs

https://spencer.wtf/2026/02/20/cleaning-up-merged-git-branches-a-one-liner-from-the-cias-leaked-d...
577•spencerldixon•9h ago•204 comments

Across the US, people are dismantling and destroying Flock surveillance cameras

https://www.bloodinthemachine.com/p/across-the-us-people-are-dismantling
13•latexr•45m ago•1 comments

Untapped Way to Learn a Codebase: Build a Visualizer

https://jimmyhmiller.com/learn-codebase-visualizer
188•andreabergia•14h ago•32 comments

Phil Spencer is exiting Microsoft as AI executive takes over Xbox

https://www.neowin.net/news/phil-spencer-is-exiting-microsoft-as-ai-executive-takes-over-xbox/
32•bundie•2h ago•28 comments

Child's Play: Tech's new generation and the end of thinking

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/
317•ramimac•8h ago•204 comments

How were video transfers made? (2011)

https://www.film-tech.com/ubb/f12/t000972.html
3•exvi•4d ago•0 comments

The Popper Principle

https://theamericanscholar.org/the-popper-principle/
57•lermontov•1d ago•31 comments

PayPal discloses data breach that exposed user info for 6 months

https://www.bleepingcomputer.com/news/security/paypal-discloses-data-breach-exposing-users-person...
255•el_duderino•10h ago•78 comments

A16Z partner says that the theory that we'll vibe code everything is 'wrong'

https://www.aol.com/articles/a16z-partner-says-theory-well-050150534.html
11•paulpauper•48m ago•2 comments

Consistency diffusion language models: Up to 14x faster, no quality loss

https://www.together.ai/blog/consistency-diffusion-language-models
203•zagwdt•19h ago•91 comments