frontpage.

Ask HN: GAN'ing Coding GPTs

1•abrax3141•42s ago•0 comments

Tariffs as Siege Engines – The Long War on China

https://themindness.substack.com/p/tariffs-as-siege-engines-the-long
2•hackandthink•2m ago•0 comments

The Making of GoldenEye 007 (N64) – Interview with Rare's Dr. David Doak [video]

https://www.youtube.com/watch?v=GjJMDrVkZ2Y
1•CharlesW•2m ago•0 comments

MCP for DNS

https://github.com/mattcollins/spaceship-mcp
1•skyfantom•3m ago•1 comment

An E-Bike for the Mind

https://joshbrake.substack.com/p/an-e-bike-for-the-mind
1•vinhnx•3m ago•0 comments

The Salesloft-Drift Breach: Analyzing the Biggest SaaS Breach of 2025

https://www.reco.ai/blog/the-salesloft-drift-breach-analyzing-the-biggest-saas-breach-of-2025
1•llmacpu•3m ago•0 comments

I made a crackme that unlocks a free copy of my book

https://blog.ryanmerket.com/crack-the-code-unlock-a-free-book-the-hackers-edge-challenge-e66065d1...
1•ryanmerket•3m ago•1 comment

Zuckerberg on hot mic telling Trump he wasn't sure how much to spend on AI

https://www.engadget.com/zuckerberg-caught-on-hot-mic-telling-trump-i-wasnt-sure-how-much-to-prom...
1•dataflow•6m ago•0 comments

The Importance of Kindness in Engineering

https://ashouri.xyz/post/kindnessinengineering
1•gpi•7m ago•0 comments

Statement on discourse about ActivityPub and AT Protocol

https://github.com/swicg/general/blob/master/statements%2F2025-09-05-activitypub-and-atproto-disc...
1•gpi•9m ago•0 comments

Levallois Technique

https://en.wikipedia.org/wiki/Levallois_technique
1•tusslewake•9m ago•0 comments

Show HN: Pinblocks – Your Chats with Notion-Style Collaborative Blocks

https://pinblocks.io/
1•p2hari•9m ago•0 comments

Ubuntu installs failing for more than 24 hours due to security.ubuntu.com down

https://askubuntu.com/questions/1555546/why-am-i-unable-to-update-ubuntu-right-now-september-5-20...
1•programd•10m ago•0 comments

A Practical Introduction to Parsing

https://jhwlr.io/intro-to-parsing/
1•ibobev•12m ago•0 comments

Show HN: unplugin-transform-import-meta – Transform ImportMeta at build-time

https://github.com/sushichan044/unplugin-transform-import-meta
1•sushichan044•17m ago•0 comments

Streaming Platforms, Filter Bubbles, and Cultural Inequalities

https://sociologicalscience.com/articles-v6-18-467/
1•bediger4000•21m ago•1 comment

Root cause for why Windows 11 is breaking or corrupting SSDs may have been found

https://www.neowin.net/news/root-cause-for-why-windows-11-is-breaking-or-corrupting-ssds-may-have...
3•bundie•23m ago•0 comments

Silicon Valley's most powerful alliance just got stronger

https://www.theverge.com/command-line-newsletter/773260/google-apple-search-deal-money-ai
1•retskrad•24m ago•1 comment

The Oscar Winning Algorithm

https://sangarshanan.com/2025/09/06/perlin-noise/
1•phantomshelby•26m ago•0 comments

Strudel Flow

https://xyflow.com/strudel-flow
2•fcpguru•28m ago•1 comment

Musk's $1T pay package is full of watered-down takes on his own broken promises

https://techcrunch.com/2025/09/06/musks-1t-pay-package-is-full-of-watered-down-versions-of-his-ow...
8•rntn•29m ago•0 comments

Language-Oriented Programming in Racket (2019)

https://www.youtube.com/watch?v=z8Pz4bJV3Tk
13•farhanhubble•31m ago•0 comments

Chemical pollution a threat comparable to climate change, scientists warn

https://www.theguardian.com/environment/2025/aug/06/chemical-pollution-threat-comparable-climate-...
2•PaulHoule•32m ago•0 comments

Exploiting the Impossible: A Vulnerability Apple Deems Unexploitable

https://jhftss.github.io/Exploiting-the-Impossible/
1•walterbell•35m ago•0 comments

BoldPixels: A free pixel art font

https://yukipixels.itch.io/boldpixels
1•clessg•36m ago•0 comments

Show HN: Built a trade journal for memecoin traders - looking for feedback

https://fikr.net
1•rayoe•39m ago•1 comment

XEmacs 21.5.36 "leeks" is released

https://www.xemacs.org/Releases/21.5.36.html
15•gudzpoz•41m ago•0 comments

Soundscope – a CLI tool to analyze audio files (FFT, LUFS, waveform)

https://github.com/bananaofhappiness/soundscope
1•bnnfhppnss•41m ago•0 comments

Show HN: 60-Second Linux Analysis, Supercharged with Nix and LLMs

https://quesma.com/blog/60s-linux-analysis-nix-llms/
2•piotrgrabowski•44m ago•1 comment

We Badly Need Frameworks

https://koolcodez.com/blog/we-badly-need-frameworks/
5•luciodale•44m ago•1 comment

DuckDuckGo founder: AI surveillance should be banned

https://gabrielweinberg.com/p/ai-surveillance-should-be-banned
177•mustaphah•2h ago

Comments

iambateman•1h ago
If the author sees this… could you go one step further: what policy, specifically, do you recommend?

It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?

The author, and this community in general, are much better prepared than most to make full recommendations about what AI surveillance policy should be. We should be super clear about what we want, so we can enact good regulation without killing innovation in the process.

slt2021•1h ago
the law could be as simple as requiring that the faces and body silhouettes of all people be blurred inside each camera, prior to any further processing in the cloud, ensuring the privacy of CCTV footage.
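
A minimal sketch of what that in-camera redaction could look like, using OpenCV's stock Haar face detector and HOG person detector (the detector choice and parameters are illustrative assumptions, not part of the proposal):

```python
# Sketch: blur faces and body silhouettes before a frame ever leaves the camera.
# Assumes OpenCV's bundled detectors; a real deployment would need stronger
# models and a policy for frames where detection fails.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
person_detector = cv2.HOGDescriptor()
person_detector.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def redact(frame):
    """Blur every detected face and body region, then return the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    bodies, _ = person_detector.detectMultiScale(gray, winStride=(8, 8))
    for (x, y, w, h) in list(faces) + list(bodies):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame  # only this redacted frame would be uploaded to the cloud
```
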
beepbooptheory•1h ago
From TFA:

> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.

I don't know if it's a great idea, and I do wonder what makes it feasible, but there is a kind of implied recommendation here.

By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?

yegg•1h ago
Thanks (author here). I am working on a follow-up post (and likely posts) with specific recommendations.
martin-t•1h ago
LLM providers should only be allowed to train on data in the public domain, or their models and outputs should inherit the license of the training data.

And people should own all data about themselves, all rights reserved.

It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.

Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.

j45•57m ago
Makes sense; you have to deal with the cat being out of the bag, though.

The groups that didn't restrict their training to public-domain content would have an advantage, at least for some time, if this were implemented as a rule going forward.

New models following the rule could start out with a gap.

I'm sure competition, as has been seen from open-source models, will be able to close it.

zvmaz•1h ago
The problem is, we have to take companies at their word when it comes to our privacy.
tantalor•1h ago
This is an argument against chatbots in general, not just surveillance.
beepbooptheory•1h ago
Doesn't seem to be the case, because they do end up advertising the DuckDuckGo chatbot as a safe alternative.
0x696C6961•1h ago
AI makes large scale retroactive thought policing practical. This is terrifying.
j45•59m ago
Like search histories but far more.
alphazard•1h ago
I expect we will continue to see the big AI companies pushing for privacy protections. Sam Altman made a comparison to attorney-client privilege in an interview. Many people are holding out on using these things as fully trusted personal assistants or personal knowledge bases because of the lack of privacy.

The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe: it's dependent on regulations and definitions of the greater good that you can't control.
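
For concreteness, here is a minimal sketch of the locally running alternative, assuming the Hugging Face transformers library and a small open-weights model (the model name is an illustrative pick, not a recommendation):

```python
# Sketch of a local chatbot: prompts and replies never leave this machine,
# so there are no provider-side logs to mine, subpoena, or train on.
# Assumes a recent transformers release with chat-style pipeline input.
from transformers import pipeline

chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [{"role": "user", "content": "Briefly explain attorney-client privilege."}]
result = chat(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```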

dataviz1000•47m ago
> but that goes against the business model.

Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing for locally running models, supporting Hugging Face, while developing, at full speed, systems that can do inference locally.

utyop22•41m ago
Apple will eventually figure it out. Remember the iPhone took 5 years to develop - they don’t rush this stuff.
Wowfunhappy•41m ago
Notably, Apple is pushing for local models, albeit not open ones and with very limited success.
alphazard•33m ago
Local models do make a lot of sense (especially for Apple), but it's tough to figure out a business model that would cause a company like OpenAI to distribute weights they worked so hard to train.

Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.

dataviz1000•23m ago
> Getting customers to pay for the weights

Provide the weights as an add-on for customers who pay for hardware to run them. The customers will be paying for weights + hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D. Training GPT-5 cost ~$500M. It is a nothing burger for Apple to create a model that runs locally on their hardware.

novok•18m ago
That is functionally much harder to pull off than with software, because model weights are essentially more like raw media files than code, and those are much easier to convert to another runtime.
firesteelrain•6m ago
Codeium had an air-gapped solution until they were in talks with OpenAI and pulled it back. It worked on-prem, and they even told you what hardware to buy.
ankit219•1h ago
> your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations

This represents a fundamental misunderstanding of how training works, or can work. Memory has more to do with retrieval. Fine-tuning on those memories would not be useful, given the data would be far too minuscule to affect the probability distribution in the right way.
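
To make the retrieval point concrete, here is a minimal sketch of memory-as-retrieval, assuming the sentence-transformers library (the embedding model and snippets are illustrative): stored snippets are embedded once and looked up per query; no model weights are ever updated.

```python
# Sketch of memory-as-retrieval: past snippets are embedded, then the nearest
# ones are fetched by cosine similarity and prepended to the next prompt.
# Nothing is fine-tuned; the base model's weights never change.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
memories = [
    "User prefers metric units.",
    "User is training for a marathon in May.",
    "User's dog is named Pixel.",
]
memory_vecs = encoder.encode(memories, normalize_embeddings=True)

def recall(query, k=2):
    """Return the k stored snippets most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(memory_vecs @ q)[::-1][:k]  # cosine similarity on unit vectors
    return [memories[i] for i in top]

print(recall("What units should the workout plan use?"))
```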

While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against conversational interfaces. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument moves from the highly persuasive nature of chatbots, to privacy-preserving chatbots from DDG somehow being the answer, to your info being stolen by hackers elsewhere but staying safe on DDG. And then it asks for regulation.

pessimizer•1h ago
This is silly, and there's no time. We can't even ban illegal surveillance; i.e., we can write whatever we want into the law, and the law will simply be ignored.

The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.

Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.

And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.

That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.

cousin_it•1h ago
This is a great point. Everyone who has talked with chatbots at all: please note that all contents of your past conversations with chatbots (that already exist now, and that you can't meaningfully delete!) could be used in the future to target ads to you, manipulate you financially and politically, and sell "personalized influence on you specifically" as a service to the highest bidder. Just wanted to make sure y'all understand that.

EDIT: I want to add that "training on chat logs" isn't even the issue. In fact, it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful for knowing what will or won't work on you.

HPsquared•41m ago
Or literally read out in court if you discuss anything relevant to a legal case.
hkon•30m ago
For me, the scariest thing about AI chatbots is the interface they give an exploiter.

They can just prompt, "given all your chats with this person, how can we manipulate him to do x."

No real expertise needed at all; let the AI do all the lifting.

bethekidyouwant•20m ago
I can see how this would work if you turned off your brain and just thought, "of course this will work."
rsyring•58m ago
IMO: make all the laws you want. They generally won't be enforced and, if they are, it will take 5-10 years for them to make their way through the courts. At best, the fines will be huge yet account for maybe 10% of the revenue generated by violating the law.

The incentives are all wrong.

I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.

Our legal and cultural constructs are not designed in a way that can put such disparity in check. The populace responds by wanting ever more powerful leaders to "make things right", and you get someone like Trump at best; it goes downhill from there.

Make the laws, it will help, a little, maybe.

But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

Lerc•54m ago
I think the pertinent philosophical issues here have already been discussed at length in the context of legal, medical, or financial advice.

In essence, there is a general consensus on the conduct concerning trusted advisors. They should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors with the context required to give good advice, without fear of disclosure to others.

I think AI needs recognition as a similarly protected class.

An AI should be considered to be acting for a Client (or some other specifically defined term to denote who it is advising). Any information shared with the AI by the Client should be considered privileged. If the Client shares the information with others, the privilege is lost.

It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose; it may not misrepresent). Any information shared with an AI misrepresenting itself as the representative of a Client must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.

I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.

Some of the others are along these lines:

It should be disclosed (a nutritional-information style of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with the means to contest the determination.

A lot of these ideas would be good practice even beyond AI, but they are more necessary in the case of AI because of the potential for mass deployment without oversight.

olyellybelly•53m ago
The hype industry around AI is making too much money for governments to do what's actually needed about it.
testfrequency•50m ago
Especially in the US right now, where they are doing whatever it takes to be #1 in ~anything, ethical or not. It's pure bragging rights and power; anything goes. Profit is just the byproduct.
hungmung•48m ago
America is in love with privatized surveillance; it helps get around that pesky Constitution that prohibits unwarranted search and seizure.

"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.

add-sub-mul-div•40m ago
Cool, but they're shoving AI into their products and trying to profit from the surveillance etc. that went into building that technology, so this just comes across as virtue signaling.
aunty_helen•33m ago
This guy has been known to fold like a bed sheet on principles when it's convenient for him.

> Use our service

Nah.

swayvil•29m ago
Surveillance is our overlords' favorite thing. And AI makes it 1000x better. So good luck banning it.

Ultimately it's one of those arms races. The culture that surveils its population most intensely wins.

throwaway106382•25m ago
Unless it's banned worldwide, by every country, through a binding treaty, this will never work.

Banning it just in the USA leaves you wide open to being defeated by China, Russia, etc.

Like it or not, it's a mutually assured destruction arms race.

AI is the new nuclear bomb.

yupyupyups•21m ago
Wrong. Excessive data collection should be banned.