Utah's hottest new power source is 15k feet below the ground

https://www.gatesnotes.com/utahs-hottest-new-power-source-is-below-the-ground
124•mooreds•3h ago•74 comments

How the "Kim" dump exposed North Korea's credential theft playbook

https://dti.domaintools.com/inside-the-kimsuky-leak-how-the-kim-dump-exposed-north-koreas-credent...
153•notmine1337•4h ago•20 comments

A Navajo weaving of an integrated circuit: the 555 timer

https://www.righto.com/2025/09/marilou-schultz-navajo-555-weaving.html
60•defrost•3h ago•9 comments

Shipping textures as PNGs is suboptimal

https://gamesbymason.com/blog/2025/stop-shipping-pngs/
41•ibobev•3h ago•15 comments

I'm Making a Beautiful, Aesthetic and Open-Source Platform for Learning Japanese

https://kanadojo.com
37•tentoumushi•2h ago•11 comments

C++26: Erroneous Behaviour

https://www.sandordargo.com/blog/2025/02/05/cpp26-erroneous-behaviour
12•todsacerdoti•1h ago•8 comments

Troubleshooting ZFS – Common Issues and How to Fix Them

https://klarasystems.com/articles/troubleshooting-zfs-common-issues-how-to-fix-them/
14•zdw•3d ago•0 comments

A history of metaphorical brain talk in psychiatry

https://www.nature.com/articles/s41380-025-03053-6
10•fremden•1h ago•2 comments

Qwen3 30B A3B Hits 13 token/s on 4xRaspberry Pi 5

https://github.com/b4rtaz/distributed-llama/discussions/255
277•b4rtazz•13h ago•115 comments

Over 80% of Sunscreen Performed Below Their Labelled Efficacy (2020)

https://www.consumer.org.hk/en/press-release/528-sunscreen-test
87•mgh2•4h ago•79 comments

We hacked Burger King: How auth bypass led to drive-thru audio surveillance

https://bobdahacker.com/blog/rbi-hacked-drive-thrus/
272•BobDaHacker•10h ago•148 comments

The maths you need to start understanding LLMs

https://www.gilesthomas.com/2025/09/maths-for-llms
454•gpjt•4d ago•99 comments

Oldest recorded transaction

https://avi.im/blag/2025/oldest-txn/
135•avinassh•9h ago•59 comments

What to Do with an Old iPad

http://odb.ar/blog/2025/09/05/hosting-my-blog-on-an-iPad-2.html
40•owenmakes•1d ago•27 comments

Anonymous recursive functions in Racket

https://github.com/shriram/anonymous-recursive-function
46•azhenley•2d ago•12 comments

Stop writing CLI validation. Parse it right the first time

https://hackers.pub/@hongminhee/2025/stop-writing-cli-validation-parse-it-right-the-first-time
56•dahlia•5h ago•20 comments

Using Claude Code SDK to reduce E2E test time

https://jampauchoa.substack.com/p/best-of-both-worlds-using-claude
96•jampa•6h ago•66 comments

Matmul on Blackwell: Part 2 – Using Hardware Features to Optimize Matmul

https://www.modular.com/blog/matrix-multiplication-on-nvidias-blackwell-part-2-using-hardware-fea...
7•robertvc•1d ago•0 comments

GigaByte CXL memory expansion card with up to 512GB DRAM

https://www.gigabyte.com/PC-Accessory/AI-TOP-CXL-R5X4
41•tanelpoder•5h ago•38 comments

Microsoft Azure: "Multiple international subsea cables were cut in the Red Sea"

https://azure.status.microsoft/en-gb/status
100•djfobbz•3h ago•13 comments

Why language models hallucinate

https://openai.com/index/why-language-models-hallucinate/
133•simianwords•16h ago•147 comments

Processing Piano Tutorial Videos in the Browser

https://www.heyraviteja.com/post/portfolio/piano-reader/
25•catchmeifyoucan•2d ago•6 comments

Gloria funicular derailment initial findings report (EN) [pdf]

https://www.gpiaaf.gov.pt/upload/processos/d054239.pdf
9•vascocosta•2h ago•6 comments

AI surveillance should be banned while there is still time

https://gabrielweinberg.com/p/ai-surveillance-should-be-banned
461•mustaphah•10h ago•169 comments

Baby's first type checker

https://austinhenley.com/blog/babytypechecker.html
58•alexmolas•3d ago•15 comments

Qantas is cutting executive bonuses after data breach

https://www.flightglobal.com/airlines/qantas-slashes-executive-pay-by-15-after-data-breach/164398...
39•campuscodi•2h ago•9 comments

William James at CERN (1995)

http://bactra.org/wm-james-at-cern/
13•benbreen•1d ago•0 comments

Rug pulls, forks, and open-source feudalism

https://lwn.net/SubscriberLink/1036465/e80ebbc4cee39bfb/
242•pabs3•18h ago•118 comments

Rust tool for generating random fractals

https://github.com/benjaminrall/chaos-game
4•gidellav•2h ago•0 comments

Europe enters the exascale supercomputing league with Jupiter

https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2029
50•Sami_Lehtinen•4h ago•34 comments

AI surveillance should be banned while there is still time

https://gabrielweinberg.com/p/ai-surveillance-should-be-banned
461•mustaphah•10h ago

Comments

iambateman•9h ago
If the author sees this: could you go one step further and say what policy, specifically, you recommend?

It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?

The author, and this community in general, are much better prepared than most to make full recommendations about what AI surveillance policy should be. We should be very clear about trying to enact good regulation without killing innovation in the process.

slt2021•9h ago
the law could be as simple as requiring that the faces and body silhouettes of all people in each camera's view be blurred prior to any further processing in the cloud, ensuring the privacy of the CCTV footage.
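As a rough illustration of how little code that on-device redaction step needs, here is a minimal sketch using OpenCV's bundled Haar cascade face detector; this is an assumption-laden prototype, not a deployable system (a real one would need a far stronger detector, and body-silhouette handling is omitted):

    # Sketch: blur detected faces in a frame before it leaves the device.
    # Haar cascades are a weak detector; illustration only.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def redact_frame(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            # Replace each detected face region with a heavy Gaussian blur.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
        return frame

    # Only the redacted frame would ever be uploaded for cloud processing.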
beepbooptheory•8h ago
From the TFA:

> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.

I don't know if it's a great idea, and I wonder what exactly makes it feasible, but there is a kind of implied recommendation here.

By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?

yegg•8h ago
Thanks (author here). I am working on a follow-up post (and likely posts) with specific recommendations.
FollowingTheDao•6h ago
While I agree with your take on the harms of AI surveillance, I will never agree that AI is beneficial; there is a net negative outcome from using AI. For example: electricity prices, carbon release, hallucinations, cognitive decay. They all outweigh whatever benefit AI brings, which is still not clear.

Like nuclear fission, AI should never have been developed.

fragmede•6h ago
As well as crypto.
martin-t•8h ago
LLM providers should only be allowed to train on data in the public domain, or their models and outputs should inherit the license of the training data.

And people should own all data about themselves, all rights reserved.

It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.

Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.

j45•8h ago
Makes sense, but we have to deal with the cat being out of the bag.

The groups that didn't restrict their training to public domain content would have an advantage if it's implemented as a rule moving forward, at least for some time.

New models following this could create a gap.

I'm sure competition as has been seen from open-source models will be able to

martin-t•7h ago
It's simple: the current models and their usage are copyright infringement.

Just because everyone is doing it doesn't mean it's right or legal. Only that a lot of very rich companies deserve to get punished and to pay the creators.

zvmaz•9h ago
The problem is, we have to take companies at their word when it comes to our privacy.
tantalor•9h ago
This is an argument against chatbots in general, not just surveillance.
beepbooptheory•8h ago
Doesn't seem to be the case, because they do end up advertising the DuckDuckGo chatbot as a safe alternative.
gdulli•7h ago
Puffery isn't evidence for or against anything.
quectophoton•5h ago
They mention that they're "demonstrating that privacy-respecting AI services are feasible", knowing their duck.ai is sending the prompts to other AI services, and then in the same paragraph they mention leaks and hacks.

To their credit, their privacy policy says they have agreements on how the upstream services can use that info[1]:

> As noted above, we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).

But even assuming the upstream services actually respect the agreement, their own privacy policy implies that your prompts and the responses could still be leaked because they could technically be stored for up to 30 days, or for an unspecified amount of time in the case of the exceptions mentioned.

I mean, it's reasonable and a good start to move in the direction of better privacy, way better than nothing. Just have to keep those details in mind.

[1]: https://duckduckgo.com/duckai/privacy-terms
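The architecture that policy describes is essentially an anonymizing relay. A minimal sketch of the pattern, with a hypothetical provider endpoint and payload shape (not DuckDuckGo's actual API):

    # The user talks to the relay; the relay talks to the model provider
    # under its own identity. URL and JSON fields below are hypothetical.
    import requests

    UPSTREAM = "https://api.model-provider.example/v1/chat"

    def relay_prompt(prompt: str) -> str:
        # Forward only the prompt text: no user IP, no cookies, no account
        # identifiers. The provider sees the relay's IP, never the user's.
        resp = requests.post(UPSTREAM, json={"prompt": prompt}, timeout=30)
        resp.raise_for_status()
        return resp.json()["output"]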

0x696C6961•8h ago
AI makes large scale retroactive thought policing practical. This is terrifying.
j45•8h ago
Like search histories but far more.
apples_oranges•7h ago
I think people will come to the conclusion that they don't want to post anything online. Private chats, if they exist, will stay. But already, if you generate a work of art, you more or less donate it to the shareholders of Google or Meta. And this is even before the thought policing and similar implications.
dude250711•1h ago
"...post anything online..." you probably mean "express true thoughts or emotions externally in a proximity of an electronic device".
tim333•5h ago
Roko's basilisk is on its way.
sherburt3•4h ago
I would argue that's been happening for a long time with much simpler methods. Take this website for example, I get points for posting comments that people upvote. If I post a comment that people don't like, I will get downvoted. I obviously want more points because... so I naturally work to craft the comment that I believe will net the most upvotes. This isn't 100% terrible because this helps weed out the assholes, but I think it has much more insidious effects.

Feel free to call me an accelerationist but I hope AI makes social media so awful that no one wants to use it anymore. My hope is that AI is the cleansing fire that burns down social media so that we can rebuild on fertile soil.

alphazard•8h ago
I expect we will continue to see the big AI companies pushing for privacy protections. Sam Altman made a comparison to attorney-client privilege in an interview. There is a significant holdout against using these things as fully trusted personal assistants or personal knowledge bases because of the lack of privacy.

The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of greater good that you can't control.
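For contrast, a minimal sketch of the local-only approach mentioned above, using the Hugging Face transformers pipeline with a small open model (the model choice is illustrative; any locally stored weights would do):

    # Fully local inference: prompts and outputs never leave the machine.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    out = generator("Draft a polite reply to my landlord:", max_new_tokens=60)
    print(out[0]["generated_text"])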

dataviz1000•8h ago
> but that goes against the business model.

Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing for locally running models and supporting Hugging Face, while developing, at full speed, systems that can do inference locally.

utyop22•8h ago
Apple will eventually figure it out. Remember the iPhone took 5 years to develop - they don’t rush this stuff.
Wowfunhappy•8h ago
Notably, Apple is pushing for local models, albeit not open ones and with very limited success.
alphazard•8h ago
Local models do make a lot of sense (especially for Apple), but it's tough to figure out a business model that would cause a company like OpenAI to distribute weights they worked so hard to train.

Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.

dataviz1000•7h ago
> Getting customers to pay for the weights

Provide the weights as an add-on for customers who pay for hardware to run them. The customers will be paying for weights + hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D. Training GPT-5 cost ~$500M. It is a nothing burger for Apple to create a model that runs locally on their hardware.

novok•7h ago
That is functionally much harder to pull off than with software, because model weights are essentially more like raw media files than code, and media is much easier to convert to another runtime.
firesteelrain•7h ago
Codeium had an airgap solution until they were in talks with OpenAI and pulled it back. It worked on-prem, and they even told you what hardware to buy.
novok•4h ago
You can still extract the model weights from an on-prem machine. It has all the same problems as media DRM, and large enterprises do not accept unknown recording and surveillance that they cannot control.
firesteelrain•3h ago
I am not sure what you mean. I work at a large enterprise; we did not unleash it on our baseline, and it couldn't phone home, but it was really good for writing unit tests. That sped things up for us.
Juliate•7h ago
> it's tough to figure out a business model that would cause a company like OpenAI to distribute weights they worked so hard to train.

It sounds a lot like the browser wars, where the winning strategy was to aggressively push one's platform (for free, which was rather uncommon then) with the aim of market dominance for later benefits.

esseph•6h ago
There is no moat.
ronsor•5h ago
> Getting customers to pay for the weights would be entirely dependent on copyright law

That's assuming weights are even covered by copyright law, and I have a feeling they are not in the US, since they aren't really a "work of authorship"

kjkjadksj•5h ago
Why do that when OpenAI might pay you billions to be the primary AI model on your system?
username332211•7h ago
> Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe.

They could take a lesson from churches. If LLM providers and their employees were willing to commit to privacy and were willing to sacrifice their wealth and liberty for the sake of their clients, society would yield.

I remember seeing a video of a certain Richard Masten, a CrimeStoppers coordinator, destroying the information he had on a confidential source right in the courtroom under the threat of a contempt charge and getting away with a slap on the wrist.

In decent societies standing up for principles does work.

mathgradthrow•5h ago
Anyone can just kill you whenever they want. Security cannot be granted by cryptography, only secrecy.
floundy•4h ago
The tech companies have wised up. They'll continue to speak idyllically about what "should be" and maybe even deploy watered-down versions of it, but really they're just buying time until they can get even bigger and capture more power before the government even thinks of stepping in. The nice thing about being first to market is that you can abuse the market, abuse customers, and pay out a few trivial class action lawsuits along the way; then, when regulations finally lag along, you've got hundreds of billions worth of market power behind you to bribe the politicians. The US govt won't do anything about AI companies for at least 5 years, and when it does, OpenAI, Google, and Meta will all be sitting at the table holding the pen.
alfalfasprout•3h ago
You really think that Altman won’t turn around and start selling ads once enough people are on OpenAI’s “trusted” platform?
socalgal2•2h ago
> Sam Altman made a comparison to attorney-client privilege in an interview

Isn't his company, OpenAI, the one that said they monitor all communications and will report anyone they think is a threat to the government?

https://openai.com/index/helping-people-when-they-need-it-mo...

> If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.

I get that they are trying to do something positive overall. At the same time, I don't want corp-owned AI that's monitoring everything I ask it.

IIRC it is illegal for the phone company to monitor and censor communications. The government can ask a judge for permission for police to monitor a line, but otherwise it's illegal. With AI transcription, though, it won't be long until a company can monitor every call, transcribe it, and feed it to an LLM to judge and decide which lists you should be on.
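The pipeline described above is already short to write. A hedged sketch using the open-source Whisper package for transcription, with flag_transcript() as a hypothetical stand-in for whatever LLM judgment a company might bolt on:

    # Transcribe a call, then apply an automated judgment to the text.
    import whisper

    model = whisper.load_model("base")
    transcript = model.transcribe("call.wav")["text"]

    def flag_transcript(text: str) -> bool:
        # Hypothetical: an LLM prompt like "does this call discuss X?"
        # would go here. A keyword check stands in for it.
        return "keyword of interest" in text.lower()

    if flag_transcript(transcript):
        print("caller added to a list")  # the scenario described above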

ankit219•8h ago
> your particular persuasive triggers through chatbot memory features, where they train and fine-tune based on your past conversations

This represents a fundamental misunderstanding of how training works or can work. Memory has more to do with retrieval. Fine-tuning on those memories would not be useful, given that the data is going to be too minuscule to affect the probability distribution in the right way.
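A minimal sketch of that retrieval-style memory, where past snippets are embedded once and fetched by similarity at query time, with no weight updates anywhere (model name and snippets are illustrative):

    # Retrieval, not fine-tuning: relevant past snippets are simply
    # prepended to the prompt at query time.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    memories = [
        "User prefers metric units.",
        "User is planning a trip to Lisbon.",
        "User's dog is named Bruno.",
    ]
    memory_vecs = encoder.encode(memories, normalize_embeddings=True)

    def recall(query: str, k: int = 2) -> list[str]:
        q = encoder.encode([query], normalize_embeddings=True)[0]
        scores = memory_vecs @ q  # cosine similarity (unit vectors)
        return [memories[i] for i in np.argsort(scores)[::-1][:k]]

    print(recall("What should I pack for my trip?"))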

While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against using conversational interfaces. Not just that, it's the same playbook of using privacy as a marketing tactic. The argument moves from the highly persuasive nature of chatbots, to how privacy-preserving chatbots from DDG would somehow address that, to how your info gets stolen by hackers elsewhere but not on DDG. And then it asks for regulation.

pessimizer•8h ago
This is silly, and there's no time. We can't even ban illegal surveillance; i.e., we can write whatever we want into the law, and the law will simply be ignored.

The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.

Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.

And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.

That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.

soulofmischief•7h ago
You're right on all points, but it's easy to come to such a conclusion. The harder, and more rewarding, path is to organize with others and figure out what can be done, even if it seems hard or impossible, because standing around observing our rapid decline isn't good for anyone.

If the government is failing, explore writing civic software, providing people protected forms of communication or modern spaces where they can safely organize and learn. Eventually the current generations die, and a new, strongly connected culture has another chance to try and fix things.

This is why so many are balkanizing the internet with age gating; they see the threat of the next few digitally-augmented generations.

cousin_it•8h ago
This is a great point. Everyone who has talked with chatbots at all: please note that all contents of your past conversations with chatbots (that already exist now, and that you can't meaningfully delete!) could be used in the future to target ads to you, manipulate you financially and politically, and sell "personalized influence on you specifically" as a service to the highest bidder. Just wanted to make sure y'all understand that.

EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.

EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.

HPsquared•8h ago
Or literally read out in court if you discuss anything relevant to a legal case.
hkon•7h ago
For me, what is most scary about AI chatbots is the interface they give an exploiter.

They can just prompt: "given all your chats with this person, how can we manipulate him to do X?"

Not really any expertise needed at all; let the AI do all the lifting.

bethekidyouwant•7h ago
I can see how this would work if you just turned off your brain and thought, "of course this will work."
lordhumphrey•7h ago
I take it you haven't seen this then:

https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool...

hhh•6h ago
different flavour gpt wrapper
hkon•7h ago
Which you of course already have done.
mycall•6h ago
Turn that around and think of the AI itself as the exploiter. In a world of agent-driven daily tasks, AI will indeed want to look at your historical chats to find a way to "strongly suggest" you do tasks 1..[n] for whatever master plan it has for its user base.
matheusmoreira•1h ago
Ah yes, the plot of Neuromancer. Truly interesting times we are living in. Man made horrors entirely within the realm of our comprehension. We could stop it but that would decrease profits so we won't.
pwython•7h ago
Honestly, retargeting/personalized ads have never bothered me. If I'm gonna see ads anyway, I'd much rather get ads that might actually interest me, versus wildly irrelevant pharmaceutical drugs and other nonsense.
apparent•6h ago
Part of the issue is that this enables companies to give smaller discounts to people they identify as more likely to want a product. The net effect of understanding much more about every person on earth is that people will increasingly find the price of goods to be just about the max they would be willing to pay. This shifts more profit to companies, and ultimately to the AI companies that enable this type of personalization.
jazzyjackson•6h ago
I wish I could fund an ad campaign to free people from the perception that ads are there to sell you products.

Ads are there to change your behavior to make you more likely to buy products, e.g., put downward pressure on your self esteem to make you feel "less than" unless you live a lifestyle that happens to involve buying X product

They are not made in your best interest; they are adversarial psycho-tech with a side effect of building an economic and political profile on you for whoever needs to know what messaging might resonate with you.

FollowingTheDao•6h ago
This, yes, thank you. Advertising is behavioral modification. They even talk about it out in the open, and if you are unconvinced, hear it from the horse's mouth:

https://brandingstrategyinsider.com/achieving-marketing-obje...

"Your ultimate marketing goal is behavior change — for the simple reason that nothing matters unless it results in a shift in consumer actions"

hdgvhicv•4h ago
It is literally brainwashing.

Brainwashing is the systematic effort to get someone to adopt a particular loyalty, instruction, or doctrine.

reaperducer•4h ago
> Ads are there to change your behavior to make you more likely to buy products

You have described one type of ad. There are many, many types of ads.

If you were actually knowledgeable about this, you'd know that basic fact.

cousin_it•6h ago
The ads won't be for the product which will bring you maximum value. They will be for the product that will bring the advertiser maximum profit (for example, by manipulating you into buying something overpriced). The products which are really good and cheap, giving all their surplus value to you and just a little bit to the maker, will lose the bidding for the ad slot.
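A toy illustration of that point, with invented numbers: the slot goes to the bidder with the most margin to spend on ads, not to the product with the most value to the user:

    # Advertisers can bid up to their per-sale margin; the user's surplus
    # never enters the auction. All numbers are made up.
    products = [
        # (name, value_to_user, advertiser_margin)
        ("good cheap product", 100, 2),
        ("overpriced product", 40, 30),
    ]

    slot_winner = max(products, key=lambda p: p[2])
    best_for_user = max(products, key=lambda p: p[1])

    print("ad slot won by:", slot_winner[0])   # overpriced product
    print("best for user:", best_for_user[0])  # good cheap product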
username332211•5h ago
Not necessarily. If economies of scale exist, that means a popular product is going to be inherently superior in terms of price or quality to an unpopular one. Companies that advertise effectively can offer a better product precisely because they advertise and have large market share. (Whether they do so or not is a question of market conditions, business strategy, public policy, and ultimately their own decisions.)

Surplus value isn't really that useful of a concept when it comes to understanding the world.

reaperducer•4h ago
> a popular product is going to be inherently superior in terms of price or quality than an unpopular one.

This is so far from the reality of so many things in life, it's hard to believe you've thought this through.

Maybe it works in the academic, theoretical sense, but it falls down in the real world.

username332211•4h ago
Really? Because the most common place I've seen this logic break down is the bizarre habit people have of deriving some sort of status and self-worth from using an unpopular product, and then vehemently defending that choice in the face of all evidence to the contrary.

No "artisanal" product, from food to cosmetics to clothing and furniture is ever worth it unless value-for-money (and money in general) is of no significance to you. But people buy them.

I really can't go through every product class, but take furniture as a painfully obvious example. The amount of money you'd have to spend to get furniture of a similar quality to IKEA's is mind-boggling. Trust me, I've done it. Yet I know of people in Sweden who put considerable effort into acquiring second-hand furniture because IKEA is somehow beneath them.

Again, there are situations where economies of scale don't exist, and situations where a business may not be interested in selling a cheaper or superior product. But they are rarer than we'd like to admit.

koolala•6h ago
If I'm going to be psychologically manipulated, I want my psychological profile to be tracked and targeted specifically to my everyday behaviors.
kjkjadksj•5h ago
You get ads that actually interest you from targeted ads? You might be one of the only people with that experience. The whole meme with targeted ads is "I looked up toilet paper on Amazon once and now I get ads for Charmin all over the web."
fragmede•5h ago
I stopped using TikTok and Instagram because I was impulse-purchasing too much stupid crap from their advertisements. So there are at least two of us out there.
esafak•5h ago
Then why, after twenty years of personalization, am I still seeing junk ads? I don't want to hear about your drop-shipping or LLM-wrapping business. The overwhelming majority of ads are junk. Yes, they bother me.
fsflover•2h ago
This is not how personal targeting works. Here's how:

> Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you.

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luig...

BLKNSLVR•44m ago
I'm the complete opposite and don't really understand your position.

I'd rather see totally irrelevant ads, because they're easy to ignore or dismiss. Targeted ads distract your thought processes explicitly because they know what will distract you and can make you want something where there was previously no wanting. Targeted advertising is productised ADHD; it is anti-productive.

Like the start of Madness' One Step Beyond: "Hey you! Don't watch that, watch this!"

rsyring•8h ago
IMO: make all the laws you want. They generally won't be enforced, and if they are, it will take 5-10 years for a case to make its way through the courts. At best, the fines will be huge and yet account for maybe 10% of the revenue generated by violating the law.

The incentives are all wrong.

I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.

Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.

Make the laws, it will help, a little, maybe.

But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

martin-t•7h ago
The law really should be "if you cause harm to others you will receive 2x greater harm done to you". And "if you profit from harming others, you will compensate them by 2x of what you gained".

Instead of the current maze of case specific laws.

---

> But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.

You know, you're just unwilling to think it because you've been conditioned not to. It's what always happens when inequality (of income, power, etc.) gets too high.

ddq•3h ago
Yes, only 3x rather than 2x. https://en.wikipedia.org/wiki/Treble_damages
martin-t•16m ago
Y'know, I used to say 1.5-2x, now I lean towards 2x but 3x is actually fine by me too. And for any kind of financial crimes, this should also be divided by the probability of getting caught and convicted.
rightbyte•6h ago
> I'm fundamentally a capitalist

Being a capitalist is decided by access to capital, not really by a belief system.

> But, there really is just too much concentrated wealth in these orgs.

Please make up your mind? Should capital self-accumulate and grant power or not?

Portraying capitalism as some sort of force of nature, for which one doesn't "know another system that will work better", might be the neoliberals' biggest accomplishment.

Lerc•8h ago
I think much of the philosophical discussion of the pertinent issues here has already happened at length in the context of legal, medical, or financial advice.

In essence, there is a general consensus on the conduct concerning trusted advisors. They should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors the context required to give good advice without fear of disclosure to others.

I think AI needs recognition as a similarly protected class.

AI actions should be considered to be acting for a Client (or some other specifically defined term to denote who they are advising). Any information shared with the AI by the client should be considered privileged. If the Client shares the information to others, the privilege is lost.

It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose; it may not misrepresent). Any information shared with an AI misrepresenting itself as the representative of the Client must have protections against disclosure or evidential use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.

I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.

Some of the others are along the lines of

It should be disclosed (in the nutritional-information sense of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with a means to contest the determination.

A lot of the ideas would be good practice if they went beyond AI, but are more required in the case of AI because of the potential for mass deployment without oversight.

olyellybelly•8h ago
The hype industry around AI is making too much money for governments to do what actually needs to be done about it.
testfrequency•8h ago
Especially in the US right now where they are doing whatever it takes to be #1 in ~anything, ethical or not. It’s pure bragging rights and power, anything goes - profit is just the byproduct.
mark_l_watson•4h ago
#1? Likely not now.

Most of my close friends are non-technical and expect me to be a cheerleader for USA AI efforts. They were surprised when I started mentioning the recent Stanford study finding that 80% of US startups are using Chinese models. I would like us to win, but we seem too hype-focused and not focused enough on engineering and practical applications.

testfrequency•3h ago
I agree
BLKNSLVR•26m ago
First they came for climate science, but I said nothing because I was not a climate scientist.

Then they came for medical science, but I said nothing because I was not a doctor.

Then they came for specialists and subject matter experts, and I said nothing because I was an influencer and wanted the management position.

hungmung•8h ago
America is in love with privatized surveillance, it helps get around that pesky Constitution that prohibits unwarranted search and seizure.

"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.

antegamisou•5h ago
Don't be hesitant to suggest that YC, too, loves funding questionable ideas all the time.
add-sub-mul-div•8h ago
Cool, but they're shoving AI into their products and trying to profit from the surveillance etc. that went into building that technology, so this just comes across as virtue signaling.
aunty_helen•7h ago
This guy has been known to fold like a bed sheet on principles when it's convenient for him.

> Use our service

Nah.

swayvil•7h ago
Surveillance is our overlords' favorite thing. And AI makes it 1000x better. So good luck banning it.

Ultimately it's one of those arms races. The culture that surveils its population most intensely wins.

throwaway106382•7h ago
Unless it's banned worldwide, by every country, by binding treaty, this will never work.

Banning it just in the USA leaves you wide open to being defeated by China, Russia, etc.

Like it or not it’s a mutually assured destruction arms race.

AI is the new nuclear bomb.

martin-t•7h ago
While real nukes still work.

What bad thing exactly happens if China wins? What does winning even mean? They can't invade because nukes.

Can they manipulate elections? Yes, so we'll do the opposite of the great firewall and block them from the internet. Block their citizens from entering physically, too.

We should be doing this anyway, given China is known to force its citizens to spy for it.

fragmede•6h ago
If China gets to ASI first, the nukes won't matter.
UncleMeat•5h ago
"Don't worry, things will be good if the amoral megacorporations and ascendant fascist government in the US gets there first" is not my idea of a good time.
martin-t•5h ago
ASI doesn't change physics.

Perun has a very good explanation of why defending against nukes is economically impossible compared to just having more nukes and mutually assured destruction: https://www.youtube.com/watch?v=CpFhNXecrb4

fragmede•5h ago
Who said anything about economically?
martin-t•2h ago
Any potential ASI would still be limited by raw resources, and even if you assume all work would be done by robots, somebody would still have to build those robots first. Such a ramp-up in production would give everyone else time to build their own ASIs or strike first.
mrob•3h ago
If any country gets ASI first, nukes won't matter. For an ASI, most tasks are more reliably accomplished with all biological life dead. Making a safe ASI is vastly more difficult than the default option of letting it kill everything, so the default will be attempted first.
martin-t•2h ago
There are two interpretations of the parent comment:

1) China will get ASI and use it to beat everyone else (militarily or economically). In my reply, I argue we shouldn't race China because even if ASI is achieved and China gets it first, there's nothing they could do quickly enough that we wouldn't be able to build an ASI second, or nuke them if we couldn't catch up and it became clear they meant to become a global dictatorship.

2) China will get ASI, and it'll go out of control and kill everyone. In that case, I argue even more strongly that we shouldn't race China, but instead de-escalate and stop the race.

BTW even in the second case, it would be very hard for the ASI to kill everyone quickly enough, especially those on nuclear submarines. Computers are much more vulnerable to EMPs than humans, so a (series of) nuclear explosion(s) like Starfish Prime could be used to destroy all or most of its computers and give humans a fighting chance.

rightbyte•6h ago
So your hypothesis is that the surveillance state is some sort of necessity that brings a great edge?
yupyupyups•7h ago
Wrong. Excessive data collection should be banned.
quectophoton•6h ago
But this is not excessive, it's Legitimate Interest and absolutely needed to provide a good service /s
sacul•7h ago
I think the biggest problem with chatbots is the constant effort to anthropomorphize them. Even seasoned software developers who know better fall into acting like they are interacting with a human.

But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.

I think if society were trained to treat AI as NOT human, things would be better.

mejutoco•7h ago
> I think if society were trained to treat AI as NOT human, things would be better.

Could you elaborate on why? I am curious but there is no argument.

sacul•6h ago
Yeah, thanks for asking. My reasoning is this:

That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.

Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.

Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.

There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.

AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become one.

fragmede•6h ago
But why not? If we look past the trappings of "we hate corporations", why not treat it as a friend? Let's say you acquire a free-trade organic GPU and run an ethically trained LLM on it. Why is an expensive funny-shaped rock not allowed to become a friend when a stuffed animal can?
aduwah•4h ago
The stuffed animal has only sentimental value; it will not have a statistics-based and geopolitically biased opinion that it shares with you and that influences your decisions. If you want to see how bad a chatbot can be as a friend, see the recent case in which one drove a poor, mentally vulnerable minor to suicide.
mu53•4h ago
There is a new term: AI psychosis.

AI chatbots are not humans; they don't have ethics; they can't be held responsible; they are the product of complex mathematics.

It really takes the bad parts from social media to the next level.

pwython•7h ago
I'm not a full-time coder, it's maybe 25% of my job. And am not one of those people that have conversations with LLMs. But I gotta say I actually like the occasional banter, it makes coding fun again for me. Like sometimes after Claude or whatever fixes a frustrating bug that took ages to figure out, I'll be like "You son of a bitch you fixed it! Ok, now do..."

I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.

BriggyDwiggs42•6h ago
Yeah I’ve loved seeing people call them clankers and stuff for that reason.
giancarlostoro•7h ago
I stopped using Facebook because I saw a video of a little Australian girl, maybe 7 years old, holding a spider bigger than her face in her hand. I wrote the most internet-meme comment I could think of, "girl let go of that spider, we gotta set the house on fire", and hit the button to post. Only it did not post; it gave me an account strike. At the time I was the only developer at my employer who managed our Facebook app integration, so I appealed it, but another AI immediately denied my appeal (or maybe a really fast human, idk, but they sure didn't know meme culture).

I outright stopped using Facebook.

We are doomed if AI is allowed to punish us.

philipallstar•6h ago
Maybe it banned you because you used a comma splice[0].

[0] https://en.m.wikipedia.org/wiki/Comma_splice

mmaunder•6h ago
Burn the witch.
gcanyon•5h ago
damn, you smoked them, please stop, they've had enough, don't be so, cruel
andrewflnr•1h ago
> don't be so, cruel

We've got the real criminal right here.

iammrpayments•5h ago
The way Facebook uses automation to manage their products is a disgrace; they can't even keep themselves from automatically banning a lawyer who happens to be called Mark Zuckerberg: https://www.huffpost.com/entry/mark-zuckerberg-lawsuit-imper...

If you advertise on Facebook you're almost guaranteed to have your ad account restricted for no apparent reason, with no human being to appeal to, even if you spend big money.

It's so bad that it is common knowledge that you should start a fan page, post random stuff, and buy page likes for 5-7 days before you start advertising; otherwise their system will just flag your account.

hdgvhicv•4h ago
Yet oddly they never seem to be able to stop scam adverts.
azemetre•1h ago
Those are their best customers: they spend money and ask no questions.
tomrod•3h ago
I've tried to open an account for a while now, consistently blocked. At this point I have to assume its stock is a scam.
terribleperson•5h ago
I'm suspended on Reddit for, as best as I can tell, posting a DFHack bug-fix script. For the uninitiated, that's a bug-fix script for a tool commonly used in Dwarf Fortress modding, not anything illicit.

If this kind of low-quality AI moderation is the future, I'm not sure these major platforms will even remain usable.

floundy•4h ago
That's hilarious given that Reddit is utterly overrun with blatant, low-quality LLM accounts using ChatGPT to post comments and gain karma, and several of the "text stories" on the front page from subs like AITA are blatant AI slop that the users (or other bots?) are eating up.

I suspect sites like Reddit don't care about a few-percent false positive rate, without considering that bot farmers literally do not care (they'll just make another free account), while genuine users will have their attitude towards the site turn significantly negative when they're falsely actioned.

Don't worry, Reddit's day of reckoning will come when the advertisers figure out what percentage of the Reddit traffic they're paying to serve ads to is just bots.

trod1234•4h ago
It's not just Reddit; this is happening on all networked platforms. FB, X, HN, Reddit: they are all in the exact same boat, and some are far worse than Reddit.

This is surreptitious jamming of communications at levels that constitute and exceed thresholds for consideration as irregular warfare.

Genuine users no longer matter, only the user counts, which are programmatically driven to distort reflected appraisal. The users are repressed and demoralized because of such false actions, and the platforms have no solution because regulation failed to act at a time when it could have changed these outcomes.

What comes later will simply be comparable to why "triage" is done on the battlefield.

Adtech is just a gloriously indirect means for money laundering in fiat money-printing environments. Credit/debt being offered when it is unbacked, without proper reserve, is money-printing.

terribleperson•3h ago
I don't know if the day of reckoning will come anytime soon. I think a lot of major advertising firms are aware that they're mostly serving ads to bots, but if they tell their customers that, they don't get paid.

edit: This has definitely soured my already poor opinion of reddit. I mostly post there about video games, or to help people in /r/buildapc or /r/askculinary. I think I'd rather help people somewhere I'm not going to get blackholed because an AI misinterpreted my comments.

viccis•3h ago
>That's hilarious given that Reddit is utterly overrun with blatant, low-quality LLM accounts using ChatGPT to post comments and gain karma, and several of the "text stories" on the front page from subs like AITA are blatant AI slop that the users (or other bots?) are eating up.

Check out this post [1], which includes part of the LLM response ("This kind of story involves classic AITA themes: family drama, boundary-setting, and a “big event” setting, which typically generates a lot of engagement and differing opinions.") and in which almost no commenter points this out. Hilarious if it weren't so bleak.

1: https://www.rareddit.com/r/AITAH/comments/1ft3bt6/aita_for_n... (using rareddit because it was eventually deleted)

threeducks•3h ago
Over the past two years, I have also seen many similar stories where the majority of users were unable to recognize that these stories were AI-generated. I fear for the future of democracy if the masses are so easily deceived. Does anyone have any good ideas on how to counteract this?
utyop22•40m ago
Literacy rates have been falling off a cliff for decades.

If there's no literacy, there is no critical thinking.

The only solution is to deliver high-quality education to all folks and create engaging environments for it to be delivered in.

Ultimately it comes down to influencing folks to think more deeply about what's going on around them.

Most of the people between the ages of 13 and 30ish right now are kinda screwed and pretty much a write-off, imo.

trod1234•4h ago
I'm banned from a few subreddits for correctly pointing out that ricing is not a pejorative, and for explaining the history of the culture that led to extreme customization.

You have malevolent third-party bots taking advantage of poor moderation to conflate similar/same word different context pairs to silence communication.

For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.

Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.

Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.

The major platforms will not remain usable, because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational, intelligent contributors or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears, not overnight, but within a few months.

Just take a look at the linuxquestions subreddit since the mod exodus. It has an automated trickle of the same questions, which never really get sufficiently answered. It's all slop.

All the experienced people who previously shared their knowledge as charity have moved on, because they were driven out by caustic harassment and a lack of proper moderation to prevent it. The mod list even hides who the mods are now, so people subject to moderator action can't complain to the Reddit administrators about the specific moderator who acted as a fascist dictator incapable of the basic reading comprehension common to grade schoolers (AI).

bitwize•4h ago
Per the Linux Foundation it is inappropriate to speak of programs as "hung" due to the political and social history associated with hanging. What makes you think "ricing" would be considered acceptable language in today's context?
ants_everywhere•4h ago
huh, the past tense of the violent "hang" is "hanged" not "hung". https://www.thefreedictionary.com/hung

"hung" means to "suspend", so the process is suspended

blibble•3h ago
I don't remember voting the Linux Foundation into power as global word police
trod1234•2h ago
I also don't remember voting to allow doublespeak to be the dominant form of definition.
noah_buddy•4h ago
I thought you were going to point out distinct etymology, but these terms do seem linked, no? Not surprising that the shared lineage confers shared problems.
trod1234•2h ago
The two are unconnected; one is used as a pejorative, which is racist, and the other isn't. This is not a hard distinction to make if you aren't a bot.

<Victim> "I'm ricing my Linux Shell, check it out." <Bot> That's Racist!

<Bot Brigade> Moderator this person is violating your rules and being racist!

<Moderator> I'm just using AI to determine this. <Bot Brigade> Great! Now they can't contribute. Lets find another.

TL;DR: Words have specific meanings, and a growing number of words have been purposefully corrupted to prevent communication, and by extension to limit communication to the detriment of all. You get the same ultimate outcomes when people do this as with any other false claim: abuses pile up until, in the absence of functioning non-violent conflict resolution, violence forces the system to reform.

Have you noticed that your implication is circular based on the indefinite assumption (foregone conclusion) that the two are linked (tightly coupled)?

You use a lot of ambiguous manipulative language and structure. Doing that makes any reasonable person think you are either a bot, or a malicious actor.

busymom0•2h ago
Meanwhile the "new" Digg reboot plans on using AI moderators too...
terribleperson•1h ago
The thing that annoys me is that I could see value in AI moderation. Instead of scanning every post with AI using overly broad criteria (and probably lower-power models), use AI to prescreen reports and determine whether they're worth surfacing to a human. It could also be used to put temporary holds on material that's potentially criminal or just way over the line, but those holds should go to the very front of the human review queue, to either be lifted or have the content deleted.

Real moderation actions should not be taken without human input and should always be appealable, even if the appeal is just another mod looking at it to see if they agree.
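A minimal sketch of that human-in-the-loop design, with llm_score() as a hypothetical stand-in for a model call; note that nothing here takes action automatically:

    # AI prescreens user reports; humans make every moderation decision.
    from dataclasses import dataclass

    @dataclass
    class Report:
        post_id: int
        reason: str
        text: str

    def llm_score(report: Report) -> float:
        """Hypothetical model call returning P(report is valid), 0..1."""
        return 0.5  # placeholder

    def triage(reports: list[Report], threshold: float = 0.7) -> list[Report]:
        # Surface only credible reports, most credible first; everything
        # surfaced still goes to a human, and every action is appealable.
        scored = [(llm_score(r), r) for r in reports]
        scored.sort(key=lambda sr: sr[0], reverse=True)
        return [r for s, r in scored if s >= threshold]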

kbaker•47m ago
Lol. I got perma-banned for violating rules under my alt accounts.

But I don't have any alt accounts...??? The appeal process is a joke. I just opted to delete my 12-year-old account instead and have stopped going there.

Oh well, probably time for them to go under and be reborn anyway. The default subs and front page have been garbage for some time.

morkalork•4h ago
Not using Facebook doesn't help either, though. My new co-worker didn't have an account but needed one for his job, so he made one and was immediately flagged for suspicious activity.
smcin•4h ago
What did he do that was deemed suspicious? Send a large number of friend requests very quickly after joining (<24hrs? 1wk?)? Follow requests? Upvotes? Log on from multiple devices in multiple locations? Put third-party links in his bio or profile?

(LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago. Sadly they are necessary. It's near-impossible for heuristics to distinguish bots from real humans with inauthentic or suspiciously commercial behavior.)

junon•3h ago
I had the same thing happen on both Facebook and Twitter. The answer is: nothing.

In both cases for me, I had signed up and logged in for the first time, and was met with an immediate ban. No rhyme or reason why.

I, too, needed it for work, so I had no prior history from my IPs, in the case of Facebook at least. So maybe that's why, but still. Very aggressive and annoying blocking behavior like that cost them some advertising money, as we just decided it wasn't worth it to even advertise there.

malfist•3h ago
That's a lot of victim blaming in your post. Without any evidence to support it
smcin•1h ago
Don't talk nonsense like "victim-blaming" or outrage-trolling; we're not on Twitter. I gave the OP useful, actionable, constructive advice (based on the experience of multiple friends of mine, which in some cases took me hours to debug) about which often-innocent behaviors tend to trigger anti-inauthenticity heuristics on a legit user's account, especially a very new one (<24h or <7d old). (For obvious reasons social sites won't tell you the heuristics; they vary them often, and almost surely combine other information on that device, type, IMEI, IP address, subnet, other sites' activity by that user, etc.) Ok?

Nowhere did I justify social sites getting things wrong or not having better customer support to fix cases like this.

Also, the good-faith translation of "Without any evidence to support it" is "How/where did you find that out?", but even at that, I had already given you some evidence: "LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago." Ask good-faith questions instead of misrepresenting people. If you actually read my other HN posts you won't see me condoning large social-media sites behaving badly.

Frost1x•4h ago
The underlying issue here isn't AI-based policing; it's the fact that private entities have enough unregulated influence on people's daily lives that their use of these or any such policy mechanisms undemocratically affects people in notably significant ways. The Facebook example is, whatever, but what if it's some landlord making a rental decision, some health insurance company deciding your coverage, etc.?

Now obviously this won’t stop with private entities, state and federal law enforcement are gung-ho to leverage any of these sorts of systems and have been for ages. It doesn’t help the current direction the US specifically is moving in, promoting such authoritarian policies.

lumost•4h ago
We already live in this world for health insurance. The AI can make plausible-sounding denials, which a doctor can rubber-stamp. You have no ability to sue the doctor for malpractice, and you cannot appeal the decision.

Medical insurance is quickly becoming a simple scam where you are forced to pay a private entity that refuses to ever perform its function.

olddustytrail•1h ago
Most first world countries don't have this. It's not a given.
azemetre•1h ago
The US is usually a hotbed of experimentation in corporate malfeasance.
Ray20•1h ago
> what if it’s some landlord renting making a decision, some health insurance company deciding your coverage, etc.

Then you simply use the services of another private company. There are, in fact, no particular dangers here; after all, private companies provide services to people because it is profitable for them to do so.

BiteCode_dev•1h ago
This works only if none of those are true:

- There is real competition. It's less and less the case for many important things, such as food, accommodations, health, etc.

- Companies pay a price for misbehaving that is much higher than what they got from misbehaving. Also less and less the case, thanks to lobbying, huge law firms, corruption, etc.

- The cost of switching is fair. Moving to another place is very expensive. Doing it several times in a row is rarely possible for most people.

- The bad practice is not already generalized across the whole industry. In IT, tracking is, spying is, and preventing you from managing your device yourself is more and more trendy.

Basically, the view you are presenting is increasingly naive, and even dangerous for any citizen practicing it.

toasted-subs•4h ago
Yeah, I've basically refused to ever have kids because I can't imagine them growing up knowingly becoming slaves. As a parent, I'd basically expect CPS to be called on me for having kids in this environment.
junon•3h ago
What in the world are you talking about?
tomrod•3h ago
If it wasn't AI slop, it seems close.
skipants•2h ago
I've already had pre-screen coding tests judged by AI. It's unsettling.

I also got a CV rejection letter with AI-generated rejection reasons in it, which was frustrating because, in my opinion, none of the reasons matched my CV at all. I'm still not sure whether the resume was actually reviewed by a human or by AI, but I'm assuming the latter.

I absolutely hated looking for a new job pre-AI and when times were good. Now I'm feeling completely disillusioned with the whole process.

dude250711•2h ago
They probably generated a generic AI response once and just copy-pasted it, thus saving their time twice. They didn't do just one scummy thing; these are overall scummy people.
woadwarrior01•7h ago
Contrary to their privacy-washing marketing, DuckDuckGo serves cloaked Bing ad URLs with plenty of tracking parameters. Is that sort of surveillance fine?

https://imgur.com/a/Z4cAJU0
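
For anyone who wants to reproduce this kind of check themselves, a minimal Python sketch of the inspection involved: grab an ad/result link from the browser's network tab and enumerate its query parameters. The URL and parameter names below are hypothetical placeholders, not a capture of what DuckDuckGo actually serves.

    # List the query parameters carried by a (hypothetical) ad-redirect URL.
    from urllib.parse import urlparse, parse_qs

    redirect = ("https://ads.example.com/click"
                "?u=https%3A%2F%2Fadvertiser.example%2F"
                "&clickid=abc123&ad_id=42&campaign=99")

    for name, values in parse_qs(urlparse(redirect).query).items():
        # Anything beyond the destination URL itself is potential tracking state.
        print(f"{name} = {values[0]}")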

lordhumphrey•6h ago
When it comes to privacy, the worker-drones powering the software world have so far resolutely failed to reflect on their central role in implementing the dystopian technological landscape we live in today.

Or they have reflected and simply don't care, or they feel they can't change anything anyway, or the paycheck is enough to soothe any unease. The net result is the same.

Snowden's revelations happened 12 years ago, and there were plenty of what appeared to be well-intentioned articles and discussions in the years that followed. And yet, arguably, things are even worse today.

furyofantares•6h ago
I'm really impressed with how menacing Facebook feels in the cartoon on the left. And then a massive Google lurking in the background is excellent, although it being a silhouette of The Iron Giant takes a lot away from it for me.

The ChatGPT translation on the right is a total nothingburger, it loses all feeling.

citizenpaul•6h ago
Let's not forget that Gabriel Weinberg is a two-faced ghoul, a wolf in sheep's clothing. He has literally said he does not believe people need privacy, yet that is supposedly DuckDuckGo's main selling point. He has made all kinds of tracking deals with other companies, so DuckDuckGo "is not tracking you"; their partners just are.

Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.

kogasa240p•6h ago
Interesting find, wasn't he selling info to Microsoft?
doitLP•5h ago
This is baseless, with no references or citations. What possible incentive would he have for sounding this alarm and urging Congress to act, when he could be using the data he is secretly collecting for his own gain, as all the AI companies are doing openly for obvious gain?
reaperducer•4h ago
> Interesting find, wasn't he selling info to Microsoft?

It's not a find. It's an allegation.

HN is supposed to be better than that.

hsartoris•5h ago
This is a pretty serious allegation, but cursory searching didn't yield anything. Do you have any sources you can point to? Given that it's very difficult to actually 'whitewash' things from the internet, I would expect there to be something to point to. Thanks!
mixmastamyk•1h ago
They use(d?) Bing and collect extensive metrics, like exactly what you click. I've confirmed this with browser tools, and I mitigate it with AdGuard and plugins.

They must sell it somehow. Likely, but I have not seen evidence.

FollowingTheDao•6h ago
I cannot overstate how afraid I am of the use of AI surveillance. The worst thing is that there is nothing you can do about it. It does not matter how private I am online; if the person I am sending things to is not privacy-conscious and, say, uses AI to summarize emails, then I am in the AI database. And then there is just the day-to-day data being scraped: bank records, etc.

I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the lot's owner to get rid of it, but apparently the company is paying him too much money for having it there.

Like the old video says, "Don't Talk to the Police" [1], but now we have to expand it to "Don't Do Anything," because everything you do is being fed into a database that can be searched.

[1] https://www.youtube.com/watch?v=d-7o9xYp7eE

dyauspitr•6h ago
Throw enough identifiers into the mix and even low-level employees will be able to get a summary of your entire past in seconds. It's a terrifying world and I feel bad for Gen Z and beyond.
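
To illustrate the mechanics (with entirely invented data): a couple of shared identifiers is all it takes to join otherwise separate datasets into one profile.

    # Toy record linkage: identifiers act as join keys across datasets.
    purchases = {"email:a@x.com": ["pharmacy", "hardware store"]}
    locations = {"adid:f81d4fae": ["gym", "clinic", "bar"]}
    identity  = {"email:a@x.com": "Alice", "adid:f81d4fae": "Alice"}

    profile = {}
    for table in (purchases, locations):
        for key, events in table.items():
            person = identity.get(key)
            if person:
                profile.setdefault(person, []).extend(events)

    print(profile)  # {'Alice': ['pharmacy', 'hardware store', 'gym', 'clinic', 'bar']}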
bubblebeard•6h ago
While I cannot see a way to effectively stop companies from collecting data from you (aside from avoiding practically everything), that doesn’t mean we should do nothing.

DuckDuckGo aren’t perfect, but I think they do a lot to all our benefit. Theirs have been my search engine of choice for many years and will continue being so.

Shout-out to their amazing team!

jmort•5h ago
I think a technical solution is necessary, rather than a legal/regulatory one
catigula•5h ago
I think this type of AI doomsday hypothesis rubs me the wrong way because it's almost quaint.

Merely being surveilled and marketed at is a fairly pedestrian application from the rolodex of AI-related epistemic horrors.

bArray•5h ago
Protected chats? That ship has already sailed; text messages over the phone network have been MITM'd for a very long time.

Even in real life, the police in the UK now deploy live facial recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy live facial recognition to detect shoplifters (although it's legally unclear what they will actually do about it).

The UK can compel any person travelling through the UK to hand over their passwords and devices; you have no right to appeal. Refusing to hand over a password can get you arrested under the Terrorism Act, where they can hold you indefinitely. When arrested for any terrorism offence you also have no right to legal representation.

The days of privacy sailed unnoticed.

whyenot•4h ago
It seems highly unlikely to me that it will be banned by Congress in the next few years, if ever. So what we really should be asking is how we live in a world of pervasive surveillance, where everything we do and say is being recorded, saved, analyzed, and potentially used to manipulate us.

As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.

trod1234•4h ago
When you say pervasive: wifi and cameras aren't what's pervasive. What actually is pervasive is quite a bit worse than this.

For example,

WIFI signals can uniquely identify every single heartbeat in realtime within a certain range of the AP; multiple linked access points extend this range up to a mile. The radio devices you unknowingly carry around with you beacon at set intervals, tracking your location just like an animal on a tracking collar. This includes the minute RFID chips sewn into your clothing and fabrics.

Phones don't turn off their radios when in airplane mode. Your vehicle has at least three different layers that uniquely beacon a set of identifiable characteristics to anyone with a passive radio: the OBD-II uplink, the TPMS sensors (one for each wheel), and telematics.

Home Depot, in cooperation with Flock, has without disclosure captured your biometrics, tracked your minute movements, and put that up for sale to the highest bidder through subscription-based profiling.

Ultrasonic beacons are emitted from your phone to associate geographically local devices with individual people. All of this is visible to anyone with an SDR, manipulable by those with a Flipper Zero, and treated as distinct sources of truth in a layered approach.

All aspects of social interaction with the wider world have now been replaced with a surrogate that runs through a few set centralized points, which can be turned off or degraded to drive anyone they wish into poverty, with no visible indicator and no alternative.

Imagine you are a job seeker, and the AI social-credit algorithm they've developed (to target wealthy people on one side, and to torture or "improve" people on the other) incorrectly identifies you as a subversive. They not only degrade your existing services but intermittently isolate your communications from everyone else through engineered failures, following a statistical approach similar to Turing's during WW2.

Imagine the difficulty of finding work in any specialized field you have experience in, when you can never receive those callbacks because they are inherently interrupt-driven, and interrupt-driven calls can be jammed without your being able to recognize the jamming. Such communications are vulnerable to erasure.

No system should ever exist whose sole purpose or impact has become to prevent an arbitrary target, through interference, from finding legitimate work, feeding themselves, or exercising their existential rights.

In effect, such a system of control silently makes these people into slaves, without recourse or informed disclosure. It fundamentally violates their human rights, and these systems exist.

A government's failure to uphold, in a timely way, the promises of the social contract and the specifics of the constitution becomes, after the fact, purposeful intent through gross negligence and failure to uphold constitutional oaths. History has repeatedly shown that if the civilization survives at all, it reforms itself through violence. That is something no good person wants, but given the narrowing of agency and choice to affect the future, it is the only alternative when the cliff of existential extinction is present (whether people realize that or not).

jjulius•2h ago
Opt out as much as possible.

If spaces like that irk you, stop going there. Limit your use of the Internet to when you're at home on your own network. Do we truly need to be browsing HN and other sites when we're out of the house?

Ditch the smartphone. Most things that people claim you need a smartphone for "in order to exist in modern society" can also be done via a laptop or a desktop, including banking. You don't need access to everything in the world tucked neatly away in your pocket when you're going grocery shopping, for instance.

Buy physical media so that your viewing habits aren't tracked relentlessly. Find hobbies that get you away from it all (I backpack!).

Fuck off from social media. Support hobby-based forums and small websites that make a good-faith effort not to participate in tracking and advertising, where possible.

Limit when you use the internet, and how.

It's hard to escape from it, but we can significantly limit our interactions with it.

gcanyon•3h ago
Is there anyone who has a credible take on how we avoid an effectively zero-privacy future? AI can identify people by their walk, no facial recognition required, and now technology can detect heartbeats through variations in wifi signals. It seems guaranteed we are heading for a future where someone knows everything. The only choice is whether it's a select few, or everyone. The latter seems clearly preferable.
pilingual•2h ago
Legislation or, my favorite, market forces.
BLKNSLVR•33m ago
"Market forces" is in serious decline as a leveller at this point in history, especially around products of interest to power or those producing outsized profits.

Or maybe they never were a leveller, and this fact is just becoming more transparent.

drnick1•3h ago
The dude is against "surveillance," but his blog has dependencies on (makes third-party requests to) Cloudflare, Google, and others. Laughable.
BLKNSLVR•59m ago
From reading the comments I'm getting vibes similar to Altered Carbon's[0] AI hotels that no one uses.

The opposite of "if you build it they will come".

(The difference being that the AIs in the book were incredibly needy, wanting so badly to please the customer that it became annoying, which is a heavy contrast with the current reality of AIs working to appease their parent organisations.)

[0]: https://en.m.wikipedia.org/wiki/Altered_Carbon

gblargg•21m ago
As long as the first ones to be surveilled are the companies that make it (including their employees) and all the politicians who vote for it, fine. We need to be able to access all the data the AI gathers on these groups.