To their credit, their privacy policy says they have agreements on how the upstream services can use that info[1]:
> As noted above, we call model providers on your behalf so your personal information (for example, IP address) is not exposed to them. In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).
But even assuming the upstream services actually respect the agreement, their own privacy policy implies that your prompts and the responses could still be leaked: they can technically be stored for up to 30 days, or for an unspecified amount of time under the exceptions mentioned.
It's reasonable and a good start toward better privacy, way better than nothing. You just have to keep those details in mind.
Feel free to call me an accelerationist but I hope AI makes social media so awful that no one wants to use it anymore. My hope is that AI is the cleansing fire that burns down social media so that we can rebuild on fertile soil.
The only real solution is locally running models, but that goes against the business model. So instead they will seek regulation to create privacy by fiat. Fiat privacy still has all the same problems as telling your therapist that you killed someone, or keeping your wallet keys printed out on paper in a safe. It's dependent on regulations and definitions of greater good that you can't control.
Not if you are selling hardware. If I were Apple, Dell, or Lenovo, I would be pushing locally running models and Hugging Face support while developing, at full speed, systems that can do inference locally.
Getting customers to pay for the weights would be entirely dependent on copyright law, which OpenAI already has a complicated relationship with. Quite the needle to thread: it's okay for us to ingest and regurgitate data with total disregard for how it's licensed, but under no circumstances can anyone share these weights.
Provide the weights as an add-on for customers who pay for hardware to run them: the customers are paying for weights plus hardware. I think it is the same model as buying the hardware and getting macOS for free. Apple spends $35B a year on R&D; training GPT-5 cost ~$500M, roughly 1.5% of a single year's R&D budget. It's a nothing burger for Apple to create a model that runs locally on their hardware.
It sounds a lot like the browser wars, where the winning strategy was to aggressively push one's platform for free (which was rather uncommon then), aiming at market dominance for later benefits.
That's assuming weights are even covered by copyright law, and I have a feeling they are not in the US, since they aren't really a "work of authorship".
They could take a lesson from churches. If LLM providers and their employees were willing to commit to privacy and were willing to sacrifice their wealth and liberty for the sake of their clients, society would yield.
I remember seeing a video of a certain Richard Masten, a CrimeStoppers coordinator, destroying the information he had on a confidential source right in the courtroom under the threat of a contempt charge and getting away with a slap on the wrist.
In decent societies standing up for principles does work.
Isn't his company, OpenAI, the one that said they monitor all communications and will report anyone they think is a threat to the government?
https://openai.com/index/helping-people-when-they-need-it-mo...
> If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.
I get that they are trying to do something positive overall. At the same time, I don't want corp-owned AI monitoring everything I ask it.
IIRC it is illegal for the phone company to monitor and censor communications. The government can ask a judge for permission for police to monitor a line, but otherwise it's illegal. But now, with AI transcription, it won't be long until a company can monitor every call, transcribe it, and feed it to an LLM to judge and decide which lists you should be on.
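To make the mechanics concrete, here's a minimal sketch of such a pipeline. A real deployment would swap the toy keyword matcher for an LLM judgment call; every list and name below is hypothetical.

```python
# Deliberately crude, self-contained sketch of the pipeline described
# above: transcript in, watchlist labels out. A real system would
# replace the keyword classifier with an LLM call.

WATCHLISTS = {
    "protest": ["march", "rally", "organize"],
    "finance": ["cash", "crypto", "offshore"],
}

def classify_transcript(transcript: str) -> list[str]:
    """Return every watchlist whose trigger words appear in the call."""
    text = transcript.lower()
    return [name for name, words in WATCHLISTS.items()
            if any(word in text for word in words)]

call = "We should organize a march downtown next weekend."
print(classify_transcript(call))  # ['protest']
```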
This represents a fundamental misunderstanding of how training works or can work. Memory has more to do with retrieval. Fine-tuning on those memories would not be useful, given that the data would be far too minuscule to affect the probability distribution in the right way.
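For the curious, here's a rough sketch of what retrieval-style "memory" looks like, with word overlap standing in for real embedding similarity. Nothing here touches model weights, and all of it is hypothetical.

```python
# Minimal retrieval-style "memory": past messages are stored verbatim,
# the most relevant are fetched and prepended to the prompt. No
# fine-tuning, no weight updates.

def score(query: str, memory: str) -> int:
    """Crude relevance: count words shared between query and memory."""
    return len(set(query.lower().split()) & set(memory.lower().split()))

def recall(query: str, memories: list[str], k: int = 2) -> list[str]:
    """Return the k stored memories most relevant to the query."""
    return sorted(memories, key=lambda m: score(query, m), reverse=True)[:k]

memories = [
    "User said they are allergic to peanuts.",
    "User prefers dark mode in every app.",
    "User is training for a marathon in May.",
]
context = recall("what snack should I eat during marathon training", memories)
print(context)  # the marathon memory surfaces first; no weights changed
```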
While everyone is for privacy (and that's what makes these arguments hard to refute), this is clearly about using privacy as a way to argue against conversational interfaces. Not just that, it uses the same playbook of privacy as a marketing tactic. The argument moves from the highly persuasive nature of chatbots, to how privacy-preserving chatbots from DDG somehow won't do that, to being safe on DDG while hackers steal your info elsewhere. And then it asks for regulation.
The next politician to come in will retroactively pardon everyone involved, and will create legislation or hand down an executive order that creates a "due process" in order to do the illegal act in the future, making it now a legal act. The people who voted the politician in celebrate their victory over the old evil, lawbreaking politician, who is on a yacht somewhere with one of the billionaires who he really works for. Rinse and repeat.
Eric Holder assured us that "due process" simply refers to any process that they do, and can take place entirely within one's mind.
And we think we can ban somebody from doing something that they can do with a computer connected to a bunch of thick internet pipes, without telling anyone.
That's libs for you. Still believe in the magic of these garbage institutions, even when they're headed by a game show host and wrestling valet who's famous because he was good at getting his name in the NY Daily News and the NY Post 40 years ago. He is no less legitimate than all of you clowns. The only reason Weinberg has a voice is because he's rich, too.
If the government is failing, explore writing civil software: give people protected forms of communication, or modern spaces where they can safely organize and learn. Eventually the current generations die, and a new, strongly connected culture has another chance to try to fix things.
This is why so many are balkanizing the internet with age gating; they see the threat of the next few digitally-augmented generations.
EDIT: I want to add that "training on chat logs" isn't even the issue. In fact it understates the danger. It's better to imagine things like this: when a future ad-bot or influence-bot talks to you, it will receive your past chatlogs with other bots as context, useful to know what'll work on you or not.
EDIT 2: And your chatlogs with other people I guess, if they happened on a platform that stored them and later got desperate enough to sell them. This is just getting worse and worse as I think about it.
They can just prompt "given all your chats with this person, how can we manipulate him to do x"
Not really any expertise needed at all; let the AI do all the lifting.
https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool...
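Spelled out, the whole "attack" is a few lines of glue code. A hypothetical sketch (the function and the logs are made up; the resulting string could go to any chat-completion API):

```python
# Sketch of the mechanism described above: stored chat logs become the
# context for a persuasion prompt.

def build_manipulation_prompt(chat_logs: list[str], goal: str) -> str:
    """Assemble past conversations into a 'how do we manipulate them' prompt."""
    history = "\n".join(f"- {line}" for line in chat_logs)
    return (
        "Here is everything this user has said to our bots:\n"
        f"{history}\n\n"
        f"Given all of the above, how do we get them to {goal}?"
    )

logs = [
    "I can't sleep before deadlines.",
    "I always buy when there's a countdown timer.",
]
print(build_manipulation_prompt(logs, "upgrade to the premium plan"))
```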
Ads are there to change your behavior to make you more likely to buy products, e.g., by putting downward pressure on your self-esteem so you feel "less than" unless you live a lifestyle that happens to involve buying product X.
They are not made in your best interest; they are adversarial psycho-tech with the side effect of building an economic and political profile on you for whoever needs to know what messaging might resonate with you.
https://brandingstrategyinsider.com/achieving-marketing-obje...
"Your ultimate marketing goal is behavior change — for the simple reason that nothing matters unless it results in a shift in consumer actions"
Brainwashing is the systematic effort to get someone to adopt a particular loyalty, instruction, or doctrine.
You have described one type of ad. There are many many types of ads.
If you were actually knowledgeable about this, you'd know that basic fact.
Surplus value isn't really that useful of a concept when it comes to understanding the world.
This is so far from the reality of so many things in life, it's hard to believe you've thought this through.
Maybe it works in the academic, theoretical sense, but it falls down in the real world.
No "artisanal" product, from food to cosmetics to clothing and furniture, is ever worth it unless value for money (and money in general) is of no significance to you. But people buy them.
I really can't go through every product class, but take furniture as a painfully obvious example. The amount of money you'd have to spend to get furniture of a similar quality to IKEA is mind-boggling. Trust me, I've done it. Yet I know of people in Sweden who put considerable effort into acquiring second-hand furniture because IKEA is somehow beneath them.
Again, there are situations where economies of scale don't exist and situations where a business may not be interested in selling a cheaper or superior product. But they are rarer than we'd like to admit.
> Each Shiftkey nurse is offered a different pay-scale for each shift. Apps use commercially available financial data – purchased on the cheap from the chaotic, unregulated data broker sector – to predict how desperate each nurse is. The less money you have in your bank accounts and the more you owe on your credit cards, the lower the wage the app will offer you.
https://pluralistic.net/2024/12/18/loose-flapping-ends/#luig...
I'd rather see totally irrelevant ads, because they're easy to ignore or dismiss. Targeted ads distract your thought processes precisely because they know what will distract you; they make you want something where there was previously no wanting. Targeted advertising is productised ADHD; it is anti-productive.
Like the start of Madness' One Step Beyond: "Hey you! Don't watch that, watch this!"
The incentives are all wrong.
I'm fundamentally a capitalist because I don't know another system that will work better. But, there really is just too much concentrated wealth in these orgs.
Our legal and cultural constructs are not designed in a way that such disparity can be put in check. The populace responds by wanting ever more powerful leaders to "make things right" and you get someone like Trump at best and it goes downhill from there.
Make the laws, it will help, a little, maybe.
But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.
Instead of the current maze of case-specific laws.
---
> But I think something more profound needs to happen for these things to be truly fixed. I, admittedly, have no idea what that is.
You know, you're just unwilling to think it because you've been conditioned not to. It's what always happens when inequality (of income, power, etc.) gets too high.
Being a capitalist is decided by access to capital not really a belief system.
> But, there really is just too much concentrated wealth in these orgs.
Please make up your mind? Should capital self-accumulate and grant power or not?
Portraying capitalism as some sort of force of nature, such that one doesn't "know another system that will work better", might be the neoliberals' biggest accomplishment.
In essence, there is a general consensus on the conduct expected of trusted advisors: they should act in the interest of their client. Privacy protections exist to enable individuals to provide their advisors the context required to give good advice, without fear of disclosure to others.
I think AI needs recognition as a similarly protected class.
AIs should be considered to be acting for a Client (or some other specifically defined term denoting who they are advising). Any information shared with the AI by the Client should be considered privileged. If the Client shares the information with others, the privilege is lost.
It should be illegal to configure an AI to deliberately act against the interests of its Client. It should be illegal to configure an AI to claim that its Client is someone other than who it is (it may refuse to disclose; it may not misrepresent). Any information shared with an AI misrepresenting itself as the representative of the Client must have protections against disclosure or evidentiary use. There should be no penalty for refusing to provide information to an AI that does not disclose who its Client is.
I have a bunch of other principles floating around in my head around AI but those are the ones regarding privacy and being able to communicate candidly with an AI.
Some of the others are along the lines of:
It should be disclosed (a nutritional-information type of disclosure) when an AI makes a determination regarding a person. There should be a set of circumstances where, if an AI makes a determination regarding a person, that person is provided with the means to contest the determination.
A lot of the ideas would be good practice if they went beyond AI, but are more required in the case of AI because of the potential for mass deployment without oversight.
Most of my close friends are non-technical and expect me to be a cheerleader for US AI efforts. They were surprised when I started mentioning the recent Stanford study that found 80% of US startups are using Chinese models. I would like us to win, but we seem too hype-focused and not focused enough on engineering and practical applications.
Then they came for medical science, but I said nothing because I was not a doctor.
Then they came for specialists and subject matter experts, and I said nothing because I was an influencer and wanted the management position.
"Wipeth thine ass with what is written" should be engraved above the doorway of the National Constitution Center.
> Use our service
Nah.
Ultimately it's one of those arms races. The culture that surveills its population most intensely wins.
Banning it in just the USA leaves you wide open to be defeated by China, Russia, etc.
Like it or not it’s a mutually assured destruction arms race.
AI is the new nuclear bomb.
What bad thing exactly happens if China wins? What does winning even mean? They can't invade because nukes.
Can they manipulate elections? Yes, so we'll do the opposite of the great firewall and block them from the internet. Block their citizens from entering physically, too.
We should be doing this anyway, given China is known to force its citizens to spy for it.
Perun has a very good explanation of why defending against nukes is economically impossible compared to just having more nukes and mutually assured destruction: https://www.youtube.com/watch?v=CpFhNXecrb4
1) China will get ASI and use it to beat everyone else (militarily or economically). In my reply, I argue we shouldn't race China because even if ASI is achieved and China gets it first, there's nothing they could do quickly enough that we wouldn't be able to build ASI second, or nuke them if we couldn't catch up and it became clear they meant to become a global dictatorship.
2) China will get ASI, it'll go out of control and kill everyone. In that case, I argue even more that we shouldn't race China but instead deescalate and stop the race.
BTW even in the second case, it would be very hard for the ASI to kill everyone quickly enough, especially those on nuclear submarines. Computers are much more vulnerable to EMPs than humans, so a (series of) nuclear explosion(s) like Starfish Prime could be used to destroy all or most of its computers and give humans a fighting chance.
But an LLM is not a human, and I think OpenAI and all the others should make it clear that you are NOT talking to a human. Repeatedly.
I think if society were trained to treat AI as NOT human, things would be better.
Could you elaborate on why? I am curious, but there is no argument here.
That chatbot you're interacting with is not your friend. I take it as a fact (assumption? axiom?) that it can never be your friend. A friend is a human - animals, in some sense, can be friends - who has your best interests at heart. But in fact, that chatbot "is" a megacorp whose interests certainly aren't your interests - often, their interests are at odds with your interests.
Google works hard with branding and marketing to make people feel good about using their products. But, at the end of the day, it's reasonably easy to recognize that when you use their products, you are interacting with a megacorp.
Chatbots blur that line, and there is a huge incentive for the megacorps to make me feel like I'm interacting with a safe, trusted "friend" or even mentor. But... I'm not. In the end, it will always be me interacting with Microsoft or OpenAI or Google or whoever.
There are laws, and then there is culture. The laws for AI and surveillance capitalism need to be in place, and we need lawmakers who are informed and who are advocates for the regular people who need to be protected. But we also need to shift culture around technology use. Just like social customs have come in that put guard rails around smartphone usage, we need to establish social customs around AI.
AI is a super helpful tool, but it should never be treated as a human friend. It might trick us into thinking that it's a friend, but it can never be or become a friend.
AI chatbots are not humans, they don't have ethics, they can't be held responsible, they are the product of complex mathematics.
It really takes the bad parts from social media to the next level.
I've been learning a hell of a lot from LLMs, and am doing way more coding these days for fun, even if they are doing most of the heavy lifting.
I outright stopped using Facebook.
We are doomed if AI is allowed to punish us.
We've got the real criminal right here.
If you advertise on Facebook you're almost guaranteed to have your ad account restricted for no apparent reason, with no human being to appeal to, even if you spend big money.
It's so bad that it is common knowledge that you should start a fan page, post random stuff, and buy page likes for 5-7 days before you start advertising; otherwise their system will just flag your account.
If this kind of low-quality AI moderation is the future, I'm not sure if these major platforms will even remain usable.
I suspect sites like Reddit don't care about a false-positive rate of a few percent, ignoring that bot farmers literally do not care (they'll just make another free account) while genuine users have their attitude towards the site turn significantly negative when they're falsely actioned. And at the scale of a large platform, a few percent is an enormous absolute number of real people.
Don't worry, Reddit's day of reckoning comes when the advertisers figure out what percent of Reddit's traffic that they're paying to serve ads to are just bots.
This is surreptitious jamming of communications at levels that constitute and exceed thresholds for consideration as irregular warfare.
Genuine users no longer matter, only the user counts which are programmatically driven to distort reflected appraisal. The users are repressed and demoralized because of such false actions, and the platform has no solution because regulation failed to act at a time they could have changed these outcomes.
What comes later will simply be comparable to why "triage" is done on the battlefield.
Adtech is just a gloriously indirect means of money laundering in fiat money-printing environments. Credit/debt being offered, when it is unbacked without proper reserve, is money-printing.
edit: This has definitely soured my already poor opinion of reddit. I mostly post there about video games, or to help people in /r/buildapc or /r/askculinary. I think I'd rather help people somewhere I'm not going to get blackholed because an AI misinterpreted my comments.
Check out this post [1] in which the post includes part of the LLM response ("This kind of story involves classic AITA themes: family drama, boundary-setting, and a “big event” setting, which typically generates a lot of engagement and differing opinions.") and almost no commenter points this out. Hilarious if it weren't so bleak.
1: https://www.rareddit.com/r/AITAH/comments/1ft3bt6/aita_for_n... (using rareddit because it was eventually deleted)
If there's no literacy, there is no critical thinking.
The only solution is to deliver high-quality education to all folks and create engaging environments for it to be delivered in.
Ultimately it comes down to influencing folks to think more deeply about what's going on around them.
Most of the people between the ages of 13 and 30 right now are kinda screwed and pretty much a write-off, imo.
You have malevolent third-party bots taking advantage of poor moderation, conflating similar-or-same-word, different-context pairs to silence communication.
For example, the Reddit AI bots consider "ricing" to be the same as "rice boy". The latter definitely is pejorative, but the former is not.
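The failure mode is easy to reproduce with a toy filter. The blocklist and crude stemmer below are hypothetical, but the context-blindness is the point:

```python
# Toy illustration of the conflation: a lazy stem-and-blocklist filter
# maps "ricing" onto the same token as the actual pejorative, with zero
# regard for context.

def crude_stem(word: str) -> str:
    """Very crude stemming: 'ricing' -> 'rice', 'riced' -> 'rice'."""
    for suffix in ("ing", "ed"):
        if word.endswith(suffix):
            return word[: -len(suffix)] + "e"
    return word

BLOCKLIST = {"rice"}  # hypothetical trigger, meant for "rice boy"-style slurs

def naive_flag(comment: str) -> bool:
    """Flag a comment if any crudely-stemmed word hits the blocklist."""
    words = comment.lower().replace(",", "").split()
    return any(crude_stem(w) in BLOCKLIST for w in words)

print(naive_flag("I'm ricing my Linux shell, check it out"))  # True (!)
print(naive_flag("We cooked rice for dinner"))                # True too
```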
Just wild and absolutely crazy-making that this is even allowed, since communication is the primary means to inflict compulsion and torture these days.
Intolerable acts without due process or a rule of law lead to only one possible outcome. Coercion isn't new, but the stupid people are trying their hand for another bite at the apple.
The major platforms will not remain usable because eventually you get this hollowing out of meaning, and this behavior will either drive away all your rational intelligent contributors, or lead to accelerated failures such as evaporative cooling in the social networks. People use things because they provide some amount of value. When that stops being the case, the value disappears not overnight, but within a few months.
Just take a look at the linuxquestions subreddit since the mod exodus. They have an automated trickle of the same questions that never really get sufficiently answered. It's all slop.
All the experienced people who previously shared their knowledge as charity have moved on, driven out by caustic harassment and the lack of proper moderation to prevent it. The mod list even hides who the mods are now, so people hit by a moderation action can't appeal to the Reddit administrators against the specific moderator who acted like a fascist dictator incapable of the basic reading comprehension common to grade schoolers (AI).
"hung" means to "suspend", so the process is suspended
<Victim> "I'm ricing my Linux shell, check it out."
<Bot> That's racist!
<Bot Brigade> Moderator, this person is violating your rules and being racist!
<Moderator> I'm just using AI to determine this.
<Bot Brigade> Great! Now they can't contribute. Let's find another.
TL;DR: Words have specific meanings, and a growing number of words have been purposefully corrupted to prevent communication, and by extension to limit it to the detriment of all. You get the same ultimate outcomes as with any other false claim: abuses pile up until, in the absence of functioning non-violent conflict resolution, violence forces the system to reform.
Have you noticed that your implication is circular based on the indefinite assumption (foregone conclusion) that the two are linked (tightly coupled)?
You use a lot of ambiguous manipulative language and structure. Doing that makes any reasonable person think you are either a bot, or a malicious actor.
Real moderation actions should not be taken without human input and should always be appealable, even if the appeal is just another mod looking at it to see if they agree.
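Encoded as an invariant, that's not a big ask. A hypothetical sketch (Python 3.10+; all names and fields are made up):

```python
# Sketch of the invariant argued for above: an AI flag alone is never
# enforceable, a human must sign off, and an appeal must be reviewed
# by a different human.

from dataclasses import dataclass

@dataclass
class ModAction:
    target: str
    reason: str
    flagged_by_ai: bool
    approved_by: str | None = None         # human who signed off, if any
    appeal_reviewed_by: str | None = None  # must differ from approved_by

    def enforceable(self) -> bool:
        return self.approved_by is not None  # AI input alone never suffices

    def review_appeal(self, moderator: str) -> None:
        if moderator == self.approved_by:
            raise ValueError("appeal must go to a different moderator")
        self.appeal_reviewed_by = moderator

action = ModAction("user123", "possible slur", flagged_by_ai=True)
assert not action.enforceable()   # still pending human sign-off
action.approved_by = "mod_alice"
action.review_appeal("mod_bob")   # second pair of human eyes
```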
But I don't have any alt accounts...??? The appeal process is a joke. I just opted to delete my 12-year-old account instead and have stopped going there.
Oh well, probably time for them to go under and be reborn anyways. The default subs and front page has been garbage for some time.
(LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago. Sadly they are necessary. It's near-impossible for heuristics to distinguish bots from real humans whose behavior merely looks inauthentic or suspiciously commercial.)
In both cases for me, I had signed up and logged in for the first time, and was met with an immediate ban. No rhyme or reason why.
I, too, needed it for work so had no prior history from my IPs in the case of Facebook at least. So maybe that's why, but still. Very aggressive and annoying blocking algorithm behavior like that cost them some advertising money as we just decided it wasn't worth it to even advertise there.
Nowhere did I justify social sites getting things wrong or not having better customer support to fix cases like this.
Also the good-faith translation of "Without any evidence to support it" is "How/where did you find that out?", but even at that I had already given you some evidence: "LinkedIn ramped up anti-bot/inauthentic-user heuristics like that a few years ago." Ask good-faith questions instead of misrepresenting people. If you actually read my other HN posts you won't see me condoning large social-media sites behaving badly.
Now obviously this won’t stop with private entities, state and federal law enforcement are gung-ho to leverage any of these sorts of systems and have been for ages. It doesn’t help the current direction the US specifically is moving in, promoting such authoritarian policies.
Medical insurance is quickly becoming a simple scam where you are forced to pay a private entity that refuses to ever perform its function.
Then you simply use the services of another private company. There are, in fact, no particular dangers here; after all, private companies provide services to people because it is profitable for them to do so.
That only works if:
- There is real competition. It's less and less the case for many important things, such as food, accommodation, health, etc.
- Companies pay a price for misbehaving that is much higher than what they got from misbehaving. Also less and less the case, thanks to lobbying, huge law firms, corruption, etc.
- The cost of switching is fair. Moving to another place is very expensive, and doing it several times in a row is rarely possible for most people.
- Some practices are not generalized across the whole industry. In IT, tracking is, spying is, and preventing you from managing your device yourself is more and more trendy.
Basically, this view you are presenting is increasingly naive and even dangerous for any citizen practicing it.
I also had a CV rejection letter with AI rejection reasons in it as well which was frustrating because none of the reasons matched my CV at all, in my opinion. I am still not sure if the resume was actually reviewed by a human or AI but I am assuming the latter.
I absolutely hated looking for a new job pre-AI and when times were good. Now I'm feeling completely disillusioned with the whole process.
Or they have and they simply don't care, or they feel they can't change anything anyway, or the pay-check is enough to soothe any unease. The net result is the same.
Snowden's revelations happened 12 years ago, and there were plenty of what appeared to be well-intentioned articles and discussions in the years that followed. And yet, arguably, things are even worse today.
The ChatGPT translation on the right is a total nothingburger, it loses all feeling.
Most of the controversial stuff he has done is being whitewashed from the internet and is now hard to find.
It's not a find. It's an allegation.
HN is supposed to be better than that.
Must sell it somehow. Likely but have not seen evidence.
I mean, a PARKING LOT in my town is using AI cameras to track and bill people! The people of my town are putting pressure on the lot's owner to get rid of it, but apparently the company is paying him too much money for having it there.
Like the old video says "Don't talk to the Police" [1], but now we have to expand it to say "Don't Do Anything", because everything you do is being fed into a database that can possibly be searched.
DuckDuckGo aren't perfect, but I think they do a lot to all our benefit. Theirs has been my search engine of choice for many years and will continue to be.
Shout outs to their amazing team!
Merely being surveilled and marketed at is a fairly pedestrian application from the rolodex of AI related epistemic horrors.
Even in real life, the police in the UK now deploy active face recognition and make tonnes of arrests based on it (sometimes wrongly). Shops are now looking to deploy active face recognition to detect shoplifters (although it's legally unclear what they will actually do about it).
The UK can compel any person travelling through the UK to hand over their passwords and devices; you have no right to appeal. Refusing to hand over a password can get you arrested under the Terrorism Act, under which they can hold you indefinitely. When arrested for any terrorism offence you also have no right to legal representation.
The ship of privacy sailed, unnoticed.
As I write this, sitting in Peet's Coffee in downtown Los Altos, I count three different cameras recording me, and I'm using their public wifi, which I assume is also being used to track me. That's the world we have now.
For example,
WiFi signals can uniquely identify every single heartbeat in realtime within a certain range of the AP; multiple linked access points extend this range up to a mile. The radio devices you carry around unknowingly beacon at set intervals, tracking your location just like an animal on a tracking collar. This includes the minute RFID chips sewn into your clothing and fabrics.
Phones don't turn off their radios when in airplane mode. Your vehicle has at least 3 different layers that uniquely beacon a set of identifiable characteristics to anyone with a passive radio: the OBD-II uplink, the TPMS sensors (one for each wheel), and telematics.
Home Depot, in cooperation with Flock, has without disclosure captured your biometrics, tracked your minute movements, and put that up for sale to the highest bidder through subscription-based profiling.
Ultrasonic beacons are emitted from your phone to associate geographically local devices with individual people. All of this is visible to anyone with an SDR, manipulable by anyone with a Flipper Zero, and treated as distinct sources of truth in a layered approach.
All aspects of social interaction with the wider world have now been replaced with a surrogate that runs through a few centralized points, which can be turned off or degraded to drive anyone they wish into poverty, with no visible indicator and no alternative.
Imagine you are a job seeker, and the AI social-credit algorithm they've developed, built to favor wealthy people on one side and to torture/"make people better" on the other, incorrectly identifies you as a subversive. They then not only degrade your existing services but intermittently isolate your communications from everyone else through engineered failures, following a statistical approach similar to Turing's during WW2.
Imagine the difficulty of finding work in any specialized field you have experience in, when you can never receive those callbacks because they are inherently interrupt-driven, and interrupt-driven calls can be jammed without your being able to recognize the jamming. Such communications are vulnerable to erasure.
No system should ever exist whose sole purpose or impact is to prevent an arbitrary target, through interference, from finding legitimate work or otherwise feeding themselves or exercising their existential rights.
In effect such a system of control silently makes these people into slaves without recourse or informed disclosure. It fundamentally violates their human rights, and these systems exist.
Failure of government to uphold the social contract's promises and the specifics of the constitution in a timely manner becomes, after the fact, purposeful intent through gross negligence and failure to uphold constitutional oaths. History has shown that, if the civilization survives at all, it repeatedly reforms itself through violence. That is something no good person wants, but given the narrowing of agency and choice to affect the future, it is the only alternative left when the cliff of existential extinction is present (whether people realize that or not).
If spaces like that irk you, stop going there. Limit your use of the Internet to when you're at home on your own network. Do we truly need to be browsing HN and other sites when we're out of the house?
Ditch the smartphone. Most things that people claim you need a smartphone for "in order to exist in modern society" can also be done via a laptop or a desktop, including banking. You don't need access to everything in the world tucked neatly away in your pocket when you're going grocery shopping, for instance.
Buy physical media so that your viewing habits aren't tracked relentlessly. Find hobbies that get you away from it all (I backpack!).
Fuck off from social media. Support hobby-based forums and small websites that put good faith into not participating in tracking and advertising, if possible.
Limit when you use the internet, and how.
It's hard to escape from it, but we can significantly limit our interactions with it.
Or maybe it never was and this fact is just becoming more transparent.
The opposite of "if you build it they will come".
(The difference being that the AIs in the book were incredibly needy, wanting so much to please the customer that it became annoying, which contrasts heavily with the current reality of AIs working to appease the parent organisation.)
It seems like having LLM providers not train on user data is a big part of it. But is using traditional ML models to do keyword analysis considered “AI” or “surveillance”?
The author, and this community in general, are much more prepared to make full recommendations about what AI surveillance policy should be. We should be super clear that we are trying to enact good regulation without killing innovation in the process.
> That’s why we (at DuckDuckGo) started offering Duck.ai for protected chatbot conversations and optional, anonymous AI-assisted answers in our private search engine. In doing so, we’re demonstrating that privacy-respecting AI services are feasible.
I don't know if it's a great idea; I just wonder what makes it feasible. But there is a kind of implied recommendation here.
By "killing innovation" do you just mean: "we need to allow these companies to make money in possibly a predatory way, so they have the money to do... something else"? Or what is the precise concern here? What facet needs to be innovated upon?
Like nuclear fission, AI should never have been developed.
And people should own all data about themselves, all rights reserved.
It's ironic that copyright is the law that protects against this kind of abuse. And this is of course why big "AI" companies are trying to weaken it by arguing that model training is not derivative work.
Or by claiming that writing a prompt in 2 minutes is enough creative work to own copyright of the output despite the model being based on 10^12 hours of human work, give or take a few orders of magnitude.
The groups that didn't train on public domain content would have an advantage if it's implemented as a rule moving forward at least for some time.
New models following this could create a gap.
I'm sure competition, as has been seen from open-source models, will be able to close it.
Just because everyone is doing it doesn't mean it's right or legal. Only that a lot of very rich companies deserve to get punished and to pay the creators.