I wonder when/if the opposite will be as much of an article hook:
"Imagine applying for a job, only to find out that a human rejected your resume before an algorithm powered by artificial intelligence (AI) even saw it. Or imagine visiting a doctor where treatment options are chosen by a human you can’t question."
The implicit assumption is that it's preferred that humans do the work. In the first case, probably most would assume an AI is... ruthless? biased? Both exist for humans too. Not that the current state of AI resume processing is necessarily "good".
In the second, I don't understand, as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.
I wish that were the case, but in my experience it is not. Every time I've seen a doctor, they offered only one medication, unless I requested a different one.
I've had a few doctors offer me alternatives and talk through the options, which I'll agree is rare. It sure has been nice when it happened. One time I did push back on one of the doctor's recommendations: I was with my mom and the doctor said he was going to prescribe some medication. I said "I presume you're already aware of this, but she's been on that before and reacted poorly to it, and we took her off it because of that." The doctor was NOT aware of that and prescribed something else. I sure was glad to be there and be able to catch that.
First is that the side effect profile of one option is much better known or tolerated, so the doctor will default to it.
Second is that the doctor knows the insurance company / government plan will require attempting to treat a condition with a standard cheaper treatment before they will pay for the newer, more expensive option.
There's always the third case where the doctor is overworked, lazy, or prideful and doesn't consider that the patient may have some input on which treatment they would like, since they didn't go to medical school and what would they know anyway?
All problems are human and nothing will ever change that. Just imagine the effect on anyone caught up in something like the British Post Office scandal[0], only this time it's impossible to comprehend any faults in the software system.
[0]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
When GenAI interfaces are rolled out as chat products to end users, they evaporate this last responsibility that remains with any human employee. This responsibility shift from employee to end user is made on purpose: the "worse responsibility issues" are real, and they are designed to land on the customer side.
If you mean that a human can be fired when they overlook a resume, an AI system can be similarly rejected and no longer used.
A person can be held responsible, even when it's indirect responsibility, in a way that serves as a warning to others, to avoid certain behaviors.
It just seems wrong to allow machines to make decisions affecting humans, when those machines are incapable of experiencing the world as a human being does. And yet, people are eager to offload the responsibility onto machines, to escape responsibility themselves.
On the other hand, "firing" an AI from AI-based HR department will likely paralyze it completely, so it's closer to "let's fire every single low-level HR person at once" - something very unlikely to occur.
The same goes with all other applications too: firing a single nurse is relativel easy. Replacing AI system with a new one is a major project which likely takes dozens of people and millions of dollars.
If it is built internally, you need people responsible for creating reliable tests and someone to lead the project. In a way it's not very different from your external system being bad or crashing. You need accountability in the team. Google can't fire "Google Ads", but that doesn't mean they can't expect Google Ads to reliably make them money and expect people to be responsible for maintaining its quality.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Please don't fulminate. Please don't sneer, including at the rest of the community.
Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
That really struck a chord with me. I've been struggling with chronic sinusitis and treating it without much success. I had ChatGPT o3 do a deep research run on my specific symptoms and test results, including a negative allergy test (done on my shoulder) even though the doctor observed allergic reactions in my sinuses.
ChatGPT seemed to do a great job, and in particular it came up with a pointer to an NIH reference reporting that 25% of patients in a study had "local rhinitis" (isolated allergic reactions) in their sinuses that didn't show up elsewhere. I asked my ENT if I could be experiencing a local reaction in my sinuses that didn't show up in my shoulder, and he completely dismissed the idea with "That's not how allergies work; they cause a reaction all over the body."
However, I will say that I've been taking one of the second-gen allergy meds for the last 2 weeks and the sinus issues have resolved and stayed resolved, but I do need another couple of months to really have a good data point.
The funny thing is that this Dr is an evening programmer, and every time I see him we end up talking about how amazing the different LLMs are for programming. He also really seems to keep up with new ENT tech: he was telling me all about a new "KPAP" algorithm they are working on FDA approval for, which is apparently much less annoying to use than CPAP. But he didn't have any interest in looking at the NIH reference.
You need another couple months to really have a good anecdote.
The point being that there's a lot that LLMs can do in concert with physicians; discounting either one is not useful or interesting.
The article says this like it's a new problem. Automated resume screening is a long established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much. Although, it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.
It's not like companies take responsibility for such automated systems today. I think they're used partly for liability cya anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too of course, but it's a lot harder to show intent, which can affect the damages awarded I think. Of course IANAL, so this could be entirely wrong. Interesting to think about though.
More frightening, I think, is the potential for it to make decisions on insurance claims and medical care.
So many stupid comments about AI boil down to "humans are incredibly good at X, we can't risk having AI do it". Humans are bad at all manner of things. There are all kinds of bad human decisions being made in insurance, health care, construction, investing, everywhere. It's one big joke to suggest we are good at all this stuff.
What is needed from the AI is a trace, a line of reasoning showing how a decision was derived. Like a court judgement, which has explanations attached. This should be available (or be made part of the decision documentation).
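As a minimal sketch (the field names and values here are hypothetical, not any real system's format), such decision documentation could be as simple as a structured record emitted alongside every automated decision:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit record attached to an automated decision."""
    decision: str            # e.g. "claim_denied"
    reasons: list[str]       # plain-language justifications, like a judgement's explanation
    evidence: list[str]      # pointers to the documents/data the decision relied on
    model_version: str       # which model/prompt configuration produced it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example record that could be stored and handed to a reviewer on appeal.
record = DecisionRecord(
    decision="claim_denied",
    reasons=["Policy excludes pre-existing condition X"],
    evidence=["claim #1234, page 2", "policy section 4.1"],
    model_version="claims-model-2025-03",
)
print(record)
```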
I think the safest approach would be for a reviewer in an appeal process to have no access at all to the AI's decision or reasoning, since if the incorrect decision was based on hallucinated information, a reviewer might be biased into thinking it's true even though it was imagined.
This would forbid things like spam filters.
Do you have a source for your somewhat unbelievable claim?
It's pretty standard practice for there to be a gradient of anti-spam enforcement. The messages the scoring engine thinks are certainly spam don't reach end users. If the scoring engine thinks it's not spam, it gets through. The middle range is what ends up in spam folders.
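A toy sketch of that tiered routing (the thresholds here are made up for illustration; real systems tune them per deployment):

```python
def route_message(spam_score: float) -> str:
    """Route a message based on a spam score between 0 and 1."""
    if spam_score >= 0.95:
        return "reject"       # near-certain spam: dropped before the user ever sees it
    if spam_score >= 0.50:
        return "spam_folder"  # uncertain middle range: delivered, but flagged
    return "inbox"            # likely legitimate mail

# A middling score lands in the spam folder rather than being rejected outright.
print(route_message(0.7))  # -> spam_folder
```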
Most spam is so low-effort that the spam rules route it directly to /dev/null. I want to say the numbers are like 90% of spam doesn't even make it past that point, but I'm mostly culling this from recollections of various threads where email admins talk about spam filtering.
> by 2014, it comprised around 90% of all global email traffic
It is available to the entity running inference. Every single token and all the probabilities of every choice. But not every decision made at every company is subject to a court hearing. So no, that's also a silly idea.
Even if you were right, AI doesn't change any of it - companies are liable.
Litigation is mainly a form of sport available to and enjoyed by the rich. And I mean serious litigation, like taking on some corporation with deep pockets; not pick-on-someone-your-own-size litigation, as when a neighbor cuts down a tree that falls onto your toolshed.
Second, have you ever looked at that space? Because the agencies that did this were already weak, the hurdles you had to overcome were massive, and the space was abused by companies to the maximum.
I mean, imagine that making an insurance claim with a black-sounding name results in a 5% greater chance of being rejected. How would we even know if this is the case? And, how do we prevent that?
Now, of course humans are biased too, but there's no guarantee that the biases of humans are the same as the biases of whatever AI model is chosen. And with humans we can hold them accountable to some degree. We need that accountability with AI agents.
AI will perform tirelessly and consistently at maximizing rejections. It will leave no stone unturned in its search for justifications why a claim ought to be denied.
If they over-approve, they will be unprofitable because their premiums aren't high enough. If they under-approve, their customers will go elsewhere.
It’s just that A) I didn’t choose this insurer, my employer did, and on balance the total package isn’t such that I want a new employer, and B) I expect pretty much all my available insurance companies to be unreasonable.
Secondly, the reasonability or unreasonability of payouts is linked to premiums.
In other words, one way that the parsimonious insurer would still have customers is that they offer low premiums compared to the liberal insurers.
Even people who know about the bad anecdotes from reading online reviews will brush that aside for the better deal. (Hey, reviews are biased toward negativity and miss the other side of the story; chances are that wouldn't happen to me.)
The free market doesn't optimize for quality. Firstly, it optimizes for the lowest price for a given level of quality. But the price optimization has a second-order effect of a downward pressure on quality.
If you're selling something and the margin is already optimized (the product is about as cheap as it can be), what you can do is reduce quality by some epsilon and make a corresponding decrease in price. It still looks like about the same quality to anyone not using a magnifying glass and a fine-toothed comb, and you gain a temporary price edge against competitors. That triggers a kind of "gradient descent" of declining quality, which bottoms out at the minimum level of quality below which viability erodes because the market no longer finds the thing acceptable.
If it's health insurance, it's not a free market. You don't have a choice. It's employer provided, so suck it up buttercup.
It's just socialized medicine but implemented in the private sector and, like, 100x more shit.
UHC, one of the largest insurers in the US, has a claim denial rate somewhere in the 30-percent range, if I remember correctly. Well... that sucks.
Quote from an IBM training manual from 1979
Seems just as true and even more relevant today than it was back then
The computer allows the humans a lot of leeway. For one thing, the computer represents a large diffusion of responsibility. Do you hold the programmers responsible? The managers who told them to build the software? How about the hardware manufacturers or the IT people who built or installed the system? Maybe the Executives?
What if the program was built with all good intentions and just has a critical exploit that someone abused?
It's just not so straightforward to hold the humans accountable when there are so many humans who touch any piece of commercial software.
For example, if American Airlines uses a computer to help decide who gets refunds, they can't then blame the computer when it discriminates against group X, or steals money, because it was their own "business decision" that is responsible for that action (with the assist from a stupid computer they chose to use).
This is different from when their landing gear doesn't go down because of a software flaw in some component. They didn't produce the component and they didn't choose to delegate their "business decisions" to it, so as long as they used an approved vendor etc. they should be OK. Choosing the vendor, the maintenance schedules, etc.: those are the "business decisions" they're responsible for.
If American Airlines uses a computer to automatically decline refunds, which human(s) do we hold accountable for these decisions?
The engineers who built the system?
The product people who designed the system, providing the business rules that the engineers followed?
The executives who oversaw the whole thing?
Sometimes there is one person you can pin the blame on, who was responsible for "going rogue" and building some discrimination into the system.
Often it is a failure of a large part of the business. Responsibility is diffused enough that no one is accountable, and essentially we do in fact "blame the computer".
Personally I'd be satisfied holding the company as a whole liable rather than a single person.
All that does is create a situation where decision makers at companies can make the company behave unethically or even illegally and suffer no repercussions for this. They might not even still be at the company when the consequences are finally felt.
It means that the company is sued and is responsible for damages.
> decision makers at companies can make the company behave unethically or even illegally and suffer no repercussions for this
But now you've just argued yourself back to the "which human(s) do we hold accountable for these decisions?" question you raised that I was trying to get you out of.
I've also vaguely heard of a large company that provides just this as a service -- basically a factory where insurance claims are processed by humans here in VN, in one of the less affluent regions. I recall they had some minor problems with staffing as it's not a particularly pleasant job (it's very boring). On the other hand, the region has few employment opportunities, so perhaps it's good for some people too.
I'm not sure which country this last one is processing forms for. It may, or may not be the USA.
I don't really have an opinion to offer -- I just thought you might find that interesting.
I suspect though there might be something different today in terms of scale. Bigger corporations perhaps did some kind of screening (I am not aware of it though — at Apple I was asked to personally submit resumés for people I knew that were looking for engineering jobs — perhaps there was automation in other parts of the company). I doubt the restaurants around Omaha were doing any automation on screening resumés. That probably just got a lot easier with the pervasiveness of LLMs.
I personally think it would be easier to show intent, or at least willful negligence, with an algorithm; it would also magnify the harm. An employee might only make mistakes on occasion, but an algorithm will make them every single time. The benefit of an algorithm is that it does not need to be reminded to do or not to do something, and its actions are easier to interrogate than a human's.
I appreciate the frustration that, if not quite yet, it’ll be near impossible to live a normal life without exposure to GenAI systems. Of course, as others say here, and as the date on the Onion piece shows, it’s sadly not a new concern.
AI doesn’t arrive like a storm. It seeps in, feature by feature, until we no longer notice we’ve stopped choosing. And that’s why the freedom to opt out matters — not because we always want to use it, but because knowing we can is part of what keeps us human.
I don’t fear AI. But I do fear a world where silence is interpreted as consent, and presence means surrender by default.
Silence is indeed consent (to the status quo). You need to vote with your wallet, personal choices and such - if you want to be comfortable, choosing the status quo is the way, and thus consent.
There's no possibility of a world where you get to remain comfortable but still get to make a "choice" contrary to the status quo.
https://samzdat.com/2017/06/01/the-meridian-of-her-greatness...
(Note how, depending on how one (mis)reads what you wrote, this is human nature, there is no escaping it, and you would likely be miserable if you tried.)
But still, I think there’s something in refusing to forget. Not to win — but to remember that not everything was agreed to in silence.
Maybe noticing isn’t power. But maybe it’s the thing that keeps us from surrendering to the machinery entirely.
On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!
This tradeoff has basically nothing to do with recent advances in AI though.
Also, with the current performance trends in LLMs, we seem very close to being able to run models locally. That’ll blow up a lot of the most abusive business models in this space.
On a related note, if AI decreases the number of mistakes my doctor makes, that seems like a win to me.
If the AI then sold my medical file (or used it in some other revenue generating way), that’d be unethical and wrong.
Current health care systems already do that without permission and it’s legal. Fix that problem instead.
There's a difference between reading something and ripping it off, no matter how you launder it.
Because those people seem to think that individuals building on it will be too small in scale to be commercially profitable (and thus the publisher is OK to have those uses be a form of social credit/portfolio building).
As soon as it is made clear that these published data can be monetized (if only by large corporations with money), they want a piece of the pie that they think they deserve (and are not getting).
Let’s untangle this.
1. Humanity’s achievements are achievements by individuals, who are motivated by desires like recognition, wealth, personal security, altruism, self-actualisation.
2. “AI” does not build on that body of work. A chatbot has no free will or agency; there is no concept of “allowing” it to do something—there is engineering it and operating a tool, and both are done by humans.
3. Some humans today engineer and operate tools, for which (at least in the most prominent and widely-used cases) they generally charge money, yet which essentially proxy the above-mentioned original work by other humans.
4. Those humans engineer and operate said tools without asking the people who created said work, in a way that does not benefit or acknowledge them, thus robbing people of many of the motivations mentioned in point 1, and arguably in circumvention of legal IP protections that exist in order to encourage said work.
There is something valuable to others, that I neither built nor designed, but because I might have touched it once and left a paw print, I feel hurt no one wants to pay me rent for the valuable thing, and because of that, I want to destroy it so no one can have it.
Point 2) operates on a spectrum; there are plenty of cases where human work has no agency or free will behind it. In fact, it's very common in industrialized societies.
RE 3), "engineers" and "operators" are distinct; "engineers" make money because they provide something of immense value - something that exists only because of collective result of 1), but any individual contribution to it is of no importance. The value comes from the amount and diversity and how it all is processed. "Operators" usually pay "engineers" for access, and then they may or may not use it to provide some value to others, or themselves.
In the most basic case, "engineers" are OpenAI, and "operators" are everyone using ChatGPT app (both free and paid tiers).
RE 4) That's the sense of entitlement right there. Motivations from point 1. have already been satisfied; the value delivered by GenAI is a new thing, a form of reprocessing to access a new kind of value that was not possible to extract before, and that is not accessible to any individual creator, because (again) it comes from sheer bulk and diversity of works, not from any individual one.
IMO, individual creators have a point about AI competing with them for their jobs. But that's an argument against deployment and about what the "operators" do with it; it's not an argument against training.
> human work has no agency or free will behind it
There is one case where it is sort of true, but crucially 1) agency still exists, it is just restricted, and 2) it is called “slavery”. I felt like your comment not only equates a human being with freedom/agency (whether restricted or not) to a software tool, it also equates “I have to do my job because it pays money” with having brutally been robbed of freedom, which really underplays how bad the latter is.
> something that exists only because of collective result of 1), but any individual contribution to it is of no importance
Collective result, which consists of individual works. That argument appears to be “if we steal enough, then it is okay”.
> Motivations from point 1. have already been satisfied
We exist over time, not only in the past. A swath of motivations for doing any original work is going away for upcoming original work on which chatbots etc. are built.
> that is not accessible to any individual creator, because (again) it comes from sheer bulk and diversity of works, not from any individual one
Yes, humans can be inspired and build upon a huge diversity of works, like what has been happening for as long as humanity existed (you may have heard the phrase “everything is a remix”). If you talk to me and I previously read Kant then who knows, whatever you create may have been inspired by Kant.
Libraries have done a great job at amplifying that ability. Search engines put massive datasets at your fingertips while maintaining attribution (you are always directed to the source) and the connection between authors and readers, and sometimes even offering ways of earning money (of course, profiting off it as well; the beauty of capitalism). I am sure there are plenty of other examples of ML models in various fields that achieved great results yet were trained on appropriately licensed work. In other words, none of this justifies theft.
Yes, but that argument cuts both ways. There is a difference, and it's not clear that training is "ripping off".
> This tradeoff has basically nothing to do with recent advances in AI though.
I am surprised someone on HN would think this, especially considering the recent examples of DDoS via LLM crawlers compared to how websites are glad to be crawled by search engines.
For the first part: why do you think robots.txt even exists? Or why, say, YouTube constantly tries (and fails) to prevent you from using the 'wrong' kind of software to download their videos?
Without that the whole thing is just noise
You can't simply look at an LLM's code and determine if, for example, it has racial biases. This is very similar to a human. You can't look inside someone's brain to see if they're racist. You can only respond to what they do.
If a human does something unethical or criminal, companies take steps to counter that behaviour which may include removing the human from their position. If an AI is found to be doing something wrong, one company might choose to patch it or replace it with something else, but will other companies do the same? Will they even be alerted to the problem? One human can only do so much harm. The harm a faulty AI can do potentially scales to the size of their install base.
Perhaps, in this sense, AIs need to be treated like humans while accounting for scale. If an AI does something unethical or criminal, it should be "recalled", i.e. taken off the job everywhere until it can be demonstrated the behaviour has been corrected. It is not acceptable for a company, when alerted to a problem with an AI they're using, to say, "Well, it hasn't done anything wrong here yet."
Rather, why would LLMs be treated any differently from other machines? Mass recall of flawed machines (if dangerous enough) is common after all.
Our intellectual property, privacy, and consumer protection laws were all tested by LLM tech, and they failed the test. Same as with social media — with proof it has caused genocides and suicides, and common sense saying it’s responsible for an epidemic of anxiety and depression, we have failed to stop its unethical advance.
The only winning move is to not play the game and go offline. Hope you weren’t looking to date, socialize, bank, get a ride, order food at restaurants, and do other things, because that has all moved online and is behind a cookie warning saying “We Care About Your Privacy” and listing 1899 ad partners the service will report your behavioral measurements to for future behavior manipulation. Don’t worry, it’s “legitimate interest”. Then it will send an email to your inbox that will do the same, and it will have a tracking pixel so a mailing list company can get a piece of that action.
We are debating what parts of the torment nexus should or shouldn’t be allowed, while being tormented from every direction. It’s actually getting very ridiculous how too little too late it is. But I don’t think humanity has a spine to say enough is enough. There are large parts of humanity that like and justify their own abuse, too. They would kiss the ground their abusers walk on.
It is the end stage of corporate neo-liberalism. Something that could have worked out very well in theory if we didn’t become mindless fanatics[0] of it. Maybe with a little bit more hustle we can seed, scale and monetize ethics and morals. Then with a great IPO and an AI-first strategy, we could grow golden virtue retention in the short and long-run…
The UK government is trying to make it legal, presumably out of concern over staying competitive in this rapidly growing space.
Baroness Kidron, mentioned in this story, is the leading figure in UK parliament who is pushing back against this.
lacker•2d ago
I imagine the author would respond, "That's not what I mean!" Well, they should figure out what they actually mean.
leereeves•2d ago
"Opting out of AI is no simple matter.
AI powers essential systems such as healthcare, transport and finance.
It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online."
codr7•1d ago
The big issue is forcing AI down everyone's throat without being the least concerned about their experience.
lacker•1d ago
How could there be a system that lets you opt out, but keep sending email? Obviously all the spammers would love to opt out of spam filtering, if they could.
The system just fundamentally does not work without AI. To opt out of AI, you will have to stop sending email. And using credit cards. And doing Google searches. Etc etc etc...
jedbrown•1d ago
> I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. https://ali-alkhatib.com/blog/defining-ai
TeMPOraL•1d ago
AI in the public mind comes from science fiction, and it means the same thing it meant for the past 5+ decades: a machine that presents recognizable characteristics of a thinking person - some story-specific combination of being as smart (or much smarter) than people in a broad (if limited) set of domains and activities, and having the ability (or at least giving impression of it) to autonomously set goals based on its own value system.
That is the "AI" general population experiences - a sci-fi trope, not tech industry marketing.
drivingmenuts•1d ago
I'm not even sure what form that proof would take. I do know that I can tolerate non-deterministic behavior from a human, but having computers demonstrate non-deterministic behavior is, to me, a violation of the purpose for which we build computers.
simonw•1d ago
Did you prefer Google search results ten years ago? Those were still using all manner of machine learning algorithms, which is what we used to call "AI".
simonw•1d ago
The author of this piece made no attempt at all to define what "AI" they were talking about here, which I think was irresponsible of them.