I wonder when/if the opposite will be as much of an article hook:
"Imagine applying for a job, only to find out that a human rejected your resume before an algorithm powered by artificial intelligence (AI) even saw it. Or imagine visiting a doctor where treatment options are chosen by a human you can’t question."
The implicit assumption is that it's preferred that humans do the work. In the first case, probably most would assume an AI is... ruthless? biased? Both exist for humans too. Not that the current state of AI resume processing is necessarily "good".
In the second, I don't understand as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.
I wish that were the case, but in my experience it is not. Every time I've seen a doctor, they offered only one medication, unless I requested a different one.
I've had a few doctors offer me alternatives and talk through the options, which I'll agree is rare. It sure has been nice when it happened. One time I did push back on one of the doctor's recommendations: I was with my mom and the doctor said he was going to prescribe some medication. I said, "I presume you're already aware of this, but she's been on that before and reacted poorly to it, and we took her off it because of that." The doctor was NOT aware of that and prescribed something else. I sure was glad to be there and able to catch that.
First is that the side effect profile of one option is much better known or tolerated, so the doctor will default to it.
Second is that the doctor knows the insurance company / government plan will require attempting to treat a condition with a standard cheaper treatment before they will pay for the newer, more expensive option.
There's always the third case where the doctor is overworked, lazy or prideful and doesn't consider the patient may have some input on which treatment they would like, since they didn't go to medical school and what would they know anyway?
All problems are human and nothing will ever change that. Just imagine what anyone faces when affected by something like the British Post Office scandal[0], only this time it's impossible to comprehend any faults in the software system.
[0]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
When GenAI interfaces are rolled out as chat products to end users, they evaporate this last responsibility that remains with any human employee. This responsibility shift from employee to end user is made on purpose: "worse responsibility issues" are real and are designed to land on the customer side.
If you mean that a human can be fired when they overlook a resume, an AI system can be similarly rejected and no longer used.
A person can be held responsible, even when it's indirect responsibility, in a way that serves as a warning to others, to avoid certain behaviors.
It just seems wrong to allow machines to make decisions affecting humans, when those machines are incapable of experiencing the world as a human being does. And yet, people are eager to offload the responsibility onto machines, to escape responsibility themselves.
On the other hand, "firing" an AI from an AI-based HR department will likely paralyze it completely, so it's closer to "let's fire every single low-level HR person at once" - something very unlikely to occur.
The same goes for all other applications too: firing a single nurse is relatively easy. Replacing an AI system with a new one is a major project which likely takes dozens of people and millions of dollars.
If it is built internally, you need people responsible for creating reliable tests and someone to lead the project. In a way it's not very different from an external system being bad or crashing: you need accountability in the team. Google can't fire "Google Ads", but that doesn't mean they can't expect Google Ads to reliably make them money and expect people to be responsible for maintaining its quality.
That really struck a chord with me. I've been struggling with chronic sinusitis, without really much success. I had ChatGPT o3 do a deep research on my specific symptoms and test results, including a negative allergy test (on my shoulder) and the fact that the doctor had observed allergic reactions in my sinuses.
ChatGPT seemed to do a great job, and in particular came up with a pointer to an NIH reference that showed 25% of patients in a study showed "local rhinitis" (isolated allergic reactions) in their sinuses that didn't show elsewhere. I asked my ENT if I could be experiencing a local reaction in my sinuses that didn't show up in my shoulder, and he completely dismissed that idea with "That's not how allergies work, they cause a reaction all over the body."
However, I will say that I've been taking one of the second gen allergy meds for the last 2 weeks and the sinus issues have been resolved and staying resolved, but I do need another couple months to really have a good data point.
The funny thing is that this Dr is an evening programmer, and every time I see him we are talking about how amazing the different LLMs are for programming. He also really seems to keep up with new ENT tech; he was telling me all about a new "KPAP" algorithm that they are working on FDA approval for, which apparently is much less annoying to use than CPAP. But he didn't have any interest in looking at the NIH reference.
You need another couple months to really have a good anecdote.
The point being that there's a lot that the LLMs can do in concert with physicians; discounting either one is not useful or interesting.
The article says this like it's a new problem. Automated resume screening is a long established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much. Although, it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.
It's not like companies take responsibility for such automated systems today. I think they're used partly for liability cya anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too of course, but it's a lot harder to show intent, which can affect the damages awarded I think. Of course IANAL, so this could be entirely wrong. Interesting to think about though.
I think, more frighteningly, the potential for it to make decisions on insurance claims and medical care.
So many stupid comments about AI boil down to "humans are incredibly good at X, we can't risk having AI do it". Humans are bad at all manner of things. There are all kinds of bad human decisions being made in insurance, health care, construction, investing, everywhere. It's one big joke to suggest we are good at all this stuff.
What is needed from the AI is a trace/line of reasoning by which a decision is derived. Like a court judgement, which has explanations attached. This should be available (or be made part of the decision documentation).
I think the safest would be for a reviewer in an appeal process to not even have access to any of the AI's decision or reasoning, since if the incorrect decision was based on hallucinated information, a reviewer might be biased to think it's true even if it was imagined.
This would forbid things like spam filters.
Do you have a source for your somewhat unbelievable claim?
It's pretty standard practice for there to be a gradient of anti-spam enforcement. The messages the scoring engine thinks are certainly spam don't reach end users. If the scoring engine thinks it's not spam, it gets through. The middle range is what ends up in spam folders.
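A minimal sketch of that kind of tiered routing, assuming made-up thresholds and names (not taken from any particular mail provider):

    # Illustrative sketch of tiered spam routing: a scoring engine assigns each
    # message a spam probability, and thresholds decide where it ends up.
    # The threshold values here are invented for illustration only.

    REJECT_THRESHOLD = 0.95       # "certainly spam": never delivered to the user
    SPAM_FOLDER_THRESHOLD = 0.50  # middle range: delivered, but into the spam folder

    def route_message(spam_score: float) -> str:
        """Route a message based on its spam score in [0.0, 1.0]."""
        if spam_score >= REJECT_THRESHOLD:
            return "reject"       # dropped or bounced; the end user never sees it
        if spam_score >= SPAM_FOLDER_THRESHOLD:
            return "spam_folder"  # the user can still review and rescue it
        return "inbox"            # treated as legitimate mail

    if __name__ == "__main__":
        for score in (0.99, 0.7, 0.1):
            print(f"{score:.2f} -> {route_message(score)}")

The point is that only the middle band is ever visible to the recipient; the top band is exactly the "opt out and you never see it" case being discussed.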
It is available to the entity running inference. Every single token and all the probabilities of every choice. But every decision made at every company isn't subject to a court hearing. So no, that's also a silly idea.
Even if you were right, AI doesn't change any of it - companies are liable.
Second, have you ever looked at that space? Because the agencies that did this were already weak, the hurdles you had to overcome were massive, and the space was abused by companies to the maximum.
AI will perform tirelessly and consistently at maximizing rejections. It will leave no stone unturned in search for justifications why a claim ought to be denied.
If they over-approve they will be unprofitable because their premiums aren't high enough. If they under-approve it'll be because their customers go elsewhere.
It’s just that A) I didn’t choose this insurer, my employer did and on balance the total package isn’t such that I want a new employer and B) I expect pretty much all my available insurance companies to be unreasonable.
I've also vaguely heard of a large company that provides just this as a service -- basically a factory where insurance claims are processed by humans here in VN, in one of the less affluent regions. I recall they had some minor problems with staffing as it's not a particularly pleasant job (it's very boring). On the other hand, the region has few employment opportunities, so perhaps it's good for some people too.
I'm not sure which country this last one is processing forms for. It may, or may not, be the USA.
I don't really have an opinion to offer -- I just thought you might find that interesting.
I suspect though there might be something different today in terms of scale. Bigger corporations perhaps did some kind of screening (I am not aware of it though — at Apple I was asked to personally submit resumés for people I knew that were looking for engineering jobs — perhaps there was automation in other parts of the company). I doubt the restaurants around Omaha were doing any automation on screening resumés. That probably just got a lot easier with the pervasiveness of LLMs.
I appreciate the frustration that, if not quite yet, it’ll be near impossible to live a normal life without having exposure to GenAI systems. Of course, as others say here, and as the date on the Onion piece shows, it’s sadly not a new concern.
AI doesn’t arrive like a storm. It seeps in, feature by feature, until we no longer notice we’ve stopped choosing. And that’s why the freedom to opt out matters — not because we always want to use it, but because knowing we can is part of what keeps us human.
I don’t fear AI. But I do fear a world where silence is interpreted as consent, and presence means surrender by default.
Silence is indeed consent (to the status quo). You need to vote with your wallet, personal choice and such - if you want to be comfortable, choosing the status quo is the way, and thus consent.
There's no possibility of a world where you get to remain comfortable, but still get to dictate a choice contrary to the status quo.
https://samzdat.com/2017/06/01/the-meridian-of-her-greatness...
(Note how, depending on how one (mis)reads what you wrote, this is human nature, there is no escaping it, and you would likely be miserable if you tried.)
On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!
This tradeoff has basically nothing to do with recent advances in AI though.
Also, with the current performance trends in LLMs, we seem very close to being able to run models locally. That’ll blow up a lot of the most abusive business models in this space.
On a related note, if AI decreases the number of mistakes my doctor makes, that seems like a win to me.
If the AI then sold my medical file (or used it in some other revenue generating way), that’d be unethical and wrong.
Current health care systems already do that without permission and it’s legal. Fix that problem instead.
There's a difference between reading something and ripping it off, no matter how you launder it.
Because those people seem to think that individuals building on it operate at too small a scale to be commercially profitable (and thus the publisher is OK with it serving as a form of social credit/portfolio building).
As soon as it is made clear that these published data can be monetized (if only by large corporations with money), they want a piece of the pie that they think they deserve (and are not getting).
Let’s untangle this.
1. Humanity’s achievements are achievements by individuals, who are motivated by desires like recognition, wealth, personal security, altruism, self-actualisation.
2. “AI” does not build on that body of work. A chatbot has no free will or agency; there is no concept of “allowing” it to do something—there is engineering it and operating a tool, and both are done by humans.
3. Some humans today engineer and operate tools, for which (at least in the most prominent and widely-used cases) they generally charge money, yet which essentially proxy the above-mentioned original work by other humans.
4. Those humans engineer and operate said tools without asking the people who created said work, in a way that does not benefit or acknowledge them, thus robbing people of many of the motivations mentioned in point 1, and arguably in circumvention of legal IP protections that exist in order to encourage said work.
Yes, but that argument cuts both ways. There is a difference, and it's not clear that training is "ripping off".
> This tradeoff has basically nothing to do with recent advances in AI though.
I am surprised someone on HN would think this, especially considering the recent examples of DDoS via LLM crawlers compared to how websites are glad to be crawled by search engines.
For the first part: why do you think that robots.txt even exists? Or why, say, YouTube constantly tries (and fails) to prevent you from using the 'wrong' kind of software to download their videos?
Without that the whole thing is just noise
You can't simply look at a LLM's code and determine if, for example, it has racial biases. This is very similar to a human. You can't look inside someone's brain to see if they're racist. You can only respond to what they do.
If a human does something unethical or criminal, companies take steps to counter that behaviour, which may include removing the human from their position. If an AI is found to be doing something wrong, one company might choose to patch it or replace it with something else, but will other companies do the same? Will they even be alerted to the problem? One human can only do so much harm. The harm a faulty AI can do potentially scales to the size of its install base.
Perhaps, in this sense, AI's need to be treated like humans while accounting for scale. If an AI does something unethical/criminal, it should be "recalled". i.e. Taken off the job everywhere until it can be demonstrated the behaviour has been corrected. It is not acceptable for a company, when alerted to a problem with an AI they're using, to say, "Well, it hasn't done anything wrong here yet."
Rather, why would LLMs be treated any differently from other machines ? Mass recall of flawed machines (if dangerous enough) is common after all.
Our intellectual property, privacy, and consumer protection laws were all tested by LLM tech, and they failed the test. Same as with social media — with proof it has caused genocides and suicides, and common sense saying it’s responsible for an epidemic of anxiety and depression, we have failed to stop its unethical advance.
The only winning move is to not play the game and go offline. Hope you weren’t looking to date, socialize, bank, get a ride, order food at restaurants, and do other things, because that has all moved online and is behind a cookie warning saying “We Care About Your Privacy” and listing 1899 ad partners the service will tell your behavioral measurements to for future behavior manipulation. Don’t worry, it’s “legitimate interest”. Then it will send an email to your inbox that will do the same, and it will have a tracking pixel so a mailing list company can get a piece of that action.
We are debating what parts of the torment nexus should or shouldn’t be allowed, while being tormented from every direction. It’s actually getting very ridiculous how too little too late it is. But I don’t think humanity has a spine to say enough is enough. There are large parts of humanity that like and justify their own abuse, too. They would kiss the ground their abusers walk on.
It is the end stage of corporate neo-liberalism. Something that could have worked out very well in theory if we didn’t become mindless fanatics[0] of it. Maybe with a little bit more hustle we can seed, scale and monetize ethics and morals. Then with a great IPO and an AI-first strategy, we could grow golden virtue retention in the short and long-run…
lacker•10h ago
I imagine the author would respond, "That's not what I mean!" Well, they should figure out what they actually mean.
leereeves•10h ago
"Opting out of AI is no simple matter.
AI powers essential systems such as healthcare, transport and finance.
It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online."
lacker•7h ago
How could there be a system that lets you opt out, but keep sending email? Obviously all the spammers would love to opt out of spam filtering, if they could.
The system just fundamentally does not work without AI. To opt out of AI, you will have to stop sending email. And using credit cards. And doing Google searches. Etc etc etc...
jedbrown•6h ago
> I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. https://ali-alkhatib.com/blog/defining-ai
drivingmenuts•9h ago
I'm not even sure what form that proof would take. I do know that I can tolerate non-deterministic behavior from a human, but having computers demonstrate non-deterministic behavior is, to me, a violation of the purpose for which we build computers.
simonw•9h ago
Did you prefer Google search results ten years ago? Those were still using all manner of machine learning algorithms, which is what we used to call "AI".
drivingmenuts•50m ago
We also don't abdicate our decision-making to encryption processes.
simonw•9h ago
The author of this piece made no attempt at all to define what "AI" they were talking about here, which I think was irresponsible of them.
BlueTemplar•1h ago