How 'overworked, underpaid' humans train Google's AI to seem smart

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans
74•Brajeshwar•2h ago

Comments

kerblang•1h ago
Are other AI companies doing the same thing? I'd like to see more articles about this...
jkkola•1h ago
There's a YouTube video titled "AI is a hype-fueled dumpster fire" [0] that mentions OpenAI's shenanigans. I haven't fact-checked it, but I've heard enough stories to believe it.

[0] https://youtu.be/0bF_AQvHs1M?si=rpMG2CY3TxnG3EYQ

thepryz•1h ago
Scale AI’s entire business model was using people in developing countries to label data for training models. Once you look into it, it comes across as rather predatory.

This was one of the first links I found re: Scale’s labor practices https://techcrunch.com/2025/01/22/scale-ai-is-facing-a-third...

Here’s another: https://relationaldemocracy.medium.com/an-authoritarian-work...

lawgimenez•1h ago
A couple of months ago I received a job invite for Kotlin AI trainers from the team at Upwork. I asked what the job was about, and she said something like "for the opportunity to review & evaluate content for generative AI." And I'm from a developed country too.
benreesman•1h ago
There's nontrivial historical precedent for this exact playbook: when a new paradigm (Lisp machines and GOFAI search, GPU backprop, softmax self-attention) is scaling fast, a lot of promises get made, a lot of national security money gets involved, and AI Summer is just balmy.

But the next paradigm breakthrough is hard to forecast, and the current paradigm's asymptote is just as hard to predict, so it's +EV to say "tomorrow" and "forever".

When the second becomes clear before the first, you turk and expert-label like it's 1988 and pray that the next paradigm breakthrough comes soon: you bridge the gap with expert labeling and compute until it works, or until you run out of money and the DoD guy stops taking your calls. AI Winter is cold.

And just like Game of Thrones, no one, and I mean no one, not Altman, not Amodei, not Allah Most Blessed, knows when the seasons in A Song of Math and Grift will change.

jhbadger•1h ago
Karen Hao's recent book "Empire of AI", about the rise of OpenAI, goes into detail about how people in Africa and South America were hired (and arguably exploited) for its training efforts.
cs702•1h ago
The title is biased: it blames Google for mistreating people and implies that Google's AI isn't smart. But the OP is worth reading, because it gives readers a sense of the labor and cost involved in providing AI models with human feedback, the HF in RLHF, to ensure the models are more aligned with human values and preferences.
lm28469•1h ago
> to ensure the AI models are more aligned with human values and preferences.

And which are these universal human values and preferences? Or are we talking about Silicon Valley executives' values?

giveita•1h ago
> Sawyer is one among the thousands of AI workers contracted for Google through Japanese conglomerate Hitachi’s GlobalLogic to rate and moderate the output of Google’s AI products...

Depends how you look at it. I think a brand like Google should be able to vet a supplier a mere one level down the supply chain.

FirmwareBurner•1h ago
I had no idea Hitachi was also running software sweatshops.
rs186•1h ago
> to ensure the AI models are more aligned with human values and preferences.

to ensure the AI models are more aligned with Google's values and preferences.

FTFY

falcor84•1h ago
I'm a big fan of cyberpunk dystopian fiction, but I still can't quite understand what you're alluding to here. Can you give an example of a value that Google aligns the AI with that you think isn't a positive human value?
Ygg2•1h ago
"Adtech is good. Adblockers are unnatural"
smokel•49m ago
Google Gemini 2.5 Pro actually gives a quite nuanced reply when asked to consider this statement, including the following:

> "Massive privacy invasion: The core of modern adtech runs on tracking your behavior across different websites and apps. It collects vast amounts of personal data to build a detailed profile about your interests, habits, location, and more, often without your full understanding or consent."

ToucanLoucan•54m ago
Their entire business model? Making search results worse to juice page impressions? Every dark pattern they use to juice subscriptions like every other SaaS company? Brand lock-in for Android? Paying Apple for prominent placement of their search engine in iOS? Anti-competitive practices in the Play store? Taking a massive cut of Play Store revenue from people actually making software?
simonw•36m ago
How does all of that affect the desired outputs for their LLMs?
watwut•9m ago
Google likes it when it can show you more ads; that is not a positive human value.

It does not have to have anything to do with cyberpunk. Corporations are not people, but if they were, they would be powerful sociopaths. Their interests and anybody else's interests are not the same.

add-sub-mul-div•44m ago
Yes, and one more tweak: the values of Google or anyone paying Google to deliver their marketing or political messaging.
zozbot234•44m ago
RLHF (and its evolution, RLAIF) is actually used for more than setting "values and preferences". It's what makes AI models engage in recognizable behavior, as opposed to simply continuing a given text. It's how the "Chat" part of "ChatGPT" can be made to work in the first place.
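For intuition, here's a minimal sketch of the pairwise preference step used to train a reward model, assuming a Bradley-Terry-style loss (names and numbers are illustrative, not any lab's actual code):

    # Each human comparison becomes one training pair: the reward model
    # is trained so the preferred answer scores higher than the rejected one.
    import torch

    def reward_model_loss(score_chosen: torch.Tensor,
                          score_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry pairwise loss: -log sigmoid(chosen - rejected)
        return -torch.nn.functional.logsigmoid(score_chosen - score_rejected).mean()

    chosen = torch.tensor([1.8, 0.4])    # scores for human-preferred answers
    rejected = torch.tensor([0.9, 0.6])  # scores for rejected answers
    print(reward_model_loss(chosen, rejected))  # shrinks as the score gap widens

The chat model is then tuned against that learned reward, which is where the "Chat" behavior gets reinforced.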
throwaway106382•40m ago
What is a "human value" and whose preferences?
zerodaysbroker•1h ago
The title seems kinda misleading. This is from the article (GlobalLogic is the company contracted by Google):

"AI raters at GlobalLogic are paid more than their data-labeling counterparts in Africa and South America, with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers. Some are simply thankful to have a gig as the US job market sours, but others say that trying to make Google’s AI products better has come at a personal cost."

imperio59•56m ago
It's employment at will. They are free to go work somewhere else if they don't like it...
teiferer•38m ago
That argument is as old as any mistreated worker complaining about their situation, and as old as any argument against workers' rights in general. Anybody not liking their job could just leave, right? Simple! No, the world just isn't that simple, and it didn't become simpler just because it happens in an AI context that produces a tool you like.

There are lots of jobs out there that suck and people do them anyway. Because the freedom that they supposedly have is not as free as you imagine.

mallowdram•1h ago
Gemini is faked.

How this industry managed to not grasp that meaning exists entirely separate from words is altogether bizarre.

dolphinscorpion•1h ago
"Google" posted a job opening. They applied for and took the job, agreeing to posted pay and conditions. End of the story. It's not up to the Guardian to decide
xkbarkar•1h ago
I agree, the article is pretty low-quality ragebait. Not good journalism at all.
lysace•23m ago
It is amazing how much their quality levels have fallen during the past two decades.

I used to point to their reporting as a model that my nation's newspapers should seek to emulate.

iandanforth•1h ago
"Google said in a statement: “Quality raters are employed by our suppliers and are temporarily assigned to provide external feedback on our products. Their ratings are one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models.” GlobalLogic declined to comment for this story." (emphasis mine)

How is this not a straight-up lie? For this to be true, they would have to throw away labeled training data.

Gracana•56m ago
They probably don't do it at a scale large enough to do RLHF with it, but it's still useful feedback for the people working on the projects/products.
zozbot234•49m ago
More recent models actually use "reinforcement learning from AI feedback", where the task of assigning a reward is essentially fed back into the model itself. Human feedback is then only used to ground the training, on selected examples (potentially even entirely artificial ones) where the AI is most highly uncertain about what feedback should be given.
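Conceptually, the routing might look something like this hypothetical sketch (judge() and the confidence threshold are made up for illustration):

    # Let an AI judge label most examples; only the ones it is uncertain
    # about get escalated to human raters for grounding.
    def route_for_feedback(examples, judge, min_confidence=0.8):
        ai_labeled, needs_human = [], []
        for ex in examples:
            label, confidence = judge(ex)  # AI feedback: label plus confidence
            if confidence >= min_confidence:
                ai_labeled.append((ex, label))
            else:
                needs_human.append(ex)     # grounded by human raters instead
        return ai_labeled, needs_human

    # Toy judge: confident only on short prompts.
    toy_judge = lambda ex: ("ok", 0.95 if len(ex) < 20 else 0.5)
    print(route_for_feedback(["short prompt", "a much longer, ambiguous prompt"], toy_judge))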
creddit•50m ago
Because they are doing it to compute quality metrics, not to implement RLHF. It's not training data.
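In toy form, the difference is something like this (purely illustrative): per-rater scores get rolled up into a dashboard number, not fed into a training loop.

    # Aggregate rater scores into a quality metric, not training labels.
    from statistics import mean

    ratings = {"response_123": [4, 5, 3], "response_456": [2, 3, 2]}  # 1-5 scale
    quality = {rid: mean(scores) for rid, scores in ratings.items()}
    print(quality)  # a dashboard number, not a gradient update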
teiferer•43m ago
Key word: "directly"

It does so indirectly, so it's a true albeit misleading statement.

ants_everywhere•1h ago
When they switch to aligning with algorithms instead of humans, we'll get another story about how terrible it was that they removed the jobs that were terrible when they existed.

This doesn't sound as bad to me as the Facebook moderator job or even a call center job, but it does sound pretty tedious.

lysace•57m ago
> with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers

That's sort of what I'd expect the Guardian's UK online non-subscriber readers to make.

Perhaps GlobalLogic should open a subsidiary in the UK?

simonw•38m ago
Something I'd be interested to understand is how widespread this practice is. Are all of the LLMs trained using human labor that is sometimes exposed to extreme content?

There are a whole lot of organizations training competent LLMs these days in addition to the big three (OpenAI, Google, Anthropic).

What about Mistral and Moonshot and Qwen and DeepSeek and Meta and Microsoft (Phi) and Hugging Face and Ai2 and MBZUAI? Do they all have their own (potentially outsourced) teams of human labelers?

I always look out for notes about this in model cards and papers but it's pretty rare to see any transparency about how this is done.

yvdriess•35m ago
One of the key innovations behind the DNN/CNN models was Mechanical Turk. OpenAI used a similar system extensively to improve the early GPT models. I would not be surprised if the practice continues today; NN models need a lot of quality ground-truth training data.
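The classic crowd-labeling step behind that ground truth looks roughly like this (purely illustrative):

    # Several annotators label each item; the majority vote becomes ground truth.
    from collections import Counter

    def majority_label(annotations):
        label, count = Counter(annotations).most_common(1)[0]
        agreement = count / len(annotations)  # low agreement -> flag for review
        return label, agreement

    print(majority_label(["cat", "cat", "dog"]))  # ('cat', 0.666...)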
simonw•26m ago
Right, but where are the details?

Given the number of labs that are competing these days on "open weights" and "transparency" I'd be very interested to read details of how some of them are handling the human side of their model training.

I'm puzzled at how little information I've been able to find.

whilenot-dev•27m ago
So why do you think asking this question here would yield a satisfying answer, especially given how the HN community likes to dispute any vague conclusion about anything as hyped as AI training?

To counter your question, what makes you think that's not the case? Do you think Mistral/Moonshot/Qwen/etc. are all employing their own data labelers? Why would you expect this kind of transparency from for-profit bodies that are valued in the billions?

happy_dog1•16m ago
I've shared this on HN once before, but it's very relevant to this question and just a really great article, so I'll reshare it here:

https://www.theverge.com/features/23764584/ai-artificial-int...

It explores the world of outsourced labeling work. Unfortunately, hard numbers on how many people are involved are difficult to come by because, as the article notes:

"This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don’t have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.” "

I too would love to know more about how much human effort goes into labeling and feedback for each of these models.

philipallstar•38m ago
If they're underpaid and overworked (by definition, words that are relative to other options), they should go to one of the better options.
CPLX•31m ago
Glad to learn from your post that the labor market has recently become perfectly competitive and efficient.
bflesch•28m ago
The way you defend against an article citing "thousands of workers" with a nitpicky criticism of grammar style makes me suspect it raises a cognitive dissonance in your head that you are not ready to address yet.
sjiabq•11m ago
This line of reasoning that goes “I don’t like your comment, you should go to therapy” is very feminine.
Group_B•26m ago
Comments like these are why HN is the best
blactuary•23m ago
Yeah, they should simply buy widgets from the abundance of other widget sellers, since this is a perfectly competitive market with no transaction costs and perfectly symmetric information.
a3w•35m ago
"AI" means "actual Indians"; did we not learn that from the initial OpenAI GPT-3 training? It made it to HN.
wslh•35m ago
It seems like déjà vu of earlier Amazon Mechanical Turk[1] discussions[2], but with AI.

[1] https://www.mturk.com/

[2] https://tinyurl.com/4r2p39v3

yanis_t•20m ago
From my shallow understanding, human feedback is involved heavily in the post-training/fine-tuning stage, after the base model has already been solidified.

In that case, how is the notion of truthiness (what the model accepts as right or wrong) affected during this stage? That is, how much is it shaped by human raters, versus being sealed into the base model itself, with truthiness deduced as part of its world model?

oefrha•20m ago
> [job] … has come at a personal cost.

Congratulations, you just described most jobs. And many backbreaking laborers make about the same or less, even in the U.S., not to mention the rest of the world.

A store that generates products from anything you type in search

https://anycrap.shop/
38•kafked•1h ago•14 comments

SkiftOS: A hobby OS built from scratch using C/C++ for ARM, x86, and RISC-V

https://skiftos.org
270•ksec•8h ago•49 comments

UTF-8 is a brilliant design

https://iamvishnu.com/posts/utf8-is-brilliant-design
649•vishnuharidas•19h ago•256 comments

Java 25's new CPU-Time Profiler (1)

https://mostlynerdless.de/blog/2025/06/11/java-25s-new-cpu-time-profiler-1/
67•SerCe•5h ago•15 comments

How to Use Claude Code Subagents to Parallelize Development

https://zachwills.net/how-to-use-claude-code-subagents-to-parallelize-development/
130•zachwills•4d ago•66 comments

How 'overworked, underpaid' humans train Google's AI to seem smart

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans
74•Brajeshwar•2h ago•50 comments

QGIS is a free, open-source, cross platform geographical information system

https://github.com/qgis/QGIS
472•rcarmo•20h ago•112 comments

Weird CPU architectures, the MOV only CPU (2020)

https://justanotherelectronicsblog.com/?p=771
44•v9v•4d ago•7 comments

Many hard LeetCode problems are easy constraint problems

https://buttondown.com/hillelwayne/archive/many-hard-leetcode-problems-are-easy-constraint/
549•mpweiher•23h ago•463 comments

FFglitch, FFmpeg fork for glitch art

https://ffglitch.org/gallery/
228•captain_bender•15h ago•32 comments

The Worst Air Disaster You've Never Heard Of

https://longreads.com/2025/09/04/zeppelin-navy-aircraft-disaster/
11•mooreds•3d ago•9 comments

The Treasury is expanding the Patriot Act to attack Bitcoin self custody

https://www.tftc.io/treasury-iexpanding-patriot-act/
711•bilsbie•1d ago•507 comments

Raspberry Pi Synthesizers – How the Pi is transforming synths

https://www.gearnews.com/raspberry-pi-synthesizers-how-the-pi-is-transforming-synths/
80•zdw•9h ago•52 comments

Resizing images in Rust, now with EXIF orientation support

https://alexwlchan.net/2025/create-thumbnail-is-exif-aware/
42•ingve•4d ago•15 comments

Does All Semiconductor Manufacturing Depend on Spruce Pine Quartz? (2024)

https://www.construction-physics.com/p/does-all-semiconductor-manufacturing
16•colinprince•3d ago•6 comments

Life, work, death and the peasant: Rent and extraction

https://acoup.blog/2025/09/12/collections-life-work-death-and-the-peasant-part-ivc-rent-and-extra...
235•baud147258•12h ago•107 comments

I used standard Emacs extension-points to extend org-mode

https://edoput.it/2025/04/16/emacs-paradigm-shift.html
170•Karrot_Kream•16h ago•22 comments

Tips for installing Windows 98 in QEMU/UTM

https://sporks.space/2025/08/28/tips-for-installing-windows-98-in-qemu-utm/
103•Bogdanp•14h ago•20 comments

Social media promised connection, but it has delivered exhaustion

https://www.noemamag.com/the-last-days-of-social-media/
201•pseudolus•7h ago•143 comments

EU court rules nuclear energy is clean energy

https://www.weplanet.org/post/eu-court-rules-nuclear-energy-is-clean-energy
873•mpweiher•19h ago•804 comments

Meow: Yet another modal editing on Emacs

https://github.com/meow-edit/meow
106•Bogdanp•12h ago•18 comments

3D modeling with paper

https://www.arvinpoddar.com/blog/3d-modeling-with-paper
294•joshuawootonn•23h ago•45 comments

OCI Registry Explorer

https://oci.dag.dev/
73•jcbhmr•11h ago•7 comments

I unified convolution and attention into a single framework

https://zenodo.org/records/17103133
29•umjunsik132•6h ago•9 comments

Behind Kamathipura's Closed Doors

https://failedarchitecture.com/behind-kamathipuras-closed-doors/
13•tsaifu•3d ago•1 comment

AI Coding

https://geohot.github.io//blog/jekyll/update/2025/09/12/ai-coding.html
192•abhaynayar•4h ago•145 comments

Legal win

https://ma.tt/2025/09/legal-win/
199•pentagrama•11h ago•165 comments

Reduce bandwidth costs with dm-cache: fast local SSD caching for network storage

https://devcenter.upsun.com/posts/cut-aws-bandwidth-costs-95-with-dm-cache/
65•tlar•4d ago•19 comments

How FOSS Projects Handle Legal Takedown Requests

https://f-droid.org/2025/09/10/how-foss-projects-handle-legal-takedown-requests.html
135•mkesper•20h ago•12 comments

Chatbox app is back on the US app store

https://github.com/chatboxai/chatbox/issues/2644
55•themez•11h ago•25 comments