
Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
1•simonebrunozzi•1m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
2•eeko_systems•8m ago•0 comments

Zlob.h: a 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
1•neogoose•11m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
1•mav5431•12m ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
1•sizzle•12m ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•13m ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•13m ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•14m ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
1•dangtony98•19m ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•27m ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•29m ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•32m ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
3•pabs3•34m ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
2•pabs3•34m ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•36m ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
1•devavinoth12•36m ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•40m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•50m ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•53m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•57m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
2•mkyang•59m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•1h ago•1 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•1h ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•1h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
3•ambitious_potat•1h ago•4 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•1h ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
2•irreducible•1h ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•1h ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•1h ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

‘Overworked, underpaid’ humans train Google’s AI

https://www.theguardian.com/technology/2025/sep/11/google-gemini-ai-training-humans
287•Brajeshwar•4mo ago

Comments

kerblang•4mo ago
Are other AI companies doing the same thing? Would like to see more articles about this...
jkkola•4mo ago
There's a YouTube video titled "AI is a hype-fueled dumpster fire" [0] that mentions OpenAI's shenanigans. I haven't fact checked that but I've heard enough stories to believe it.

[0] https://youtu.be/0bF_AQvHs1M?si=rpMG2CY3TxnG3EYQ

thepryz•4mo ago
Scale AI’s entire business model was using people in developing countries to label data for training models. Once you look into it, it comes across as rather predatory.

This was one of the first links I found re: Scale’s labor practices https://techcrunch.com/2025/01/22/scale-ai-is-facing-a-third...

Here’s another: https://relationaldemocracy.medium.com/an-authoritarian-work...

lawgimenez•4mo ago
A couple of months ago I received a job invite for Kotlin AI trainers from a team at Upwork. I asked what the job was about, and she said something like it was "for the opportunity to review & evaluate content for generative AI." And I'm from a developed country too.
benreesman•4mo ago
There's nontrivial historical precedent for this exact playbook: when a new paradigm (Lisp machines and GOFAI search, GPU backprop, softmax self-attention) is scaling fast, a lot of promises get made, a lot of national security money gets involved, and AI Summer is just balmy.

But the next paradigm breakthrough is hard to forecast, and the current paradigm's asymptote is just as hard to predict, so it's +EV to say "tomorrow" and "forever".

When the second becomes clear before the first, you turk and expert-label like it's 1988 and pray that the next paradigm breakthrough is soon; you bridge the gap with expert labeling and compute until it works or you run out of money and the DoD guy stops taking your calls. AI Winter is cold.

And just like Game of Thrones, no one, and I mean no one, not Altman, not Amodei, not Allah Most Blessed, knows when the seasons in A Song of Math and Grift will change.

jhbadger•4mo ago
Karen Hao's recent book "Empire of AI", about the rise of OpenAI, goes into detail about how people in Africa and South America were hired (and arguably exploited) for their training efforts.
maltelandwehr•4mo ago
Can you explain the exploited part?

My understanding is they performed work and were paid for it at market rate. So just regular capitalism. Or was there more to it?

jhbadger•4mo ago
According to the book, they kept dropping the rates paid per item, forcing people to work ridiculous 12+ hour days just to get enough to live on, even in the low-cost-of-living places they were in. It was like something in a cyberpunk dystopia, but real.
intended•4mo ago
This is a weird question, because it's got many assumptions baked in that pull the answer in different directions if it has to conform to the implied definitions you're using.

Global-south nations do not have the same level of judicial recourse, work-safety norms, and health infrastructure as, say, America. So people doing labelling work who then go and kill themselves after getting PTSD are just costs of doing business.

This can be put under many labels, to transfer the objectionable portion to some other entity or ideology - in your case, "capitalism".

That doesn't mean it is actually capitalism. In this case it's exploiting gaps in global legal infrastructure.

I used to bash capitalism happily, but it's becoming a white whale and a catch-all. We don't even have capitalism anywhere, since you can find far too many definitions for that term today.

cs702•4mo ago
The title is biased, blaming Google for mistreating people and implying that Google's AI isn't smart. But the OP is worth reading, because it gives readers a sense of the labor and cost involved in providing AI models with human feedback, the HF in RLHF, to ensure they behave in ways acceptable to human beings and more aligned with human expectations, values, and preferences.
lm28469•4mo ago
> to ensure the AI models are more aligned with human values and preferences.

And which are these universal human values and preferences? Or are we talking about Silicon Valley executives' values?

alehlopeh•4mo ago
Well, it doesn’t say universal so it’s clearly going to be a specific set of human values and preferences. It’s obviously referring to the preferences of the humans who are footing the bill and who stand to profit from it. The extent to which those values happen to align with those of the eventual consumer of this product could potentially determine whether the aforementioned profits ever materialize.
giveita•4mo ago
> Sawyer is one among the thousands of AI workers contracted for Google through Japanese conglomerate Hitachi’s GlobalLogic to rate and moderate the output of Google’s AI products...

Depends how you look at it. I think a brand like Google should vet at least one level down the supply chain.

FirmwareBurner•4mo ago
I had no idea Hitachi was also running software sweatshops.
rs186•4mo ago
> to ensure the AI models are more aligned with human values and preferences.

to ensure the AI models are more aligned with Google's values and preferences.

FTFY

falcor84•4mo ago
I'm a big fan of cyberpunk dystopian fiction, but I still can't quite understand what you're alluding to here. Can you give an example of a value that Google aligns the AI with that you think isn't a positive human value?
Ygg2•4mo ago
"Adtech is good. Adblockers are unnatural"
smokel•4mo ago
Google Gemini 2.5 Pro actually gives quite a nuanced reply when asked to consider this statement, including the following:

> "Massive privacy invasion: The core of modern adtech runs on tracking your behavior across different websites and apps. It collects vast amounts of personal data to build a detailed profile about your interests, habits, location, and more, often without your full understanding or consent."

Ygg2•4mo ago
You don't boil the frog instantly. You first lobotomize it, by gaining its trust. Then you turn up the heat. See how YouTube went from "ads are optional" to "adblockers are immoral".
ToucanLoucan•4mo ago
Their entire business model? Making search results worse to juice page impressions? Every dark pattern they use to juice subscriptions like every other SaaS company? Brand lock-in for Android? Paying Apple for prominent placement of their search engine in iOS? Anti-competitive practices in the Play store? Taking a massive cut of Play Store revenue from people actually making software?
simonw•4mo ago
How does all of that affect the desired outputs for their LLMs?
scotty79•4mo ago
You'll see once they figure it out.
jondwillis•4mo ago
Or, if they really figure it out, you’ll only feel it.
watwut•4mo ago
Google likes it when it can show you more ads; that is not a positive human value.

It does not have to have anything to do with cyberpunk. Corporations are not people, but if they were people, they would be powerful sociopaths. Their interests and anybody else's interests are not the same.

add-sub-mul-div•4mo ago
Yes, and one more tweak: the values of Google or anyone paying Google to deliver their marketing or political messaging.
zozbot234•4mo ago
RLHF (and its evolution, RLAIF) is actually used for more than setting "values and preferences". It's what makes AI models engage in recognizable behavior, as opposed to simply continuing a given text. It's how the "Chat" part of "ChatGPT" can be made to work in the first place.
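To make the mechanism described here concrete: rater preference data is typically fit with a pairwise reward-model objective. A minimal sketch in Python; the function name and numbers are illustrative, not from any lab's actual code:

    import math

    def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
        # Bradley-Terry-style objective commonly described in RLHF papers:
        # minimized when the reward model scores the rater-preferred
        # response above the rejected one.
        # loss = -log(sigmoid(r_chosen - r_rejected))
        return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

    # A rater preferred response A over B; the reward model currently
    # scores them 1.2 and 0.7, so the loss is small. Swap them and it grows.
    print(pairwise_reward_loss(1.2, 0.7))  # ~0.47
    print(pairwise_reward_loss(0.7, 1.2))  # ~0.97

The fitted reward model, not the raw human labels, is then what steers the policy model during reinforcement learning.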
cs702•4mo ago
Yes. I updated my comment to reflect as much. Thank you.
throwaway106382•4mo ago
What is a "human value" and whose preferences?
NewEntryHN•4mo ago
Isn't that mostly the fine-tuning phase, with RLHF being the cherry on top?
zerodaysbroker•4mo ago
The title seems kinda misleading; this is from the article (GlobalLogic is the company contracted by Google):

"AI raters at GlobalLogic are paid more than their data-labeling counterparts in Africa and South America, with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers. Some are simply thankful to have a gig as the US job market sours, but others say that trying to make Google’s AI products better has come at a personal cost."

mallowdram•4mo ago
Gemini is faked.

How this industry managed to not grasp that meaning exists entirely separate from words is altogether bizarre.

dolphinscorpion•4mo ago
"Google" posted a job opening. They applied for and took the job, agreeing to posted pay and conditions. End of the story. It's not up to the Guardian to decide
xkbarkar•4mo ago
I agree, the article is pretty low-quality ragebait. Not good journalism at all.
lysace•4mo ago
It is amazing how much their quality levels have fallen during the past two decades.

I used to point to their reporting as something that my nation’s newspapers should seek to emulate.

(My nation’s newspapers have since fallen even lower.)

jimnotgym•4mo ago
Is it amazing? They are struggling to make money as much as every other news organisation, they have to keep cutting costs to do it. Then they need as many click throughs from social platforms as possible so that they can sell at least some advertising. I would say it is inevitable.
lysace•4mo ago
It is inevitable that the journalistic integrity of the Guardian goes to shit?
anthonj•4mo ago
Not so easy. What if you get hired as a physiotherapist somewhere, but on your first day you find out you will work in a brothel?

Or join a hospital as a nurse, but then are asked to perform surgery as if you were a doctor?

There are serious issues outlined in the article.

lysace•4mo ago
This is not what the article is outlining.
anthonj•4mo ago
The article mentions some stories, such as the one about a lady asked to edit medical-related info without having any qualifications to evaluate its correctness.

Or the one about handling disturbing content with no previous warning and no counseling.

iandanforth•4mo ago
"Google said in a statement: “Quality raters are employed by our suppliers and are temporarily assigned to provide external feedback on our products. Their ratings are one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models.” GlobalLogic declined to comment for this story." (emphasis mine)

How is this not a straight up lie? For this to be true they would have to throw away labeled training data.

Gracana•4mo ago
They probably don't do it at a scale large enough to do RLHF with it, but it's still useful feedback for the people working on the projects/products.
zozbot234•4mo ago
More recent models actually use "reinforcement learning from AI feedback", where the task of assigning a reward is essentially fed back into the model itself. Human feedback is then only used to ground the training, on selected examples (potentially even entirely artificial ones) where the AI is most highly uncertain about what feedback should be given.
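A hedged sketch of the routing logic described above, with invented names and thresholds: an AI judge scores most samples, and only low-confidence cases are escalated to human raters to ground the training signal.

    import random

    def ai_judge(sample):
        # Toy stand-in: take several judge calls and use their agreement
        # as a confidence signal (0 = split vote, 1 = unanimous).
        votes = [random.random() < 0.7 for _ in range(5)]
        score = sum(votes) / len(votes)
        confidence = abs(score - 0.5) * 2
        return score, confidence

    def route_feedback(samples, judge, confidence_threshold=0.8):
        auto, needs_human = [], []
        for s in samples:
            score, confidence = judge(s)
            if confidence >= confidence_threshold:
                auto.append((s, score))  # AI feedback used directly
            else:
                needs_human.append(s)    # escalated to human raters
        return auto, needs_human

    auto, queued = route_feedback([f"response-{i}" for i in range(10)], ai_judge)
    print(len(auto), "auto-labeled;", len(queued), "sent to humans")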
creddit•4mo ago
Because they are doing it to compute quality metrics not to implement RLHF. It’s not training data.
visarga•4mo ago
Every decision they take based on evals influences the model.
creddit•4mo ago
/"directly"/
teiferer•4mo ago
Key word: "directly"

It does so indirectly, so it's a true albeit misleading statement.

skybrian•4mo ago
It's not part of the inner feedback loop. It's part of the outer feedback loop that they use to decide if the inner loop is working.
yobbo•4mo ago
> For this to be true they would have to throw away labeled training data.

That's how validation works.

jfengel•4mo ago
Is there a reason not to use validation data in your next round of training data? Or is it more efficient to reuse validation and instead get more training data?
parineum•4mo ago
You'd have to recreate your validation set if you trained your model on it every iteration, and then it wouldn't be consistent enough to show any trends.
jfengel•4mo ago
I'd have thought that if you kept the same validation you'd risk overfitting.

Clearly that does make it hard to measure. I'd think you'd want "equivalent" validation (like changing the SATs every year), though I imagine that's not really a meaningful concept.
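To make the inner/outer distinction above concrete, a small illustrative sketch (all names and numbers invented): rater scores on a fixed holdout gate the release decision but never enter a gradient update, and because the holdout is fixed, repeatedly selecting models against it risks overfitting the selection process, the SAT-reuse problem raised here.

    def average_rating(rated_holdout):
        # Outer-loop metric: mean human rating on a fixed holdout set.
        return sum(rating for _, rating in rated_holdout) / len(rated_holdout)

    # The inner loop trains on training data only; the rated holdout
    # just decides whether this candidate model ships.
    holdout = [("prompt-1", 0.9), ("prompt-2", 0.6), ("prompt-3", 0.8)]
    print("release candidate passes:", average_rating(holdout) >= 0.75)  # True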

ants_everywhere•4mo ago
When they switch to aligning with algorithms instead of humans we'll get another story about how terrible it was that they removed the jobs that were terrible when they existed.

This doesn't sound as bad to me as the Facebook moderator job or even a call center job, but it does sound pretty tedious.

lysace•4mo ago
> with wages starting at $16 an hour for generalist raters and $21 an hour for super raters, according to workers

That’s sort of what I expect the Guardian’s UK online non-sub readers to make.

Perhaps GlobalLogic should open a subsidiary in the UK?

simonw•4mo ago
Something I'd be interested to understand is how widespread this practice is. Are all of the LLMs trained using human labor that is sometimes exposed to extreme content?

There are a whole lot of organizations training competent LLMs these days in addition to the big three (OpenAI, Google, Anthropic).

What about Mistral and Moonshot and Qwen and DeepSeek and Meta and Microsoft (Phi) and Hugging Face and Ai2 and MBZUAI? Do they all have their own (potentially outsourced) teams of human labelers?

I always look out for notes about this in model cards and papers but it's pretty rare to see any transparency about how this is done.

yvdriess•4mo ago
One of the key innovations behind the DNN/CNN models was Mechanical Turk. OpenAI used a similar system extensively to improve the early GPT models. I would not be surprised if the practice continues today; NN models need a lot of quality ground-truth training data.
simonw•4mo ago
Right, but where are the details?

Given the number of labs that are competing these days on "open weights" and "transparency" I'd be very interested to read details of how some of them are handling the human side of their model training.

I'm puzzled at how little information I've been able to find.

esperent•4mo ago
I read this a few years ago.

Time Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

https://time.com/6247678/openai-chatgpt-kenya-workers/

Beyond that, I think the reason you haven't heard more about it is that it happens in developing countries, so western media doesn't care much, and also because big AI companies work hard to distance themselves from it. They'll never be the ones directly employing these AI sweatshop workers; it's all contracted out.

conradkay•4mo ago
Good article from 2023, not much data though if that's what you're looking for:

https://nymag.com/intelligencer/article/ai-artificial-intell...

unwalled: https://archive.ph/Z6t35

Generally it seems similar today, just on a bigger scale, and with much more focus on coding.

Here in the US DataAnnotation seems to be the most marketed company offering these jobs

ics•4mo ago
This is not going to be as deep/specific as you want but a starting point from one of the companies that handles this sort of work is here: https://humandata.mercor.com/mercors-approach/black-box-vs-o...
whilenot-dev•4mo ago
So why do you think asking this question here would yield a satisfying answer, especially given how the HN community likes to dispute any vague conclusions for anything as hyped as AI training?

To counter your question, what makes you think that's not the case? Do you think Mistral/Moonshot/Qwen/etc. are all employing their own data labelers? Why would you expect this kind of transparency from for-profit bodies that are valued in the billions?

simonw•4mo ago
If you don't ask the question you'll definitely not get an answer. Given how many AI labs follow Hacker News it's not a bad place to pose this.

"what makes you think that's not the case?"

I genuinely do not have enough information to form an opinion one way or the other.

whilenot-dev•4mo ago
> If you don't ask the question you'll definitely not get an answer.

Sure, but the way you're formulating the question is already casting an opinion. Besides, no one could even attempt to answer your questions without falling into the trap of doing real due diligence... one question just asks how all (with emphasis!) LLMs are trained:

> Are all of the LLMs trained using human labor that is sometimes exposed to extreme content?

Who in the world would even be in such a position?

simonw•4mo ago
That question could be answered by proving the opposite: if someone has trained a single competent LLM without any human labor that was exposed to extreme content then not all LLMs were trained that way.
happy_dog1•4mo ago
I've shared this once on HN before, but it's very relevant to this question and just a really great article so I'll reshare it here:

https://www.theverge.com/features/23764584/ai-artificial-int...

it explores the world of outsourced labeling work. Unfortunately hard numbers on the number of people involved are hard to come by because as the article notes:

"This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don’t have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.” "

I too would love to know more about how much human effort is going into labeling and feedback for each of these models, it would be interesting to know.

simonw•4mo ago
That was indeed a great article, but it is a couple of years old now. A lot of the labeling work described there relates to older forms of machine learning: moderation models, spam labelers, image segmentation, etc.

Is it possible in 2025 to train a useful LLM without hiring thousands of labelers? Maybe through application of open datasets (themselves based on human labor) that did not exist two years ago?

happy_dog1•4mo ago
Good question, I don't personally know. The linked article would suggest there are plenty of people working on human feedback for chatbots, but that still doesn't give us any hard numbers or any sense of how the number of people involved is changing over time. Perhaps the best datapoint I have is that revenue for SurgeAI (one of many companies that provides data labeling services to Google and OpenAI among others) has grown significantly in recent years, partly due to ScaleAI's acquisition by Meta, and is now at $1.2 billion without having raised any outside VC funding:

https://finance.yahoo.com/news/surge-ai-quietly-hit-1b-15005...

Their continued revenue growth is at least one datapoint to suggest that the number of people working in this field (or at least the amount of money spent on this field) is not decreasing.

Also see the really helpful comment above from cjbarber, there's quite a lot of companies providing these services to foundation model companies. Another datapoint to suggest the number of people working providing labeling / feedback is definitely not decreasing and is more likely increasing. Hard numbers / increased transparency would be nice but I suspect will be hard to find.

johnnyanmac•4mo ago
Why is it so secretive? This gives me Severance vibes.

Is it just to dodge labor laws?

ics•4mo ago
I have been a generalist annotator for some of the others you mentioned, due to NDA will not specify which. I would venture to guess that basically all major models use some degree of human feedback if there is money coming in from somewhere.
michaelt•4mo ago
> Are all of the LLMs trained using human labor that is sometimes exposed to extreme content?

The business-process-outsourcing companies labelling things for AI training are often the same outsourcing companies providing moderation services to Facebook and other social media companies.

I need 100k images labelled by the type of flower shown, for my flower-identifying AI, so I contract a business that does that sort of thing.

Facebook need 100k flagged images labelled by is-it-an-isis-beheading-video to keep on top of human reviews for their moderation queues. They contract with the same business.

The outsourcing company rotates workers between tasks, so nobody has to be on isis beheading videos for a whole shift.

s1mplicissimus•4mo ago
> The outsourcing company rotates workers between tasks, so nobody has to be on isis beheading videos for a whole shift.

Is that an assumption on your side, a claim made by the business, a documented process or something entirely different?

alasarmas•4mo ago
It has been documented that human image moderators exist and that some have been deeply traumatized by their work. I have zero doubts that the datasets of content and metadata created by human image moderators are being bought and sold, literally trafficking in human suffering. Can you point to a comprehensive effort by the tech majors to create a freely-licensed dataset of violent content and metadata to prevent duplication of human suffering?
michaelt•4mo ago
Nobody's distributing a free dataset of child abuse, animal torture and terror beheading images, for obvious reasons.

There are some open-weights NSFW detectors [1] but even if your detector is 99.9% accurate, you still need an appeals/review mechanism. And someone's got to look at the appeals.

[1] https://github.com/yahoo/open_nsfw
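The scale problem behind that appeals point is easy to put numbers on; a back-of-the-envelope with purely assumed volumes:

    # Illustrative only: even a 99.9%-accurate classifier produces a
    # large absolute review queue at platform scale.
    daily_uploads = 100_000_000   # assumed volume, not a real figure
    error_rate = 0.001            # 99.9% accuracy
    print(f"{daily_uploads * error_rate:,.0f} wrong calls per day")  # 100,000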

mallowdram•4mo ago
All of this is so dystopian (flowers/beheadings) it makes Philip K. Dick look like a golden-age Hollywood musical. Are the engineers so unaware of the essential primate forces underneath this that cannot be sanitized from the events? You can unearth our extinction from this value dichotomy.
alasarmas•4mo ago
I mean, yes, my assumption is there exists an image/video normalization algorithm that can be followed by hashing the normalized value. There's a CSAM scanning tool that I believe uses a similar approach.
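The normalize-then-hash idea can be sketched with a toy average hash; PhotoDNA works roughly in this spirit, though with far more robust transforms. This sketch assumes Pillow is installed and is not the actual algorithm:

    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        # Normalize: grayscale plus aggressive downscale, so re-encodes
        # and small edits map to similar fingerprints.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | int(p > avg)
        return bits  # 64-bit fingerprint for an 8x8 grid

    def hamming(a: int, b: int) -> int:
        # Near-duplicate images yield hashes with a small Hamming distance.
        return bin(a ^ b).count("1")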
michaelt•4mo ago
I know for certain it's whatever you care to contract for, but rotation between tasks is common.

A lot of these suppliers provide on-demand workers - if you need 40 man-hours of work on a one-off task, they can put 8 people on it and get you results within 5 hours.

On the other hand, if you want the same workers every time, it can be arranged. If you want a fixed number of workers on an agreed-upon shift pattern, they can do that too.

Even when there is a rotation, the most undesirable tasks often pay a few bucks extra per hour, so I wouldn't be surprised if there were some people who opted to stay on the worst jobs for a full shift.

throwaway219450•4mo ago
Having tried both strategies: unless your task is brain-dead simple and/or you have a way to cheaply and deterministically validate the labels, always pay to retain the team.

Even if you can afford only a couple of people a month and it takes 5x as long, do it. It's much easier to deal with high-quality data than to firefight large quantities of slop. Your annotators will get faster and more accurate over time. And don't underestimate the time it takes to review thousands of labels. Even if you get results in 5 hours, someone has to check whether they're any good. You might find that your bottleneck is the review process. Most shops can implement a QA layer for you, but not requesting it upfront is a trap for young players.
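One cheap piece of such a QA layer is an agreement check: collect a few labels per item and escalate low-consensus items to a reviewer. A minimal sketch, with an invented threshold and labels:

    from collections import Counter

    def consensus(labels):
        # Majority label plus the agreement rate across annotators.
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        return label, votes / len(labels)

    label, agreement = consensus(["spam", "spam", "not_spam"])
    if agreement < 0.8:  # 2/3 agreement here, so this item gets flagged
        print(f"'{label}' at {agreement:.0%} agreement: send to reviewer")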

kilroy123•4mo ago
Stupid question... How can we build on these models without the humans doing all this work?

Even theoretically.

a3w•4mo ago
AI means "actual Indians", did we not learn that from the initial OpenAI GPT 3.0 training? It made it to HN.
wslh•4mo ago
It seems like déjà vu of the previous Amazon Mechanical Turk[1] discussions[2], but with AI.

[1] https://www.mturk.com/

[2] https://tinyurl.com/4r2p39v3

yanis_t•4mo ago
From my shallow understanding, it seems that human training is involved heavily in the post-training/fine-tuning stage, after the base model has already been solidified.

In that case, how much of the model's notion of truthiness (what it accepts as right or wrong) is shaped by human beings at this stage, versus being sealed into the base model itself, i.e., truthiness deduced by the training method as part of its world model?

oefrha•4mo ago
> [job] … has come at a personal cost.

Congratulations, you just described most jobs. And many backbreaking laborers make about the same or less, even in the U.S., not to mention the rest of the world.

parineum•4mo ago
Can you believe that companies would ask people to do things they normally wouldn't in exchange for money!?

These types of articles always have an elitist view of the workers hired. That's a big source of the right (in the US) despising the left. The left don't say it directly, but when they talk about how shitty their town is and how the job they have is exploitative, there's an implicit judgment of the people who live/work there.

mentalgear•4mo ago
In many ways, "AI" is just another form of exploiting the poor to make the rich even wealthier. A form of digital colonialism.
onlinehost•4mo ago
I'm a contractor for one of these companies. It pays okay ($45+/hour) if you can pass the qualifications for your area of expertise, but the work isn't steady and communication is non-existent. The coding qualifications I did were difficult FAANG algorithm-analysis questions. The work has definitely gotten harder over the last year; we're often told we need to come up with Masters/PhD-level work, or problems that someone with 5+ years of experience in a field would have difficulty solving. I wish I had a regular job, but I live in rural North Carolina and remote work is hard to come by.
dfxm12•4mo ago
Is something stronger than your wish to get a regular job tying you to where you currently live?
SamoyedFurFluff•4mo ago
I just want to note that asking this question implies an openness about one's personal affairs that may not be appropriate in an anonymous, public setting. A person offering context and insight on a topic is not necessarily extending an invitation to ask for more personal context and insight.
dfxm12•4mo ago
I understand it's personal, but I also recognize they went out of their way to bring it up. Some people, including me, are more willing to discuss things anonymously because it adds a layer of impersonality. This is just a discussion board. If OP doesn't answer, that's ok. I don't ever think I'm entitled to a response.
tossandthrow•4mo ago
It is a reasonable question that also emphasizes the composite cost of decisions.

Personally I would love to live in a more rural place, but until I am self sufficient enough, this is not an opportunity I am willing to take.

bapak•4mo ago
This is like shouting "I am upset" on Twitter and getting more upset at people asking why.

If you don't want people to ask, don't mention it.

fakedang•4mo ago
Reminds me of that South Park episode: "We want our privacy!!"
johnnyanmac•4mo ago
Is it that bad? The person can simply not answer, or keep it vague with "I have family here" or "I was raised here". They were the ones who decided to mention their state.
onlinehost•4mo ago
I only started seriously looking for work again about a month ago. I'd like to stay in this area for a few reasons, but I would relocate if necessary. I worked remotely from 2015 until a layoff in late 2023, and this was the first thing I came across after that. It was okay for a while and actually pretty interesting at first, but the hours aren't reliable and there doesn't seem to be much opportunity for getting promoted.
lelanthran•4mo ago
I wouldn't mind this work at that pay, being particularly strong in leetcode and in CS itself.

How do I join?

ics•4mo ago
Look up Mercor, DataAnnotation.tech, and Outlier. You create a profile, upload a resume, and do some required tasks for each job posting they have. It may involve a combination of interviewing with an AI, doing a few trial tasks, and submitting a portfolio or Github profile.
mattgreenrocks•4mo ago
Gotta love how DataAnnotation has been blanketing Reddit with ads for "remote coding jobs," clearly trading on the ambiguity of "coding."
estimator7292•4mo ago
About 75% of the job postings I see on Indeed and LinkedIn are for one of these places
wutangson1•4mo ago
hmm, this feels like ScaleAI
wdr1•4mo ago
> It pays okay ($45+/hour)

For reference, the median hourly wage is $27/hour.

https://nationalequityatlas.org/indicators/Wages_Median

onlinehost•4mo ago
Yeah the hourly pay can be pretty good but I think what bothers most people is the unpredictable work availability. It can be great for weeks or longer, then suddenly it isn't, and not really any communication about when/if the projects will return. Overall I'm happy I found the gig but it isn't reliable full time income.
apparent•4mo ago
The attractiveness of different wages really depends on what the job involves (working in the hot sun versus in an air-conditioned room), whether hours are flexible, and whether you have to spend much time commuting to and from work. It sounds like this is pretty good on the intangibles, so it really just comes down to whether the $/hr tradeoff makes sense.
kulahan•4mo ago
Weird thing to see downvoted. I once dropped my salary by $50k to maintain a better work-life balance.
danaris•4mo ago
Except that lack of communication and reliability is itself an intangible, and onlinehost says this job is bad on that front.
shdwbnndvpn•4mo ago
How often do you encounter difficult content, like gore, violence, hate, etc.? I would think prompts would keep that out of responses; is that naive of me?
aleph_minus_one•4mo ago
> How often do you encounter difficult content, like gore, violence, hate, etc.?

Honest question: of course everybody would prefer to work with "lovely" stuff, but I really have difficulty understanding what people find so hard about jobs where you encounter such content on a screen (the same holds for moderation jobs).

I would claim that I have seen the internet, and I guess many people of my generation have, too (just to be insanely clear: of course not the kind of stuff that is hardcore criminal in basically all jurisdictions worldwide - I don't want to get more explicit here).

I wouldn't say I am blunted, but I do think I could handle this stuff without any serious problems as part of my job. In terms of emotional comfort, I'd compare it to being a toilet cleaner who sometimes has to clean very filthy toilets - just an ordinary job that some people in society have to do.

zenmac•4mo ago
>The work has definitely gotten harder over the last year and often says we need to come up with Masters/PhD level work or problems that someone with 5+ years of experience in a field would have difficulty solving.

Many experts are holding out, and I don't blame them. Why would you want to train AI to replace your job?

brookst•4mo ago
For the paycheck? For many people, ideological concerns and years-out possible downsides are less important than putting food on the table.
zenmac•4mo ago
That would be great in an idealistic world where the establishment were not building a control grid out of surveillance capitalism, but rather using the technology to benefit the analog world. In the current geopolitical climate, the question is: does one want to get paid to build one's own digital prison?

The only training experts should be doing is for models that are self-hosted or run through a community of people one trusts! Currently none of the big corps qualifies, and I'm not sure the structure of a big corp (with its legal personhood) is capable of creating anything beneficial in the long run.

Why should the big companies benefit from your expertise to centralize their control?

pydry•4mo ago
Because while the fever dreams of capitalists do not always pan out, you do always need a paycheck to make rent.
throwawaysleep•4mo ago
The alternative is someone else does it, and now you have neither the money nor the AI training.

It has never been a successful strategy to try and fight new technology. Never.

thwarted•4mo ago
Jeez, the reading comprehension in the other replies is really bad. The "Why would you…" sentence is meant to support the observation that many experts are holding out and have no need to be involved with this training, not meant to ask why people like to get paid.
BuckRogers•4mo ago
You may want to just find something else to do. The industry is not going to get any better going forward anyway. I'm a full-time web developer that works from home, but I'm joining the pipefitters union to do HVAC work. I need the life insurance, the health insurance, the better pay, the 401k, the 1.5 to 2X overtime pay, and the pension credits. Right now I'm only paid cash.

I'm midcareer and this industry doesn't want people like me. I'm a very reliable worker and have been for decades, but I am American, and worse yet I'm white, have sex with a woman, and I expected a decent wage out of my chosen career. But it never really happened. I was always either low on pay or low on benefits. If you ever do acquire great pay and great benefits, you're at the top of their spreadsheet to cut. And you're never getting younger. They can always bring in someone who will work for less, either from school or overseas. At my company, someone left who worked in Michigan, and they're trying to replace him with someone from Mexico City. Already most of our coworkers are in India.

It sounds like you're in a similar situation. Other types of work can be good too. It's nice to move around a little bit every day. Give the industry what they want. Let them have their cheap labor. They don't want reliable employees anyway.
minhaz23•4mo ago
Curious about attempting something like this in my area as well, since I'm remote. Are you doing both, or does one have to give way to the other eventually?

Also, I'm seeing the same trend as you at my company: roles replaced overseas. While people only focus on AI taking the jobs, I think this is the more sinister thing happening quietly (by that I mean not getting much news coverage).

BuckRogers•4mo ago
Not surprised to hear that it’s the trend. It’s been going on for quite some time. I used to work for a very large Canadian multinational and HR told me they only hire US/Canadian lead developers. The rest were to be from Bulgaria. This was 10 years ago.

I'm in progress on all of this, but I'm offering my services to my current employer through my LLC for 20 hours a week at 3X the hourly rate of my old salary. Take it or leave it. They are losing their leverage over me with this move. I no longer need them; they can't put me in the streets.

So I'm not entirely leaving the industry, but I will take any work at or above the market rate. High rates mean less waste of my time, as my time is more limited now that I'm starting a 2nd career.

For doing both: there's no abusive overtime like in software, because it's double-time pay, which puts you at what would be a pay rate of $240,000 a year. No one wastes your time at that rate. You actually want overtime when it's fairly compensated like that. You can do both.

It’s sad when you work towards something your entire life, both in school and professionally. And you’ve never done anything wrong. We played by the rules of our society, and our lives were stolen from us. As Steve Bannon famously said once, these American workers deserve reparations. If the situation is ever corrected, I don’t think it would be too hard to jump back in at that point full-time.

Discordian93•4mo ago
Same here. I'd love to get a full-time coding job even if it meant a pay cut in hourly terms, but everything in my area pays much, much less, and I also have a hard time even getting interviews. Guess I'll try to apply to this kind of role, but full time; I think Amazon, Mistral, and xAI are hiring.
simianwords•4mo ago
Their work doesn't seem that bad. This article tries really hard to portray a simple freelance desk job as somehow literally exploitation.

Lots of people would do anything to get such work.

paczki•4mo ago
To be honest, this job has changed my entire life. I don't exactly work with Google but nonetheless it's the same job being discussed. Nothing really egregious has happened to me in the months that I've been at it, other than only having 4 hours to fact check and verify a huge amount of information for one job, and it just wasn't enough time for me so I didn't get paid. But that was once out of hundreds? thousands? of tasks.

Unfortunately, I decided to take software engineering more seriously and try to make it my career, and then the entire market nosedived, with no signs of recovering anytime soon. Breaking into this market has more or less been impossible for a junior, and dare I say: a junior in their mid-30s. At least within this job I do get to work with code every so often, and I get to do it from home, which is a bonus.

It's inconsistent, so I'm still learning and looking for software work, but in the meantime it's been incredible.

hliyan•4mo ago
At least a few of these anecdotes are worrying:

> “At first they told [me]: ‘Don’t worry about time – it’s quality versus quantity,’” she said.

> But before long, she was pulled up for taking too much time to complete her tasks. “I was trying to get things right and really understand and learn it, [but] was getting hounded by leaders [asking], ‘Why aren’t you getting this done? You’ve been working on this for an hour.’”

And:

> Dinika said he’s seen this pattern time and again where safety is only prioritized until it slows the race for market dominance. Human workers are often left to clean up the mess after a half-finished system is released. “Speed eclipses ethics,” he said. “The AI safety promise collapses the moment safety threatens profit.”

Finally:

> One work day, her task was to enter details on chemotherapy options for bladder cancer, which haunted her because she wasn’t an expert on the subject.

lostdog•4mo ago
Yeah, you can see this with Google's search results too. They're trying to improve on some internal metric, but the metric was clearly generated from ratings by people ignorant of the topics. And so the search results get worse, but appear better internally.

Great to see that they have not learned from this experience, and are repeating the mistake with Gemini.

mallowdram•4mo ago
How is this not Quest Diagnostics slipping into Theranos territory, buttressed by a hidden factory of typists?

This reminds me of the early voice-to-text start-ups in the 00's that had these miraculous demos that required people in call centers to type it all up and pretend it was the machine.

cjbarber•4mo ago
I previously made a list on twitter of some data labeling startups that work with foundation model companies.[1] Here's the RLHF provider section:

RLHF providers:

1. Surge. $1b+ revenue bootstrapped. DataAnnotation is the worker-side (you might've seen their ads), also TaskUp and Gethybrid.

2. Scale. The most well known. Remotasks and Outlier are the worker-side

3. Invisible. Started as a kind of managed VA service.

4. Mercor. Started mostly as a way to hire remote devs I think.

5. Handshake AI. Handshake is a college hiring network. This is a spinout

6. Pareto

7. Prolific

8. Toloka

9. Turing

10. Sepal AI. The team is ex-Turing

11. Datacurve. Coding data.

12. Snorkel. Started as a software platform for data labeling. Offers some data as a service now.

13. Micro1. Also started as a way to hire remote contractor devs

[1]: https://x.com/chrisbarber/status/1965096585555272072

echelon•4mo ago
This is great!

Are there companies that focus on labeling of inputs rather than RLHF of outputs?

cjbarber•4mo ago
Yes, there are quite a few that do that. Appen, iMerit, TELUS, etc. Also Scale AI started focused on input annotation I think for self driving.
skywhopper•4mo ago
This definitely explains why Google's AI search results are so bad at what they purport to do.
luke-stanley•4mo ago
It's strange that the Guardian mentions OpenAI's "O3" model and not GPT-5. Maybe they think o3 is still SOTA, but they should at least name it correctly, in lowercase, as OpenAI does.
back2dafucha•4mo ago
Diminishing returns is an ugly business, and that's obviously where we are at: the end, not the beginning, of LLM "innovation".

Any technology that creates "sisyphean" tasks is not worth anyone's time. That includes LLMs and "Big Data". The "herculean effort" that never ends is the proof in the pudding. The tech doesn't work.

It's like using machine learning for self-driving instead of having an actual working algorithm. You're bust.

sjfaljf•4mo ago
How convenient: throw the economy into shambles, coerce professionals into labeling labor in an effort to make humanity obsolete. Will it work?
agigao•4mo ago
Isn't this misleading, to say the least?
throwawaysleep•4mo ago
I work for a few of these during meetings and such. Some are so picky about getting everything done quickly that I can’t believe the data is very valuable.