Greatest irony of the AI age: Humans hired to clean AI slop

https://www.sify.com/ai-analytics/greatest-irony-of-the-ai-age-humans-being-increasingly-hired-to-clean-ai-slop/
72•wahvinci•4h ago

Comments

ares623•3h ago
I for one am super excited for what my kids, and the other children they grow up with, will do in their future careers! I am so proud and cheer this future on, it can’t come soon enough! This is software’s true purpose.
notachatbot123•2m ago
The article has a strong focus on deceptive media used on social media to capture viewers' attention. It makes me sad, and glad that my kids and family grew up before all this insanity of psychological abuse.
onion2k•3h ago
> Someone tried to generate a retro hip-hop album cover image with AI, but the text is all nonsense, and humans would have to be hired to clean that AI slop

In about two years we've gone from "AI just generates rubbish where the text should be" to "AI spells things pretty wrong." This is largely down to generating the whole image, text included, in one pass. Using a model like SDXL with a LORA like FOOOCUS to do inpainting on an input image with a very rough approximation of the right text (added via MS Paint), you can get a pretty much perfect result. Give it another couple of years and the text generation will be spot on.

So yes, right now we need a human to either use the AI well, or to fix it afterwards. That's how technology always goes - something is invented, it's not perfect, humans need to fix the outputs, but eventually the human input diminishes to nothing.
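
A minimal sketch of that inpainting workflow, assuming the Hugging Face diffusers API (the checkpoint ID is a published SDXL inpainting model; the file names, prompt, and strength value are hypothetical):

    # Fix garbled lettering by inpainting over a rough, hand-painted
    # approximation of the desired text (e.g. added in MS Paint).
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = load_image("album_cover_rough_text.png")  # rough text already painted in
    mask = load_image("text_region_mask.png")         # white where the text sits

    result = pipe(
        prompt="retro hip-hop album cover, clean bold lettering",
        image=image,
        mask_image=mask,
        strength=0.6,  # moderate strength keeps the rough text as guidance
    ).images[0]
    result.save("album_cover_clean_text.png")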

zdragnar•2h ago
> That's how technology always goes

This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.

This new approach of machine-learning content generation will either keep developing, or it will join everything else in the history of AI by hitting a point where returns diminish to zero.

selalipop•2h ago
But their comment is about 2 years out of date, and AI image gen has got exponentially better at text than when the models and LoRAs they mentioned were SOTA.

I agree we probably won't magically scale current techniques to AGI, but I also think the local maximum for creative output is going to be high enough that it changes how we approach it, the way computers changed how we approach knowledge work.

That's why I focus on it at least.

onion2k•2h ago
> This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.

You're talking about the progress of technology. I'm talking about how humans use technology in its early stages. They're not mutually exclusive.

vunderba•2h ago
Minor correction: Fooocus [1] isn't a LoRA - it's a Gradio-based frontend (in the same vein as Automatic1111, Forge, etc.) for image generation.

And most SOTA models (Imagen, Qwen 20b, etc.) at this point can already handle a fair amount of text in a single T2I generation. Flux Dev, provided you're willing to roll a couple of gens, can do it as well.

[1] https://github.com/lllyasviel/Fooocus

wtcactus•2h ago
The first thing that came to mind when I started seeing news about companies needing developers to clean up AI code was the part of Charlie and the Chocolate Factory where Charlie's father is fired from the toothpaste factory because they bought a new machine to produce the toothpaste, but then they re-hire him at a higher salary because the machine keeps breaking and they need someone to fix it.

AI (at least this form of AI) is not going to take our jobs away and leave us all idle and poor, just like the milling machine or the plough didn't take people's jobs away and make everyone poor. It will enable us to do even greater things.

riffraff•2h ago
Well, sometimes innovation does destroy jobs, and people have to adapt to new ones.

The plough didn't make everyone poor, but people working in agriculture these days are a tiny percentage of the population compared to the majority 150 years ago.

(I don't think LLMs are like that, tho).

Touching on this topic, I cannot recommend enough "The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger" [0], which (among other things) illustrates the story of dockworkers: there were entire towns dedicated to loading ships.

But the number of people employed in that area has declined by 90% over the last 60 years, while shipping has grown by orders of magnitude. New port cities arose, and old ones died. One needs to accept inevitable change sometimes.

[0] https://en.wikipedia.org/wiki/The_Box_(Levinson_book)

visarga•57m ago
By the same logic, the number of people working in transportation around the time the Ford Model T was introduced should have diminished 100 years later. It did NOT: we went from about 3.2 million in 1910 (~8% of the workforce) to 6–16 million in 2023 (~4–10%, depending on definition). That is the effect of a century of transportation development.

Sometimes demand scales; maybe demand for food is less elastic. Programming has been automating itself with each new language and library for 70 years, and here we are with so many software devs. Demand scaled up as a result of automation.

Towaway69•2h ago
> it will enable us to do even greater things.

Just as gunpowder enabled greater things. I agree with you; it's just that humans have shown, time after time, an ability to first use innovation to make lives miserable for their fellow humans.

pydry•1h ago
> it will enable us to do even greater things.

It doesn't do this.

Dwedit•2h ago
> creating this garbage consumes staggering amounts of water and electricity, contributing to emissions that harm the planet

This is highly dependent on which model is being used and what hardware it's running on. In particular, some older article claimed that the energy used to generate an image was equivalent to charging a mobile phone, but the actual energy required for a single image generation (SDXL, 25 steps) is about 35 seconds of running an 80 W GPU.

samplatt•2h ago
35 seconds @ 80W is ~210 mAh, so definitely a lot less than the ~4000+ mAh in today's phone batteries.
serial_dev•58m ago
Don’t you ignore the energy used to train the models? I don’t know how much is that “per image”, but it should be included (and if it shouldn’t, we should know why it is negligible).

I’m not sure it will be as high as a full charge of a phone, but it’s incomplete without the resources needed for collecting data and training the model.

red369•56m ago
I'm going to expose my ignorance here, but I thought mAh/Ah was not a good measure for comparing storage of quite different devices, because it doesn't take into account voltage. This is fine for comparing Li-ion devices, because they use the same voltage, but I understood that using watt-hours was therefore more appropriate for apples-to-apples comparisons for devices with different wattages.

Am I missing something? Do the CPUs/GPUs/APUs doing this calculation on servers/PCs run at the same voltage as mobile devices?

Gigachad•41m ago
No, you are completely right. mAh is a unit of charge (current times time), not energy.

The proper unit is watt-hours.
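
For concreteness, converting the figures above into watt-hours (the 3.7 V nominal cell voltage is an assumption):

    # Energy for one SDXL generation: 80 W GPU running for 35 s.
    gen_wh = 80 * 35 / 3600           # ~0.78 Wh

    # The same energy expressed as charge at an assumed 3.7 V nominal
    # cell voltage -- this is where the ~210 mAh figure comes from.
    gen_mah = gen_wh / 3.7 * 1000     # ~210 mAh

    # A ~4000 mAh phone battery at 3.7 V stores ~14.8 Wh, so one image
    # costs roughly 1/19 of a full phone charge.
    phone_wh = 4000 / 1000 * 3.7
    print(f"{gen_wh:.2f} Wh per image vs {phone_wh:.1f} Wh per full charge")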

th0ma5•1h ago
An ideal measurement would be to convert your utility's water usage to water-per-kWh and then relate that to this, perhaps as a tokens-per-gram-of-water measurement. Of course it would be small, but it should be calculable and then directly comparable across these models. I suspect that, due to DC power distribution, they may be more efficient in the data center. You could get more specific about the water too: recycled vs. evaporated, etc.
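
A back-of-envelope sketch of that tokens-per-gram idea; every figure here is an assumption (a commonly cited ~1.8 L/kWh data-center water usage effectiveness, the 0.3 Wh/query estimate cited later in the thread, ~500 tokens per query):

    # Hypothetical figures throughout -- this only shows the shape of the math.
    wue_l_per_kwh = 1.8      # assumed water usage effectiveness of the data center
    query_kwh = 0.0003       # 0.3 Wh per query (Epoch AI estimate, cited below)
    tokens_per_query = 500   # assumed average response length

    water_g_per_query = query_kwh * wue_l_per_kwh * 1000      # ~0.54 g
    water_g_per_token = water_g_per_query / tokens_per_query  # ~1 mg/token
    print(f"~{water_g_per_query:.2f} g of water per query, "
          f"~{1 / water_g_per_token:.0f} tokens per gram")
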
a2128•1h ago
Nobody's running SDXL on an 80W GPU when they're talking about generating images, and you also have to take into account training and developing SDXL or the relevant model. AI companies are spending a lot of resources on training, trying various experiments, and lately they've become a lot more secretive when it comes to reporting climate impact or even any details about their models (how big is ChatGPT's image generation model compared to SDXL? how many image models do they even have?)
numpad0•1h ago
IIRC some of the research you're taking estimates from not only used cherry-picked figures for AI image generators, but also massively underestimated the man-hour costs of human artists, by using commission prices and market labor rates without the requisite corroboration work before choosing those values.

Their napkin math went something like: human artists charge $50 or so per piece, which is, let's say, $200/hr skill, which means each piece can't take longer than 15 minutes; therefore the values used for AI must add up to less than 15 workstation minutes, or something like that.

And that math is equally broken for both sides: SDXL users easily spend hours rolling the dice a hundred times without a usable image, and likewise, artists easily spend a day or two on an interesting request that may or may not come with free chocolates.

So those estimates are not only biased, but basically entirely useless.

visarga•1h ago
I did a little investigation. It turns out that GPT-4's training consumed as much energy as 300 cars over their lifetimes, which comes to about 50 GWh. Not really that much; the families on a short street could burn that kind of energy. As for inference, an hour of GPT-4 usage consumes less energy than an hour of watching Netflix.

If you compare datacenter energy usage to everything else, it amounts to 5%. Economizing heavily on LLMs won't save the planet.

lelanthran•56m ago
> As for inference, GPT-4 usage for an hour consumes less than watching Netflix for an hour.

This can't be correct, I'd like to see how this was measured.

Running a GPU at full throttle for one hour uses less power than serving data for one hour?

I'm very sceptical.

visarga•46m ago
An hour of Netflix streaming consumes approximately 77 Wh, per IEA analysis showing that streaming a Netflix video in 2019 typically consumed around 0.077 kWh of electricity per hour [1]. An hour of active GPT-4 chatting (assuming 20 queries at 0.3 Wh each) consumes roughly 6 Wh, based on Epoch AI's estimate that a single GPT-4o query consumes approximately 0.3 watt-hours [2]. That makes Netflix about 13 times more energy-intensive than LLM usage.

[1] https://www.iea.org/commentaries/the-carbon-footprint-of-str...

[2] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
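
Reproducing that arithmetic with the cited figures (the 20 queries per hour is the assumption doing the work):

    # Netflix vs. chat energy per hour, using the figures cited above.
    netflix_wh_per_hour = 77        # IEA estimate for 2019 streaming [1]
    query_wh = 0.3                  # Epoch AI per-query estimate for GPT-4o [2]
    queries_per_hour = 20           # assumed "active chatting" rate

    chat_wh_per_hour = query_wh * queries_per_hour    # 6 Wh
    ratio = netflix_wh_per_hour / chat_wh_per_hour    # ~12.8
    print(f"Netflix uses ~{ratio:.0f}x the energy of an hour of chatting")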

lelanthran•30m ago
Jesus Christ, what a poor take on those numbers! It's possible to have a more wrong interpretation, but not by much.

The Netflix consumption takes into account everything [1]; the numbers for AI are only the GPU power consumption, not even including the user's phone/laptop.

IOW, you are comparing the power cost of using a datacenter + global network + 55" TV to the cost of a single 1-shot query (i.e. a tiny prompt) on the GPU only.

Once again, I am going to say that the power cost of serving up a stored chunk of data is going to be less than the power cost of first running a GPU and then serving up that chunk.

==================

[1] Which (in addition to the consumption by Netflix data centers) includes the network equipment in between and the computer/TV on the user's end. Consider that the user is watching Netflix on a TV (min 100 W, but more for a 60" large screen).

lelanthran•59m ago
> the actual energy required for a single image generation (SDXL, 25 steps) is about 35 seconds of running a 80W GPU.

And just how many people manage to 1-shot the image?

There are maybe 5 to 20 images generated before the user is happy.

intellectronica•2h ago
How is this ironic? Carelessly AI-generated output (what we call "slop") is precisely that mediocre average you get before investing more in refining it through iteration. The problem isn't that additional work is needed, but that in many cases it is assumed that no additional work is needed and the first generation from a vague prompt is good enough.
kataklasm•2h ago
The irony stems from the fact that workers are fired after being 'replaced' by AI, only to then be re-hired to clean up the slop, thus maximizing costs to the business!
ggm•2h ago
The relative cost of labour will differ. One was priced at subject-matter-expert rates; the other will aim for Mechanical Turk rates.

When the big lawsuits hit, they'll roll back.

adventured•1h ago
It'll be a large cost reduction over time. The median software developer in the US was at around $112,000 in salary, plus benefits on top of that (healthcare, stock compensation), prior to the job plunge. Call it a minimum of $130,000 total at the median.

They'll hire those people back at half their total compensation, with no stock and far fewer benefits, to clean up AI slop. And/or just contract it overseas at ~1/3 the former total cost.

Another ten years from now the AI systems will have improved drastically, reducing the slop factor. There's no scenario where it goes back to how it was, that era is over. And the cost will decline substantially versus the peak for US developers.

dns_snek•1h ago
Based on... what? The more you try to "reduce costs" by letting LLMs take the reins, the more slop will eventually have to be cleaned up by senior developers. The mess will get exponentially bigger and harder to resolve.

Because I think it won't just be a linear relationship. If you let 1 vibe coder replace a team of 10, you'll need a lot more than 10 people to clean it up and maintain it going forward when they hit the wall.

Personally I'm looking forward to the news stories about major companies collapsing under the weight of their LLM-induced tech debt.

lelanthran•53m ago
Cleaning up code requires more skill than creating it (see Kernighan's quote).

Why does that fact stop being true when the code is created by AI?

fifilura•1h ago
Like being a middle manager for employees that don't learn :)
sltr•1h ago
> AI was supposed to replace humans

There are really two observations here: 1. AI hasn't commoditized skilled labor. 2. AI is diluting/degrading media culture.

For the first, I'm waiting for more data, e.g. from the BLS. For the second, I think a new category of media has emerged. It lands somewhere near chiptune and deep-fried memes.

mschuster91•50m ago
> There are really two observations here: 1. AI hasn't commoditized skilled labor.

The problem is, actually skilled labor - think of translators, designers, copywriters - is still obviously needed, but at an intermediate/senior level. These people won't be replaced for a few years to come, and thus won't show up in labor board statistics.

What is getting replaced (or rather, positions not being refilled as the existing people move up the career ladder) is the bottom of the barrel: interns and juniors, because that level of workmanship can actually be done by AI in quite a few cases, despite it also being skilled work. But this kind of replacement doesn't show up in any statistics except maybe the number of open positions - and a change in that number can also credibly be attributed to economic uncertainty thanks to tariffs, the Russian invasion, people holding on to their money and foregoing spending, yadda yadda.

Obviously this is going to completely wreck the entire media/creative economy in a few years: when the entry side of the career funnel has dried up "thanks" to AI, suddenly there will not be any interns that evolve into juniors, no juniors that evolve into intermediates, no intermediates that evolve into seniors... and all that will be left at many an ad/media agency are sales teams and a bunch of ghouls in suits who last touched Photoshop a decade and a half ago.

mmmllm•1h ago
The greatest irony is that the only comment on that article is AI generated
grey-area•1h ago
We have not yet entered the AI age, though I believe we will.

LLMs are not AI. Machine learning is more useful. Perhaps they will evolve or perhaps they will prove a dead end.

bheadmaster•1h ago
> LLMs are not AI. Machine learning is more useful.

LLMs are a particular application of machine learning, and as such they both benefit from and contribute to general machine learning techniques.

I agree that LLMs are not the AI we all imagine, but the fact that they broke a huge milestone is a big deal - natural language used to be one of the metrics of AGI!

I believe it is only a matter of time until we get to multi-sensory, self-modifying large models which can both understand and learn from all five human senses, and maybe even some senses we have no access to.

pyzhianov•26m ago
> natural language used to be one of the metrics of AGI

What if we chose the wrong metric there?

1718627440•11m ago
I don't think we have. Semantic symbolic computation on natural language still seems like a great way to bring reasoning to computers, but LLMs aren't doing that.
anonzzzies•53m ago
We keep moving the goalposts...
lordnacho•1h ago
It's not so strange that e-commerce is the first thing that AI has visibly altered. Most "buy this thing" sites really just have one proposition at their core. The presentation is incidental. You can't have a website looking like it's still 2003, but you also don't really care what your 2025 shop front looks like. Your ads are there to draw attention, not to be works of art.

What does AI do, at its heart? It is literally trained to make things that can pass for what's ordinary. What's the best way to do that, normally? Make a bland thing that is straight down the middle of the road. Boring music, boring pictures, boring writing.

Now there are still some issues with common sense, due to the models lacking certain qualities that I'm sure experts are working on. Things like people with 8 fingers, the lack of a model of physics, and so on. But we're already at a place where you could easily fail to spot a fake, especially when not paying attention.

So where does that leave us? AI is great at producing scaffolding. Lorem Ipsum, but for everything.

Humans come in to add a bit of agency. You have to take some risk when you're producing something; decisions have to be made. Taste, one might call it. And someone needs to be responsible for the decisions. Part of that is cleaning up obvious errors, but part of it is also customizing the skeleton so that it does what you want.

mikepurvis•17m ago
Brad Pitt as Rusty: "Don't use seven words when four will do. Don't shift your weight, look always at your mark but don't stare, be specific but not memorable, be funny but don't make him laugh. He's got to like you then forget you the moment you've left his side. And for God's sake, whatever you do, don't, under any circumstances..."

(from Ocean's Eleven)

SuperHeavy256•41m ago
Man complains about AI giving him a starting point to do his work from.
alexander2002•38m ago
Someone can drop a sick blog post called LoremIpsum.ai
rich_sasha•5m ago
Is that so ironic? Think of humans in factories fishing out faulty items, where formerly they would perhaps be the artisans that made the product in the first place.
