frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

How to Firefox

https://kau.sh/blog/how-to-firefox/
87•Vinnl•1h ago•35 comments

Complete silence is always hallucinated as "ترجمة نانسي قنقر" ("Translation by Nancy Qunqar") in Arabic

https://github.com/openai/whisper/discussions/2608
363•edent•6h ago•146 comments

Global hack on Microsoft Sharepoint hits U.S., state agencies, researchers say

https://www.washingtonpost.com/technology/2025/07/20/microsoft-sharepoint-hack/
675•spenvo•1d ago•327 comments

Uv: Running a script with dependencies

https://docs.astral.sh/uv/guides/scripts/#running-a-script-with-dependencies
365•Bluestein•12h ago•103 comments

The .a File Is a Relic: Why Static Archives Were a Bad Idea All Along

https://medium.com/@eyal.itkin/the-a-file-is-a-relic-why-static-archives-were-a-bad-idea-all-along-8cd1cf6310c5
32•eyalitki•3d ago•33 comments

An unprecedented window into how diseases take hold years before symptoms appear

https://www.bloomberg.com/news/articles/2025-07-18/what-scientists-learned-scanning-the-bodies-of-100-000-brits
88•helsinkiandrew•4d ago•29 comments

Jujutsu for busy devs

https://maddie.wtf/posts/2025-07-21-jujutsu-for-busy-devs
232•Bogdanp•11h ago•283 comments

What went wrong inside recalled Anker PowerCore 10000 power banks?

https://www.lumafield.com/article/what-went-wrong-inside-these-recalled-power-banks
438•walterbell•17h ago•211 comments

Python Audio Processing with Pedalboard

https://lwn.net/Articles/1027814/
32•sohkamyung•3d ago•5 comments

AI comes up with bizarre physics experiments, but they work

https://www.quantamagazine.org/ai-comes-up-with-bizarre-physics-experiments-but-they-work-20250721/
208•pseudolus•10h ago•120 comments

TrackWeight: Turn your MacBook's trackpad into a digital weighing scale

https://github.com/KrishKrosh/TrackWeight
569•wtcactus•21h ago•137 comments

AccountingBench: Evaluating LLMs on real long-horizon business tasks

https://accounting.penrose.com/
484•rickcarlino•19h ago•138 comments

The Hater's Guide to the AI Bubble

https://www.wheresyoured.at/the-haters-gui/
33•lukebennett•1h ago•2 comments

Kapa.ai (YC S23) is hiring a software engineer (EU remote)

https://www.ycombinator.com/companies/kapa-ai/jobs/JPE2ofG-software-engineer-full-stack
1•emil_sorensen•5h ago

Don't bother parsing: Just use images for RAG

https://www.morphik.ai/blog/stop-parsing-docs
275•Adityav369•18h ago•66 comments

Show HN: A rudimentary game engine to build four dimensional VR environments

https://www.brainpaingames.com/Hypershack.html
20•teemur•2d ago•1 comment

NASA's X-59 quiet supersonic aircraft begins taxi tests

https://www.nasa.gov/image-article/nasas-x-59-quiet-supersonic-aircraft-begins-taxi-tests/
91•rbanffy•2d ago•54 comments

How to Migrate from OpenAI to Cerebrium for Cost-Predictable AI Inference

https://ritza.co/articles/migrate-from-openai-to-cerebrium-with-vllm-for-predictable-inference/
23•sixhobbits•4h ago•19 comments

Erlang 28 on GRiSP Nano using only 16 MB

https://www.grisp.org/blog/posts/2025-06-11-grisp-nano-codebeam-sto
170•plainOldText•16h ago•10 comments

'Shameful' CBA hiring Indian ICT workers after firing Australians

https://ia.acs.org.au/article/2025/-shameful--cba-hiring-indian-ict-workers-after-firing-australian.html
94•theteapot•3h ago•50 comments

New records on Wendelstein 7-X

https://www.iter.org/node/20687/new-records-wendelstein-7-x
230•greesil•20h ago•104 comments

Look up macOS system binaries

https://macosbin.com
47•tolerance•3d ago•13 comments

Losing language features: some stories about disjoint unions

https://graydon2.dreamwidth.org/318788.html
95•Bogdanp•3d ago•36 comments

The Game Genie Generation

https://tedium.co/2025/07/21/the-game-genie-generation/
130•coloneltcb•18h ago•56 comments

What will become of the CIA?

https://www.newyorker.com/magazine/2025/07/28/the-mission-the-cia-in-the-21st-century-tim-weiner-book-review
117•Michelangelo11•17h ago•200 comments

I've launched 37 products in 5 years and not doing that again

https://www.indiehackers.com/post/ive-launched-37-products-in-5-years-and-not-doing-that-again-0b66e6e8b3
181•AlexandrBel•23h ago•175 comments

Largest piece of Mars on Earth fetches $5.3M at auction

https://apnews.com/article/mars-rock-meteorite-auction-dinosaur-sothebys-01d7ccfc8dc580ad86f8e97a305fc8fa
3•avonmach•3d ago•0 comments

I know genomes and I didn’t delete my data from 23andMe

https://stevensalzberg.substack.com/p/i-know-genomes-dont-delete-your-dna
67•bookofjoe•17h ago•97 comments

Tokyo's retro shotengai arcades are falling victim to gentrification

https://www.theguardian.com/world/2025/jul/18/cult-of-convenience-how-tokyos-retro-shotengai-arcades-are-falling-victim-to-gentrification
52•pseudolus•3d ago•32 comments

Occasionally USPS sends me pictures of other people's mail

https://the418.substack.com/p/a-bug-in-the-mail
183•shayneo•21h ago•171 comments

AI could have written this: Birth of a classist slur in knowledge work [pdf]

https://advait.org/files/sarkar_2025_ai_shaming.pdf
39•deverton•9h ago

Comments

andsoitis•9h ago
Would love to read, but it seems heavily paywalled, so can't.
deverton•9h ago
The author seems to be hosting the full PDF on their website https://advait.org/files/sarkar_2025_ai_shaming.pdf
tomhow•7h ago
Thanks, we updated the URL!
_vertigo•6h ago
Honestly, AI could have written this.
readthenotes1•6h ago
That TL;DR table at the top looks a lot like what Perplexity provides at the bottom...
kelseyfrog•6h ago
While it would have been a better paper if the author had collaborated with a sociologist, it would also have been less likely to be taken seriously by HN, for the same class anxieties its title is founded on.
miningape•3h ago
Excuse us for expecting evidence and intellectual rigour. :D

I've taken a number of university Sociology courses and from those experiences I came to the opinion that Sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence outside of being a Buzzfeed for academics.

I'm not even talking about slightly more rigorous subjects such as Psychology or Political Science, which modern Sociology uses as a shield for the lack of a feedback mechanism.

Don't get me wrong though, I realise this is an opinion formed from my admittedly limited exposure to Sociology (~3 semesters). It could have also been the university I went to particularly leaned on "grievance airing".

kazinator•6h ago
> In this reading, the increasingly common refrain “AI could have written this” is not so much a pithy taunt, but rather a classist slur, indicative of wounded and anxious privilege. Moreover, it is complicit in the systematic exclusion of underprivileged groups from entering the class of knowledge professionals.

This is the insipid blathering of a woke cretin.

It is in fact widespread reliance of AI that will hinder groups of people from acquiring the skills to be in that class.

The idea that some internet randos commenting "AI could have written that" have gatekeeping power, preventing people from becoming knowledge workers, is preposterous.

The one way in which it is plausible is that the work to which the remark is applied is not in fact written by AI, but its author becomes convinced by the remark that such work could be written by AI, and that author adopts AI as a result. Their newfound habit will subsequently rot their brain, sending them plummeting off the ladder toward the knowledge class. I jest, but only half so.

Actually, let's examine this "systematic exclusion" claim.

Firstly, the basic premise of the article is that "this could have been written by AI" is disparagement. But disparagement is nothing new. If disparagement is intended, all one needs is "this was written by an idiot". I think that "this could have been written by AI" is much softer. In fact, a possible interpretation of it is that the speaker believes in the use of AI, and that it could have been used to save time in producing something of the same quality. Anyway, we've had disparagement in online forums going back to dial up BBSes; it's just a new variant on plain old flaming.

If the remark is disparagement, does it add up to "systematic exclusion of underprivileged groups"?

In forums and social media, people mostly don't care who you are and are responding to the content. If it looks like AI slop, they don't care whether the person behind the pseudonym is a Stanford professor or a German Shepherd, and just turn on their flamethrower.

Let's say that systematic exclusion is happening in spite of commenters not actually targeting disadvantaged groups, but only responding to the content. What that systematic exclusion hypothesis then entails is that posts from underprivileged groups are actually garbage, and therefore attract more disparagement!

So in fact it is the author of this paper who holds a cynical, discriminatory view of underprivileged groups (whoever he imagines them to be, exactly). Underprivileged groups are morons who write garbage that could be written by AI (and thus precisely receive comments to that effect); and, moreover, are so weakly constituted that these discouraging comments prevent them from entering a knowledge professional class (in addition to the main factor, that being their lack of ability).

Someone not a member of an underprivileged group either does not write posts that are reminiscent of AI drivel, and so doesn't attract those comments, or even if he or she does, the negative comments slide right off due to their thicker skin.

Oh really? Some of the thinnest skins in the world come from privilege: for instance, think of the middle-aged man-child who buys an entire social network for billions in order to be able to suppress critical comments about himself.

aaronbrethorst•5h ago
Your argument would've been much better without injecting ca. 2025 US culture war jargon into it.
kristjank•5h ago
The prudence of discussing everything in a cultural vacuum comes with the implication of irrelevance to the cultural climate, which could hardly be further from the truth in this case.
kazinator•5h ago
Sorry, what jargon is that? I may be able to fix it with your help. I'm not in the USA and don't follow US politics or culture enough to be up to 2025 in jargon.
VectorLock•4h ago
"Woke cretin."
kazinator•4h ago
That can't be it. Cretin traces back to the 18th century. Etymonline places woke into the 2010s.
lexicality•2h ago
Doesn't matter when the word was created, in the same way furries use ":3" to signal they're a furry, people now use "woke" as a pejorative to signify that they're a member of the "alt-right". I'd suggest avoiding that word unless that's the group membership you want to be advertising.
xwolfi•2h ago
I like that one
tmtvl•4h ago
I will say it's funny seeing a post which starts off calling someone a 'woke cretin' ending with a thinly veiled take-that at Musk.

I think it may be better to say that the author has an agenda or is co-opting real issues, but I can't think of an elegant way to phrase that.

whstl•4h ago
Or maybe we should give the author the benefit of the doubt and assume he's unhappy with both radical ends of the spectrum, which would be a refreshing take in 2025 to be honest.

I don't really agree with the general argument, though. I don't think painting this as an "AI slop" issue is fair. Online communities are quicker (and quieter!) when dismissing obvious AI slop than when dismissing legitimate discourse that looks like AI, or was cleaned up with AI, or even if it just uses em dashes. Perhaps the one excusable usage is marking content as machine-translated, which of course causes other disadvantages for the poster. But of course that's just one point of view, and communities I don't go to might be 100% different!

nottorp•4h ago
> he's unhappy with both radical ends of the spectrum

Seems to be very hard to realize that in the US. But from the outside, both ends are batshit insane.

stuaxo•6h ago
The state of this headline.
renewiltord•6h ago
This is just like the way some people decided that "Blue Check" should be an insult on Twitter. Occasionally people still say it but almost everyone ignores it. Fads like this are common on the Internet. It's just like any other clique of people: a few people accidentally taste-make as a bunch of replicators simply repeat mindless things over and over again: "slop", "mask-off moment", "enshittification", "ghoulish". Just words that people repeat because other people say them and get likes/upvotes/retweets or whatever.

The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.

People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras". Oh you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know that it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.

It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.

throwawaybob420•5h ago
Sounds like something a blue checker would say. And yes, if you pay for Twitter you're going to get clowned on.

And what the hell is that segue into fedoras? The entire meme of them is because stereotypically clueless individuals took fedoras to be the pinnacle of fashion, while disregarding nearly everything else about not only their outfit, but their bodies.

This entire comment reeks of not actually understanding anything.

TeMPOraL•4h ago
Found that user who memorized KnowYourMeme and thinks they're a scholar of culture now.

Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.

/s

pluto_modadic•4h ago
blue checks are orthogonal - they're more rough approximations of "I bought a cybertruck when musk went full crazy" (and yes, it's a bad look). - judging some blog post for seeming like AI is different.
yhoiseth•6h ago
Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”

I think there is at least some truth to this.

Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?

alisonatwork•4h ago
This latter piece is something I am struggling with.

I have spent 10+ years working on teams that are primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer from a grammatical point of view, but also a lot more florid and pretentious than they actually intended. This is really annoying to read because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than reading their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.

So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.

xwolfi•2h ago
It is normal: you add a layer between the two brains that communicate, and that layer only adds statistical experience to the message.

I write letters to my gf, in English, while English is not our first language. I would never ever put an LLM between us: this would fall flat, remove who we are, be a mess of cultural references, it would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...

LLMs are going to make people as dumb as GPS made them. Except that where reading a map was never a very useful skill, writing what you feel... should be.

dist-epoch•2h ago
I thought about this too. I think the solution is to send both prompt and output, since the output was itself selected by the human from potentially multiple variants.

Prompt: I want to tell you X

AI: Dear sir, as per our previous discussion let's delve into the item at hand...
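A minimal sketch of this send-both idea (the `AssistedMessage` type and its fields are hypothetical, not any existing protocol):

```python
from dataclasses import dataclass

@dataclass
class AssistedMessage:
    prompt: str    # the author's own words: what they actually meant
    rendered: str  # the LLM's polished expansion of that prompt

    def display(self) -> str:
        # Show the polished text, but keep the original note attached
        # so the reader can always fall back to the author's intent.
        return f"{self.rendered}\n---\noriginal note: {self.prompt}"

msg = AssistedMessage(
    prompt="deploy delayed to Friday, config bug",
    rendered="Dear team, owing to a configuration issue, the deployment "
             "has been rescheduled for Friday.",
)
print(msg.display())
```

The receiver reads the short prompt when the rendered version feels like padding, which addresses the decoding overhead described upthread.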

throwaway78665•4h ago
If knowledge work doesn't require knowledge then is it knowledge work?

The main issue that is symptomatic to current AI is that without knowledge (at least at some level) you can't validate the output of AI.

visarga•4h ago
> why should I bother to read it and provide feedback?

I like to discuss a topic with a LLM and generate an article at the end. It is more structured and better worded but still reflects my own ideas. I only post these articles in a private blog I don't pass them as my own writing. But I find this exercise useful to me because I use LLMs as a brainstorming and idea-debugging space.

dist-epoch•2h ago
> If the author didn’t bother to write it, why should I bother to read it

There is an argument that luxury stuff is valuable because typically it's hand made, and in a sense, what you are buying is not the item itself, but the untold hours "wasted" creating that item for your own exclusive use. In a sense "renting a slave" - you have control over another human's time, and this is a power trip.

You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"

satisfice•2h ago
If effort wasn't put into it, then the writing cannot be good, except by accident or theft, or else it is not your writing.

If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.

dist-epoch•2h ago
> If effort wasn’t put into it, then the writing cannot be good

This is what people used to say about photography versus painting.

> pass it off as your own.

This is misleading/fraud and a separate subject than the quality of the writing.

satisfice•6h ago
This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.

Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.

It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.

forgetfreeman•5h ago
Additionally their use of the term "slur" for what is frequently a valid criticism seems questionable.
satisfice•2h ago
It is itself a form of bullying.
mgraczyk•5h ago
I'd like to brag that I got in trouble for saying this to somebody in 2021, before ChatGPT
andrelaszlo•5h ago
I put a chapter of a paper I wrote in 2016 into GPTZero and got the probability breakdown 90% AI, 10% human. I am 100% human, and I wrote it myself, so I guess I'm lucky that I didn't hand it in this year, or I could have gotten accused of cheating?
tough•5h ago
maybe gptzero had your paper in its training data (it being from 2016)?
mgraczyk•5h ago
I wasn't being serious when I said it, I was using it as an insult for bad work
rcxdude•3h ago
That's more an indictment of the accuracy of such tools. Writing in a very 'standard' style like found in papers is going to match well with the LLM predictions, regardless of origin.
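A toy illustration of this point (not how GPTZero or any real detector works; it only shows that formulaic text carries less statistical surprise regardless of who wrote it):

```python
from collections import Counter
import math

def bits_per_char(text: str) -> float:
    """Shannon entropy of the text's character distribution, in bits/char."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Boilerplate "paper voice" vs. idiosyncratic prose
formulaic = "the results show that " * 5
varied = "Quick zephyrs blow, vexing daft Jim; jackdaws love my big sphinx."

# Repetition adds no information: low-surprise text looks "predictable"
# to any statistical model, whether a human or an LLM produced it.
print(bits_per_char(formulaic) < bits_per_char(varied))  # True
```

Detectors score something closer to LLM perplexity than raw entropy, but the failure mode is the same: standard academic style is low-surprise by design, so human-written papers score "AI-like".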
s0teri0s•5h ago
The obvious response is, "Oh, it will."
vanschelven•5h ago
This reads like yet another attempt to pathologize perfectly reasonable criticism as some form of oppression. Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody. People say that when writing lacks originality or depth — not to reinforce some imagined academic caste system. The idea that pointing out bland prose is equivalent to sumptuary laws or racial gatekeeping is intellectual overreach at its finest. Ironically, this entire paper feels like something an AI could have written: full of jargon, light on substance. And no, there’s no original research, just theory stacked on theory.
raincole•4h ago
> Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody.

In AI discussions the relevance of Poe's law is rampant. You can never tell what is parody or what is not.

There was a (former) xAI employee that got fired for advocating the extinction of humanity.

kristjank•5h ago
I don't think we should, as a wider scientific/technical society, care for the opinion of a person who uses epistocratic privilege as a serious term. This stinks to high hell of proving a conclusion by working backwards from it.

The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or a knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work, once you consider how most of the sciences and technological systems depend on a very fragile notion of knowledge preservation and incremental improvements on a system that is intentionally pedantic, to provide a stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining why for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.

If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia and quotes a black student calling discouragement of AI as a cheating device racist.

This seems to me an utter insanity and should not only be ignored, but actively pushed against on the grounds of anti-intellectualism.

randomcarbloke•1h ago
Being a pilot is an epistocratic privilege and they should welcome the input of the less advantaged.
terminalshort•4h ago
Reading this makes me understand why there is a political movement to defund universities.
laurent_du•3h ago
It makes me sick to my heart to think that money is stolen from my pocket to be given to lunatics of this kind.
miningape•3h ago
Overall, this comes across as extremely patronising: to authors by running defence for obviously sub-par work, because their background makes it "impossible" for them to do good work. And to the commenters by assuming mal-intent towards the less privileged that needs to be controlled.

And it's all wrapped in a lovely package of AI apologetics - wonderful.

So, honestly, no. The identity of the author doesn't matter, if it reads like AI slop the author should be grateful I even left an "AI could have written this" comment.

mvdtnz•2h ago
Gosh I wonder why there's a cultural backlash against the "intellectual" elite.
UncleMeat•13m ago
"We have to use AI to achieve class solidarity" is insane to me.

People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?

That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.