Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
210•theblazehen•2d ago•64 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
686•klaussilveira•15h ago•204 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
960•xnx•20h ago•553 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
127•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
65•videotopia•4d ago•3 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
30•kaonwarb•3d ago•26 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
44•jesperordrup•5h ago•23 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
8•matt_d•3d ago•2 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
230•dmpetrov•15h ago•122 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
334•vecti•17h ago•146 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
27•speckx•3d ago•17 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
500•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
384•ostacke•21h ago•97 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
360•aktau•21h ago•183 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
295•eljojo•18h ago•187 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
421•lstoll•21h ago•280 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
67•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
95•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
262•i5heu•18h ago•212 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
38•gmays•10h ago•13 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1074•cdrnsf•1d ago•460 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
61•gfortaine•13h ago•27 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
294•surprisetalk•3d ago•46 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
153•vmatsiiako•20h ago•72 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
14•1vuio0pswjnm7•1h ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
159•SerCe•11h ago•147 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
74•phreda4•14h ago•14 comments

AI could have written this: Birth of a classist slur in knowledge work [pdf]

https://advait.org/files/sarkar_2025_ai_shaming.pdf
41•deverton•6mo ago

Comments

andsoitis•6mo ago
Would love to read, but it seems heavily paywalled, so can't.
deverton•6mo ago
The author seems to be hosting the full PDF on their website https://advait.org/files/sarkar_2025_ai_shaming.pdf
tomhow•6mo ago
Thanks! We updated the URL.
_vertigo•6mo ago
Honestly, AI could have written this.
readthenotes1•6mo ago
That TL;DR table at the top looks a lot like what Perplexity provides at the bottom...
kelseyfrog•6mo ago
While it would have been a better paper if the author had collaborated with a sociologist, it would also have been less likely to be taken seriously by HN, for the same class anxieties its title is founded on.
miningape•6mo ago
Excuse us for expecting evidence and intellectual rigour. :D

I've taken a number of university Sociology courses, and from those experiences I came to the opinion that Sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence outside of being a Buzzfeed for academics.

I'm not even talking about slightly more rigorous subjects such as Psychology or Political Science, which modern Sociology uses as a shield for its own lack of a feedback mechanism.

Don't get me wrong though, I realise this is an opinion formed from my admittedly limited exposure to Sociology (~3 semesters). It could also be that the university I went to leaned particularly hard on "grievance airing".

kelseyfrog•6mo ago
My exposure to Sociology and Psychology at university made me understand that HN's resistance to sociology stems from the discomfort of being confronted with uncomfortable truths. It's easier to discount sociology than deal with these truths. I get it. I used to be that way too.
miningape•6mo ago
Sure, but what evidence is there of that claim? Do you have any falsifiable/empirical studies you can cite?
kelseyfrog•6mo ago
Of course. But my only requirement is that we pre-register what evidence will change your mind. Fair?
miningape•6mo ago
The study should tackle these questions in one form or another:

1. What specific, measurable phenomenon would constitute 'discomfort with uncomfortable truths' versus legitimate methodological concerns?

2. How would we distinguish between the two empirically?

I'd expect a study or numerical analysis with at least n > 1000 and p < 0.05. The study should ideally include controls so that any correlation it finds indicates strong causation. The study (or cross-analyses of it) should also explore alternative explanations, either disproving the alternatives or showing that they have weak(er) significance (also through numerical methods).

I'm not sure what kinds of data this result could be derived from, but the methods for getting that data should be cited and in common use - thus being reproducible. Data could also be collected by examining alternative "inputs" (independent variables, e.g. temperament towards discomfort), by testing how inducing discomfort leads to resistance to ideas, or something else.

I'd expect the research to include, for example, controls where the same individuals evaluate methodologically identical studies from other fields. We'd need to show this 'resistance' is specific to sociology, not general scientific skepticism.

That's to say: the study should also show, numerically and repeatably, that there is a legitimate correlation between sociological studies inducing discomfort and the resistance they meet, and that the resistance is not down to actual methodological concerns.

This would include:

1. Validated scales measuring "discomfort" or cognitive dissonance

2. Behavioural indicators of resistance vs. legitimate critique

3. Control groups exposed to equally challenging but methodologically sound research

4. Control groups exposed to less challenging but equally methodologically sound research (to the level of sociology)

Also, since we're making a claim about psychology and causation, the study would ideally be conducted by researchers outside of sociology departments to avoid conflicts of interest - preferably cognitive psychologists or neuroscientists using their methodological standards.
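
The n > 1000 threshold above is roughly what a standard power analysis yields for a small effect. A minimal sketch in Python, assuming a two-sample comparison with effect size d = 0.1, alpha = 0.05, and 80% power (all assumed figures, not the commenter's):

    # Required sample size per group for a two-sample comparison,
    # using the standard normal-approximation power formula.
    from scipy.stats import norm

    def required_n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> float:
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
        z_beta = norm.ppf(power)           # z-score for the desired power
        return 2 * ((z_alpha + z_beta) / d) ** 2

    # A "small" effect (d = 0.1) needs about 1570 subjects per group,
    # which is the scale behind demands like n > 1000.
    print(required_n_per_group(0.1))  # -> ~1569.8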

kelseyfrog•6mo ago
Thanks. I understand what happened here. This is a critical discussion paper and you're making the category error of judging it by the rubric of scientific epistemology.
miningape•6mo ago
Wait... You made a specific, falsifiable, and causal scientific claim based on your sociology experience:

> HN's resistance to sociology stems from the discomfort of being confronted with uncomfortable truths

I'm asking for actual scientific evidence of it. We're not talking about the paper (although the exact same issues are present there too). It's not a category error when a specific and falsifiable causal claim about reality is being made.

Critical theory doesn't get a "free pass" - if there's no actual evidence and no repeatability, quite literally all it is doing is grievance airing in an academic tone. While philosophically interesting, nothing of scientific value is being added.

And this is exactly what I mean when I say Sociology lacks evidence and intellectual rigour. It'll make big claims about reality ("resistance to sociology is due to being confronted with uncomfortable truths"), and then, when pressed for reasonable evidence to back them up, all there is is hand-wringing and justifications about critical theory and epistemology.

I'm sorry, but no, you don't get to make sweeping claims about reality as though you're an -ology and not do any of the groundwork to be respected as an -ology. This is exactly why sociology is laughed out of scientific circles such as HN - maybe it has nothing to do with "uncomfortable truths" and everything to do with a complete lack of physical, repeatable evidence.

kelseyfrog•6mo ago
There are more epistemologies in the world than just the scientific. Trying to universalize one leads to category errors like this.

Honestly, the tone policing and boundary policing here aren’t very scientific. You can’t have it both ways. Either commit to the epistemology argument fully, or not at all, but you've set up a heads I win, tails you lose set of rules when it comes to epistemic choice.

It is hard to escape how this fits back into the original topic - dismissal reinforces epistemological status, placing yours at the top. I'm sure you're aware of this dynamic playing out in this very discussion.

miningape•6mo ago
You made a specific claim about human behaviour. Either defend it with evidence or admit you were speculating. The philosophy of science lecture doesn't change that.

You can't say "HN's resistance stems from psychological discomfort" (empirical claim) and then retreat to "there are multiple epistemologies" (relativist defence) when challenged. You're held to the epistemic standards your claim invokes.

If you'd made a claim like "Critical theory suggests that HN's resistance stems from psychological discomfort" I'd have a lot less to say. It still suffers from the same evidence issues but at least you're being clear it isn't a scientific claim - so you wouldn't be getting pressed for scientific evidence.

> There are more epistemologies in the world than just the scientific.

Yes, different epistemologies have different domains, boundaries, and use cases (sometimes they overlap too). This is why scientific analysis is useless for literature - and why literary analysis is useless for science.

This is just like how you can't do brain surgery with a jackhammer, and you can't break up concrete with a scalpel. Statistical analysis of poetry is nonsensical, and no one would accept a literary analysis of quantum mechanics as physics.

Different tools are fit for different purposes.

> the tone policing and boundary policing here aren’t very scientific

I'm not tone policing, I'm holding you to the scientific standard after you made a scientific claim. Calling standards enforcement "tone policing" is just another way to avoid accountability.

"Boundary policing" in science isn't some arbitrary gatekeeping - it's essential intellectual hygiene. Good science is acutely aware of:

- What methods can and cannot establish

- The scope and limits of findings

- When they're stepping outside their domain of expertise

- The difference between correlation and causation

- What constitutes sufficient evidence for different types of claims

This is why we have concepts like:

- Statistical power and confidence intervals

- Replication requirements

- Peer review processes

- Methodological limitations sections in papers

This boundary distinction is the entire foundation of reliable knowledge production.

> you've set up a heads I win, tails you lose set of rules when it comes to epistemic choice.

You did this to yourself when you made a scientific claim without scientific evidence. It's not my fault when I point out you built your argument on epistemic quicksand.

You want to make truth claims with scientific authority while escaping scientific accountability.

> I’m sure you’re aware of this dynamic playing out in this very discussion.

Yes, it's meta; your original claim still requires evidence though.

hollerith•6mo ago
Is one of those uncomfortable truths the fact that for our society to make substantial sustainable progress will require the dismantling of a vast structure of oppression that currently has a firm grip on our society?
stuaxo•6mo ago
The state of this headline.
renewiltord•6mo ago
This is just like the way some people decided that "Blue Check" should be an insult on Twitter. Occasionally people still say it, but almost everyone ignores it. Fads like this are common on the Internet. It's just like any other clique of people: a few people accidentally taste-make while a bunch of replicators simply repeat mindless things over and over again: "slop", "mask-off moment", "enshittification", "ghoulish". Just words that people repeat because other people say them and get likes/upvotes/retweets or whatever.

The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.

People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras". Oh you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know that it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.

It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.

throwawaybob420•6mo ago
Sounds like something a blue checker would say. And yes, if you pay for Twitter you're going to get clowned on.

And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion, while disregarding nearly everything else about not only their outfits but their bodies.

This entire comment reeks of not actually understanding anything.

TeMPOraL•6mo ago
Found that user who memorized KnowYourMeme and thinks they're a scholar of culture now.

Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.

/s

renewiltord•6mo ago
The point is that Internet people are weird and have their own fads. These so-called “slurs” are meaningless. It’s just like perhaps there’s some middle-school class somewhere that’s decided that white shoes are lame. The majority of the world doesn’t care.

These fads are transitory. The people participating think the fads are important but they’re just fads. Most of them are just aping the other guy. The Internet guys think they’re having an opinion but really it’s just flock behaviour and will change.

Once upon a time everyone on the Internet hated gauges (earrings that stretch out the earlobe), and before that it was hipsters.

These are like the Harlem Shake. There is no meaning to it. People are just doing as others do. It’ll pass.

pluto_modadic•6mo ago
Blue checks are orthogonal - they're more a rough approximation of "I bought a Cybertruck when Musk went full crazy" (and yes, it's a bad look). Judging some blog post for seeming like AI is different.
yhoiseth•6mo ago
Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”

I think there is at least some truth to this.

Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?

alisonatwork•6mo ago
This latter piece is something I am struggling with.

I have spent 10+ years working on teams that are primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer from a grammatical point of view, but also a lot more florid and pretentious than they actually intend. This is really annoying to read, because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than their more blunt and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.

So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.

xwolfi•6mo ago
It is normal: you add a layer between the two brains that are communicating, and that layer only adds statistical experience to the message.

I write letters to my gf, in English, while English is not our first language. I would never ever put an LLM between us: this would fall flat, remove who we are, be a mess of cultural references, it would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...

LLMs are going to make people as dumb as GPS made them. Except that, really, while reading a map was not a very useful skill, writing what you feel... should be.

dist-epoch•6mo ago
I thought about this too. I think the solution is to send both prompt and output - since the output itself was selected by the human from potentially multiple variants.

Prompt: I want to tell you X

AI: Dear sir, as per our previous discussion let's delve into the item at hand...

lucyjojo•6mo ago
ask them to add "brief polite" to their translation prompt.
throwaway78665•6mo ago
If knowledge work doesn't require knowledge then is it knowledge work?

The main issue, symptomatic of current AI, is that without knowledge (at least at some level) you can't validate the output of AI.

visarga•6mo ago
> why should I bother to read it and provide feedback?

I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded, but it still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find this exercise useful because I use LLMs as a brainstorming and idea-debugging space.

dist-epoch•6mo ago
> If the author didn’t bother to write it, why should I bother to read it

There is an argument that luxury stuff is valuable because typically it's hand made, and in a sense, what you are buying is not the item itself, but the untold hours "wasted" creating that item for your own exclusive use. In a sense "renting a slave" - you have control over another human's time, and this is a power trip.

You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"

satisfice•6mo ago
If effort wasn't put into it, then the writing cannot be good, except by accident or theft, or else it is not your writing.

If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.

dist-epoch•6mo ago
> If effort wasn’t put into it, then the writing cannot be good

This is what people used to say about photography versus painting.

> pass it off as your own.

This is misleading/fraud and a separate subject than the quality of the writing.

satisfice•6mo ago
Well, with regard to photography, that is a lazy point. It is both true and not true, and also irrelevant. Artistic photography takes a lot of effort, training, and taste. But using an iPhone to take an arbitrary picture does not. I don't value photography that takes no effort, just as you don't. Ultimately, photography is recording data - light that traveled to the camera. AI writing is regurgitating someone else's data.

When you use GenAI to produce work, it is to a significant extent not your own work. If you call it your own work then you are committing fraud to some degree. You are obscuring your own contribution. This is not separate from quality, since authorship is a fundamental aspect of quality.

lucyjojo•6mo ago
i don't think your coworkers are trying to court you...
satisfice•6mo ago
This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.

Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.

It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.

forgetfreeman•6mo ago
Additionally, their use of the term "slur" for what is frequently a valid criticism seems questionable.
satisfice•6mo ago
It is itself a form of bullying.
strangecasts•6mo ago
> Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

As a matter of scope I could understand leaving the social understanding of "AI makes errors" separate from technical evaluations of models, but the thing that really horrified me is that the author apparently does not think past experience should be a concern in other fields:

> AI both frustrates the producer/consumer dichotomy and intermediates access to information processing, thus reducing professional power. In response, through shaming, professionals direct their ire at those they see as pretenders. Doctors have always derided home remedies, scientists have derided lay theories, sacerdotal colleges have derided folk mythologies and cosmogonies as heresy – the ability of individuals to “produce” their own healing, their own knowledge, their own salvation. [...]

If you don't allow that scientists' frequent experience of crank "lay theories" is a reason for initial skepticism, can you really explain this as anything other than anti-intellectualism?

mgraczyk•6mo ago
I'd like to brag that I got in trouble for saying this to somebody in 2021, before ChatGPT
andrelaszlo•6mo ago
I put a chapter of a paper I wrote in 2016 into GPTZero and got the probability breakdown 90% AI, 10% human. I am 100% human, and I wrote it myself, so I guess I'm lucky I didn't hand it in this year, or I could have been accused of cheating?
tough•6mo ago
maybe gptzero had your paper in its training data (it being from 2016)?
mgraczyk•6mo ago
I wasn't being serious when I said it; I was using it as an insult for bad work.
rcxdude•6mo ago
That's more an indictment of the accuracy of such tools. Writing in a very "standard" style, like that found in papers, is going to match the predictions of an LLM well, regardless of origin.
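
The mechanism is easy to demonstrate. Here is a minimal sketch of a perplexity-based detector in Python, assuming the common approach of scoring text with a small language model (GPT-2 via Hugging Face transformers here); whether GPTZero works exactly this way is not public:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy of predicting each token from the tokens before it.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Formal, "standard" prose is highly predictable, so it scores low
    # perplexity (reads as "AI-like") no matter who actually wrote it.
    print(perplexity("The results demonstrate a statistically significant improvement."))
    print(perplexity("my cat knocked the soldering iron into the aquarium lol"))

A detector that thresholds on a score like this will flag careful human writing in a standard register just as readily as model output.
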
s0teri0s•6mo ago
The obvious response is, "Oh, it will."
vanschelven•6mo ago
This reads like yet another attempt to pathologize perfectly reasonable criticism as some form of oppression. Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody. People say that when writing lacks originality or depth — not to reinforce some imagined academic caste system. The idea that pointing out bland prose is equivalent to sumptuary laws or racial gatekeeping is intellectual overreach at its finest. Ironically, this entire paper feels like something an AI could have written: full of jargon, light on substance. And no, there’s no original research, just theory stacked on theory.
raincole•6mo ago
> Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody.

In AI discussions, Poe's law is rampant. You can never tell what is parody and what is not.

There was a (former) xAI employee who got fired for advocating the extinction of humanity.

kristjank•6mo ago
I don't think we, as a wider scientific/technical society, should care for the opinion of a person who uses epistocratic privilege as a serious term. This stinks to high hell of proving a conclusion by working backwards from it.

The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or from knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work, once you consider how most of the sciences and technological systems depend on a very fragile notion of knowledge preservation and incremental improvement on a system that is intentionally pedantic, to provide stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining how is tedious work for each example an LLM blurts out. I need to sit down and solve a problem the right way, and in the meantime about 20 false solutions can be generated by ChatGPT.

If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a black student calling the discouragement of AI as a cheating device racist.

This seems to me utter insanity; it should not only be ignored, but actively pushed back against on the grounds that it is anti-intellectualism.

randomcarbloke•6mo ago
Being a pilot is an epistocratic privilege, and they should welcome the input of the less advantaged.
terminalshort•6mo ago
Reading this makes me understand why there is a political movement to defund universities.
laurent_du•6mo ago
It makes me sick to my heart to think that money is stolen from my pocket to be given to lunatics of this kind.
throwaway2562•6mo ago
The real shame of it is that OP claims affiliation with two respectable universities (UCL and Cambridge) and one formerly credible venue (CHI).

Mock scholarship is on the rampage. I agree: this stuff does make me understand the yahoos with a defunding urge too - not something I ever expected to feel any sympathy for, but here we are.

miningape•6mo ago
Overall, this comes across as extremely patronising: to authors, by running defence for obviously sub-par work because their background supposedly makes it "impossible" for them to do good work; and to commenters, by assuming a mal-intent towards the less privileged that needs to be controlled.

And it's all wrapped in a lovely package of AI apologetics - wonderful.

So, honestly, no. The identity of the author doesn't matter: if it reads like AI slop, the author should be grateful I even left an "AI could have written this" comment.

mvdtnz•6mo ago
Gosh I wonder why there's a cultural backlash against the "intellectual" elite.
UncleMeat•6mo ago
"We have to use AI to achieve class solidarity" is insane to me.

People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?

That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.

sshine•6mo ago
Synthetic beings will look back at this with great curiosity.