I've taken a number of university Sociology courses, and from those experiences I came to the opinion that Sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence outside of being a Buzzfeed for academics.
I'm not even talking about slightly more rigorous subjects such as Psychology or Political Science, which modern Sociology uses as a shield for its own lack of a feedback mechanism.
Don't get me wrong though: I realise this is an opinion formed from my admittedly limited exposure to Sociology (~3 semesters). It could also be that the university I went to leaned particularly hard on "grievance airing".
1. What specific, measurable phenomenon would constitute 'discomfort with uncomfortable truths' versus legitimate methodological concerns?
2. How would we distinguish between the two empirically?
I'd expect a study or numerical analysis with at least n > 1000 and p < 0.05. The study should ideally have controls that rule out confounds, so that any correlation found actually points to causation. The study (or cross-analyses of it) should also explore alternative explanations, either disproving the alternatives or showing that they have weak(er) significance (also through numerical methods).
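To make the sample-size expectation concrete, here's a minimal power-analysis sketch (Python with statsmodels; the numbers are illustrative, not taken from any actual study):

    # Illustrative power analysis for a two-group comparison.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # With n = 500 per group (n = 1000 total) and alpha = 0.05, what is
    # the smallest standardized effect detectable at 80% power?
    min_effect = analysis.solve_power(nobs1=500, alpha=0.05, power=0.8,
                                      ratio=1.0)
    print(f"Minimum detectable effect (Cohen's d): {min_effect:.3f}")

At n = 1000 total this comes out to roughly d ≈ 0.18, which is why I'd treat that n as a floor rather than a target.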
I'm not sure what kinds of data this result could be derived from, but the methods for getting that data should be cited and standard - thus being reproducible. Data could also be collected by examining alternative "inputs" (independent variables, e.g. temperament towards discomfort), by testing whether inducing discomfort leads to resistance to ideas, or something else.
I'd expect the research to include, for example, controls where the same individuals evaluate methodologically identical studies from other fields. We'd need to show this 'resistance' is specific to sociology, not general scientific skepticism.
That's to say: the study should also show, numerically and repeatably, that the resistance genuinely correlates with the discomfort sociological studies induce, and not with actual methodological concerns.
This would include:
1. Validated scales measuring "discomfort" or cognitive dissonance
2. Behavioural indicators of resistance vs. legitimate critique
3. Control groups exposed to equally challenging but methodologically sound research
4. Control groups exposed to less challenging but equally methodologically sound research (matched to the methodological level typical of sociology)
Also, since we're making a claim about psychology and causation, the study would ideally be conducted by researchers outside of sociology departments to avoid conflicts of interest - preferably cognitive psychologists or neuroscientists using their methodological standards.
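For the cross-field control in particular, a toy version of the analysis might look like this (simulated data, purely illustrative): the same raters score a methodologically identical abstract under two different field labels, and a paired test asks whether the label alone shifts the ratings.

    # Toy cross-field control: identical abstract, different field label.
    # All data below is simulated and purely illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_raters = 1000

    # Simulated 1-7 credibility ratings from the same raters for the
    # same abstract, presented once as sociology and once as psychology.
    labeled_sociology = rng.normal(4.0, 1.2, n_raters)
    labeled_psychology = rng.normal(4.3, 1.2, n_raters)

    # Paired t-test: does the field label alone shift the ratings?
    t_stat, p_value = stats.ttest_rel(labeled_sociology, labeled_psychology)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A real study would obviously need real raters and validated scales, but that is the shape of evidence that would move me.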
> HN's resistance to sociology stems from the discomfort of being confronted with uncomfortable truths
I'm asking for actual scientific evidence of it. We're not talking about the paper (although the exact same issues are present there too). It's not a category error when a specific and falsifiable causal claim about reality is being made.
Critical theory doesn't get a "free pass" - if there's no actual evidence and no repeatability, quite literally all it is doing is grievance airing in an academic tone. While philosophically interesting, nothing of scientific value is being added.
And this is exactly what I mean when I say Sociology lacks evidence and intellectual rigour. It'll make big claims about reality ("resistance to sociology is due to being confronted with uncomfortable truths"), and then, when pressed for reasonable evidence to back them up, all there is is hand-wringing and justifications about critical theory and epistemology.
I'm sorry but, no, you don't get to make sweeping claims about reality as though you're an -ology and not do any of the groundwork to be respected as an -ology. This is exactly why sociology is laughed out of scientific circles such as HN - maybe it has nothing to do with "uncomfortable truths" and everything to do with a complete lack of physical, repeatable evidence.
Honestly, the tone policing and boundary policing here aren’t very scientific. You can’t have it both ways. Either commit to the epistemology argument fully, or not at all - but you've set up a "heads I win, tails you lose" set of rules when it comes to epistemic choice.
It is hard to escape how this fits back into the original topic - dismissal reinforces epistemological status, placing yours at the top. I’m sure you’re aware of this dynamic playing out in this very discussion.
You can't say "HN's resistance stems from psychological discomfort" (empirical claim) and then retreat to "there are multiple epistemologies" (relativist defence) when challenged. You're held to the epistemic standards your claim invokes.
If you'd made a claim like "Critical theory suggests that HN's resistance stems from psychological discomfort" I'd have a lot less to say. It still suffers from the same evidence issues but at least you're being clear it isn't a scientific claim - so you wouldn't be getting pressed for scientific evidence.
> There are more epistemologies in the world than just the scientific.
Yes, different epistemologies have different domains, boundaries, and use cases (sometimes they overlap too). This is why scientific analysis is useless for literature - and why literary analysis is useless for science.
This is just like how you can't do brain surgery with a jackhammer, and you can't break up concrete with a scalpel. Statistical analysis of poetry is nonsensical, and no one would accept a literary analysis of quantum mechanics as physics.
Different tools are fit for different purposes.
> the tone policing and boundary policing here aren’t very scientific
I'm not tone policing, I'm holding you to the scientific standard after you made a scientific claim. Calling standards enforcement "tone policing" is just another way to avoid accountability.
"Boundary policing" in science isn't some arbitrary gatekeeping - it's essential intellectual hygiene. Good science is acutely aware of:
- What methods can and cannot establish
- The scope and limits of findings
- When they're stepping outside their domain of expertise
- The difference between correlation and causation
- What constitutes sufficient evidence for different types of claims
This is why we have concepts like:
- Statistical power and confidence intervals
- Replication requirements
- Peer review processes
- Methodological limitations sections in papers
This boundary distinction is the entire foundation of reliable knowledge production.
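To pick just one item off that list, here's a toy illustration of why confidence intervals matter (normal approximation, made-up numbers): the same point estimate supports very different conclusions depending on n.

    # Toy 95% confidence intervals: same point estimate, different n.
    # Normal approximation; all numbers are made up for illustration.
    import math

    def ci_95(mean, sd, n):
        half_width = 1.96 * sd / math.sqrt(n)
        return (mean - half_width, mean + half_width)

    print(ci_95(0.3, 1.0, 30))    # wide: about (-0.06, 0.66), inconclusive
    print(ci_95(0.3, 1.0, 1000))  # narrow: about (0.24, 0.36), informative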
> you've set up a heads I win, tails you lose set of rules when it comes to epistemic choice.
You did this to yourself when you made a scientific claim without scientific evidence. It's not my fault when I point out you built your argument on epistemic quicksand.
You want to make truth claims with scientific authority while escaping scientific accountability.
> I’m sure you’re aware of this dynamic playing out in this very discussion.
Yes, it's meta. Your original claim still requires evidence, though.
The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.
People on the Internet are just weird. Sometime in the early 2010s the big deal was "fedoras". Oh, you're weird if you have a fedora. Man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once and we walked by a hat shop, and the girls were all like "Man, you guys should all wear these hats". The girls didn't have a clue: these were fedoras. Didn't they know it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.
It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.
And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion while disregarding nearly everything else about not only their outfits, but their bodies.
This entire comment reeks of not actually understanding anything.
Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.
s/
These fads are transitory. The people participating think the fads are important but they’re just fads. Most of them are just aping the other guy. The Internet guys think they’re having an opinion but really it’s just flock behaviour and will change.
Once upon a time everyone on the Internet hated gauges (earrings that stretch out the earlobe), and before that it was hipsters.
These are like the Harlem Shake. There is no meaning to it. People are just doing as others do. It’ll pass.
I think there is at least some truth to this.
Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?
I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer from a grammatical point of view, but also a lot more florid and pretentious than they actually intend it to be. This is really annoying to read, because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.
So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.
I write letters to my gf in English, though English is not our first language. I would never ever put an LLM between us: it would fall flat, erase who we are, make a mess of our cultural references. It would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...
LLMs are going to make people as dumb as GPS made them. Except that, where reading a map was never a very useful skill, writing what you feel... should be.
Prompt: I want to tell you X
AI: Dear sir, as per our previous discussion let's delve into the item at hand...
The main issue symptomatic of current AI is that without at least some level of knowledge of your own, you can't validate the output of AI.
I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded but still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find this exercise useful because I use LLMs as a brainstorming and idea-debugging space.
There is an argument that luxury goods are valuable because they're typically handmade; in a sense, what you are buying is not the item itself, but the untold hours "wasted" creating that item for your own exclusive use. In a sense you are "renting a slave" - you have control over another human's time, and that is a power trip.
You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"
If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.
This is what people used to say about photography versus painting.
> pass it off as your own.
This is misleading/fraud and a separate subject from the quality of the writing.
When you use GenAI to produce work, it is to a significant extent not your own work. If you call it your own work then you are committing fraud to some degree. You are obscuring your own contribution. This is not separate from quality, since authorship is a fundamental aspect of quality.
Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.
The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.
It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.
As a matter of scope I could understand leaving the social understanding of "AI makes errors" separate from technical evaluations of models, but the thing that really horrified me is that the author apparently does not think past experience should be a concern in other fields:
> AI both frustrates the producer/consumer dichotomy and intermediates access to information processing, thus reducing professional power. In response, through shaming, professionals direct their ire at those they see as pretenders. Doctors have always derided home remedies, scientists have derided lay theories, sacerdotal colleges have derided folk mythologies and cosmogonies as heresy – the ability of individuals to “produce” their own healing, their own knowledge, their own salvation. [...]
If you don't allow that scientists' frequent exposure to crank "lay theories" is a reason for initial skepticism, can you really explain this as anything other than anti-intellectualism?
Poe's law is rampant in AI discussions. You can never tell what is parody and what is not.
There was a (former) xAI employee who got fired for advocating the extinction of humanity.
The cognitive dissonance required to imply that expecting knowledge from a knowledge worker, or from a knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work. Most of the sciences and technological systems depend on a very fragile notion of knowledge preservation and on incremental improvements to a system that is intentionally pedantic, precisely to provide a stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining how for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.
If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a Black student calling the discouragement of AI as a cheating device racist.
This seems to me utter insanity, and it should not only be ignored but actively pushed back against, on the grounds that it is anti-intellectualism.
Mock scholarship is on the rampage. I agree: this stuff does make me understand the yahoos with a defunding urge - not something I ever expected to feel any sympathy for, but here we are.
And it's all wrapped in a lovely package of AI apologetics - wonderful.
So, honestly, no. The identity of the author doesn't matter, if it reads like AI slop the author should be grateful I even left an "AI could have written this" comment.
People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?
That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.