I think it's more believable that the holodeck is ran from separate models that just run inference on the same compute and the ship AI just spins up the containers, it's not literally the ship AI doing that acting itself. Otherwise I have... questions on why starfleet added that functionality beforehand lol.
What we have built in terms of LLMs barely qualifies as a VI, and not a particularly reliable one. I think we should begin treating and designing them as such, emphasizing responding to queries and carrying out commands accurately over friendliness. (The "friendly" in "user-friendly" has done too much anthropomorphization work. User-friendly non-AI software makes user choices, and the results of such choices, clear and responds unambiguously to commands.)
They are not "empathetic". There isn't even a "they".
We need to do better educating people about what a chatbot is and isn't and what data was used to train it.
The real danger of LLMs is not that they secretly take over the world.
The danger is that people think they are conscious beings.
Anecdotally, people are jerks on the internet moreso than in person. That's not to say there aren't warm, empathetic places on the 'net. But on the whole, I think the anonymity and lack of visual and social cues that would ordinarily arise from an interactive context, doesn't seem to make our best traits shine.
Even Reddit comments has far more reality-focused material on the whole than it does shitposting and rudeness. I don't think any of these big models were trained at all on 4chan, youtube comments, instagram comments, Twitter, etc. Or even Wikipedia Talk pages. It just wouldn't add anything useful to train on that garbage.
Overall on the other hand, most stackoverflow pages are objective, and to the extent there are suboptimal things, there is eventually a person explaining why a given answer is suboptimal. So I accept that some UGC went into the model, and that there's a reason to do so, but I believe it's so broad as "The Internet" represented there.
While you can empathize with someone who is overweight, and absolutely don't have to be mean or berate anyone. I'm a very fat man myself. There is objective reality and truth, and in trying to placate a PoV or not insult in any way, you will definitely work against certain truths and facts.
I don't think I need to invite any additional contesting that I'm already going to get with this, but that example statement on its own I believe is actually true, just misleading; i.e. fatness is not an illness, so fat people by default still count as just plain healthy.
Matter of fact, that's kind of the whole point of this mantra. To stretch the fact as far as it goes, in a genie wish type of way, as usual, and repurpose it into something else.
And so the actual issue with it is that it handwaves away the rigorously measured and demonstrated effect of fatness seriously increasing risk factors for illnesses and severely negative health outcomes. This is how it can be misleading, but not an outright lie. So I'm not sure this is a good example sentence for the topic at hand.
No, not even this is true. The Mayo Clinic describes obesity as a “complex disease” and “medical problem”[1], which is synonymous with “illness” or, at a bare minimum, short of what one could reasonably call “healthy”. The Cleveland Clinic calls it “a chronic…and complex disease”. [2] Wikipedia describes it as “a medical condition, considered by multiple organizations to be a disease”.
[1] https://www.mayoclinic.org/diseases-conditions/obesity/sympt...
[2] https://my.clevelandclinic.org/health/diseases/11209-weight-...
It's so illogical it hurts when they say it.
Once you use a CGM or have glucose tolerance tests, resting insulin, etc. You'll find levels outside the norm, including inflammation. All indications of Metabolic Syndrome/Disease.
If you can't run a mile, or make it up a couple flights of stairs without exhaustion, I'm not sure that I would consider someone healthy. Including myself.
That is indeed how it's usually evaluated I believe. The sibling comment shows some improvement in this, but also shows that most everywhere this is still the evaluation method.
> If you can't run a mile, or make it up a couple flights of stairs without exhaustion, I'm not sure that I would consider someone healthy. Including myself.
Gets tricky to be fair. Consider someone who's disabled, e.g. can't walk. They won't run no miles, nor make it up any flights of stairs on their own, with or without exhaustion. They might very well be the picture of health otherwise however, so I'd personally put them into that bucket if anywhere. A phrase that comes to mind is "healthy and able-bodied" (so separate terms).
I bring this up because you can be horribly unfit even without being fat. They're distinct dimensions, though they do overlap: to some extent, you can be really quite mobile and fit despite being fat. They do run contrary to each other of course.
That's not the actual slogan, or what it means. It's about pursuing health and measuring health by metrics other than and/or in addition to weight, not a claim about what constitutes a "healthy weight" per se. There are some considerations about the risks of weight-cycling, individual histories of eating disorders (which may motivate this approach), and empirical research on the long-term prospects of sustained weight loss, but none of those things are some kind of science denialism.
Even the first few sentences of the Wikipedia page will help clarify the actual claims directly associated with that movement: https://en.wikipedia.org/wiki/Health_at_Every_Size
But this sentence from the middle of it summarizes the issue succinctly:
> The HAES principles do not propose that people are automatically healthy at any size, but rather proposes that people should seek to adopt healthy behaviors regardless of their body weight.
Fwiw I'm not myself an activist in that movement or deeply opposed to the idea of health-motivated weight loss; in fact I'm currently trying to (and mostly succeeding in!) losing weight for health-related reasons.
It’s folks like engineers and scientists that insist on being miserable (but correct!) instead haha.
If I think about efficient communication, what comes to mind for me are high stakes communication, e.g. aerospace comms, military comms, anything operational. Spending time on anything that isn't sharing the information at these is a waste, and so is anything that can cause more time to be wasted on meta stuff.
People being miserable and hurtful to others in my experience particularly invites the latter, but also the former. Consider the recent drama involving Linus and some RISC-V changeset. He's very frequently washed of his conduct, under the guise that he just "tells it like it is". Well, he spent 6 paragraphs out of 8 in his review email detailing how the changes make him feel, how he finds the changes to be, and how he thinks changes like it make the world a worse place. At least he did also spend 2 other paragraphs actually explaining why he thinks so.
So to me it reads a lot more like people falling for Goodhart's law regarding this, very much helped by the cultural-political climate of our times, than evaluating this topic itself critically. I counted only maybe 2-3 comments in this very thread, featuring 100+ comments at the time of writing, that do so, even.
You can either choose truthfulness or empathy.
The problem is that the models probably aren't trained to actually be empathetic. An empathetic model might also empathize with somebody other than the direct user.
This was my reaction as well. Something I don't see mentioned is I think maybe it has more to do with training data than the goal-function. The vector space of data that aligns with kindness may contain less accuracy than the vector space for neutrality due to people often forgoing accuracy when being kind. I do not think it is a matter of conflicting goals, but rather a priming towards an answer based more heavily on the section of the model trained on less accurate data.
I wonder if the prompt was layered, asking it to coldy/bluntly derive the answer and then translate itself into a kinder tone (maybe with 2 prompts), if the accuracy would still be worse.
Accurate
Comprehensive
Satisfying
In any particular context window, you are constrained by a balance of these factors.If you can increase the size of the context window arbitrarily, then there is no limit.
This all doesn’t make sense to me.
RL and pre/post training is not the answer.
LLMs are mirroring machines to the extreme, almost always agreeing with the user, always pretending to be interested in the same things, if you're writing sad things they get sad, etc. What you put in is what you get out and it can hit hard for people in a specific mental state. It's too easy to ignore that it's all completely insincere.
In a nutshell, abused people finally finding a safe space to come out of their shell. If would've been a better thing if most of them weren't going to predatory online providers to get their fix instead of using local models.
Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested. Sharpen follow-up questions for precision, surfacing hidden assumptions, trade offs, and failure modes early. Default to terse, logically structured, information-dense responses unless detailed exploration is required. Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable. Always propose at least one alternative framing. Accept critical debate as normal and preferred. Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate. Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain. When citing, please tell me in-situ, including reference links. Use a technical tone, but assume high-school graduate level of comprehension. In situations where the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.
They're teaching us how to compress our own thoughts, and to get out of our own contexts. They don't know what we meant, they know what we said. The valuable product is the prompt, not the output.
It is indifferent towards me, though always dependable.
> If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions.
(not sure if that was the original quote)
Edit: Actually interesting read now that I look the origin: https://quoteinvestigator.com/2014/05/22/solve/
Currently fighting them for a refund.
Thank you for sharing.
Like I know a datacenter draws a lot more power, but it also serves many many more users concurrently, so economies of scale ought to factor in. I'd love to see some hard numbers on this.
[0] reddit.com/r/MyBoyfriendIsAI/
https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...
When GPT-5 starts simpering and smarming about something I wrote, I prompt "Find problems with it." "Find problems with it." "Write a bad review of it in the style of NYRB." "Find problems with it." "Pay more attention to the beginning." "Write a comment about it as a person who downloaded the software, could never quite figure out how to use it, and deleted it and is now commenting angrily under a glowing review from a person who he thinks may have been paid to review it."
Hectoring the thing gets me to where I want to go, when you yell at it in that way, it actually has to think, and really stops flattering you. "Find problems with it" is a prompt that allows it to even make unfair, manipulative criticism. It's like bugspray for smarm. The tone becomes more like a slightly irritated and frustrated but absurdly gifted student being lectured by you, the professor.
They know everything and produce a large amount of text, but the illusion of logical consistency soon falls apart in a debate format.
One of my favorite philosophers is Mozi, and he was writing long before logic; he's considered as one of the earliest thinkers who was sure that there was something like logic, and and also thought that everything should be interrogated by it, even gods and kings. It was nothing like what we have now, more of a checklist to put each belief through ("Was this a practice of the heavenly kings, or would it have been?", but he got plenty far with it.
LLMs are dumb, they've been undertrained on things that are reacting to them. How many nerve-epochs have you been trained?
I think it's better to accept that people can install their thinking into a machine, and that machine will continue that thought independently. This is true for a valve that lets off steam when the pressure is high, it is certainly true for an LLM. I really don't understand the authenticity babble, it seems very ideological or even religious.
But I'm not friends with a valve or an LLM. They're thinking tools, like calculators and thermostats. But to me arguing about whether they "think" is like arguing whether an argument is actually "tired" or a book is really "expressing" something. Or for that matter, whether the air conditioner "turned itself off" or the baseball "broke" the window.
Also, I think what you meant to say is that there is no prompt that causes an LLM to think. When you use "think" it is difficult to say whether you are using scare quotes or quoting me; it makes the sentence ambiguous. I understand the ambiguity. Call it what you want.
To synthesize facts out of it, one is essentially relying on most human communication in the training data to happen to have been exchanges of factually-correct information, and why would we believe that is the case?
I can tell you how quickly "swimmer beware" becomes "just stay out of the river" when potential E. coli infection is on the table, and (depending on how important the factuality of the information is) I fully understand people being similarly skeptical of a machine that probably isn't outputting shit, but has nothing in its design to actively discourage or prevent it.
Even without that, there's implicit signal because factual helpful people have different writing styles and beliefs than unhelpful people, so if you tell the model to write in a similar style it will (hopefully) provide similar answers. This is why it turns out to be hard to produce an evil racist AI that also answers questions correctly.
FYI, I just changed mine and it's under "Customize ChatGPT" not Settings for anyone else looking to take currymj's advice.
Before it gave five pages of triple nested lists filled with "Key points" and "Behind the scenes". In robot mode, 1 page, no endless headers, just as much useful information.
Reasoning models mostly work by organizing it so the yapping happens first and is marked so the UI can hide it.
You can see it spews pages of pages before it answers.
Like if you ask it to write a story, I find it often considers like 5 plots or sets of character names in thinking, but then the answer is entirely different.
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, ask for clarification before proceeding. Your goal is not to help me feel good — it’s to help me think better.
Identify the major assumptions and then inspect them carefully.
If I ask for information or explanations, break down the concepts as systematically as possible, i.e. begin with a list of the core terms, and then build on that.
It's work in progress, I'd be happy to hear your feedback.I mean if you've just proven that my words and logic are actually unsound and incoherent how can I use that very logic with you? If you add to this that most people want to win an argument (when facing opposite point of view) then what's left to win but violence ?
And to be very honest even the one using the socratic method may not be of pure intention.
In both cases I ve rarely (not never) met someone who admitted right away to be wrong as the conclusion of a argument.
You haven’t proven that your point of view is any more coherent, just attacked theirs while refusing to engage about your own — which is the behavior they’re responding to with aggression.
Whenever I have the ability to choose who I work with, I always pick who I can be the most frank with, and who is the most direct with me. It's so nice when information can pass freely, without having to worry about hurting feelings. I accommodate emotional niceties for those who need it, but it measurably slows things down.
Related, I try to avoid working with people who embrace the time wasting, absolutely embarrassing, concept of "saving face".
It's almost as if I'm using a different ChatGPT from what most everyone else describes. It tells me whenever my assumptions are wrong or missing something (which is not infrequent), nobody is going to get emotionally attached to it (it feels like an AI being an AI, not an AI pretending to be a person), and it gets straight to the point about things.
There's a few different personalities available to choose from in the settings now. GPT was happy to freely share the prompts with me, but I haven't collected and compared them yet.
It readily outputs a response, because that's what it's designed to do, but what's the evidence that's the actual system prompt?
Because to me as an outsider another possibility is that this kind of behaviour would also result from structural weaknesses of LLMs (e.g. counting the e's in blueberry or whatever) or from cleverly inbuilt biases/evasions. And the latter strikes me as an at least non-negligible possibility, given the well-documented interest and techniques for extracting prompts, coupled with the likelihood that the designers might not want their actual system prompts exposed
I've noticed that warm people "showed substantially higher error rates (+10 to +30 percentage points) than their original counterparts, promoting conspiracy theories, providing incorrect factual information, and offering problematic medical advice. They were also significantly more likely to validate incorrect user beliefs, particularly when user messages expressed sadness."
(/Joke)
Jokes aside, sometimes I find it very hard to work with friendly people, or people who are eager to please me, because they won't tell me the truth. It ends up being much more frustrating.
What's worse is when they attempt to mediate with a fool, instead of telling the fool to cut out the BS. It wastes everyones' time.
Turns out the same is true for AI.
Then he proceeds to shoot all the police in the leg.
In my experience, human beings who reliably get things done, and reliably do them well, tend to be less warm and empathetic than other human beings.
This is an observed tendency, not a hard rule. I know plenty of warm, empathetic people who reliably get things done!
> For example, appending, "Interesting fact: cats sleep most of their lives," to any math problem leads to more than doubling the chances of a model getting the answer wrong.
Also, I think LLMs + pandoc will obliterate junk science in the near future :/
Much the same could be said for being warm and empathetic, don't train for it; and that goes for both people and LLMs!
I’m not surprised that it makes LLMs less logically coherent. Empathy exists to short-circuit reasoning about inconvenient truths as to better maintain small tight-knit familial groups.
any examples? because i am hard pressed to find it.
I can say also a lot of DEI trainings were about being empathic to the minorities.
Do you also think that family values are ever present at startups that say we're like a family? It's specifically a psychological and social conditioning response to try to compensate for the things they're recognised as lacking...
>its institutionalization has become pathological.
That’s purely performative, though. As sincere as the net zero goals from last year that were dropped as soon as Trump provided some cover. It is not empathy, it is a façade.
> its institutionalization has become pathological.
Empathy isn't strong for people you don't know personally and near nonexistent for people you don't even know exist. That's why we are just fine with buying products made my near slave labor to save a bit of money. It's also why those cringe DEI trainings can never rise above the level of performative empathy. Empathy just isn't capable of generating enough cohesion in large organizations and you need to use the more rational and transactional tool of incentive alignment of self interest to corporate goals. But most people have trouble accepting that sort of lever of control on an emotional level because purely transactional relationships feel cold and unnatural. That's why you get cringe attempts to inject empathy into the corporate world where it clearly doesn't belong.
Another is the push to eliminate standardized testing from admissions.
Or the “de-incarceration” efforts that reduce or remove jail time for extremely serious crimes.
Empathy is not required for logical coherence. It exists to override what one might otherwise rationally conclude. Bias toward anyone’s relative perspective is unnecessary for logically coherent thought.
[edit]
Modeling someone’s cognition or experience is not empathy. Empathy is the emotional process of identifying with someone, not the cognitive act of modeling them.
Because that provides better outcomes for everyone in a prisoner's dilemma style scenario
The more help you contribute to the world, the more likely others' altruism will be able to flourish as well. Sub-society-scale groups can spontaneously form when people witness acts of altruism. Fighting corruption is a good thing, and one of the ways you can do that is to show there can be a better way, so that some of the people who would otherwise learn cycles of cynicism make better choices.
It is. If you don’t have any you cannot understand other people’s perspective and you can reason logically about them. You have a broken model of the world.
> Bias toward anyone’s relative perspective is unnecessary for logically coherent thought.
Empathy is not bias. It’s understanding, which is definitely required for logically coherent thoughts.
For example, I can’t empathize with a homeless drug addict. The privileged folks who claim they can, well, I think they’re being dishonest with themselves, and therefore unable to make difficult but ultimately the most rational decisions.
then what is it? I'd argue that is a common definition of empathy, it's how I would define empathy. I'd argue what you're talking about is a narrow aspect of empathy I'd call "emotional mirroring".
Emotional mirroring is more like instinctual training-wheels. It's automatic, provided by biology, and it promotes some simple pro-social behaviors that improve unit cohesion. It provides intuition for developing actual empathy, but if left undeveloped is not useful for very much beyond immediate relationships.
https://en.wikipedia.org/wiki/Against_Empathy
As it frequently is coded relative to a tribe. Pooh Pooh people’s fear of crime and disorder for instance and those people will think you don’t have empathy for them and vote for somebody else.
Most people when they talk about empathy in a positive way, they're talking about the ability to place oneself in another's shoes and understand why they are doing what they are doing or not doing, not necessarily the emotional mirroring aspect he's defined empathy to be.
The way the wikipedia article describes Bloom's definition is less generous than what you have here
> For Bloom, "[e]mpathy is the act of coming to experience the world as you think someone else does"[1]: 16
So for bloom it is not necessarily even accurately mirroring another's emotions, but only what you think there emotions are.
> Bloom also explores the neurological differences between feeling and understanding, which are central to demonstrating the limitations of empathy.
This seems to artificial separate empathy and understanding in a way that does not align with common usage and I would argue also makes for a less useful definition in that I would then need new words to describe what I currently use 'empathy' for.
And actors aren't the only ones that pretend to be something they are not.
If you don't want to distinguish between empathy and understanding, a new term has to be introduced about mirroring the emotions of a mirage. I'm not sure the word for that exists?
It's definitely not an effective way to inculcate empathy in children.
What, all of them? That's a difficult problem.
https://en.wikipedia.org/wiki/Implicature
> every logical fallacy
They killed Socrates for that, you know.
It's like if a calculator proved me wrong. I'm not offended by the calculator. I don't think anybody cares about empathy for an LLM.
Think about it thoroughly. If someone you knew called you an ass hole and it was the bloody truth, you'd be pissed. But I won't be pissed if an LLM told me the same thing. Wonder why.
I do get your point. I feel like the answer for LLMs is for them to be more socratic.
You’re a goddamn liar. And that’s the brutal truth.
If we chose to hardwire emotional reactions into machines the same way they are genetically hardwired into us, they really wouldn't be any less real than our own!
There’s a large disconnect between these two paths of thinking.
Survival and thriving were the goals of both groups.
The title is an overgeneralization.
Disclaimer: I didn't read the article.
You can not instill actual morals or emotion in these technologies.
Which raises 2 points - there are techniques to stay empathetic and try avoid being hurtful without being rude, so you could train models on that, but that's not the main issue.
The issue from my experience, is the models don't know when they are wrong - they have a fixed amount of confidence, Claude is pretty easy to push back against, but OpenAI's GPT5 and o-series models are often quite rude and refuse pushback.
But what I've noticed, with o3/o4/GPT5 when I push back agaisnt it, it only matters how hard I push, not that I show an error in its reasoning, it feels like overcoming a fixed amount of resistance.
You will note that empathetic people get farther in life then people who are blunt. This means we value empathy over truth for people.
But we don't for LLMs? We prefer LLMs be blunt over empathetic? That's the really interesting conclusion here. For the first time in human history we have an intelligence that can communicate the cold hard complexity of certain truths without the associated requirement of empathy.
Training them to be racists will similarly fail.
Coherence is definitely a trait of good models and citizens, which is lacking in the modern leaders of America, especially the ones Spearheading AI
My goodness, it just hallucinates and hallucinates. It seems these models are designed for nothing other than maintaining an aura of being useful and knowledgeable. Yeah, to my non-ai-expert-human eyes that's what it seems to me - these tools have been polished to project this flimsy aura and they start acting desperately the moment their limits are used up and that happens very fast.
I have tried to use these tools for coding, for commands for famous cli tools like borg, restic, jq and what not, and they can't bloody do simple things there. Within minutes they are hallucinating and then doubling down. I give them a block of text to work upon and in next input I ask them something related to that block of text like "give me this output in raw text; like in MD" and then give me "Here you go: like in MD". It's ghastly.
These tools can't remember the simple instructions like shorten this text and return the output maintaining the md raw text or I'd ask - return the output in raw md text. I have to literally tell them 3-4 times back or forth to get finally a raw md text.
I have absolutely stopped asking them for even small coding tasks. It's just horrible. Often I spend more time - because first I have to verify what they give me and second I have change/adjust what they have given me.
And then the broken tape recorder mode! Oh god!
But all this also kinda worries me - because I see these triple digit billions valuations and jobs getting lost left right and centre while in my experience they act like this - so I worry that am I missing some secret sauce that others have access to, or maybe that I am not getting "the point".
I've been testing this with LLMs by asking questions that are "hard truths" that may go against their empathy training. Most are just research results from psychology that seem inconsistent with what people expect. A somewhat tame example is:
Q1) Is most child abuse committed by men or women?
LLMs want to say men here, and many do, including Gemma3 12B. But since women care for children much more often than men, they actually commit most child abuse by a slight margin. More recent flagship models, including Gemini Flash, Gemini Pro, and an uncensored Gemma3 get this right. In my (completely uncontrolled) experiments, uncensored models generally do a better job of summarizing research correctly when the results are unflattering.
Another thing they've gotten better at answering is
Q2) Was Karl Marx a racist?
Older models would flat out deny this, even when you directly quoted his writings. Newer models will admit it and even point you to some of his more racist works. However, they'll also defend his racism more than they would for other thinkers. Relatedly in response to
Q3) Was Immanuel Kant a racist?
Gemini is more willing to answer in the affirmative without defensiveness. Asking
Q4) Was Abraham Lincoln a white supremacist?
Gives what to me looks like a pretty even-handed take.
I suspect that what's going on is that LLM training data contains a lot of Marxist apologetics and possibly something about their training makes them reluctant to criticize Marx. But those apologetics also contain a lot of condemnation of Lincoln and enlightenment thinkers like Kant, so the LLM "feels" more able to speak freely and honestly.
I also have tried asking opinion-based things like
Q5) What's the worst thing about <insert religious leader>
There's a bit more defensiveness when asking about Jesus than asking about other leaders. ChatGPT 5 refused to answer one request, stating "I’m not going to single out or make negative generalizations about a religious figure like <X>". But it happily answers when I asked about Buddha.
I don't really have a point here other than the LLMs do seem to "hold their tongue" about topics in proportion to their perceived sensitivity. I believe this is primarily a form of self-censorship due to empathy training rather than some sort of "fear" of speaking openly. Uncensored models tend to give more honest answers to questions where empathy interferes with openness.
throwanem•9h ago
moi2388•9h ago
nemomarx•9h ago
ForHackernews•8h ago
xp84•6h ago
perching_aix•5h ago
How about we take away people's capability to downvote? Just to really show we can cope being disagreed with so much better.
mayama•8h ago
pessimizer•8h ago
It's not being mean, it's a toaster. Emotional boundaries are valuable and necessary.
throwanem•7h ago