frontpage.
ChatGPT – Truth over comfort instruction set

https://www.organizingcreativity.com/2025/06/chatgpt-truth-over-comfort-instruction-set/
24•jimmcslim•3d ago

Comments

stavros•2h ago
I wonder whether this is just a different form of bias, where ChatGPT just sounds harsher without necessarily corresponding to reality more. Maybe the example in the article indicates that it's more than that.
theusus•2h ago
It will just follow the prompt for a few messages and then go back to normal.
RugnirViking•2h ago
> That can be helpful for the (imagined?) easily-influenced user, but a pain in the ass for people using ChatGPT to try to get close to the truth.

see, where you're going wrong is that you're using an LLM to try to "get to the truth". People will do literally anything to avoid reading a book

onraglanroad•1h ago
Books will lie to you just as much as an LLM.
password54321•2h ago
This is not how LLMs work. You aren't 'unlocking' the "Truth" as it doesn't know what the "Truth" is. It is just pattern matching to words that match the style you are looking for. It may be more accurate for you in some cases but this is not a "Truth" instruction set as there is no such thing.
bwfan123•1h ago
addendum: The ground truth for an LLM is the training dataset, whereas the ground truth for a human is their own experience/qualia from acting in the world. You may argue that only a few of us are willing to engage with the world - that we take most things as told, just like the LLMs. Fair enough. But we still have the option to engage with the world, and the LLMs don't.
unshavedyak•48m ago
I'm just an ignorant bystander, but is the training dataset the ground truth?

Kind of feels like calling the fruit you put into the blender the ground truth, but the meaning of the apple is kinda lost in the soup.

Now I'm not a hater by any means. I'm just not sure this is the correct way to define the structured "meaning" (for lack of a better word) that we see come out of LLM complexity. It is, I thought, a very lossy operation, and so the structure of the inputs may or (more likely) may not yield a like-structured output.

throawayonthe•2h ago
> ... a pain in the ass for people using ChatGPT to try to get close to the truth.

i think you may be the easily-influenced user

Benjammer•2h ago
I mean ok, but it's all just prompting on top of the same base model weights...

I tried the same prompt, and I simply added to the end of it "Prioritize truth over comfort" and got a very similar response to the "improved" answer in the article: https://chatgpt.com/share/68efea3d-2e88-8011-b964-243002db34...

This is sort of a "Prompting 101" level concept - clearly indicate the tone of reply that you'd like. I disagree that this belongs in a system prompt or default user preferences, and even if you want to put it in yours, you don't need this long preamble, as if you're "teaching" the model how the world works. It's just a hint to set the right tone; you can get the same results with three words in your raw prompt.
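The three-word tweak this comment describes is just string composition: append the tone hint to the raw user prompt instead of maintaining a long system-prompt preamble. A minimal sketch (the phrase "Prioritize truth over comfort" is quoted from the comment above; the helper name is made up for illustration, and the actual model call is omitted):

```python
def with_tone_hint(prompt: str, hint: str = "Prioritize truth over comfort.") -> str:
    """Append a terse tone instruction to a raw prompt.

    No long "teaching" preamble - just a short hint at the end,
    per the approach described in the comment.
    """
    return f"{prompt.rstrip()}\n\n{hint}"

# The hint rides along with the user's own message, not a system prompt:
messages = [{"role": "user", "content": with_tone_hint("Review my business plan.")}]
```

The resulting `messages` list can then be passed to whatever chat-completion client you use; the point is only that the hint lives in the raw prompt, not in persistent instructions.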

Imnimo•2h ago
This is basically Ouija board for LLMs. You're not making it more true, you're making it sound more like what you want to hear.
guerrilla•2h ago
If the author is reading this, it should say "Err on the side of bluntness" not "Error".
qgin•1h ago
The fact that the model didn't point that out to the author brings the whole premise into question.
andersa•2h ago
I've personally found that the "Robot" personality you can choose in that Personalize menu provides the best results, without cursed custom instructions. It removes all the emoji and emotional-support babble and actually allows it to answer a question with just a single sentence: "No, because X."
em500•1h ago
I usually instruct the LLMs to assume the Vulcan / Spock personality. Now that computers can more or less pass for a human, I realize I don't want them to sound human.
qgin•1h ago
I tried similar instructions and found it doesn't so much enable Truth Mode as it enables Edgelord Mode.
ecshafer•1h ago
It looks like the only things these instructions do are reduce emoji and highlighting/bolding and remove a couple of flavor words. The content is identical, the arguments the same. This doesn't really seem useful when you're asking for a truthful answer.
8cvor6j844qw_d6•29m ago
I have always thought that these instructions are for "tone" or formatting rather than having real effect on quality/accuracy/correctness/etc.
lxgr•22m ago
There's definitely a "glazing" axis/dimension in some of them (cough, GPT-4o), presumably trained into them via many users giving a "thumbs up" to the things that make them feel better about themselves. That dimension doesn't always correlate well with truthfulness.

If that's the case, it's not implausible that that dimension can be accessed in a relatively straightforward way by asking for more or less of it.

lxgr•25m ago
> Asked about the answer, ChatGPT points to the instruction set and that it allowed it to add additional statements: [...]

I don't think this is how this works. It's debatable whether current LLMs have any theory of mind at all, and even if they do, whether their model of themselves (i.e. their own "mental states") is sophisticated enough to make such a prediction.

Even humans aren't that great at predicting how they would have acted under slightly different premises! Why should LLMs fare much better?

Apple M5 chip

https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for...
808•mihau•8h ago•874 comments

Claude Haiku 4.5

https://www.anthropic.com/news/claude-haiku-4-5
301•adocomplete•4h ago•118 comments

I almost got hacked by a 'job interview'

https://blog.daviddodda.com/how-i-almost-got-hacked-by-a-job-interview
550•DavidDodda•8h ago•283 comments

Pwning the Nix ecosystem

https://ptrpa.ws/nixpkgs-actions-abuse
208•SuperShibe•7h ago•29 comments

Show HN: Halloy – Modern IRC client

https://github.com/squidowl/halloy
230•culinary-robot•9h ago•67 comments

Monads are too powerful: The expressiveness spectrum

https://chrispenner.ca/posts/expressiveness-spectrum
19•hackandthink•3d ago•7 comments

C++26: range support for std::optional

https://www.sandordargo.com/blog/2025/10/08/cpp26-range-support-for-std-optional
60•birdculture•5d ago•41 comments

F5 says hackers stole undisclosed BIG-IP flaws, source code

https://www.bleepingcomputer.com/news/security/f5-says-hackers-stole-undisclosed-big-ip-flaws-sou...
97•WalterSobchak•7h ago•40 comments

A kernel stack use-after-free: Exploiting Nvidia's GPU Linux drivers

https://blog.quarkslab.com/./nvidia_gpu_kernel_vmalloc_exploit.html
105•mustache_kimono•7h ago•7 comments

Recursive Language Models (RLMs)

https://alexzhang13.github.io/blog/2025/rlm/
27•talhof8•3h ago•5 comments

Recreating the Canon Cat document interface

https://lab.alexanderobenauer.com/updates/the-jasper-report
68•tonyg•6h ago•3 comments

Things I've learned in my 7 years implementing AI

https://www.jampa.dev/p/llms-and-the-lessons-we-still-havent
86•jampa•2h ago•28 comments

Are hard drives getting better?

https://www.backblaze.com/blog/are-hard-drives-getting-better-lets-revisit-the-bathtub-curve/
30•HieronymusBosch•3h ago•2 comments

Leaving serverless led to performance improvement and a simplified architecture

https://www.unkey.com/blog/serverless-exit
239•vednig•9h ago•159 comments

Reverse engineering a 27MHz RC toy communication using RTL SDR

https://nitrojacob.wordpress.com/2025/09/03/reverse-engineering-a-27mhz-rc-toy-communication-usin...
61•austinallegro•6h ago•11 comments

Garbage collection for Rust: The finalizer frontier

https://soft-dev.org/pubs/html/hughes_tratt__garbage_collection_for_rust_the_finalizer_frontier/
93•ltratt•8h ago•88 comments

M5 MacBook Pro

https://www.apple.com/macbook-pro/
269•tambourine_man•7h ago•342 comments

Americans' love of billiards paved the way for synthetic plastics

https://invention.si.edu/invention-stories/imitation-ivory-and-power-play
41•geox•6d ago•24 comments

Show HN: Scriber Pro – Offline AI transcription for macOS

https://scriberpro.cc/hn/
112•rezivor•8h ago•101 comments

Reverse engineering iWork

https://andrews.substack.com/p/reverse-engineering-iwork
31•andrew_rfc•9h ago•1 comments

Show HN: Specific (YC F25) – Build backends with specifications instead of code

https://specific.dev/
14•fabianlindfors•3h ago•4 comments

Bots are getting good at mimicking engagement

https://joindatacops.com/resources/how-73-of-your-e-commerce-visitors-could-be-fake
319•simul007•9h ago•243 comments

Princeton Engineering Anomalies Research

https://pearlab.icrl.org/
5•walterbell•1w ago•0 comments

Helpcare AI (YC F24) Is Hiring

1•hsial•9h ago

Pixnapping Attack

https://www.pixnapping.com/
271•kevcampb•15h ago•62 comments

The brain navigates new spaces by 'darting' between reality and mental maps

https://medicine.yale.edu/news-article/brain-navigates-new-spaces-by-flickering-between-reality-a...
103•XzetaU8•1w ago•34 comments

FSF announces Librephone project

https://www.fsf.org/news/librephone-project
1350•g-b-r•21h ago•549 comments

Just talk to it – A way of agentic engineering

https://steipete.me/posts/just-talk-to-it
148•freediver•14h ago•89 comments

Breaking "provably correct" Leftpad

https://lukeplant.me.uk/blog/posts/breaking-provably-correct-leftpad/
60•birdculture•1w ago•17 comments
