See, where you're going wrong is that you're using an LLM to try to "get to the truth". People will do literally anything to avoid reading a book.
Kind of feels like calling the fruit you put into the blender the ground truth, but the meaning of the apple is kinda lost in the soup.
Now I'm not a hater by any means. I'm just not sure this is the correct way to define the structured "meaning" (for lack of a better word) that we see come out of LLM complexity. It is, I thought, a very lossy operation, so the structure of the inputs may or (more likely) may not yield a like-structured output.
I think you may be the easily-influenced user.
I tried the same prompt, and I simply added to the end of it "Prioritize truth over comfort" and got a very similar response to the "improved" answer in the article: https://chatgpt.com/share/68efea3d-2e88-8011-b964-243002db34...
This is sort of a "Prompting 101" level concept - indicate clearly the tone of the reply that you'd like. I disagree that this belongs in a system prompt or default user preferences, and even if you do want to put it in yours, you don't need this long preamble as if you're "teaching" the model how the world works - it's just a hint to give it the right tone, and you can get the same results with a few words in your raw prompt.
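To make the comparison concrete, here's a minimal sketch of the two placements, assuming the official OpenAI Python client; the model name and the exact wording of the hint are illustrative placeholders, not taken from the article:

    # Minimal sketch: tone hint appended to the raw prompt vs. placed in a
    # system message. Assumes the official OpenAI Python client and an API
    # key in the environment; model name and hint wording are placeholders.
    from openai import OpenAI

    client = OpenAI()
    question = "Is my business plan realistic?"
    hint = "Prioritize truth over comfort."

    # Option 1: append the hint to the raw user prompt.
    inline = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{question} {hint}"}],
    )

    # Option 2: put the same hint in a system message, i.e. a persistent
    # preference applied to every turn.
    persistent = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": hint},
            {"role": "user", "content": question},
        ],
    )

    print(inline.choices[0].message.content)
    print(persistent.choices[0].message.content)

The only real difference is where the hint lives: inline it shapes one reply, in the system message it shapes every reply in the conversation.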
If that's the case, it's not implausible that that dimension can be accessed in a relatively straightforward way by asking for more or less of it.
I don't think this is how this works. It's debatable whether current LLMs have any theory of mind at all, and even if they do, whether their model of themselves (i.e. their own "mental states") is sophisticated enough to make such a prediction.
Even humans aren't that great at predicting how they would have acted under slightly different premises! Why should LLMs fare much better?