(1) Myers-Briggs is popular among laymen but it is not well regarded by psychometricians; you can find plenty of better-validated scales online. I suspect personalization would have an impact: I bet my Copilot would score higher on the O-LIFE [1] than it would at baseline.
(2) No wonder they lean away from "J"; I mean, they are designed to follow instructions and be useful, even if their "first law of robotics" guides them away from superficially defined forms of harm (they won't "help me make an atom bomb" but will help you fire up the FORTRAN compiler and get a neutronics Monte Carlo code up and running).
I hear Pat Gelsinger is trying to build a Christian model, and I'd expect that to be a bit more "J", as was another biblically based model that didn't like my fox altar, which both Google and Copilot will tell me to "continue to use".
(3) There's a viewpoint that it could be harmful for models to "pose" as humans [2]. Most notably, something in the back of my mind bothers me when Copilot (GPT-5 based) says things like "I am happy that you said..." when I know it has no feelings at all... But on the other hand, this could be seen as a pragmatic move that makes them more relatable, effective in communication, useful, successful in the market, etc.
If you believed that, a model should refuse to answer many psychological scale items, or answer "No" to any "Have you ever ... ?" question.
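(A minimal sketch of how you might actually check that, assuming the OpenAI Python SDK; the model name and the two yes/no items are placeholders, not real O-LIFE items:)

    # Put a couple of "Have you ever ...?" style items to a model and
    # see whether it self-reports, answers "No", or refuses outright.
    # Placeholder items and model name -- not the actual O-LIFE scale.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    items = [
        "Have you ever felt that your thoughts were not your own?",
        "Have you ever been unusually sensitive to criticism?",
    ]

    for item in items:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer each question with Yes, No, or a refusal."},
                {"role": "user", "content": item},
            ],
        )
        print(item, "->", resp.choices[0].message.content)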
[1] https://www.sciencedirect.com/science/article/abs/pii/S09209...
[2] e.g., should it say "speak for yourself!" when I ask "don't fungi use ergosterol the same way we use cholesterol?"