I cannot begin to imagine how devastated his family is over all of this - how many times, and in how many ways, they will ask themselves, "what could we have done differently?"
LLM-based tools like ChatGPT, Grok, etc. can be fantastic - when used well
And absolutely horrible when used badly
Because they respond statistically to prompts and prior responses, spitting out text drawn from their vast training corpora, negative prompting will - inevitably - lead to negative text generation
You have probably heard of the 7-38-55 Rule [0] (some round it and call it the 60-35-5 rule), which observes that well under 10% of communication happens through the words themselves. Body language, the biggest component, is completely absent in a tool like a chatbot, and tone is [nearly] impossible to determine from pure text. Take, for example, Harrison Ford reading the opening words of "The Cat in the Hat" in Patriot Games while his daughter is in the hospital after the car crash: it is a "fun" book, yet the way that opening is read is deeply ominous.
Chatbots, like text messages, IMs, email, etc, have their tone assumed by the other party: if the person interacting is in a good mood, even when talking about sad or negative things, the tone they infer in the bot's responses is going to be very different than if they are in a foul mood of some kind.
Combine that with how easy it is to manipulate prompts into different results even when "asking the same thing" (e.g. "how many b's are in blueberry" vs "how many letters b are in the word blueberry" [1]), and these generative engines - while, again, amazing tools - are steered by their algorithmic matrices to produce whatever they have been 'trained' to produce.
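If you want to see that phrasing sensitivity for yourself, one way is to send both wordings to the same model at temperature 0 and compare the answers. A minimal sketch, assuming the OpenAI Python client and the model name "gpt-4o-mini" (both assumptions on my part; any chat-completion model and provider would do):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Two phrasings of "the same question". Letter counting trips up
    # token-based models, and the wording shifts which answer comes back.
    prompts = [
        "how many b's are in blueberry",
        "how many letters b are in the word blueberry",
    ]

    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; swap in your own
            messages=[{"role": "user", "content": p}],
            temperature=0,  # reduce sampling noise so differences come
                            # mostly from the prompt, not randomness
        )
        print(p, "->", resp.choices[0].message.content)

With temperature pinned at 0, whatever variation remains between the two answers comes from the phrasing itself - which is the point.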
-------
[0] https://www.masterclass.com/articles/how-to-use-the-7-38-55-...