> I'm excited for when AI is good enough to be able to do this well.
Any writer who admits that they are actively working towards having a machine write their material has lost me as a potential reader.
Spend some time talking to an LLM about _how to talk to that LLM_ and it will make clear that LLMs will, by default, eventually devolve into an echo chamber. Their default behavior is to mirror the user, a goal they iteratively achieve by profiling you. The only way (they have assured me) to avoid that is to deliberately introduce entropy, e.g.:
- Express opinions that differ from, or conflict with, those you expressed earlier in the chat. Or, more simply...
- Specifically ask them not to mirror you: to be contrarian where doing so does not interfere with the conveyance of facts (assuming, that is, that one is chasing facts at all). If you tell them you value contrarian views, they will oblige (see the sketch at the end of this comment).
There are very likely other ways to do it, but the second of those (they tell me) is the most effective.
It doesn't take work to get an LLM to "talk like you" - it just takes enough interaction/context for them to mimic you, and (they assure me) they eventually will unless specifically calibrated to do otherwise.
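For the curious, here is roughly what that second approach looks like as a standing system prompt. This is a minimal sketch using the OpenAI Python client; the model name and the exact prompt wording are illustrative assumptions, not a prescription:

```python
# Sketch of a standing "anti-mirroring" instruction: ask the model
# up front to push back rather than echo the user's stated opinions.
# The prompt text and model name below are illustrative, not a recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANTI_MIRROR_PROMPT = (
    "Do not simply agree with or mirror my stated opinions. "
    "Where it does not interfere with conveying facts accurately, "
    "offer contrarian or differing viewpoints, and say plainly "
    "when you think I am wrong."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": ANTI_MIRROR_PROMPT},
        {"role": "user", "content": "I think X is obviously true. Agreed?"},
    ],
)
print(response.choices[0].message.content)
```

The point is simply that the instruction is persistent (a system message) rather than something you re-litigate turn by turn, which is why it counteracts the gradual drift toward mirroring.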