Ask HN: Loss of entropy in AI-based alt-self chat apps
1•Arkid•2h ago
I have seen a bunch of well-funded companies raise money on the thesis that they will create an alt-self of a human that can chat on that human's behalf. I have tried explaining to several of these folks that there is entropy loss in such conversations, much like how taking averages of averages smooths out any series. All I get back is gen-AI jargon from people who raised a lot of money precisely because nobody understood the jargon and investors threw money at them anyway. They don't get basic averaging, yet they want to build alt-self chats. Genuine rant, but I wanted to ask HN folks: isn't the loss of entropy obvious and easy to explain?
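To make the "averages of averages" claim concrete, here is a minimal numpy sketch (my illustration, not from any of the companies in question): repeatedly applying a moving average to a series shrinks its variance, and since the differential entropy of a Gaussian is 0.5*log(2*pi*e*var), the entropy falls with every pass.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)      # stand-in for a "human" signal

    def moving_average(series, window=5):
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode="valid")

    def gaussian_entropy(series):
        # differential entropy of a Gaussian with the series' variance
        return 0.5 * np.log(2 * np.pi * np.e * series.var())

    for i in range(4):
        print(f"pass {i}: var={x.var():.3f}  entropy={gaussian_entropy(x):.3f}")
        x = moving_average(x)        # average of averages on each pass

Each pass prints a strictly smaller variance and entropy, which is the smoothing effect being described.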
Comments
Festro•1h ago
You seem to be struggling to explain it to them, so no, it's clearly not very explainable.
There is a problem with averaging out responses via LLMs, but this is specifically what temperature controls on outputs are for.
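To ground the temperature point, a small sketch (my illustration; the logits are made up): dividing the logits by a temperature before the softmax reshapes the output distribution, and its Shannon entropy rises as the temperature goes up, which is the knob being referred to here.

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = logits / temperature
        z = z - z.max()              # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def shannon_entropy(p):
        return -np.sum(p * np.log(p + 1e-12))

    logits = np.array([3.0, 1.5, 1.0, 0.2])  # hypothetical next-token logits
    for t in (0.2, 1.0, 2.0):
        p = softmax(logits, t)
        print(f"T={t}: probs={np.round(p, 3)}  entropy={shannon_entropy(p):.3f}")

Low temperature collapses the distribution toward its mode (low entropy); high temperature flattens it (high entropy), so sampling entropy is tunable rather than fixed by the averaging.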
It's also worth noting that our own neural networks already average out our own responses from our accumulated experiences. That's not to say a human is equivalent to an AI but that we do share that trait. Humans do not make perfectly entropic novel responses every time they utter something.
Getting AIs to a point where they can replicate human responses convincingly enough to match a specific individual's own mannerisms, idiolect, and indeed their context-dependent entropy is going to be significantly harder than many think. There is no reason to think we won't overcome these entirely observable problems, though. We've already overcome issues with realtime conversational AI that would have looked impossible 20 years ago.