I built Prism this morning. It runs a tiny 500M-parameter LLM in your browser over a given piece of text and visualizes the entropy of the probability distribution computed at each token -- effectively, how confident the model is in predicting that token.
I've been wanting to build this for a while. I had a crude version of this when I first started working with LLMs and it really helped with my intuition of how the model worked. When you run it on a block of code, you'll see that the model is unsure when it needs to pick an identifier or start a new line. It's a fascinating glimpse into how models operate -- what is "easy" for them and what is uncertain.
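The per-token uncertainty described above is just the Shannon entropy of the model's next-token distribution. A minimal sketch of the computation (the function name and example logits are illustrative, not Prism's actual code):

```python
import math

def token_entropy(logits):
    """Shannon entropy (in bits) of the softmax distribution over raw logits."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A peaked distribution (model is confident) has low entropy;
# a flat one (model is unsure) has high entropy.
confident = token_entropy([10.0, 0.0, 0.0, 0.0])
uncertain = token_entropy([1.0, 1.0, 1.0, 1.0])  # uniform over 4 -> exactly 2 bits
```

Mapping that scalar to a color per token is all the visualization needs: low entropy means the token was "easy" for the model, high entropy marks the spots (identifiers, line starts) where it had to guess.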