People just can't accumulate all those relationships between tokens and sub-tokens, but LLMs demonstrate that the result of a task is already encoded in the request through token relationships.
Anyone who has studied physics and simulations knows about distribution fitting: finding functions that imitate actual data -> Monte Carlo.
So an LLM is basically a multidimensional Monte Carlo method.
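The "distribution fit -> Monte Carlo" idea can be sketched in a few lines of Python (the Gaussian model and the numbers here are illustrative assumptions, not anything specific to LLMs):

```python
# Sketch: fit a simple model (a Gaussian, chosen for illustration)
# to observed data, then sample from the fit -- Monte Carlo style.
import random
import statistics

random.seed(0)

# "Training data": samples from some unknown process.
data = [random.gauss(5.0, 2.0) for _ in range(10_000)]

# Distribution fit: estimate the parameters of an imitating function.
mu = statistics.fmean(data)
sigma = statistics.stdev(data)

# Monte Carlo: generate new samples from the fitted distribution.
samples = [random.gauss(mu, sigma) for _ in range(10_000)]

# The generated samples mimic the data's statistics (mean near 5.0),
# but everything they "know" comes from the fitted parameters --
# the sampler cannot produce structure that was never in the fit.
print(round(statistics.fmean(samples), 1))
```

The last comment is the crux of the analogy: the sampler only reproduces the distribution it was fitted to.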
And this suggests that an LLM can't solve or think outside the scope of its training data. It can hold only a limited context and is not able to dream.
ghu666•2h ago