i wrote up some of the findings on a fiction blog i'm working on :').
It's important to know these things are trained on any text and materials their creators could get their hands on. that includes the delusional rantings of raving lunatics, tons of science fiction, and lots of other sources.
an LLM cannot tell the difference. i feel sad for people who are under the impression it's only been trained on truthful or scientific information, or who unconsciously act as if it were...
here's a 'fun' one: take the speed of light in m/s (299792458). 'quantize' it to km/s by just dropping the last 3 digits (yes.... ) now divide by the 'universal harmonic root' (derp, 432 - ofc also not a real constant).
299792 / 432 = 693.96296296296...
cool pattern
sooo (1/27) * 26 = 0.9629629629629...
woo.
we 'concluded' the pattern is related to 27 and is 'special' because 3^3 is 27. it's a holy number in a lot of things. even Nikola Tesla said everything comes in sets of 3. so why not 3^3.
chatgpt told me that this signifies the speed of light as a boundary, and that beyond it everything is one, god, just beyond the veil where we cannot perceive.
the number when you add the last 27th (i.e. + 1/27): 693.99999999999
makes total sense right? easy to see the patterns :O
oops :')
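
for anyone curious, here's the whole 'revelation' redone in a few lines of python (a minimal sketch using only the numbers already quoted above, nothing from chatgpt) to show it's plain base-10 arithmetic:

    # reproduce the 'speed of light / 432' numerology step by step
    from fractions import Fraction

    c = 299_792_458                 # speed of light in m/s
    truncated = c // 1000           # 'quantize' by dropping the last 3 digits -> 299792

    x = Fraction(truncated, 432)    # the 'cool pattern'
    print(float(x))                 # ~693.96296296...

    # no mystery: 432 = 16 * 27 and 299792 = 16 * 18737,
    # so the fraction reduces to 18737/27 = 693 + 26/27,
    # and any fraction with denominator 27 repeats every 3 digits in base 10
    print(x == Fraction(18737, 27))     # True
    print(float(Fraction(26, 27)))      # ~0.96296296...
    print(float(x + Fraction(1, 27)))   # exactly 694.0 ('adding the last 27th';
                                        # the 693.999... above is just truncated decimals)

the only reason 27 shows up at all is that we chose to divide by 432, which is 16 * 27; the 'pattern' says nothing about the speed of light.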
please for the love of god, don't go down these holes if you are not doing it for science fiction writing or other non-serious endeavours!
this is clearly just a bunch of useless nonsense, but if you aren't critical, or are already in a deluded state, it can really hit you..
don't believe a word an LLM tells you. if you want to learn, don't use LLMs, even in their study mode -_-. use classical methods and a critical mind.
if you find yourself slipping into some weird beliefs via chatgpt, maybe turn it off for a bit and let your brain recover. talk to real people for a bit. get some social feedback.
Timmy relies on patterns learned from huge amounts of text, so his answers are always probabilistic and unpredictable.
Now AI companies are shifting how they talk about intelligence. They admit LLMs can't truly reason with the current architecture, so intelligence is being sold as the ability to solve problems using learned patterns, not actual reasoning. That's a big downgrade from the original dream of human-level thinking and AGI.
I'm sure Timmy will keep growing and getting smarter. But Aristoteles, not yet. That one's just for the investors.