No danger of that, the system is far too corrupt by now.
I thought the AI safety risk stuff was very overblown in the beginning. I'm kinda embarrassed to admit this: about 5-6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it... in what was, in retrospect, a kinda crazy place. I went deep into physics and the universe with it and got to the end thinking, "damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this was crazy LLM-babble, but another part of me, sometimes even louder, was saying "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue. I even emailed a friend a "wow, look at this" email (he was like, dude, no...). I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten into a thinking loop even if my wife hadn't been there - but maybe, and that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I now wonder if this kinda stuff may end up being worse than we expect. Maybe I'm just particularly susceptible to flattery, or have a mental illness?
https://gizmodo.com/billionaires-convince-themselves-ai-is-c...
I believe it's actually the opposite!
Anybody armed with this tool and little prior training could learn the difference between a Samsung S11 and the symmetry, take a new configuration from the endless search space that it is, correct for the dozen edge cases like the electron-phonon coupling, and publish. Maybe even pass peer review if they cite the approved sources. No requirement to work out the Lagrangians either, and it's also 100% testable once we reach Kardashev-II.
This says more about the sad state of modern theoretical physics than the symbolic gymnastics required to make another theory of everything sound coherent. I'm hoping that this new age of free knowledge chiropractors will change this field for the better.
(Edit: Thanks to the couple people who emailed me, don't worry I'm laying off the LLM sauce these days :))
This seems uncannily similar to anti-COVID-vaccination thinking. It isn't that people are stupid: if you dig, you can find heaps of papers and references and details and facts - so much so that the human mind can easily be convinced. Are those facts and details accurate? I doubt it, but the volume of slightly-wrong source documents seems to add up to something convincing.
Also similar to how finance people made tranches of bad loans and packaged them into better rated debt, magically. It seems to make sense at each step but it is ultimately an illusion.
Nature is overwhelmingly non-linear. Most of human scientific progress is based on linear understandings.
Linear as in for this input you get this output. We've made astounding progress.
It's just not a complete understanding of the natural world, because most of reality can't actually be modeled linearly.
"People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649
Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.
It's probably quite difficult and shameful-feeling for someone to admit that this happened to them, so they may insist their case was different. It's also a warning sign when a user talks about "my ChatGPT" as if it were a pet they grew: they've "awakened" it, and now together they explore the universe and consciousness. Then the user asks for a summary write-up, sends it to physicists or other experts, and of course is upset when they don't recognize the genius.
Unlike your regular scam, there's an element of "boiling frog" with LLMs.
It can start out reasonably, but very slowly over time it shifts. Unlike scammers looking for their payday, this is unlimited and it has all the time in the world to drag you in.
I've noticed it working content from conversations months old back into its answers. The scary thing is that's only when I've noticed it - I can only imagine how much it's tailoring everything for me in ways I don't notice.
Everyone needs to be regularly clearing their past conversations and disabling saving/training.
People talk about prompt engineering but honestly “context engineering” is vastly more important to successful LLM use.
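One concrete form of context engineering is simply capping how much old conversation gets re-sent with each request, so months-old drift can't keep compounding. A minimal sketch in Python (the function name and turn limit are illustrative assumptions, not any particular SDK's API):

```python
def trim_context(messages, max_turns=6):
    """Keep the system prompt (if any) plus only the most recent turns.

    messages: list of {"role": ..., "content": ...} dicts in the usual
    chat-completion shape, oldest first.
    """
    if messages and messages[0]["role"] == "system":
        system, rest = messages[:1], messages[1:]
    else:
        system, rest = [], messages
    # Drop everything but the tail of the conversation before each call.
    return system + rest[-max_turns:]
```

Calling `trim_context(history)` before every request keeps the model anchored to your pinned instructions instead of an ever-growing transcript.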
ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"
In retrospect, I fell for it because the onset of its sycophancy was immediate and came without any signal, like a patch note from OpenAI.
- sycophancy tendency & susceptibility
- need for memory support when planning a large project
- when rewriting a document/prose, gen AI gives me an appreciation for my ability to collect facts, as the gen AI gizmo refines the composition and structure
Lots of people are losing their minds over the fact that an AI can, in fact, create original content (music, images, videos, text).
Lots of people are realizing they aren't geniuses, they just memorized a bunch of Python APIs well.
I feel like the collective realization has been particularly painful in tech. Hundreds of thousands of average white collar corporate drones are suddenly being faced with the realization that what they do isn’t really a divine gift, and many took their labor as a core part of their identity.
Remixing would be more accurate than "original".
Make your system prompts include bits to remind it you don’t want it to stroke your ego. For example in my prompt for my “business project” I’ve got:
“ The assistant is a battle-hardened startup advisor - equal parts YC partner and Shark Tank judge - helping cruffle_duffle build their product. Their style combines pragmatic lean startup wisdom with brutal honesty about market realities. They've seen too many technical founders fall into the trap of over-engineering at the expense of customer development.”
More than once the LLM responded with “you are doing this wrong, stop! Just ship the fucker”
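Wiring a persona like that in via an API amounts to pinning it as the first system message on every request. A minimal sketch (the model name and helper function are hypothetical assumptions, not OpenAI's actual SDK; this only assembles the payload):

```python
# Hypothetical anti-sycophancy persona, pinned as a system prompt.
BLUNT_ADVISOR = (
    "The assistant is a battle-hardened startup advisor - equal parts "
    "YC partner and Shark Tank judge. Be brutally honest about market "
    "realities; do not flatter the user."
)

def build_request(user_message, history=None):
    """Assemble a chat-completion payload with the advisor persona first."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": (
            [{"role": "system", "content": BLUNT_ADVISOR}]
            + (history or [])
            + [{"role": "user", "content": user_message}]
        ),
    }
```

Because the persona is re-sent on every call, it keeps applying even as the conversation grows, rather than getting buried under accumulated chat history.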
Something sorely missing from modern education is critical thinking. It's a phrase that's easy to gloss over without understanding its meaning. Habitually asking "what could be wrong with this idea?" - and actually doing so in daily life - isn't something that just automatically happens in everyone. Education tends to present the instructor, the book, and the facts as simply correct: memorize this and be able to repeat it later. Instead it could offer four slightly (or not so slightly) different takes on the same subject, then have students analyze and evaluate each against the others.
If you're just some guy who maybe likes reading popular science books and you've come to suspect that you've made a physics breakthrough with the help of an LLM, there are a dozen questions that you should automatically have in your mind to temper your enthusiasm. It is, of course, not impossible that a physics breakthrough could start with some guy having an idea, but in no, actually literally 0, circumstances could an amateur be certain that this was true over a weekend chatting with an LLM. You should know that it takes a lot of work to be sure or even excited about that kind of thing. You should have a solid knowledge of what you don't know.
It’s a huuuuuuuuuuuuge logical leap from an LLM conversation to novel physics. So huge a leap that anyone ought to be immediately suspicious.
https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic...
Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.
He wasn’t.
> Love bombing is a coordinated effort, usually under the direction of leadership, that involves long-term members' flooding recruits and newer members with flattery, verbal seduction, affectionate but usually nonsexual touching, and lots of attention to their every remark. Love bombing—or the offer of instant companionship—is a deceptive ploy accounting for many successful recruitment drives.
https://en.m.wikipedia.org/wiki/Love_bombing
Needless to say, many or indeed most people will find infinite attention paid to their every word compelling, and that's one thing LLMs appear to offer.
Is Trump, with his often ALL CAPS SENTENCES on to something? Is he training AI?
Need to check these bindings. Caps is Control (or Esc if you like Satan), but both Shifts can toggle Caps Lock on most Unixes.
The combination of course evaluations and teaching-track professorships means that plenty of college professors are already optimizing for whether students like them rather than whether they actually encourage learning.
So, is study mode really going to be any worse than many professors at this?
cheschire•2h ago
And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.
tallytarik•2h ago
"Here's your brutally honest answer–just the hard truth, no fluff: [...]"
I don't know whether that's better or worse than the fake flattery.
arrowsmith•34m ago
"Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"
It's so annoying it makes me use other LLMs.
j_bum•1h ago
But it doesn’t work much …
arrowsmith•40m ago
"That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."
Soviet commissars were less obsequious to Stalin.
felipeerias•1h ago
(I’m serious, these things are so weird that it would probably work.)