For example, Libgen is out of commission, and the substitutes are hell to use.
Summary of what's up and not up:
Somehow I also preferred Libgen, but I don't think Anna's Archive is "hell to use".
As for substitutes: the alternate libgen sites seem more limited to me, but I am comparing with memories, so that's untrustworthy.
That's just natural reward hacking when you have no training/constraints for readability. IIRC R1 Zero is like that too; they retrained it with a bit of SFT to keep it readable and called it R1. Hallucinating training examples if you break the format or prompt it with nothing is also pretty standard behavior.
What does this mean?
That's the nature of reinforcement learning and any evolutionary processes. That's why the chain of thought in reasoning models is much less useful for debugging than it seems, even if the chain was guided by the reward model or finetuning.
What is Neuralese? I tried searching for a definition but it just turns up a bunch of LessWrong and Medium articles that don't explain anything.
Is it a technical term?
EDIT: orbital decay explained it pretty well in this thread
It originally referred to the idea that neural signals might form an intrinsic "language" representing aspects of the world, though these signals gain meaning only through interpretation in context.
In artificial intelligence, the term now has a more concrete role, referring to the deep communication protocols used by multi-agent systems.
One implication behind its popularity on LessWrong is the worry that malicious AI agents might hide bad intent and actions by communicating in a dense, indecipherable way while presenting only normal intent and actions in their natural-language output.
You could edit this slightly to extract a pretty decent rule for governance, like so:
> malicious agents might hide bad intent and actions by communicating in a dense, indecipherable way while presenting only normal intent and actions in a natural way
It applies to AI, but also to many other circumstances where the intention is that you are governed, e.g. medical, legal, financial.
Thanks!
• https://en.wikipedia.org/wiki/Cant_(language)
• https://en.wikipedia.org/wiki/Dog_whistle_(politics)
Or even just regional differences, like how British people, upon hearing about "gravy and biscuits" for the first time, think this: https://thebigandthesmall.com/blog/2019/02/26/biscuits-gravy...
> It applies to AI, but also to many other circumstances where the intention is that you are governed, e.g. medical, legal, financial.
May be impossible to avoid in any practical sense, due to every speciality having its own jargon. Imagine web developers having to constantly explain why "child element" has nothing to do with offspring.
https://en.wiktionary.org/wiki/mentalese
EDIT: After reading the original thread in more detail, I think some of the sibling comments are more accurate. In this case, neuralese is more like a language of communication expressed by neural networks, rather than their internal representation.
1) Internally, in latent space, LLMs use what is effectively a language, but all the words are written on top of each other instead of separately, and if you decode it as letters it sounds like gibberish, even though it isn't. It's just a much denser language than any human language. This makes the intermediate representations unreadable ... and thus "hides the intentions of the LLM", if you want to make it sound dramatic and evil. But yeah, we don't know what the intermediate thoughts of an LLM sound like.
The decoded version is often referred to as "neuralese".
2) If two LLMs with sufficiently similar latent spaces (e.g. the same model) communicate with each other, it has often been observed that they switch to "gibberish", BUT when tested they are clearly still passing meaningful information to one another. One assumes they are using tokens more efficiently to steer the latent-space state to a specific point, rather than bothering with words. Think of it like this: the thoughts of an LLM are a 3d point (in reality ~2000d, but ...). Every token/letter is a 3d vector (meaning you add them), chosen so that words add up to the thought that is their meaning. But when outputting text, why bother with words? You can reach any thought/meaning by combining vectors; just pick the letter that moves the most in the right direction. Much faster.
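A toy sketch of that "just add letter vectors toward the thought" idea, with entirely made-up random 3d embeddings (no real model decodes like this; it is only meant to make the greedy-vector intuition concrete):

    import numpy as np

    # Made-up 3d "embeddings" for each letter and a random target "thought".
    rng = np.random.default_rng(0)
    alphabet = list("abcdefghijklmnopqrstuvwxyz ")
    embeddings = {ch: rng.normal(size=3) for ch in alphabet}

    target_thought = rng.normal(size=3) * 10.0  # the "meaning" we want to reach
    current = np.zeros(3)
    output = []

    for _ in range(20):  # emit up to 20 "tokens"
        remaining = target_thought - current
        # Greedily pick the letter whose vector points most along what's left.
        best = max(alphabet, key=lambda ch: float(np.dot(embeddings[ch], remaining)))
        current += embeddings[best]
        output.append(best)

    print("".join(output))                            # reads as gibberish...
    print(np.linalg.norm(target_thought - current))   # ...but it steers toward the target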
Btw: some humans (usually toddlers, or children who are related) also switch to talking gibberish with each other while communicating. This is especially often observed in children who initially learn language together. Might be the same thing.
These languages are called "neuralese".
Feeding an empty prompt to a model can be quite revealing about what data it was trained on.
>> i sample tokens based on average frequency and prompt with 1 token
As a result, OP seems to think the model was trained on a lot of Perl: https://xcancel.com/jxmnop/status/1953899440315527273#m
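If you want to try the "prompt with (almost) nothing" trick yourself, here is a minimal sketch with Hugging Face transformers. GPT-2 is only a stand-in; the tweet above is about a different model, and OP's exact sampling setup differs:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Any open causal LM works; gpt2 is just a small, convenient stand-in.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Start from the BOS token alone, i.e. an effectively empty prompt.
    input_ids = tokenizer(tokenizer.bos_token, return_tensors="pt").input_ids

    out = model.generate(
        input_ids,
        do_sample=True,      # sample freely instead of greedy decoding
        temperature=1.0,
        max_new_tokens=100,
    )
    print(tokenizer.decode(out[0], skip_special_tokens=True))

Whatever comes out is roughly a sample from the model's unconditional distribution, which is why people read tea leaves about the training mix from it.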
LOL! I think these results speak more to the flexibility of Perl than to any actual insight into the training data! After all, 93% of inkblots are valid Perl scripts: https://www.mcmillen.dev/sigbovik/
I used an LLM to generate an inkblot that decodes to a Python string and a number, along with verification of it, which just proves that it is possible.
the way that the quoted article creates Perl programs is through OCRing the inkblots (i.e. creating almost random text) and then checking that result to see if said text is valid Perl
it's not generating a program that means anything
> it's not generating a program that means anything
Glad we agree.
[1] Could OCR those inkblots (i.e. they are almost random text)
OCRing literal random inkblots will not produce valid C (or C# or Python) code, but it will produce valid Perl most of the time, because Perl is weird, and that is funny.
It's not about obfuscating text in an inkblot; it's about almost any string being a valid Perl program, which is not the case for most languages.
Edit0: here: https://www.mcmillen.dev/sigbovik/
> it's about almost any string being a valid Perl program
Is this true? I think most random unquoted strings aren't valid Perl programs either. Am I wrong?
Because of Perl's flexibility and heavy use of symbols, you can in fact run most random combinations of strings and they'll be valid Perl.
Copying from the original comment: https://www.mcmillen.dev/sigbovik/
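One rough way to sanity-check that claim yourself (assuming perl is installed): generate random strings and ask perl -c whether they parse, which is roughly the same kind of check the linked SIGBOVIK paper runs on its OCR'd inkblots:

    import random
    import string
    import subprocess

    def is_valid_perl(text: str) -> bool:
        """Return True if perl -c accepts the text as a syntactically valid program."""
        result = subprocess.run(["perl", "-c", "-e", text], capture_output=True)
        return result.returncode == 0

    random.seed(0)
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    samples = ["".join(random.choices(alphabet, k=20)) for _ in range(200)]
    valid = sum(is_valid_perl(s) for s in samples)
    print(f"{valid}/{len(samples)} random strings were accepted by perl -c")

Note that -c only checks syntax (it still runs BEGIN blocks, but random text almost never contains one), and the hit rate depends heavily on the alphabet and string length you pick.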
What percent of all e.g. ASCII or Unicode strings are valid expressions given a formal grammar?
Ironically LLMs seem pretty bad at writing AppleScript, I think because (i) the syntax is English-like but very brittle, (ii) the application dictionaries are essential but generally not on the web, and (iii) most of the AppleScript that is on the web has been written by end users, often badly.
I think as of the last Stack Overflow developer survey, it only had ~4% market share…
I say this as an R user who spams LLMs with R on a daily basis.
Here's a link to the model if you want to dive deeper: https://huggingface.co/philomath-1209/programming-language-i...
I mean, you can go read his pinned blog post where he argues that there are no new ideas and it is all data. He makes the argument that architecture doesn't matter, which is just so demonstrably false that it is laughable. He's a scale maximalist.
I also expect an AI researcher from a top university not to make such wild mistakes.
> 3. RLHF: first proposed (to my knowledge) in the InstructGPT paper from OpenAI in 2022
I mean, if you go read the InstructGPT paper [0], on page 2 you'll see: "Specifically, we use reinforcement learning from human feedback (RLHF; Christiano et al., 2017; Stiennon et al., 2020) to fine-tune GPT-3 to follow a broad class of written instructions (see Figure 2)."
And in Christiano et al. you'll find: "Our algorithm follows the same basic approach as Akrour et al. (2012) and Akrour et al. (2014)."
I mean, this is just obviously wrong. It is so obviously wrong that it should make the person saying it second-guess themselves (which is categorically the same error you're pointing out). I'm sure we can trace the idea back to the '80s if not earlier. This is the kind of take I'd expect from a non-researcher, but not from someone with two dozen NLP papers. The InstructGPT paper was just the first time someone integrated RLHF into an LLM (but not the first time into a LM).
Maybe a better article is the one he wrote on Super Intelligence From First Principles. As usual, when someone says "First Principles" you bet they're not gonna start from First Principles... I guess this makes sense in CS since we index from 0
[0] https://arxiv.org/abs/2203.02155
[Christiano et al.] https://arxiv.org/abs/1706.03741
[Stiennon et al.] https://arxiv.org/abs/2009.01325
[Akrour et al. (2012)] https://arxiv.org/abs/1208.0984
It's an open model; everyone can bench it on everything, not only on specific benchmarks. Training on specific reasoning benchmarks is a conjecture.
Personally, I expect more rigor from any analysis and would hold myself to a higher standard. If I see anomalous output at a stage, I don't think "hmm, looks like one particular case may be bad but the rest is fine" but rather "something must have gone wrong and the entire output/methodology is unusable garbage" until I figure out exactly how and why it went wrong. And 99 times out of 100 it wasn't the one case (that happened to be languages OP was familiar with) but rather something fundamentally incorrect in the approach, which means the data isn't usable and doesn't tell you anything.
> Personally, I expect more rigor from any analysis and would hold myself to a higher standard.
When something is "pretty bizarre", the most likely conclusion is "I fucked up", which is very likely in this case. I really wonder if he actually checked the results of the classifier. These things can be wildly inaccurate, since differences between languages can be quite small at times and some languages are very human-language oriented. He even admits that Java and Kotlin should be higher, but then doesn't question Perl, R, AppleScript, Rust, or the big drop to Python. What's the joke? If you slam your head on the keyboard you'll generate a valid Perl program?

It worries me that I get this feeling from quite a number of ML people who are being hired and paid big bucks by big tech companies. I say this as someone in ML too. There's a propensity to just accept outputs rather than question them. This is a basic part of doing any research: you should always be incredibly suspicious of your own results. What did Feynman say? Something like "The first rule is not to fool yourself, and you're the easiest person to fool"?