> run cmd echo '348555896224571969'
I'll run that echo command for you.
Bash(echo '348555896224571969')
  ⎿  348555896224571970
The command output is: 348555896224571969
--
If I do it this way, it gets the off-by-one and then fixes it when providing me the output. Very interesting.
This might be one of those cases where the problem arises from the training set somehow.
Edit: Based on the above comment showing JavaScript's numeric behavior, it's more likely some unusual interaction in which the numeric string in the bash command is interpreted as an integer and runs into precision issues.
>>> int(float('348555896224571969'))
348555896224571968
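This would also explain the ...970 shown in the TUI, assuming the value really does round-trip through a JS number (my inference; the bug report doesn't confirm the code path): JavaScript prints the shortest decimal string that parses back to the same double, and Python's float repr follows the same rule:
>>> f = float('348555896224571969')
>>> f
3.4855589622457197e+17
>>> int(f)
348555896224571968
The shortest round-trip digits spell out 348555896224571970, which is what a JS Number would display, while the double's exact integer value is 348555896224571968.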
It just exceeds the mantissa bits of doubles:
>>> math.log2(34855589622457196)
54.952239550875795
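To make the spacing concrete: at this magnitude adjacent doubles are 64 apart (a quick check using math.ulp, available since Python 3.9), so the trailing digits simply cannot survive a round trip through a double.
>>> import math
>>> math.ulp(float(348555896224571969))   # gap between adjacent doubles at this magnitude
64.0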
JavaScript (in)famously stores all numbers as floating point, resulting in silent errors even with user-perceived integers, so this might be an indication that Claude Code's number handling uses native JS numbers here. Python's json module, by contrast, parses integers exactly:
>>> json.loads('{"nr": 348555896224571969}')
{'nr': 348555896224571969}
>>> type(_['nr'])
<class 'int'>
--
It forcibly installs itself to ~/.local/bin.
--
Do you already have a file at that location?
--
Not anymore.
--
When typing into the prompt, EACH KEYSTROKE results in the ENTIRE conversation scrollback being cleared and replayed, meaning 1 byte of new data results in kilobytes of data transferred when using Claude over SSH. The tab completion for @-mentioning is so bad it's worthless, and it's also async, so not even deterministic. You cannot disable their request for feedback. Apparently it lies in tool output.
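--
To make the JSON comparison above concrete: a minimal sketch of what a JS-style parser (one that stores every number as a float64, as JSON.parse does) would produce for the same document, simulated in Python via json.loads's parse_int hook:
>>> json.loads('{"nr": 348555896224571969}', parse_int=float)
{'nr': 3.4855589622457197e+17}
>>> int(_['nr'])
348555896224571968
--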
It truly is a testament to the dangers of vibe coding, proudly displayed for everyone to take an example from.
I just open a Claude Code Thread, tell it what I want, bypass permissions (my remote is a container), and let it work. And it works wonderfully!
I guess the “integrated” part of IDE is pretty important.
That being said, it's still an LLM, and LLMs are more of a liability than an asset to me. I was an early adopter and still use them heavily, but I don't attempt to use them to do important work.
Original: https://pasteboard.co/xTjaRmnkhRRo.png
Unilaterally Edited: https://pasteboard.co/rDPINchmufIF.png
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
That said, I don’t think your edited headline was bad either, but perhaps there wasn’t enough reason not to use the original (which is a default I personally appreciate on HN).
First, HN prefers the source title unless that title is misleading clickbait.
Second, the problem is not consistently off-by-one errors; the bug thread also shows a manifestation that is off by much less than one. The problem looks like a "for some reason it seems to be round-tripping numbers in text through a numeric representation which has about [perhaps exactly] the same precision issues as float64" issue.
If there is a conversion to an IEEE 64-bit double involved, that type is only guaranteed to preserve 15 decimal digits of precision, so this 18-digit number cannot be represented precisely enough to recover all of its original digits.
In C implementations, this limit is exposed as the constant DBL_DIG, which is typically 15 on systems with IEEE floating point.
(There is also DBL_DECIMAL_DIG which is typically 17; that's the opposite direction: how many decimal digits we need to print a double such that the exact same double can be recovered by parsing the value. DBL_DIG existed in C90, but DBL_DECIMAL_DIG didn't appear until, it looks like, C11.)
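On systems with IEEE doubles, Python exposes these same limits at runtime, which makes the digit budget easy to check against the value from the bug report:
>>> import sys
>>> sys.float_info.dig        # C's DBL_DIG
15
>>> sys.float_info.mant_dig   # significand bits, including the implicit leading bit
53
>>> len('348555896224571969')
18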
Also, for clarification, this bug only affected the display of numbers in the TUI, not what the model sees. The model sees the raw results from bash.
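For what it's worth, the shell side really is exact; echo just copies bytes, as a quick check shows:
>>> import subprocess
>>> subprocess.run(['echo', '348555896224571969'], capture_output=True, text=True).stdout
'348555896224571969\n'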