(By help I mean just help, not having it write an entire sloppy article.)
The main one is to set reader expectations that any errors are entirely my own, and that I spent time reviewing the details of the work. The disclosure seemed to me a concise way to do that -- my intention was not any form of anti-AI virtue signaling.
The other reason is that I may use AI for some of my future work, and as a reader, I would prefer a disclosure about that. So I figured if I'm going to disclose using it, I might as well disclose not using it.
I linked to other thoughts on AI just in case others are interested in what I have to say. I don't stand to gain anything from what I write, and I don't even have analytics to tell me whether more people are viewing it.
All in all, I was just trying to be transparent and share my work.
The reason is that the opcode encoding is very dense, has no redundancy that would let a disassembler detect bad encodings, and usually bears no relationship to neighboring words.
By that I mean that some four-byte chunk (say) treated as an opcode word is treated that way regardless of what came before or what comes after. If it looks like an opcode with a four-byte immediate operand, then the disassembler will pull in that operand (which can be any bit combination) and skip another four bytes. Nothing in the operand will indicate "this is a bad instruction overall".
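To make that concrete, here is a minimal sketch of a linear-sweep disassembler over a made-up ISA. The opcode table, mnemonics, and operand widths are all invented for illustration -- this is not any real encoding:

    import struct

    # Hypothetical single-byte opcodes; the second field is how many
    # operand bytes follow each one.
    OPCODES = {
        0x01: ("nop",  0),
        0x10: ("push", 4),   # takes a 4-byte immediate
        0x20: ("add",  1),   # takes a 1-byte register number
    }

    def disassemble(data):
        pc = 0
        while pc < len(data):
            op = data[pc]
            mnemonic, width = OPCODES.get(op, ("db", 0))
            operand = data[pc + 1 : pc + 1 + width]
            if len(operand) < width:
                print(f"{pc:04x}: truncated")
                break
            if width == 4:
                # Any 32-bit pattern is a "valid" immediate; no bit
                # combination flags the instruction as garbage.
                imm = struct.unpack("<I", operand)[0]
                print(f"{pc:04x}: {mnemonic} 0x{imm:08x}")
            elif width:
                print(f"{pc:04x}: {mnemonic} {operand.hex()}")
            else:
                print(f"{pc:04x}: {mnemonic}")
            pc += 1 + width   # skip the operand bytes, whatever they held

    # Arbitrary bytes still "disassemble" cleanly:
    disassemble(bytes([0x10, 0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x20, 0x07]))

Note that the loop never does anything with the operand bytes except skip them, so a garbage byte stream decodes just as cleanly as real code -- and if the first byte is misidentified, everything downstream decodes differently with no step ever failing.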