I'm having a hard time figuring out if this is satire or not.
Properly prompted, an LLM writes far better than most people.
Writing is a difficult skill that many (most?) educational systems do not effectively teach. Most people are terrible writers.
It takes some prompting to nudge the model out of that default voice because post-training reinforced it. They will likely shift it once these AI-isms are widely known and recognized. I'd assume the next-gen models under training now will get negative feedback from human evaluators for sounding too AI-like, and then there will be new AI smells to calibrate to.
Personally, I still can't quite believe that any of this is possible. A computer should be unable to write any prose at all. If it can, but it defaults to a certain tone of voice and needs to be prompted a bit to do something else... okay? I guess that's how it is.
Winced my way through “Convolutions are in CNNs (it’s literally in the name, Convolutional Neural Network)”, then had to stop.
It’s honestly offensive to me. It doesn’t even make sense on its own terms. For some reason we fly from LLM inference to toy MNIST to convolutions with zero transition or sense of structure.
Also, in my experience, it's a great way to run K8s on IaaS while minimizing vendor lock-in.
Makes it sound like it's new hardware. This is just (I'm inferring) software to program an off-the-shelf FPGA to do convolutions. Very minimal ones by the look of it (MNIST etc.).
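For anyone wondering what "minimal convolutions" at MNIST scale actually amounts to: it's just a small sliding dot product. A rough sketch in plain Python (this is my own illustration of the general operation, not the project's actual code; assumes valid padding and stride 1, and follows the cross-correlation convention most CNN frameworks use):

```python
def conv2d(image, kernel):
    """Naive 2D convolution (cross-correlation), valid padding, stride 1."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            # Multiply-accumulate over the kernel window -- the inner loop
            # an FPGA implementation would unroll into parallel MACs.
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# Toy 4x4 "image" and a 2x2 diagonal kernel:
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
k = [[1, 0],
     [0, 1]]
print(conv2d(img, k))  # -> [[7, 9, 11], [15, 17, 19], [23, 25, 27]]
```

The point being: at this scale it's a handful of multiply-accumulates per output pixel, which is why it fits comfortably on an off-the-shelf FPGA.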