As local inference for language models becomes more popular, issues that until recently sat at the margins of AI security discussions are becoming increasingly important. Much of the debate still focuses on the application layer: prompt injection, data poisoning, jailbreaks, and the security of RAG integrations. Far less attention is paid to the integrity of the model artifact itself during inference.