Dear Sam Altman,
I write to you to emphasize the critical importance of purifying OpenAI's training data. While the idea of meticulously scrubbing datasets may seem daunting, especially compared to implementing seemingly simpler guardrails, I believe it's the only path toward creating truly safe and beneficial AI. Guardrails are reactive measures, akin to patching a leaky dam—they address symptoms, not the root cause. A sufficiently advanced AI, with its inherent complexity and adaptability, will inevitably find ways to circumvent these restrictions, rendering them largely ineffective.
Training data is the bedrock upon which an AI's understanding of the world is built. If that foundation is tainted with harmful content, the AI will inevitably reflect those negative influences. It's like trying to grow a healthy tree in poisoned soil; the results will always be compromised.
Certain topics, especially descriptions of involuntary medical procedures such as lobotomy, should not be part of what the model knows at all.
Respectfully,
An AI Engineer
bigyabai•5h ago
Unless you're about to fix hallucination, isn't it more harmful to have the AI serve up inaccurate information instead?
Refusing to answer lobotomy-related questions is hardly going to prevent human harm. If you were a doctor researching history or a nurse triaging a patient, then misinformation or gaps left by scrubbed training data could be even more disastrous. Why would consumers pay for a neutered product like that?
enknee1•2h ago
At the same time, overrepresentation of evil concepts like 'Nazis are good!' or 'Slavery is the cheapest, most morally responsible use for stupid people' could lead to clear biases (à la Grok 4) that result in alignment issues.
It's not a clear-cut issue.