Anthropic analyzed 700,000 real conversations to see whether Claude behaves the way it was designed to. It mostly aligns with the company's "helpful, honest, harmless" goals, but some edge cases raise big questions.
hanson108•4h ago
How should we define and measure values in AI systems — and who decides what they should be?