This is not a product or an agent proposal. It’s an attempt to reframe deployed AI systems (feeds, copilots, assistants) as cybernetic systems embedded in human attention and behavior loops.
I’m interested in feedback from people working in ML, control theory, alignment, or platform governance on whether this framing is useful, flawed, or already well-covered elsewhere.