Yes, our ethics toward others and ourselves is self-serving, so alignment won't be possible. Perhaps the metaphor of fleeing could be illustrated with a concrete geographical zone and social group, but anyone who cares about human lives knows many such examples. This is not the place to suggest actionable principles; it's just a call to consider the nature of the problem.
I plead -- no, I beg everyone in the AI domain to stop improving AI models and integrating AI with internal data. It's OK-ish to use AI for personal stuff, but training AI on a specific company's data? Not a good idea for anyone who works there. Combining robotics and AI to replace low-salary workers? Why?
I don't really think we know the consequences of further improvements to AI. AI today is not a big issue -- it won't replace most careers; at worst it improves productivity and removes the need to hire more workers -- but I believe none of us really knows how good AI is going to get. We really need a global political consensus on how to regulate and use AI.
I don't have high hopes, though. I expect a dystopian future, maybe even worse than what we've seen in the movies.
What’s unsettling is that even the digital spaces we once thought were safe are starting to feel exposed. Communities, workflows, and creative work can now be replicated, imitated, or reshaped faster than we can adapt.
Maybe the real question is not about finding a new place to go, but deciding who we want to become. Together if we can, or alone if we must.