I’m less interested in the standard AI ethics discussions and more in technically grounded or conceptual work on the philosophical questions raised by the models we build — not just meaning and representation, but also the broader issues that emerge from the behavior and design of NLP systems.
If you know of academic papers, blogs, or authors who seriously engage with these questions, I’d love suggestions.