I built a recursive logic-tree engine for first-order logic. It parses formulas
into an explicit AST, decomposes them into negation normal form (NNF) via
De Morgan's laws, and visualizes every step in a GUI.
Unlike in black-box neural models, every inference step is structurally
inspectable and derived by applying formal rules.
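
For concreteness, here is a minimal sketch of the kind of AST → NNF step I mean. The node names and structure are illustrative assumptions, not the actual TreeOfKnowledge code:

```python
# Minimal sketch: push negations inward via De Morgan's laws, double-negation
# elimination, and quantifier duality. Illustrative only; node names and
# structure are assumptions, not the TreeOfKnowledge implementation.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Atom:
    name: str

@dataclass
class Not:
    f: Formula

@dataclass
class And:
    left: Formula
    right: Formula

@dataclass
class Or:
    left: Formula
    right: Formula

@dataclass
class Forall:
    var: str
    body: Formula

@dataclass
class Exists:
    var: str
    body: Formula

Formula = Atom | Not | And | Or | Forall | Exists


def to_nnf(f: Formula) -> Formula:
    """Rewrite the AST so negations rest only on atoms."""
    match f:
        case Atom():
            return f
        case And(l, r):
            return And(to_nnf(l), to_nnf(r))
        case Or(l, r):
            return Or(to_nnf(l), to_nnf(r))
        case Forall(v, b):
            return Forall(v, to_nnf(b))
        case Exists(v, b):
            return Exists(v, to_nnf(b))
        case Not(Atom()):
            return f                                    # negated literal: done
        case Not(Not(g)):
            return to_nnf(g)                            # double negation
        case Not(And(l, r)):
            return Or(to_nnf(Not(l)), to_nnf(Not(r)))   # De Morgan
        case Not(Or(l, r)):
            return And(to_nnf(Not(l)), to_nnf(Not(r)))  # De Morgan
        case Not(Forall(v, b)):
            return Exists(v, to_nnf(Not(b)))            # ¬∀x P ≡ ∃x ¬P
        case Not(Exists(v, b)):
            return Forall(v, to_nnf(Not(b)))            # ¬∃x P ≡ ∀x ¬P
    raise TypeError(f"unexpected node: {f!r}")


# Example: ¬(P ∧ ∀x Q)  →  ¬P ∨ ∃x ¬Q
print(to_nnf(Not(And(Atom("P"), Forall("x", Atom("Q"))))))
```

Each rewrite in the match arms is one named logical rule, which is what makes the resulting trace inspectable node by node.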
Would you consider this kind of symbolic, rule-based, fully transparent
reasoning a candidate for what could be called the “core” of Explainable AI?
How would you position it relative to current post-hoc explainability
methods used in machine learning?
Source (for context): https://github.com/JAnicaTZ/TreeOfKnowledge