Personally, I've found that the "sharp knives in the drawer" paradigm can lead to some pretty nasty output from LLMs; in my experience, the higher the ambiguity, the higher the variance in quality.
This has shifted my approach to A&D:
* Always enforce strict contracts, i.e. the ONLY way to do X is through Y.
* Fail loudly and fail often; silent fallbacks only encourage the AI to make even larger assumptions.
* Boring is better: the less magic you implement, the easier it is for LLMs to understand and extend.
Anyone else have some nuggets of truth they'd want to share as it pertains to A&D + AI?