Initially, we usually give the AI some explicit constraints, such as:
Don't modify the database schema
Don't modify certain APIs
Only allow modifications to the front-end logic
Some function names cannot be changed
At the beginning of the conversation, the AI generally follows these rules well.
However, as the conversation grows longer, say to 40,000 or 50,000 tokens, a common problem arises:
The AI gradually "forgets" the previously mentioned restrictions.
For example:
Initially you say "Don't modify the database,"
but after a few rounds, it suddenly suggests:
"We can solve this problem by modifying the database structure."
I've asked around about how to solve this, and one common suggestion was:
Write an important.md or rule file in the project so the AI reads it every time.
This method does have some effect, but problems still arise in actual development.
For example: Initially you say "Don't touch database A,"
but later database B is added to the project.
If you don't update the markdown file in time, the AI might accidentally modify things you didn't intend to change.
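As a rough sketch of this approach, such a rule file might look like the following (the file name important.md comes from the suggestion above; the specific rules are just illustrative, based on the example constraints earlier):

```markdown
# important.md — rules for the AI assistant

## Hard constraints (never violate)
- Do NOT modify the schema of database A.
- Do NOT modify the listed public APIs.
- Only front-end logic may be changed.
- The listed function names must not be renamed.
```

Note that a file like this only protects what it explicitly lists: when database B is added to the project, nothing here mentions it, which is exactly the staleness problem described above.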
So I recently created a small experimental tool, mainly to solve the "constraint drift" problem in AI programming assistants.