I built it because I noticed a pattern: formerly AI-skeptical coworkers now open every standup or design discussion with "I asked Claude..." or "Claude told me..." for technical problems and design decisions. I've felt the same pull myself, the urge to delegate every task or problem to AI. It's easy to lean on these tools for nearly all of our critical thinking and problem solving, and I'm worried about what it means when knowledge workers let their cognition atrophy this way.
I'm not anti-AI, nor do I think these models will completely replace software engineers. But as long as humans are still in the loop for software engineering, I don't have a good answer for how to avoid becoming overly reliant on these models. How do we gauge how over-reliant we've become? How do we stay responsible for the programs and systems we ship while outsourcing 90% of the judgment we used to have to exercise? How do we learn what good code looks like when PR size and velocity have grown to the point that only the important parts get reviewed?
I'm hoping to work on better ways to notice this pattern and practice our way back from it. This is my first attempt (offline only, with no accounts or analytics).