- AI to evaluate itself (e.g. ask Claude to test out its own skill)
- custom-built platform (I see interest in this space)
I've actually been thinking about this problem a lot and am working on a custom eval runner for your codebase. What would your use case be for this?
Definitely not happy with it, but everything is moving too fast to feel like it's worth investing in.
We were lucky enough to have PMs create a set of questions; we ran a round of generation and added pass/fail labels to each response.
From there we bootstrapped an AI-as-a-judge setup that approximately replicated those labels. Now we can plug in new models and change prompts or pipelines while still approximating the original feedback signal. It's not an exact match, but it's wildly better than one-off testing and the regressions that brings.
We're able to confidently make changes without accidentally breaking something else. Overall a win, but it can get costly if the iteration count is high.
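A minimal sketch of what that bootstrapping step can look like, assuming the OpenAI Python SDK; the model name, prompt, and data fields here are placeholders, not the parent's actual setup. The idea is to check how often the judge agrees with the original human pass/fail labels before trusting it as a proxy signal:

```python
# Sketch: validate an LLM judge against existing human pass/fail labels.
# Assumes the OpenAI Python SDK; model name and field names are placeholders.
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an assistant's answer.
Question: {question}
Answer: {answer}
Reply with JSON: {{"verdict": "pass" or "fail", "reason": "..."}}"""

def judge(question: str, answer: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable judge model
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    return json.loads(resp.choices[0].message.content)["verdict"]

def agreement_rate(labeled_set: list[dict]) -> float:
    """labeled_set: dicts with "question", "answer", "human_verdict" ("pass"/"fail")."""
    hits = sum(judge(ex["question"], ex["answer"]) == ex["human_verdict"]
               for ex in labeled_set)
    return hits / len(labeled_set)
```

Once the agreement rate is acceptable on the human-labeled set, the same judge can score new model/prompt/pipeline variants.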
I would like a single number to optimize the pipeline against, but I find it hard to figure out what that number should be measuring.
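Purely as an illustration (this is one possible choice, not a recommendation of what that number should measure), per-criterion judge verdicts can be collapsed into a single scalar such as a weighted pass rate:

```python
# Illustrative only: one possible "single number" is a weighted pass rate
# over per-criterion judge verdicts. Criteria and weights are assumptions.
def pipeline_score(results: list[dict], weights: dict[str, float]) -> float:
    """results: one dict per example, e.g. {"correctness": True, "tone": False}."""
    total = sum(weights.values())
    per_example = [
        sum(weights[c] for c, passed in r.items() if passed) / total
        for r in results
    ]
    return sum(per_example) / len(per_example)

# Example: weight correctness more heavily than tone.
score = pipeline_score(
    [{"correctness": True, "tone": False}, {"correctness": True, "tone": True}],
    weights={"correctness": 0.8, "tone": 0.2},
)  # -> 0.9
```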
In my day-to-day as a Product Manager on a team that ships AI products, I often found myself wanting to do 'quick and dirty' LLM-based evaluation on conversation transcripts and traces. I was blocked by 'Gemini in Google Sheets': it was too slow and cumbersome, and it didn't handle eval changes well. And because I was still exploring, it didn't make sense to set up something more robust with the team.
To fix the problem I eventually learned to call the OpenAI API in Python and tried more sophisticated approaches like some listed here, but I really felt I wanted a 'product' to help me and potentially help others.
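For reference, the 'quick and dirty' Python version can be as small as looping a judge prompt over a CSV of transcripts; this is a sketch with an assumed rubric, file name, and column names, not the poster's actual script:

```python
# Sketch: grade a CSV of conversation transcripts with an LLM judge,
# as a spreadsheet replacement. File and column names are assumptions.
import csv
from openai import OpenAI

client = OpenAI()
RUBRIC = "The assistant resolves the user's issue politely."  # placeholder rubric

def grade(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user",
                   "content": f"Rubric:\n{RUBRIC}\n\nTranscript:\n{transcript}\n\nAnswer PASS or FAIL."}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

with open("transcripts.csv", newline="") as f, open("graded.csv", "w", newline="") as out:
    reader = csv.DictReader(f)
    writer = csv.DictWriter(out, fieldnames=list(reader.fieldnames) + ["verdict"])
    writer.writeheader()
    for row in reader:
        row["verdict"] = grade(row["transcript"])
        writer.writerow(row)
```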
You can check it out at https://www.beval.space
Full disclosure - this is vibe coded and still a work in progress.
alexhans•17h ago
Depending on how they're made up, different teams do vastly different things.
Some have no evals at all, some run integration tests with no tooling, and some wire observability tools like LangFuse into their CI/CD. Others use tools like Arize Phoenix, DeepEval, Braintrust, Promptfoo, or PydanticAI throughout their development.
It's definitely an afterthought for most teams although we are starting to see increased interest.
My hope is that we can start thinking about evals as a common language for "product" across role families, so I'm doing some advocacy [1], trying to keep it very simple, including wrapping coding agents like Claude. Sandboxing and observability "for the masses" is still quite a hard concept, but the UX is getting better with time.
What are you doing for yourself/your teams? If not much yet, I'd recommend just starting and figuring out where the friction/value is for you.
- [1] https://ai-evals.io/ (practical examples https://github.com/Alexhans/eval-ception)