Ask HN: What tools are you using for AI evals? Everything feels half-baked

6•fazlerocks•8mo ago
We're running LLMs in production for content generation, customer support, and code review assistance. We've been trying to build a proper evaluation pipeline for months, but every tool we've tested has significant limitations.

What we've evaluated:

- OpenAI's Evals framework: Works well for benchmarking but challenging for custom use cases. Configuration through YAML files can be complex and extending functionality requires diving deep into their codebase. Primarily designed for batch processing rather than real-time monitoring.

- LangSmith: Strong tracing capabilities but eval features feel secondary to their observability focus. Pricing starts at $0.50 per 1k traces after the free tier, which adds up quickly with high volume. UI can be slow with larger datasets.

- Weights & Biases: Powerful platform but designed primarily for traditional ML experiment tracking. Setup is complex and requires significant ML expertise. Our product team struggles to use it effectively.

- Humanloop: Clean interface focused on prompt versioning with basic evaluation capabilities. Limited eval types available and pricing is steep for the feature set.

- Braintrust: Interesting approach to evaluation but feels like an early-stage product. Documentation is sparse and integration options are limited.

What we actually need:

- Real-time eval monitoring (not just batch)
- Custom eval functions that don't require PhD-level setup (sketch below)
- Human-in-the-loop workflows for subjective tasks
- Cost tracking per model/prompt
- Integration with our existing observability stack
- Something our product team can actually use
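
To be concrete about the "custom eval functions" bar, this is roughly the ceiling of complexity we're hoping for. Hypothetical sketch, not any real tool's API; `eval_case` and `EvalResult` are made-up names:

    # Hypothetical custom eval: a plain function over one
    # (prompt, output) pair that returns a score and a reason.
    from dataclasses import dataclass

    @dataclass
    class EvalResult:
        score: float  # 0.0 (fail) to 1.0 (pass)
        reason: str   # human-readable explanation

    def eval_case(prompt: str, output: str) -> EvalResult:
        # Example check: support replies must never promise refunds.
        if "refund" in output.lower():
            return EvalResult(0.0, "mentions refunds; route to human review")
        return EvalResult(1.0, "ok")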

Current solution:

Custom scripts + monitoring dashboards for basic metrics. Weekly manual reviews in spreadsheets. It works but doesn't scale and we miss edge cases.
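
For context, the "custom scripts" are mostly thin wrappers like the sketch below: log latency and an estimated cost for every call, then ship the record to the dashboards. The pricing table, model name, and `llm_call` hook are stand-ins, not any vendor's API:

    import json
    import time

    # Stand-in per-1K-token prices; real values come from the provider's
    # price list and need updating whenever models change.
    PRICE_PER_1K_USD = {"example-model": 0.005}

    def tracked_call(model, prompt, llm_call):
        """Wrap an LLM call with latency and cost logging.

        llm_call(model, prompt) is our own client hook; it returns
        (output_text, total_tokens).
        """
        start = time.time()
        output, tokens = llm_call(model, prompt)
        record = {
            "model": model,
            "latency_s": round(time.time() - start, 3),
            "est_cost_usd": tokens / 1000 * PRICE_PER_1K_USD.get(model, 0.0),
            "output_chars": len(output),
        }
        print(json.dumps(record))  # in reality, shipped to the dashboard
        return output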

Has anyone found tools that handle production LLM evaluation well? Are we expecting too much or is the tooling genuinely immature? Especially interested in hearing from teams without dedicated ML engineers.

Comments

PaulHoule•8mo ago
I worked at more than one startup that was trying to develop and commercialize foundation models before the technology was ready. We didn't have the "chatbot" paradigm and were always focused on evaluation for a specific task.

I built a model trainer with eval capabilities that I felt was a failure. I mean, it worked, but it felt like a terrible bodge, just like the tools you're talking about. Part of it is that some of the models we were training were small and could be run inside scikit-learn's model selection tools, which I've come to see as "basically adequate" for classical ML. But other models might take a few days to train on a big machine, which forced us to develop model selection tools that could handle processes too big to fit in a single address space but that gave us inferior model selection for the small models too. (The facilities for model selection in Hugging Face are just atrocious in my mind.)
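
For scale, "basically adequate" for a small model means the whole selection loop is a few lines, something like this (sketch; data loading elided, the hyperparameter grid is illustrative):

    # Standard scikit-learn model selection: fine when the model trains
    # in seconds and fits in a single process.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    pipe = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    grid = GridSearchCV(
        pipe,
        param_grid={
            "tfidf__ngram_range": [(1, 1), (1, 2)],
            "clf__C": [0.1, 1.0, 10.0],
        },
        cv=5,
        scoring="f1_macro",
    )
    # texts, labels = ...  # your training set
    # grid.fit(texts, labels)
    # print(grid.best_params_, grid.best_score_)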

I see a lot of bad frameworks for LLMs that make the same mistakes I was making back then. I'm not sure what the answer is, although I think it can be solved for particular domains. For instance, I have a design for a text classifier trainer that I think could handle a wide range of problems where the training set is anywhere from 50 to 500,000 examples.

I saw a lot of lost opportunities in the 2010s where people could have built a workable A.I. application if they had been willing to build training and eval sets, but they wouldn't. I got pretty depressed when I talked to dozens of vendors in the full-text search space and didn't find any that were using systematic evaluation to improve their relevance. I am really hopeful today that evaluation is a growing part of the conversation.

VladVladikoff•8mo ago
>We're running LLMs in production for content generation, customer support, and code review assistance.

Sounds like a nightmare. How do you deal with the nondeterministic behaviour of the LLMs when trying to debug why they did something wrong?

careful_ai•8mo ago
We faced similar roadblocks while building out a robust LLM evaluation pipeline—especially around real-time monitoring, human oversight, and making the tools accessible to product teams, not just engineers.

What helped us was integrating 'AppMod.AI', specifically the Project Analyzer feature. It’s designed to simplify complex enterprise app evaluation and modernization. For us, it added three big wins:

- Real-time, accurate code analysis (we're seeing close to 90% accuracy in code reviews).
- AI-generated architectural diagrams, feature breakdowns, and summaries that even non-dev folks could grasp.
- A human-in-the-loop chat layer that allows real-time clarification, so we can validate subjective or business-specific logic without delays.

We also leaned on its code refactor and language migration capabilities to reduce manual workload and close some major skill gaps in older tech stacks; that cut our project analysis time from ~5 days to just 1.

It’s not just about evals; the broader AppMod.AI platform helped us unify everything from assessment to deployment. Not perfect, but a meaningful step up from the spreadsheets + scripts cycle we were stuck in.

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•1m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•1m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•anhxuan•1m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
1•funnycoding•2m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•2m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•2m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•3m ago•1 comment

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•4m ago•1 comment

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•5m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•8m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•8m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•9m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•9m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•11m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•12m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•13m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•13m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•13m ago•0 comments

StrongDM's AI team builds serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
2•simonw•14m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•14m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•15m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•16m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•17m ago•1 comment

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•23m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•23m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•24m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
2•tusslewake•26m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•27m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•27m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•27m ago•1 comment