frontpage.

Discuss – Do AI agents deserve all the hype they are getting?

4•MicroWagie•2h ago•0 comments

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

48•UmYeahNo•1d ago•30 comments

LLMs are powerful, but enterprises are deterministic by nature

3•prateekdalal•6h ago•3 comments

Ask HN: Non AI-obsessed tech forums

28•nanocat•17h ago•25 comments

Ask HN: Ideas for small ways to make the world a better place

16•jlmcgraw•19h ago•20 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•5d ago•519 comments

Ask HN: Who is hiring? (February 2026)

313•whoishiring•5d ago•513 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•14h ago•1 comment

AI Regex Scientist: A self-improving regex solver

7•PranoyP•21h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

18•jchung•2d ago•13 comments

Ask HN: Why LLM providers sell access instead of consulting services?

5•pera•1d ago•13 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•1d ago•7 comments

Ask HN: Is it just me or are most businesses insane?

8•justenough•1d ago•7 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•4d ago•122 comments

Kernighan on Programming

170•chrisjj•5d ago•61 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•17h ago•2 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•3 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comment

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•6 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•4d ago•1 comment

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: What tools are you using for AI evals? Everything feels half-baked

6•fazlerocks•8mo ago
We're running LLMs in production for content generation, customer support, and code review assistance. Been trying to build a proper evaluation pipeline for months but every tool we've tested has significant limitations.

What we've evaluated:

- OpenAI's Evals framework: Works well for benchmarking but challenging for custom use cases. Configuration through YAML files can be complex and extending functionality requires diving deep into their codebase. Primarily designed for batch processing rather than real-time monitoring.

- LangSmith: Strong tracing capabilities, but eval features feel secondary to their observability focus. Pricing starts at $0.50 per 1k traces after the free tier, which adds up quickly with high volume (a million traces a month is roughly $500 on traces alone). UI can be slow with larger datasets.

- Weights & Biases: Powerful platform but designed primarily for traditional ML experiment tracking. Setup is complex and requires significant ML expertise. Our product team struggles to use it effectively.

- Humanloop: Clean interface focused on prompt versioning with basic evaluation capabilities. Limited eval types available and pricing is steep for the feature set.

- Braintrust: Interesting approach to evaluation but feels like an early-stage product. Documentation is sparse and integration options are limited.

What we actually need:

- Real-time eval monitoring (not just batch)
- Custom eval functions that don't require PhD-level setup (a sketch of what we mean is below)
- Human-in-the-loop workflows for subjective tasks
- Cost tracking per model/prompt
- Integration with our existing observability stack
- Something our product team can actually use
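To make the "custom eval functions" point concrete, the sketch below is roughly the level of effort we think it should take. It isn't any particular tool's API: it assumes the OpenAI Python client, and the judge model, rubric, and score thresholds are placeholders we made up.

  # Rough sketch of an LLM-as-judge eval for support replies, with a
  # human-in-the-loop flag for borderline scores. Model name, rubric,
  # and thresholds are illustrative only.
  import json
  from dataclasses import dataclass

  from openai import OpenAI

  client = OpenAI()

  @dataclass
  class EvalResult:
      score: float              # 0.0 - 1.0
      passed: bool
      needs_human_review: bool
      rationale: str

  def judge_support_reply(question: str, reply: str) -> EvalResult:
      rubric = (
          "Rate the reply to the customer question on a 0-1 scale for "
          "correctness and tone. Respond as JSON: "
          '{"score": <float>, "rationale": "<one sentence>"}'
      )
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder judge model
          response_format={"type": "json_object"},
          messages=[
              {"role": "system", "content": rubric},
              {"role": "user", "content": f"Question:\n{question}\n\nReply:\n{reply}"},
          ],
      )
      data = json.loads(resp.choices[0].message.content)
      score = float(data["score"])
      return EvalResult(
          score=score,
          passed=score >= 0.8,                    # arbitrary pass bar
          needs_human_review=0.5 <= score < 0.8,  # borderline -> human queue
          rationale=data.get("rationale", ""),
      )

That much code should be table stakes, not something that needs a framework-specific DSL.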

Current solution:

Custom scripts + monitoring dashboards for basic metrics. Weekly manual reviews in spreadsheets. It works but doesn't scale and we miss edge cases.
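And for "real-time" monitoring plus per-model/prompt cost tracking, something like the wrapper below is the amount of machinery we have in mind. Again a sketch only: the sample rate, price table, and emit() target are placeholders, and the judge callable could be the LLM-as-judge function sketched above.

  # Sketch of a real-time hook: record cost and latency for every production
  # call, score a sampled fraction, and forward everything to whatever
  # observability stack is already in place. All numbers are illustrative.
  import random
  from typing import Callable

  SAMPLE_RATE = 0.05  # score ~5% of production traffic

  # USD per 1K tokens (placeholder figures; keep these in config in practice)
  PRICES = {
      "gpt-4o-mini": {"input": 0.00015, "output": 0.0006},
  }

  def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
      p = PRICES.get(model, {"input": 0.0, "output": 0.0})
      return (input_tokens * p["input"] + output_tokens * p["output"]) / 1000

  def emit(metric: str, value: float, tags: dict) -> None:
      # Placeholder: swap in statsd, Prometheus, OpenTelemetry, etc.
      print(metric, value, tags)

  def record_llm_call(prompt_id: str, model: str, question: str, reply: str,
                      input_tokens: int, output_tokens: int, latency_s: float,
                      judge: Callable[[str, str], float]) -> None:
      tags = {"model": model, "prompt_id": prompt_id}
      emit("llm.cost_usd", cost_usd(model, input_tokens, output_tokens), tags)
      emit("llm.latency_s", latency_s, tags)
      if random.random() < SAMPLE_RATE:
          score = judge(question, reply)   # e.g. the judge sketched above
          emit("llm.eval_score", score, tags)
          if 0.5 <= score < 0.8:           # borderline -> human review queue
              emit("llm.human_review_queued", 1, tags)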

Has anyone found tools that handle production LLM evaluation well? Are we expecting too much or is the tooling genuinely immature? Especially interested in hearing from teams without dedicated ML engineers.

Comments

PaulHoule•8mo ago
I worked at more than one startup that was trying to develop and commercialize foundation models before the technology was ready. We didn't have the "chatbot" paradigm and were always focused on evaluation for a specific task.

I built a model trainer with eval capabilities that I felt was a failure. I mean, it worked, but it felt like a terrible bodge, just like the tools you're talking about. Part of the problem was that some of the models we were training were small and could be run inside scikit-learn's model selection tools, which I've come to see as "basically adequate" for classical ML, but other models might take a few days to train on a big machine. That forced us to develop our own model selection tools that could handle processes too big to fit in a single address space, but they gave us worse model selection for the small models. (The facilities for model selection in Hugging Face are just atrocious, in my mind.)
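To give a sense of what "basically adequate" means here: for a model small enough to fit in one process, scikit-learn's standard model selection is just a few lines (toy dataset and parameter grid chosen purely for illustration).

  # Grid search + cross-validation for a small text classifier, entirely
  # inside scikit-learn. Dataset and parameter grid are just an example.
  from sklearn.datasets import fetch_20newsgroups
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import GridSearchCV
  from sklearn.pipeline import Pipeline

  data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

  pipe = Pipeline([
      ("tfidf", TfidfVectorizer()),
      ("clf", LogisticRegression(max_iter=1000)),
  ])

  grid = GridSearchCV(
      pipe,
      param_grid={
          "tfidf__ngram_range": [(1, 1), (1, 2)],
          "clf__C": [0.1, 1.0, 10.0],
      },
      cv=5,
      scoring="f1_macro",
      n_jobs=-1,
  )
  grid.fit(data.data, data.target)
  print(grid.best_params_, grid.best_score_)

None of that helps once a single fit takes days on a big machine, which is where we ended up rolling our own.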

I see a lot of bad frameworks for LLMs that make the same mistakes I was making back then, but I'm not sure what the answer is, although I think it can be solved for particular domains. For instance, I have a design for a text classifier trainer which I think could handle a wide range of problems where the training set is anywhere from 50 to 500,000 examples.

I saw a lot of lost opportunities in the 2010s where people could have built a workable A.I. application if they had been willing to build training and eval sets, but they wouldn't. I got pretty depressed when I talked to tens of vendors in the full-text search space and didn't find any that were using systematic evaluation to improve their relevance. I am really hopeful today that evaluation is a growing part of the conversation.

VladVladikoff•8mo ago
>We're running LLMs in production for content generation, customer support, and code review assistance.

Sounds like a nightmare. How do you deal with the nondeterministic behaviour of the LLMs when trying to debug why they did something wrong?

careful_ai•8mo ago
We faced similar roadblocks while building out a robust LLM evaluation pipeline, especially around real-time monitoring, human oversight, and making the tools accessible to product teams, not just engineers.

What helped us was integrating 'AppMod.AI', specifically the Project Analyzer feature. It's designed to simplify complex enterprise app evaluation and modernization. For us, it added three big wins:

- Real-time, accurate code analysis (we're seeing close to 90% accuracy in code reviews).
- AI-generated architectural diagrams, feature breakdowns, and summaries that even non-dev folks could grasp.
- A human-in-the-loop chat layer that allows real-time clarification, so we can validate subjective or business-specific logic without delays.

We also leaned on its code refactor and language migration capabilities to reduce manual workload and close some major skill gaps in older tech stacks; that cut our project analysis time from ~5 days to just one.

It’s not just about evals; the broader AppMod.AI platform helped us unify everything from assessment to deployment. Not perfect, but a meaningful step up from the spreadsheets + scripts cycle we were stuck in.