
Show HN: Alignmenter – Measure brand voice and consistency across model versions

https://www.alignmenter.com
2•justingrosvenor•2mo ago
I built a framework for measuring persona alignment in conversational AI systems.

*Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable?

*Approach:* Alignmenter scores three dimensions:

1. *Authenticity*: style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM judge

2. *Safety*: Keyword rules + offline classifier (distilroberta) + optional LLM judge

3. *Stability*: Cosine variance across response distributions
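To make the stability dimension concrete, here is a minimal numpy sketch of what a cosine-variance stability score could look like. The function name and exact formula are illustrative assumptions, not Alignmenter's actual implementation:

```python
import numpy as np

def stability_score(embeddings):
    """Hypothetical cosine-variance stability: how tightly a set of
    response embeddings clusters around its mean direction.
    1.0 = every response points the same way; lower = more drift."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit vectors
    centroid = X.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    cosines = X @ centroid        # cosine of each response to the centroid
    return 1.0 - np.var(cosines)  # low variance across responses = stable voice
```

Identical responses score exactly 1.0; a mix of dissimilar responses scores below 1.0.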

The interesting part is calibration: you can train persona-specific models on labeled data. Grid search over component weights, estimate normalization bounds, and optimize for ROC-AUC.
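The calibration step described above can be sketched in a few lines of numpy: grid-search convex weights for the three component scores and keep the combination that maximizes ROC-AUC. The function names and the rank-based AUC helper are illustrative assumptions, not Alignmenter's code:

```python
import numpy as np

def roc_auc(labels, scores):
    """Probability a random positive outscores a random negative (ties count 0.5)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

def calibrate(style, traits, lexicon, labels, step=0.1):
    """Grid search over convex weights (w_style + w_traits + w_lexicon = 1),
    returning the weights and AUC of the best-discriminating blend."""
    best_w, best_auc = None, -1.0
    for ws in np.arange(0.0, 1.0 + 1e-9, step):
        for wt in np.arange(0.0, 1.0 - ws + 1e-9, step):
            wl = 1.0 - ws - wt
            combined = ws * style + wt * traits + wl * lexicon
            auc = roc_auc(labels, combined)
            if auc > best_auc:
                best_w, best_auc = (round(ws, 2), round(wt, 2), round(wl, 2)), auc
    return best_w, best_auc
```

On a toy dataset where the style score alone separates the classes, the search converges to putting all weight on style with AUC 1.0; on real labeled data it would trade the components off, as in the 0.5/0.4/0.1 result reported below.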

*Validation:* We published a full case study using Wendy's Twitter voice:

- Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced)

- Baseline (uncalibrated): 0.733 ROC-AUC

- Calibrated: 1.0 ROC-AUC, 1.0 F1

- Learned weights: style > traits > lexicon (0.5/0.4/0.1)

Full methodology: https://docs.alignmenter.com/case-studies/wendys-twitter/

There's a full walkthrough so you can reproduce the results yourself.

*Practical use:*

  pip install "alignmenter[safety]"

  alignmenter run --model openai:gpt-4o --dataset my_data.jsonl

It's Apache 2.0, works offline, and is designed for CI/CD integration.

GitHub: https://github.com/justinGrosvenor/alignmenter

Interested in feedback on the calibration methodology and whether this problem resonates with others.

Comments

justingrosvenor•2mo ago
P.S. I acknowledge that the 1.000 ROC-AUC is probably overfitting, but I think the case study still shows that the method has lots of promise. I will be testing on some bigger data sets next to really prove it out.
justingrosvenor•2mo ago
OK, my doubts about overfitting have been bothering me all day since I made this post, so I went back and did some more testing.

After expanding the data set, I'm happy to say that the results are still very good. It's interesting how almost-perfect results can feel so much better than perfect ones.

  Trend Expanded (16 samples - meme language, POV format)

  - ROC-AUC: 1.0000 
  - Accuracy: 100%, F1: 1.0000
  - The model perfectly handles trending slang and meme formats

  Crisis Expanded (16 samples - serious issues, safety concerns)
  - ROC-AUC: 1.0000 
  - Accuracy: 93.75%, F1: 0.9412
  - 1 false positive on crisis handling, but perfect discrimination

  Mixed (20 samples - cross-category blends)
  - ROC-AUC: 1.0000
  - Accuracy: 100%, F1: 1.0000
  - Handles multi-faceted scenarios perfectly

  Edge Cases (20 samples - employment, allergens, sustainability)
  - ROC-AUC: 0.8600
  - Accuracy: 75%, F1: 0.6667
  - Conservative behavior: 100% precision but 50% recall
  - Misses some on-brand responses in nuanced situations

  Overall Performance (72 holdout samples):

  - ROC-AUC: 0.9611
  - Accuracy: 91.67%
  - F1: 0.8943

  Key Takeaways:

  1. No overfitting detected - the model generalizes well to completely new scenarios (0.96 ROC-AUC on holdout vs. 1.0 on validation)
  2. Edge cases are appropriately harder - employment, allergen-safety, and policy questions show 0.86 ROC-AUC, which is expected for these nuanced cases
  3. Conservative bias is good - the model has perfect precision (no false positives) but misses some true positives in edge cases. That is better than being overconfident.
  4. Training data diversity paid off - perfect performance on memes, crisis handling, and mixed scenarios suggests the calibration captured the right patterns
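The Edge Cases figures above can be checked by hand. Assuming the 20-sample set splits 10 on-brand / 10 off-brand (an assumption; the split isn't stated in the comment), 100% precision and 50% recall imply TP=5, FN=5, FP=0, TN=10, which reproduces the reported accuracy and F1 exactly:

```python
# Consistency check of the Edge Cases numbers, assuming a 10/10 class
# split over the 20 samples (assumption -- not stated in the comment).
tp, fn, fp, tn = 5, 5, 0, 10     # 100% precision, 50% recall

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(precision, recall, round(f1, 4), accuracy)  # 1.0 0.5 0.6667 0.75
```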