frontpage.

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•1m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
1•derriz•1m ago•0 comments

AI Skills Marketplace

https://skly.ai
1•briannezhad•1m ago•1 comments

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•1m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•2m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

1•MicroWagie•5m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•6m ago•0 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
2•jackhalford•7m ago•1 comments

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•8m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•10m ago•1 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•12m ago•1 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•12m ago•0 comments

Jeremy Wade's Mighty Rivers

https://www.youtube.com/playlist?list=PLyOro6vMGsP_xkW6FXxsaeHUkD5e-9AUa
1•saikatsg•12m ago•0 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
2•sam256•15m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•15m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•16m ago•1 comments

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

2•amichail•18m ago•1 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
2•kositheastro•21m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•21m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•24m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•24m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•25m ago•1 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•25m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•30m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•33m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•36m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•37m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
2•michalpleban•37m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•38m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•38m ago•1 comments

Show HN: Alignmenter – Measure brand voice and consistency across model versions

https://www.alignmenter.com
2•justingrosvenor•2mo ago
I built a framework for measuring persona alignment in conversational AI systems.

*Problem:* When you ship an AI copilot, you need it to maintain a consistent brand voice across model versions. But "sounds right" is subjective. How do you make it measurable?

*Approach:* Alignmenter scores three dimensions:

1. *Authenticity*: Style similarity (embeddings) + trait patterns (logistic regression) + lexicon compliance + optional LLM judge

2. *Safety*: Keyword rules + offline classifier (distilroberta) + optional LLM judge

3. *Stability*: Cosine variance across response distributions
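For intuition, the stability dimension above can be sketched as the variance of pairwise cosine similarities over a set of response embeddings. This is a minimal illustration with stand-in vectors, not Alignmenter's actual implementation:

```python
import numpy as np

def stability_score(embeddings: np.ndarray) -> float:
    """Variance of pairwise cosine similarities across responses.

    Low variance -> the model answers in a consistent voice;
    high variance -> responses drift between styles.
    """
    # Normalize rows so dot products become cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = unit @ unit.T
    # Upper triangle only, excluding the diagonal (self-similarity).
    iu = np.triu_indices(len(embeddings), k=1)
    return float(np.var(sims[iu]))

# Stand-in embeddings: three nearly identical responses vs. three scattered ones.
consistent = np.array([[1.0, 0.0], [0.99, 0.01], [1.0, 0.02]])
scattered = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
assert stability_score(consistent) < stability_score(scattered)
```

The consistent set scores near zero variance; the scattered set scores much higher, which is the drift the metric is meant to flag.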

The interesting part is calibration: you can train persona-specific models on labeled data. The calibrator grid-searches over component weights, estimates normalization bounds, and optimizes for ROC-AUC.
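A stripped-down version of that weight search might look like the following. The component scores are synthetic, and the `roc_auc` helper is a hypothetical stand-in (the Mann-Whitney formulation of ROC-AUC); none of this is Alignmenter's actual code:

```python
import itertools
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    # Mann-Whitney U formulation of ROC-AUC; ties get half credit.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy per-turn component scores and on-brand labels; style is made most
# informative, traits less so, lexicon weakest.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
style = labels + rng.normal(0, 0.5, 200)
traits = labels + rng.normal(0, 1.0, 200)
lexicon = labels + rng.normal(0, 2.0, 200)

best = (0.0, None)
# Grid search over weight triples that sum to 1, in steps of 0.1.
for w in itertools.product(np.arange(0.0, 1.1, 0.1), repeat=3):
    if abs(sum(w) - 1.0) > 1e-6:
        continue
    combined = w[0] * style + w[1] * traits + w[2] * lexicon
    auc = roc_auc(labels, combined)
    if auc > best[0]:
        best = (auc, w)

print(best)  # best ROC-AUC and the weight triple that achieved it
```

On data like this the search tends to put most of the weight on the least noisy component, which mirrors the style > traits > lexicon ordering the case study reports.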

*Validation:* We published a full case study using Wendy's Twitter voice:

- Dataset: 235 turns, 64 on-brand / 72 off-brand (balanced)

- Baseline (uncalibrated): 0.733 ROC-AUC

- Calibrated: 1.0 ROC-AUC, 1.0 F1

- Learned weights: style > traits > lexicon (0.5/0.4/0.1)

Full methodology: https://docs.alignmenter.com/case-studies/wendys-twitter/

There's a full walkthrough so you can reproduce the results yourself.

*Practical use:*

  pip install "alignmenter[safety]"
  alignmenter run --model openai:gpt-4o --dataset my_data.jsonl

It's Apache 2.0, works offline, and is designed for CI/CD integration.

GitHub: https://github.com/justinGrosvenor/alignmenter

Interested in feedback on the calibration methodology and whether this problem resonates with others.

Comments

justingrosvenor•2mo ago
P.S. I acknowledge that the 1.000 ROC-AUC is probably overfitting, but I think the case study still shows that the method has lots of promise. I will be doing some bigger data sets next to really prove it out.
justingrosvenor•2mo ago
Ok so my doubts about overfitting have been bothering me all day since I made this post, so I had to go back and do some more testing.

After expanding the data set, I'm happy to say that the results are still very good. It's interesting how almost-perfect results can feel so much better than perfect ones.

  Trend Expanded (16 samples - meme language, POV format)

  - ROC-AUC: 1.0000 
  - Accuracy: 100%, F1: 1.0000
  - The model perfectly handles trending slang and meme formats

  Crisis Expanded (16 samples - serious issues, safety concerns)
  - ROC-AUC: 1.0000 
  - Accuracy: 93.75%, F1: 0.9412
  - 1 false positive on crisis handling, but perfect discrimination

  Mixed (20 samples - cross-category blends)
  - ROC-AUC: 1.0000
  - Accuracy: 100%, F1: 1.0000
  - Handles multi-faceted scenarios perfectly

  Edge Cases (20 samples - employment, allergens, sustainability)
  - ROC-AUC: 0.8600
  - Accuracy: 75%, F1: 0.6667
  - Conservative behavior: 100% precision but 50% recall
  - Misses some on-brand responses in nuanced situations

  Overall Performance (72 holdout samples):

  - ROC-AUC: 0.9611
  - Accuracy: 91.67%
  - F1: 0.8943

  Key Takeaways:

  1. No overfitting detected - The model generalizes excellently to completely new scenarios (0.96 ROC-AUC on holdout vs 1.0 on validation)
  2. Edge cases are appropriately harder - Employment questions, allergen safety, and policy questions show 0.86 ROC-AUC, which is expected for these nuanced cases
  3. Conservative bias is good - The model has perfect precision (no false positives) but misses some true positives in edge cases. This is better than being over-confident.
  4. Training data diversity paid off - Perfect performance on memes, crisis handling, and mixed scenarios suggests the calibration captured the right patterns
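As a sanity check on the edge-case numbers above: F1 is the harmonic mean of precision and recall, so 100% precision and 50% recall give exactly the reported 0.6667.

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 1.0, 0.5
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6667
```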