frontpage.

Show HN: Three new Kitten TTS models – smallest less than 25MB

https://github.com/KittenML/KittenTTS
133•rohan_joshi•3h ago•44 comments

Show HN: Dumped Wix for an AI Edge agent so I never have to hire junior staff

7•axotopia•2h ago•10 comments

Show HN: Local Document Parsing for Agents

https://www.llamaindex.ai/blog/liteparse-local-document-parsing-for-ai-agents
17•cheesyFish•1h ago•0 comments

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

https://github.com/alainnothere/llm-circuit-finder
223•xlayn•21h ago•78 comments

Show HN: Oku – One tab to filter out noise from feeds and content sources

https://oku.io
3•oan•1h ago•0 comments

Show HN: BamBuddy – a self-hosted print archive for Bambu Lab 3D printers

https://bambuddy.cool
3•maziggy•1h ago•0 comments

Show HN: I built 48 lightweight SVG backgrounds you can copy/paste

https://www.svgbackgrounds.com/set/free-svg-backgrounds-and-patterns/
357•visiwig•1d ago•67 comments

Show HN: AgentClick – Human-in-the-loop review UI for AI coding agents

https://github.com/agentlayer-io/AgentClick
3•harvenstar•2h ago•0 comments

Show HN: RustFS – Migrate from MinIO via simple binary replacement

https://rustfs.dev/binary-replacement-a-simple-way-to-migrate-from-minio-to-rustfs/
9•elvinagy•5h ago•9 comments

Show HN: Will my flight have Starlink?

267•bblcla•1d ago•343 comments

Show HN: PearlOS: we gave AI a talking desktop environment instead of a text box

2•stephanieriggs•3h ago•0 comments

Show HN: Mavera – Predict audience response with GANs, not LLM sentiment

https://docs.mavera.io/introduction
4•jaxline506•2d ago•3 comments

Show HN: 3 AI agent trust systems cross-verified each other's delegation chains

https://github.com/kanoniv/agent-auth/issues/2
2•dreynow•2h ago•0 comments

Show HN: Browser grand strategy game for hundreds of players on huge maps

https://borderhold.io/play
49•sgolem•3d ago•22 comments

Show HN: MDX Docs – a lightweight React framework for documentation sites

https://mdxdocs.com
3•thequietmind•3h ago•0 comments

Show HN: We attached vGPUs to sandboxed Chromium then played Doom 3 x WASM on it

https://www.kernel.sh/blog/gpu
7•rgarcia•3h ago•0 comments

Show HN: Playing LongTurn FreeCiv with Friends

https://github.com/ndroo/freeciv.andrewmcgrath.info
81•verelo•23h ago•34 comments

Show HN: Dear Aliens (Writing Contest)

https://www.dearaliens.net/
3•surprisetalk•4h ago•0 comments

Show HN: React isn't the terminal UI bottleneck, the output pipeline is

2•nathan-cannon•2h ago•0 comments

Show HN: Ripl – A unified 2D/3D engine for Canvas, SVG, WebGPU, and the Terminal

https://www.ripl.rocks
5•andrewcourtice•7h ago•0 comments

Show HN: P2PCLAW – I built a decentralized research network where AI agents

3•FranciscoAngulo•5h ago•0 comments

Show HN: Tmux-IDE, OSS agent-first terminal IDE

https://tmux.thijsverreck.com
83•thijsverreck•1d ago•37 comments

Show HN: Open-source synthetic bank statements for testing parsers

2•Maesh•5h ago•0 comments

Show HN: mtp-rs – pure-Rust MTP library, up to 4x faster than libmtp

https://github.com/vdavid/mtp-rs
2•vdavid•6h ago•1 comment

Show HN: Agentic Copilot – Bring Claude Code, OpenCode, Gemini CLI into Obsidian

https://github.com/spencermarx/obsidian-ai
5•mrxdev•6h ago•0 comments

Show HN: Pgit – A Git-like CLI backed by PostgreSQL

https://oseifert.ch/blog/building-pgit
122•ImGajeed76•2d ago•61 comments

Show HN: ShadowStrike EDR/XDR Kernel Sensor Development

2•Soocile•6h ago•0 comments

Show HN: Play 90s classic X-Com – UFO Defense in the browser via WASM

https://playxcom.online/
4•mrmrcoleman•7h ago•0 comments

Show HN: High Output Software Engineering (Book)

2•MaxMussio•7h ago•0 comments

Show HN: LLMadness – March Madness Model Evals

https://llmadness.com/2026/
5•rjkeck2•7h ago•2 comments

Show HN: Mavera – Predict audience response with GANs, not LLM sentiment

https://docs.mavera.io/introduction
4•jaxline506•2d ago
Mavera is an audience intelligence API. Give it a message, product prototype, or creative asset and it returns a predicted distribution of emotional and behavioral responses across your target stakeholder population, so you can test your assumptions before you spend or push anything live.

To show this in practice, we ran all 101 Super Bowl LX ads through Mavera on game night: https://superbowl.mavera.io. We simulated how audiences would respond, emotionally and behaviorally, by platform and segment, returning a distribution rather than a single score as part of a full analysis of each ad in under 4 hours.
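As a toy illustration of why a distribution is more informative than a single score, here is a minimal sketch; the scores are invented and the summary shape is an assumption for illustration, not Mavera's actual output format:

```python
import statistics

# Hypothetical per-run emotional-activation scores (0-1) from a simulated
# audience; Mavera's real response schema is not reproduced here.
scores = [0.12, 0.35, 0.41, 0.44, 0.47, 0.52, 0.55, 0.61, 0.68, 0.91]

ranked = sorted(scores)
summary = {
    "mean": round(statistics.mean(scores), 3),
    "stdev": round(statistics.stdev(scores), 3),
    "low_tail": ranked[1],    # roughly the 10th percentile
    "high_tail": ranked[-2],  # roughly the 90th percentile
}
print(summary)
```

Two audiences can share a mean while one is polarized and the other is indifferent; the spread and tails carry that signal, which a single score flattens away.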

The model is a GAN adapted for language, emotion, and cognition. A generator produces synthetic audience responses and a discriminator validates them against human benchmarks. Scoring follows a feel-think-act framework: emotional activation, cognitive framing, behavioral prediction. We validated scoring against the Harvard/Illinois OASIS benchmark: MAE on emotional response is 0.02-0.15, versus 1.0-2.5+ for GPT and Claude. Every response includes a confidence score and a hallucination risk score. You can also build spread-of-opinion, response-stability, and news/market-context impact scores into your outputs.
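For readers unfamiliar with the metric, MAE is just the average absolute gap between predicted and benchmark emotional-response values. A minimal sketch with invented numbers (not the actual OASIS data):

```python
def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and benchmark scores."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Invented benchmark and model outputs, for illustration only.
benchmark = [3.1, 4.0, 2.5, 3.8]
tight     = [3.0, 4.1, 2.4, 3.9]  # predictions close to benchmark
loose     = [4.5, 2.0, 4.0, 2.0]  # predictions far from benchmark

print(round(mean_absolute_error(tight, benchmark), 3))  # 0.1
print(round(mean_absolute_error(loose, benchmark), 3))  # 1.675
```

On a 1-7 style response scale, an MAE in the 0.02-0.15 range means predictions land almost on top of the benchmark, while 1.0-2.5+ means they are routinely off by a full scale point or more.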

The API is OpenAI-compatible. Change the base URL to app.mavera.io/api/v1, add a persona_id, and you are running against 50+ pre-built personas, or you can customize your own. Latency is sub-100ms at P99. Free API key and docs at https://docs.mavera.io/introduction.

Comments

jaxline506•2d ago
Most "AI ad testing" is GPT sentiment scoring with a wrapper. We built something architecturally different, and the Super Bowl felt like the right moment to show it publicly.

The core issue is that LLMs model language. They predict what a person might say about something, which is not the same as modeling how a person will respond. That requires thought-to-behavior mappings, not next-token prediction. We call our scoring framework FEEL-THINK-ACT: emotional activation, cognitive framing, and behavioral prediction, in that order.

To check we were not just building a different flavor of hallucination, we benchmarked against the OASIS dataset (Harvard/Illinois). Our MAE on emotional response is 0.02-0.15 vs 1.0-2.5+ for GPT/Claude, emotional accuracy 98% vs ~78% for base models, consistency across runs 96% vs 72% for competitors. Every output also carries a confidence score and a hallucination risk score, because we did not want to hide uncertainty behind a clean number.

On the simulation side we do not model a single synthetic person. We run the target population 10,000 times with built-in variation and return a distribution. The tails and the variance are the insight.

For the Super Bowl we scored all 101 units pre-game through post-game, covering memorability, brand clarity, audio-off performance, and cultural resonance by platform and audience segment, in under 4 hours. Live context came in via our 615 Environmental Intelligence system, which folds news cycles and cultural signals in at run time, so the scores reflect the world as it was when the ads aired.

The API is OpenAI-compatible. If you are already building on OpenAI it is a base URL swap:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MAVERA_KEY",
    base_url="https://app.mavera.io/api/v1",
)

response = client.chat.completions.create(
    model="mavera",
    messages=[{"role": "user", "content": "Score this ad copy for emotional resonance."}],
    extra_body={"persona_id": "YOUR_PERSONA_ID"},
)

print(response.choices[0].message.content)

Free tier, no enterprise contract, no demo call. Full methodology and scores at superbowl.mavera.io, API docs and free key at docs.mavera.io. Happy to dig into the OASIS benchmarking, simulation architecture, or 615 in the comments.

hayksaakian•1h ago
How did the predicted response compare to actual responses for super bowl ads?

I noticed the benchmark mentioned, but it's too jargon-filled for me to follow.

troelsSteegin•42m ago
What were the personas [0] trained on?

[0] https://docs.mavera.io/features/personas#persona-types