frontpage.
Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
1•toomuchtodo•5m ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•12m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•13m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•13m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•15m ago•1 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•20m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•24m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•28m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•29m ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
13•mfiguiere•35m ago•1 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•37m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•39m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•55m ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•59m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
3•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comments

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
4•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
5•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comments

Show HN: I built AI twins from LinkedIn and CRM data to simulate real B2B buyers

https://resonax.ai/
2•resonaX•3mo ago
I’ve been working on resonaX — an experiment to see if we can simulate real B2B customers using AI.

The idea: instead of sending surveys or running A/B tests, what if marketers could ask questions directly to an AI twin of their ideal customer — built from real data like LinkedIn profiles, CRM notes, and behavioral insights?

Each twin captures that customer’s role, pain points, buying triggers, and communication style. You can then ask:

“Would this headline make sense to you?”

“Why would you hesitate to book a demo?”

“What would make this offer more relevant?”

Under the hood:

LLMs + embedding models fine-tuned on buyer language

Real-world inputs (LinkedIn data, optional CSV uploads)

Lightweight feedback layer to validate responses
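To make the grounding step concrete, here's a minimal sketch of how a twin could retrieve relevant profile snippets before answering. This is illustrative only: the `Twin` class and `ground`/`prompt` names are hypothetical, and a toy bag-of-words cosine stands in for the real embedding model.

```python
# Sketch of retrieval grounding for a buyer twin. The bag-of-words
# "embedding" below is a stand-in for a real embedding model, and
# the Twin API is hypothetical, not resonaX's actual interface.
import re
from collections import Counter
from dataclasses import dataclass
from math import sqrt

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Twin:
    role: str
    snippets: list  # LinkedIn bio lines, CRM notes, call quotes

    def ground(self, question: str, k: int = 2) -> list:
        """Pick the k snippets most relevant to the question."""
        q = embed(question)
        ranked = sorted(self.snippets, key=lambda s: cosine(q, embed(s)), reverse=True)
        return ranked[:k]

    def prompt(self, question: str) -> str:
        """Assemble the grounded prompt an LLM would answer as the twin."""
        context = "\n".join(f"- {s}" for s in self.ground(question))
        return f"You are a {self.role}. Grounding:\n{context}\nQuestion: {question}"

twin = Twin("VP of Marketing", [
    "Owns pipeline targets; skeptical of vanity metrics",
    "CRM note: hesitated on demo until ROI case was clear",
    "Prefers short, data-backed outreach",
])
print(twin.prompt("Why would you hesitate to book a demo?"))
```

The point of the sketch: the LLM never answers from a blank persona; it answers from whichever real snippets score closest to the question.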

70+ beta testers are using it to test messaging and GTM ideas before launch.

Would love feedback from HN:

How might you improve the data ingestion layer? How can I simulate a focus group? And how can I combine data into a digital twin of a role like VP of Marketing? (Kept broad on purpose: some users want to test against a combination of at least 10 profiles, not just one.)

Any ideas to make the twin modeling more reliable over time?

Free beta: https://resonax.ai

Comments

magnumgupta•3mo ago
This is a really interesting direction — feels like the next evolution of customer research. Most teams rely on shallow surveys or persona docs that never update, but simulating an evolving “AI twin” of your ICP could change how GTM teams test ideas. Curious how you handle hallucinations or bias in responses — do you benchmark AI twin feedback against real user feedback over time?
resonaX•3mo ago
Thanks — that’s exactly the problem we’re trying to solve. Traditional personas go stale fast, and most survey data is self-reported, not behavioral.

On hallucinations and bias: We handle it in three ways right now —

Grounding in real data: Each twin is built using structured + unstructured data (LinkedIn profiles, CRM notes, messaging, etc.), so the LLM has contextual grounding rather than free-form guessing.

Feedback calibration: Every time users compare twin feedback with real user insights (e.g., call transcripts or campaign results), that feedback loop fine-tunes how the twin weighs language patterns and priorities.

Cross-model validation: We run prompts through multiple models and look for consensus — if the outputs diverge too much, the system flags it for review rather than showing one “confident” but wrong answer.

It’s still early, but the goal is to make twins that drift with real customer data — not just sit frozen like static personas.
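The cross-model validation idea above can be sketched in a few lines. Everything here is illustrative: the "model answers" are hard-coded strings, Jaccard overlap stands in for a proper semantic-similarity measure, and the 0.5 threshold is an assumed tuning parameter, not a resonaX internal.

```python
# Hypothetical consensus check across multiple model outputs.
# Jaccard token overlap is a stand-in for semantic similarity,
# and the 0.5 threshold is an assumed parameter.
import re

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consensus(answers: list, threshold: float = 0.5):
    """Mean pairwise similarity; flag for human review when it is low."""
    pairs = [(i, j) for i in range(len(answers)) for j in range(i + 1, len(answers))]
    score = sum(jaccard(answers[i], answers[j]) for i, j in pairs) / len(pairs)
    return score, score >= threshold  # (score, passes_consensus)

agree = ["Pricing is the main objection", "The main objection is pricing"]
diverge = ["Pricing is the main objection", "Security review blocks the deal"]
print(consensus(agree))    # agreeing answers pass
print(consensus(diverge))  # divergent answers get flagged
```

When the second tuple element is False, the system would surface the disagreement instead of picking one confident-but-possibly-wrong answer.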

resonaX•3mo ago
A quick clarification and some context on how the “AI twin” actually works.

Each twin isn’t just a generic chatbot. It’s grounded in real behavioural data plus psychology frameworks (like MBTI and DISC) that are matched with customer roles and communication patterns.

For example:

If your real customers tend to be data-driven “analyst” types, the twin reasons and responds that way.

If they’re more visionary “driver” types, the twin reacts to emotion and ROI triggers.

So instead of random AI answers, you’re getting responses that mirror how your actual buyers think and decide — built from your CRM, LinkedIn, and conversation data.
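One way to picture the role-to-style matching is a lookup from role to behavioral profile to a system-prompt directive. The DISC-style labels, trigger lists, and role mapping below are examples only, not a validated mapping or resonaX's actual data.

```python
# Illustrative role-to-behavioral-profile matching. The profile
# labels, triggers, and role defaults are examples, not a
# validated psychological mapping.
PROFILES = {
    "analyst": {"tone": "precise", "triggers": ["benchmarks", "methodology", "data"]},
    "driver":  {"tone": "direct",  "triggers": ["ROI", "speed", "outcomes"]},
}

ROLE_TO_PROFILE = {  # hypothetical defaults, overridden by CRM evidence
    "Data Analyst": "analyst",
    "VP of Marketing": "driver",
}

def style_directive(role: str) -> str:
    """Turn a buyer role into a system-prompt directive for the twin."""
    profile = PROFILES[ROLE_TO_PROFILE[role]]
    triggers = ", ".join(profile["triggers"])
    return f"Respond in a {profile['tone']} tone; weigh arguments about {triggers}."

print(style_directive("VP of Marketing"))
print(style_directive("Data Analyst"))
```

The directive then gets prepended to the grounded prompt, so an "analyst" twin and a "driver" twin react differently to the same headline.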

I’m particularly curious how others here would:

Combine multiple buyer types into a “composite twin” (like 10 VP Marketing profiles)

Add validation loops that make the twin’s reasoning evolve with more data

Integrate open-source behavioral models rather than proprietary ones

Appreciate all feedback — especially from those who’ve worked on LLM fine-tuning, agent memory, or customer simulation before.

kashishkhanna55•3mo ago
Interesting idea. Most “personas” I’ve seen just sit in Figma or Notion and don’t reflect how buyers actually talk anymore. If these twins update directly from CRM / LinkedIn data, that feels like a real step up from the usual marketing theatre.

One question: how do you validate when an AI twin gives a confident-sounding answer that isn’t actually what real prospects would say? Do you compare it against actual call notes or win/loss feedback?

resonaX•3mo ago
Right now, we validate twin responses in a few ways:

Ground truth comparison: When users upload CRM notes, Gong call transcripts, or win/loss data, we benchmark the twin’s language and objections against what real prospects actually said.

Confidence scoring: If a twin sounds overly confident but doesn’t have enough supporting data (e.g., limited context or sparse history), the system flags it with a lower reliability score rather than pretending it’s certain.

Iterative calibration: Each feedback cycle — whether a message worked or not — helps fine-tune the twin so its “voice” and reasoning evolve over time.

The end goal is that twins shouldn’t pretend to know — they should learn continuously from every interaction and new data point.
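The confidence-scoring step can be sketched as a simple grounding ratio: how much of the twin's answer is actually supported by real transcripts. This is a hedged toy version; content-word overlap stands in for a real entailment or embedding check, and the stopword list is minimal by design.

```python
# Toy reliability score: fraction of the answer's content words
# that also appear in ground-truth transcripts. Word overlap is a
# stand-in for a real entailment/embedding check.
import re

STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "on", "in"}

def content_words(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS

def reliability(answer: str, transcripts: list) -> float:
    """Share of the answer's content words grounded in transcripts."""
    answer_words = content_words(answer)
    evidence = set().union(*(content_words(t) for t in transcripts))
    return len(answer_words & evidence) / len(answer_words) if answer_words else 0.0

transcripts = [
    "Prospect said pricing felt steep without a clear ROI model",
    "Win/loss note: lost to incumbent over integration effort",
]
grounded = "Pricing without a clear ROI model"
ungrounded = "They worry about GDPR audits"
print(reliability(grounded, transcripts))    # fully supported by evidence
print(reliability(ungrounded, transcripts))  # no supporting evidence
```

Answers scoring below some cutoff would be shown with a low reliability badge rather than presented as certain.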

pertimohak•3mo ago
Twin modeling is more reliable as it uses comparisons between identical and fraternal twins to separate genetic and environmental influences, providing a clear and controlled estimate of heritability and environmental effects.
pertimohak•3mo ago
Twin modeling stands out for its natural “experiment” design—by comparing identical and fraternal twins, it reveals how much of who we are comes from our genes versus our environment, making it a uniquely reliable and insightful tool in behavioral genetics.