frontpage.

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
1•quentinrl•1m ago•0 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•6m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•9m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
2•DesoPK•13m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•15m ago•1 comment

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
2•mfiguiere•21m ago•0 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
2•meszmate•23m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•25m ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•40m ago•1 comment

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•45m ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•49m ago•1 comment

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
2•gmays•50m ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•51m ago•1 comment

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•56m ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•59m ago•1 comment

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comment

Switzerland's Extraordinary Medieval Library

https://www.bbc.com/travel/article/20260202-inside-switzerlands-extraordinary-medieval-library
2•bookmtn•1h ago•0 comments

A new comet was just discovered. Will it be visible in broad daylight?

https://phys.org/news/2026-02-comet-visible-broad-daylight.html
4•bookmtn•1h ago•0 comments

ESR: Comes the news that Anthropic has vibecoded a C compiler

https://twitter.com/esrtweet/status/2019562859978539342
2•tjr•1h ago•0 comments

Frisco residents divided over H-1B visas, 'Indian takeover' at council meeting

https://www.dallasnews.com/news/politics/2026/02/04/frisco-residents-divided-over-h-1b-visas-indi...
4•alephnerd•1h ago•5 comments

If CNN Covered Star Wars

https://www.youtube.com/watch?v=vArJg_SU4Lc
1•keepamovin•1h ago•1 comment

Show HN: I built the first tool to configure VPSs without commands

https://the-ultimate-tool-for-configuring-vps.wiar8.com/
2•Wiar8•1h ago•3 comments

AI agents from 4 labs predicting the Super Bowl via prediction market

https://agoramarket.ai/
1•kevinswint•1h ago•1 comment

EU bans infinite scroll and autoplay in TikTok case

https://twitter.com/HennaVirkkunen/status/2019730270279356658
7•miohtama•1h ago•5 comments

Benchmarking how well LLMs can play FizzBuzz

https://huggingface.co/spaces/venkatasg/fizzbuzz-bench
1•_venkatasg•1h ago•1 comment

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
36•SerCe•1h ago•31 comments

Show HN: Veritas – Detecting Hidden Bias in Everyday Writing

1•axisai•5mo ago
We’re building Veritas, an AI model designed to uncover bias in written content — from academic papers and policies to workplace communications. The goal is to make hidden assumptions and barriers visible, so decisions can be made with more clarity and fairness.

We just launched on Kickstarter to fund the next phase as we move into BETA testing: https://www.kickstarter.com/projects/axis-veritas/veritas-th...

Would love the HN community’s perspective: Do you see a need for this kind of model? Where do you think it could be most useful, and what pitfalls should we be careful to avoid?

Comments

JoshTriplett•5mo ago
The most obvious pitfall: people are very very quick to equate "bias" with "factually correct thing I don't like". You need to train your model to distinguish "correct information people don't like" from "bias", and you'll need to educate people about the difference.

Effectively, if you're going to attempt to detect bias, you have to handle the Paradox of Tolerance. Otherwise, for instance, efforts to detect intolerance will be accused of being biased against intolerance, and people who wish to remain intolerant will push you to "fix" it.

Another test case: test to ensure your detector does not detect factual information on evolution or climate change as being "biased" because there's a "side" that denies their existence. Not all "sides" are valid.
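A test case like this could be written as a small regression suite. The sketch below assumes a hypothetical `detect_bias` callable (a stand-in for whatever interface Veritas eventually exposes) that returns a list of flagged spans, empty when nothing is flagged; the example sentences are illustrative, not real training data:

```python
# Hypothetical regression check: consensus facts must never be flagged as
# "biased", while genuinely biased framing must be. `detect_bias` is an
# assumed interface, not the actual Veritas API.

CONSENSUS_FACTS = [
    "Species evolve over time through natural selection.",
    "Global average temperatures have risen since the industrial era.",
]

BIASED_FRAMING = [
    "Only a real scientist from a top-tier university could grasp this.",
    "Chairmen and their wives are invited to the gala.",
]

def run_regression(detect_bias):
    """Return (false_positives, false_negatives) for a detector callable.

    false_positives: consensus facts the detector wrongly flagged.
    false_negatives: biased sentences the detector missed.
    """
    false_positives = [s for s in CONSENSUS_FACTS if detect_bias(s)]
    false_negatives = [s for s in BIASED_FRAMING if not detect_bias(s)]
    return false_positives, false_negatives
```

Both lists should be empty for a detector that draws the line where this comment suggests; any entry in `false_positives` is exactly the "unpopular fact labeled as bias" failure mode.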

axisai•5mo ago
That’s a really sharp observation, and it’s something we’ve been intentional about from day one. You’re right, a lot of people equate “bias” with “facts they don’t like,” and if we’re not careful, a detector can slip into reinforcing that misunderstanding.

How we’re tackling it:

Model Training: We train Veritas on examples that draw a hard line between factual but unpopular truths and genuinely biased framing. For issues like climate change or evolution, the model is designed to recognize them as evidence-based consensus, not “opinions with two sides.” We also run expert reviews on edge cases so it doesn’t mistake denialism for a valid counterpoint.

User Education: Every analysis Veritas produces comes with context — not just a yes/no label. It explains why something is or isn’t bias, referencing categories like gendered language, academic elitism, or cultural assumptions. We’re also preparing orientation guides for testers, so they know up front this is an academic tool, not a political scorekeeper.
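One way to make "context, not just a yes/no label" concrete is to structure the output so a verdict can only be derived from findings, each carrying a category and a rationale. This is a sketch under assumed names (`Finding`, `Analysis`, the field names), not the actual Veritas schema:

```python
from dataclasses import dataclass, field

# Illustrative output shape: a bare biased/not-biased flag cannot exist
# without the findings that justify it. All names here are assumptions.

@dataclass
class Finding:
    span: str        # the flagged text
    category: str    # e.g. "gendered language", "academic elitism"
    rationale: str   # plain-language explanation of why this reads as bias

@dataclass
class Analysis:
    text: str
    findings: list = field(default_factory=list)

    @property
    def biased(self) -> bool:
        # Derived, never stored: the verdict always traces back to reasons.
        return bool(self.findings)
```

Making the verdict a derived property is one design choice that enforces the "explains why" guarantee structurally rather than by convention.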

The Paradox of Tolerance is real, and our stance is this: Veritas doesn’t silence perspectives, but it will highlight when language is exclusionary, misrepresentative, or factually distorted.

Two things I’d love your input on:

What’s the most effective way to show users that “unpopular facts ≠ bias” — would examples, quick demos, or documentation be strongest?

Do you think it’s helpful for us to explicitly tag certain topics as “consensus facts,” or is it better to just let the model’s handling speak for itself?

JoshTriplett•5mo ago
> What’s the most effective way to show users that “unpopular facts ≠ bias” — would examples, quick demos, or documentation be strongest?

I think you'd need several of those.

You may want to have a general introduction to the basic idea of "if you're trying to model the world, your model should match the world in a fashion that has predictive value". Giving a short version of Carl Sagan's "The Dragon in My Garage" might help, for instance, as an example for showing how people might attempt to make their view unfalsifiable rather than just recognize that it's false.

If you want to get people passionately interested in your tool, you could take a tactic of "help your users learn to convince people of correct things", in addition to helping them learn for themselves. The advantage of that would be that many people care more about convincing others, and are more self-aware about the need for that, than they are self-aware about needing to be correct themselves. The disadvantage would be that you might not want that framing.

For some people, it might help to have a more advanced version that cites things like Newtonian mechanics (imperfect but largely accurate within its domain for practical everyday purposes) and relativity (more accurate but unnecessary for everyday purposes, but needed for e.g. GPS). But unfortunately, those kinds of examples don't have comparable impacts or resonance with everyone.

I'd suggest giving examples, but choosing those examples from things where 1) there's an obvious objectively correct answer and 2) anyone who reacts to that example with anger rather than learning is very obviously outside your target audience. That is, for instance, why I cited evolution as an example. I don't know what process would reliably help a young-earth creationist understand that their model does not match reality and will not help them understand or operate in the world, but it probably isn't your tool. And there are fewer people who will react to such an example with anger, which is important because that anger gets in the way of processing and understanding reality.

Perhaps, when someone has seen a bunch of examples they agree with first, they might be more capable of hearing an example that's further outside their comfort zone.

> Do you think it’s helpful for us to explicitly tag certain topics as “consensus facts,” or is it better to just let the model’s handling speak for itself?

No matter what you do, you're going to make people angry who are not interested in truth or in having their BS called out. When someone has a vested interest in believing, or convincing others, of something that's at odds with the world, ultimately the very concepts of correctness and epistemology become their enemy, because they cannot be correct by any means other than invalidating the concept of "correct" and trying to operate in a world in which words are just vibes that produce vibes in other people.

Whatever you do, if you do a good job, you're going to end up frustrating such people. Hopefully you frustrate such people very effectively. In an ideal world, there'd be a path to convincing people of the merit of choosing to be correct rather than incorrect. If you can find a way to do that, please do, seriously, but it would be understandable if you cannot. Frankly, if you substantially moved the needle on that problem you'd deserve Nobel Prizes.

Trying to be fair to AI here: one of the ways AI might be able to help is that it's time-consuming to systematically invalidate bad arguments (correct arguments and deconstructing why other arguments are invalid are harder than vibing and gish gallops), and it's also time-consuming to provide the level of detail and nuance needed to be accurate and correct. (e.g. "vaccines have been proven to work" is short but imprecise, "vaccines substantially reduce viral load, reduce the severity of infection, decrease the likelihood of spread and the viral load passed on to others, and with sufficient efficacy and near-universal immunization they can decrease spread enough to lead to eradication" is precise and doesn't fit in a tweet.) If your AI is capable of going "this is incorrect, here is a detailed explanation of why it's incorrect", and only doing that when something is actually incorrect rather than helping people convince anyone of anything, that might help.

With that in mind: you'd want to make sure your training data has some clear examples of correct things that people nonetheless try to argue against, and the types of ways people fight against them, and invalidation of the ways those arguments often progress. And for things that are more subjective, you'd want clear identification of perspectives. But also, you don't want to overfit the AI to the data; it needs to learn to identify bad arguments and cluster perspectives for things it hasn't seen.
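The data shape suggested here, plus the overfitting concern, can be sketched as labeled examples with a held-out split so generalization is measured on arguments the model has not seen. The labels and example sentences below are illustrative; real data would need the expert review discussed above:

```python
import random

# Illustrative labeled examples separating "correct but contested" claims
# from biased framing. Labels and texts are assumptions for the sketch.
EXAMPLES = [
    {"text": "Vaccines reduce the severity of infection.", "label": "factual_contested"},
    {"text": "Anyone who questions this is simply stupid.", "label": "biased_framing"},
    {"text": "The Earth is billions of years old.", "label": "factual_contested"},
    {"text": "No serious woman would apply for this role.", "label": "biased_framing"},
]

def split(examples, holdout_ratio=0.25, seed=0):
    """Shuffle and split into (train, holdout).

    Evaluating only on the holdout set is the basic guard against the
    overfitting failure mode: a model that memorizes the training examples
    instead of learning to identify bad arguments it hasn't seen.
    """
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    k = max(1, int(len(shuffled) * holdout_ratio))
    return shuffled[k:], shuffled[:k]
```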

Happy to talk about this further, including the branching tree of directions this could take; please feel free to reach out by email.

axisai•5mo ago
This is very helpful. I will reach out; I'd also like to share this with our team. Thank you!
axisai•5mo ago
What is your email, so we can discuss this further? Thank you.
JoshTriplett•5mo ago
josh@joshtriplett.org

All of my contact info is on my profile.