
Show HN: Veritas – Detecting Hidden Bias in Everyday Writing

1•axisai•5mo ago
We’re building Veritas, an AI model designed to uncover bias in written content — from academic papers and policies to workplace communications. The goal is to make hidden assumptions and barriers visible, so decisions can be made with more clarity and fairness.

We just launched on Kickstarter to fund the next phase as we move into beta testing: https://www.kickstarter.com/projects/axis-veritas/veritas-th...

Would love the HN community’s perspective: Do you see a need for this kind of model? Where do you think it could be most useful, and what pitfalls should we be careful to avoid?

Comments

JoshTriplett•5mo ago
The most obvious pitfall: people are very very quick to equate "bias" with "factually correct thing I don't like". You need to train your model to distinguish "correct information people don't like" from "bias", and you'll need to educate people about the difference.

Effectively, if you're going to attempt to detect bias, you have to handle the Paradox of Tolerance. Otherwise, for instance, efforts to detect intolerance will be accused of being biased against intolerance, and people who wish to remain intolerant will push you to "fix" it.

Another test case: test to ensure your detector does not detect factual information on evolution or climate change as being "biased" because there's a "side" that denies their existence. Not all "sides" are valid.

axisai•5mo ago
That’s a really sharp observation, and it’s something we’ve been intentional about from day one. You’re right, a lot of people equate “bias” with “facts they don’t like,” and if we’re not careful, a detector can slip into reinforcing that misunderstanding.

How we’re tackling it:

Model Training: We train Veritas on examples that draw a hard line between factual but unpopular truths and genuinely biased framing. For issues like climate change or evolution, the model is designed to recognize them as evidence-based consensus, not “opinions with two sides.” We also run expert reviews on edge cases so it doesn’t mistake denialism for a valid counterpoint.

User Education: Every analysis Veritas produces comes with context — not just a yes/no label. It explains why something is or isn’t bias, referencing categories like gendered language, academic elitism, or cultural assumptions. We’re also preparing orientation guides for testers, so they know up front this is an academic tool, not a political scorekeeper.

The Paradox of Tolerance is real, and our stance is this: Veritas doesn’t silence perspectives, but it will highlight when language is exclusionary, misrepresentative, or factually distorted.
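The "context, not just a yes/no label" idea above can be made concrete. Here is a minimal Python sketch of what such an analysis record might look like; the field names, categories, and example are purely illustrative assumptions on my part, not Veritas's actual API or output format:

```python
from dataclasses import dataclass

# Hypothetical shape of a context-rich finding: every verdict carries a
# category and an explanation, never a bare yes/no label.
@dataclass
class BiasFinding:
    span: str          # the flagged passage
    category: str      # e.g. "gendered language", "academic elitism"
    explanation: str   # why this reads as bias, not merely an unpopular fact
    is_bias: bool      # False for evidence-based consensus statements

def render(finding: BiasFinding) -> str:
    """Format a finding together with its reasoning."""
    verdict = "bias" if finding.is_bias else "not bias"
    return f"[{verdict}] ({finding.category}) {finding.span!r}: {finding.explanation}"

example = BiasFinding(
    span="Chairmen should delegate this work.",
    category="gendered language",
    explanation="Assumes the role-holder is male; 'chairs' is neutral.",
    is_bias=True,
)
print(render(example))
```

The point of the structure is that a consensus statement ("climate change is human-caused") would come back with `is_bias=False` plus an explanation, rather than being silently passed or flagged.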

Two things I’d love your input on:

What’s the most effective way to show users that “unpopular facts ≠ bias” — would examples, quick demos, or documentation be strongest?

Do you think it’s helpful for us to explicitly tag certain topics as “consensus facts,” or is it better to just let the model’s handling speak for itself?

JoshTriplett•5mo ago
> What’s the most effective way to show users that “unpopular facts ≠ bias” — would examples, quick demos, or documentation be strongest?

I think you'd need several of those.

You may want to have a general introduction to the basic idea of "if you're trying to model the world, your model should match the world in a fashion that has predictive value". Giving a short version of Carl Sagan's "The Dragon in My Garage" might help, for instance, as an example for showing how people might attempt to make their view unfalsifiable rather than just recognize that it's false.

If you want to get people passionately interested in your tool, you could take a tactic of "help your users learn to convince people of correct things", in addition to helping them learn for themselves. The advantage of that would be that many people care more about convincing others, and are more self-aware about the need for that, than they are self-aware about needing to be correct themselves. The disadvantage would be that you might not want that framing.

For some people, it might help to have a more advanced version that cites things like Newtonian mechanics (imperfect but largely accurate within its domain for practical everyday purposes) and relativity (more accurate but unnecessary for everyday purposes, but needed for e.g. GPS). But unfortunately, those kinds of examples don't have comparable impacts or resonance with everyone.

I'd suggest giving examples, but choosing those examples from things where 1) there's an obvious objectively correct answer and 2) anyone who reacts to that example with anger rather than learning is very obviously outside your target audience. That is, for instance, why I cited evolution as an example. I don't know what process would reliably help a young-earth creationist understand that their model does not match reality and will not help them understand or operate in the world, but it probably isn't your tool. And there are fewer people who will react to such an example with anger, which is important because that anger gets in the way of processing and understanding reality.

Perhaps, when someone has seen a bunch of examples they agree with first, they might be more capable of hearing an example that's further outside their comfort zone.

> Do you think it’s helpful for us to explicitly tag certain topics as “consensus facts,” or is it better to just let the model’s handling speak for itself?

No matter what you do, you're going to make people angry who are not interested in truth or in having their BS called out. When someone has a vested interest in believing, or convincing others, of something that's at odds with the world, ultimately the very concepts of correctness and epistemology become their enemy, because they cannot be correct by any means other than invalidating the concept of "correct" and trying to operate in a world in which words are just vibes that produce vibes in other people.

Whatever you do, if you do a good job, you're going to end up frustrating such people. Hopefully you frustrate such people very effectively. In an ideal world, there'd be a path to convincing people of the merit of choosing to be correct rather than incorrect. If you can find a way to do that, please do, seriously, but it would be understandable if you cannot. Frankly, if you substantially moved the needle on that problem you'd deserve Nobel Prizes.

Trying to be fair to AI here: one of the ways AI might be able to help is that it's time-consuming to systematically invalidate bad arguments (correct arguments and deconstructing why other arguments are invalid are harder than vibing and gish gallops), and it's also time-consuming to provide the level of detail and nuance needed to be accurate and correct. (e.g. "vaccines have been proven to work" is short but imprecise, "vaccines substantially reduce viral load, reduce the severity of infection, decrease the likelihood of spread and the viral load passed on to others, and with sufficient efficacy and near-universal immunization they can decrease spread enough to lead to eradication" is precise and doesn't fit in a tweet.) If your AI is capable of going "this is incorrect, here is a detailed explanation of why it's incorrect", and only doing that when something is actually incorrect rather than helping people convince anyone of anything, that might help.

With that in mind: you'd want to make sure your training data has some clear examples of correct things that people nonetheless try to argue against, and the types of ways people fight against them, and invalidation of the ways those arguments often progress. And for things that are more subjective, you'd want clear identification of perspectives. But also, you don't want to overfit the AI to the data; it needs to learn to identify bad arguments and cluster perspectives for things it hasn't seen.
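The training-data split described above (correct-but-contested claims, genuinely biased framing, and subjective perspectives that need viewpoint tagging) could be sketched as a labeling scheme like the following. The labels and examples are hypothetical, chosen only to illustrate the three buckets, and are not Veritas's actual dataset:

```python
# Illustrative three-way labeling of training examples.
TRAINING_EXAMPLES = [
    {"text": "Humans and chimpanzees share a common ancestor.",
     "label": "contested_fact"},     # correct, but argued against anyway
    {"text": "Real scientists all reject this fringe nonsense.",
     "label": "biased_framing"},     # appeal to status, not evidence
    {"text": "Remote work suits some teams better than others.",
     "label": "perspective"},        # subjective; needs viewpoint tagging
]

def label_counts(examples):
    """Count examples per label: a quick balance check, since a lopsided
    class distribution is one easy way to overfit to one bucket."""
    counts = {}
    for ex in examples:
        counts[ex["label"]] = counts.get(ex["label"], 0) + 1
    return counts

print(label_counts(TRAINING_EXAMPLES))
```

Keeping the classes balanced is one cheap guard against the overfitting concern: a model trained mostly on `biased_framing` examples will learn to flag everything, which is exactly the "unpopular facts = bias" failure mode.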

Happy to talk about this further, including the branching tree of directions this could take; please feel free to reach out by email.

axisai•5mo ago
This is very helpful. I will reach out; I also want to share this with our team. Thank you!
axisai•5mo ago
What is your email, so we can discuss this further? Thank you
JoshTriplett•5mo ago
josh@joshtriplett.org

All of my contact info is on my profile.