frontpage.

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•5m ago•0 comments

Show HN: Open-source AI-powered Kubernetes IDE

https://github.com/agentkube/agentkube
1•saiyampathak•9m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
1•tywells•11m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•15m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI-compatible clients

https://pypi.org/project/aisbf/
1•nextime•15m ago•1 comment

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•16m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•17m ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•27m ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•29m ago•0 comments

FastLangML: Context‑aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•33m ago•1 comment

LineageOS 23.2

https://lineageos.org/Changelog-31/
1•pentagrama•36m ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•37m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
2•lostlogin•37m ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•40m ago•0 comments

Is anyone interested in a creator economy startup?

1•Nejana•41m ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•42m ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•42m ago•0 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
1•monero-xmr•44m ago•0 comments

Busy Months in KDE Linux

https://pointieststick.com/2026/02/06/busy-months-in-kde-linux/
1•todsacerdoti•44m ago•0 comments

Zram as Swap

https://wiki.archlinux.org/title/Zram#Usage_as_swap
1•seansh•57m ago•1 comment

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
1•mxfh•59m ago•0 comments

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•1h ago•2 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•1h ago•0 comments

Part 1, the Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•1h ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•1h ago•1 comment

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
2•mitchbob•1h ago•1 comment

Open-source framework for tracking prediction accuracy

https://github.com/Creneinc/signal-tracker
1•creneinc•1h ago•0 comments

India's Sarvam AI launches Indic-language-focused LLMs

https://x.com/SarvamAI
2•Osiris30•1h ago•0 comments

Show HN: CryptoClaw – open-source AI agent with built-in wallet and DeFi skills

https://github.com/TermiX-official/cryptoclaw
1•cryptoclaw•1h ago•0 comments

Show HN: Make OpenClaw respond in Scarlett Johansson’s AI Voice from the Film Her

https://twitter.com/sathish316/status/2020116849065971815
1•sathish316•1h ago•2 comments

Ask HN: Why do fact-based debate platforms keep failing?

https://fact2check.com/
3•DTutorin•1mo ago

Comments

DTutorin•1mo ago
I’ve been experimenting with a small MVP that tries to structure online debates differently. The idea is simple:

- one claim/theory
- users add individual supporting or opposing facts (with sources)
- each fact is discussed and voted on independently
- no final verdicts, no “truth score”, no authority layer

The goal is not to determine truth, but to observe how collective belief and disagreement form when arguments are forced to be atomic.

After sharing this experiment with skeptic-oriented communities, I ran into a set of strong critiques that seem to recur whenever projects like this appear:

- Voting is inherently argumentum ad populum, even if applied to individual facts
- There’s a strong asymmetry of effort: real evidence is costly, bad evidence is cheap
- Coordinated actors, cranks, or propagandists are more motivated than average users
- Non-experts struggle to distinguish relevance, quality, and weight of evidence
- “Fact overload” and gish gallop can drown out meaningful signal
- Moderation only works with subject-matter experts, which doesn’t scale
- Similar platforms have failed when public voting elevated weak or misleading evidence over rigorous research

Many commenters argued that this model inevitably legitimizes misinformation rather than containing it.

Before taking this experiment any further, I’d really like input from people here who’ve seen similar systems succeed or fail. My questions:

- Is this kind of structure fundamentally doomed outside of peer review or expert-only contexts?
- Are there known constraints or design patterns that prevent collapse into noise or popularity contests?
- Does this only work in narrow, technical domains (e.g. software, math, engineering)? Or is the failure mode intrinsic to letting non-experts evaluate evidence at all?

If it helps to see the concrete implementation, the MVP is here (no signup required): https://fact2check.com

I’m less interested in defending the project than in understanding where, structurally, this approach breaks.
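For concreteness, the structure described above can be sketched as a small data model. This is a minimal illustration in Python, not the actual fact2check schema; the names (Claim, Fact, Vote) and the example values are assumptions. The only property it tries to capture is the one stated in the post: sources, discussion, and votes attach to individual facts, never to the claim as a whole.

  # Minimal sketch of the atomic-fact model (illustrative, not fact2check's real schema).
  from dataclasses import dataclass, field
  from typing import Literal

  @dataclass
  class Vote:
      user_id: str
      value: Literal[1, -1]   # agree (+1) or disagree (-1) with this single fact

  @dataclass
  class Fact:
      text: str
      source_url: str                         # every fact must cite a source
      stance: Literal["supports", "opposes"]  # relative to the claim
      votes: list[Vote] = field(default_factory=list)

      def score(self) -> int:
          # Facts are scored independently; nothing here rolls up into a
          # claim-level verdict or "truth score".
          return sum(v.value for v in self.votes)

  @dataclass
  class Claim:
      statement: str
      facts: list[Fact] = field(default_factory=list)

  # Usage: one claim, atomic facts added and voted on separately.
  claim = Claim("Platform X reduces misinformation")
  claim.facts.append(Fact("Study A reports an effect", "https://example.org/a", "supports"))
  claim.facts[0].votes.append(Vote("alice", 1))
  print(claim.facts[0].score())   # -> 1

Note that Claim deliberately has no score() method, mirroring the "no final verdicts" constraint.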
fuzzfactor•1mo ago
Maybe it has something to do with the way that bestselling fiction has always outsold non-fiction?
DTutorin•1mo ago
That’s a fair point, and I think it’s related. Fiction has a structural advantage: it’s coherent, emotionally satisfying, and low-effort to consume. Evidence-based reasoning is fragmented, probabilistic, and often unsatisfying - especially when you don’t get a clean narrative or conclusion. One thing this experiment tries to test is whether forcing arguments to be atomic (individual facts instead of stories) helps or hurts. My suspicion is that it actually removes the narrative glue that makes ideas compelling - which may explain why such systems struggle to attract sustained engagement. In other words, it may not just be about truth vs fiction, but about narrative vs non-narrative cognition. If that’s correct, the failure mode is structural, not just social or political.