frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Skim – vibe review your PRs

https://github.com/Haizzz/skim
1•haizzz•1m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
1•Nive11•1m ago•0 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
1•hunglee2•5m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
1•chartscout•7m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
2•AlexeyBrin•10m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•11m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
2•tablets•16m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•18m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•21m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•21m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•22m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•27m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•33m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•34m ago•1 comments

Slop News - HN front page right now as AI slop

https://slop-news.pages.dev/slop-news
1•keepamovin•39m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•41m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•47m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•50m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•51m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
3•goranmoomin•55m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•56m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•57m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•1h ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
4•myk-e•1h ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1h ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
5•1vuio0pswjnm7•1h ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
4•1vuio0pswjnm7•1h ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•1h ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments
Open in hackernews

Show HN: TXT OS – Open-Source AI Reasoning, One Plain-Text File at a Time

https://github.com/onestardao/WFGY/tree/main/OS
10•TXTOS•6mo ago
Hi HN,

I'm excited to share TXT OS — an open-source AI reasoning engine that runs entirely inside a single `.txt` file.

- No installs, no signup, no hidden code — just copy-paste the file into any LLM chat window (GPT, Claude, Gemini, etc.). - +22.4% semantic accuracy, +42.1% reasoning success, and 3.6× more stability (benchmarked on GSM8K and Truthful-QA). - Features Semantic Tree Memory, Hallucination Shield, and fully exportable logic. - MIT Licensed, zero tracking, zero ads.

Why did I build this? I wanted to prove that advanced reasoning and memory could be made open, portable, and accessible to anyone — just with pure text, no software or setup.

A note: I'm from China, and English is not my first language. This post and the docs were partly assisted by AI, but I personally reviewed and approved every line of content. All ideas, design, and code are my own work. If anything is unclear or could be improved, I really welcome your feedback!

I'm the author, and happy to answer any questions or suggestions here!

Comments

ultimateking•6mo ago
Really cool project! Quick questions:

1. How does TXT OS store its “Semantic Tree Memory” between sessions? 2. When `kbtest` detects a hallucination, what happens next? 3. Any idea of the speed impact on smaller models like LLaMA-2-13B?

Thanks for sharing—excited to try it out!

TXTOS•6mo ago
Semantic Tree Memory

We actually serialize the tree as a compact JSON-like structure right in the TXT file—each node gets a header like #NODE:id and indented subtrees. When you reload, TXT OS parses those markers back into your LLM’s memory map. No external DB needed—just plain text you can copy-paste between sessions.

--- When kbtest Fires

Internally it tracks our ΔS metric (semantic tension). Once ΔS crosses a preset threshold, kbtest prints a warning and automatically rolls you back to the last “safe” tree checkpoint. That means you lose only the bad branch, not your entire session. Think of it like an undo button for hallucinations.

--- Performance on LLaMA-2-13B

Benchmarks were on GPT-4, but on a 13B model you’ll see roughly a 10–15% token-generation slow-down thanks to the extra parsing and boundary checks. In practice that’s about +2 ms per token, which most folks find an acceptable trade-off for the added stability.

Hope that clears things up—let me know if you hit any weird edge cases!

brown2000•6mo ago
interesting! Quick question:

does TXT OS work equally well with open-source models, or is it optimized more for models like GPT-4 or Claude?

TXTOS•6mo ago
Hey, good question!

I've actually tested TXT OS with about 10 different AIs already—you can check out the full rundown on my repo. Generally, ChatGPT, Grok, Claude, and Perplexity gave the smoothest and best experience. The others still work fine, but some, like Gemini, have minor quirks (Gemini randomly adds a weird parameter during initial setup, but it sorts itself out after the first step).

So, long story short, if you want a hassle-free experience, go with ChatGPT, Grok, Claude, or Perplexity!

yyhhooq•6mo ago
could you explain the four math a little bit? Why it can activate AI ?
TXTOS•6mo ago
Sure — it’s not about activating AI like magic, it’s about steering its reasoning process.

Each formula plays a role in making the LLM more stable, coherent, and logically self-aware:

• = I - G + mc² defines semantic residue — how far the current output strays from meaning. • BigBig(G) recombines context & error to steer output back toward intent. • BBCR detects collapse and triggers reset → rebirth (like fail-safe logic). • BBAM models attention decay — restoring continuity over multiple steps.

Together, this makes the LLM act less like autocomplete… and more like a self-guided reasoner.