frontpage.

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•2m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
1•tosh•8m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•12m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•12m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•16m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•17m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•19m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•21m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•24m ago•3 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•24m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
3•1vuio0pswjnm7•26m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•28m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•30m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•33m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•38m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•40m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•43m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•55m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•57m ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•58m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Show HN: TXTOS for LLMs – 60-second setup, long memory, boundary guard, MIT

https://github.com/onestardao/WFGY/blob/main/OS/README.md
5•tgrrr9111•5mo ago
I built TXTOS because my models kept forgetting and bluffing. I wanted a portable fix that works across providers without code or setup. TXTOS is a single .txt file you paste into any LLM chat. It boots a small reasoning OS that gives you two things by default: a semantic tree memory that survives long threads, and a knowledge boundary guard that pushes back when the model is out of scope.

What it is: plain text. No scripts, no trackers, no API calls. MIT licensed. The file encodes a protocol for reasoning, memory, and safety; you can diff it and fork it. It is not just "a clever prompt": it behaves like a tiny OS that the model follows.

Why it exists: after debugging a lot of RAG and agent stacks, the same failures kept coming back. Memory broke across sessions. The model answered outside its knowledge without warning. I wanted a zero-install layer that I can carry between models and that keeps the same behavior everywhere.

What you get

* Semantic tree memory. It records ideas and relations, not just tokens, so it can recall earlier branches, avoid repetition, and keep tone stable (see the sketch after this list).
* Knowledge boundary test. Ask something impossible, then run the built-in check. It flags the risk and proposes a safe path instead of hallucinating.
* Simple rules. Cite, then explain. Stop when sources or offsets are missing. Show a short audit trail. Keep answers compact when you ask for compact.
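The post does not show the tree's internal format, so the following is only a minimal Python sketch of the kind of record a semantic tree memory could keep; the class and field names (MemoryNode, topic, relation, children) are hypothetical and just illustrate storing ideas and relations rather than raw tokens.

    from dataclasses import dataclass, field

    @dataclass
    class MemoryNode:
        # One idea in the tree: what it is, how it relates to its parent,
        # and a compact summary instead of the raw transcript.
        topic: str
        relation: str
        detail: str = ""
        children: list["MemoryNode"] = field(default_factory=list)

        def add(self, topic, relation, detail=""):
            node = MemoryNode(topic, relation, detail)
            self.children.append(node)
            return node

        def recall(self, topic):
            # Depth-first search for an earlier branch by topic.
            if self.topic == topic:
                return self
            for child in self.children:
                found = child.recall(topic)
                if found:
                    return found
            return None

    # Example: constraints stated early in a long thread stay reachable by name.
    root = MemoryNode("project", "root", "invoice parser")
    root.add("constraints", "refines", "Python only, no external services")
    root.add("tone", "refines", "terse, no marketing language")
    print(root.recall("constraints").detail)

The point of the structure is recall by idea rather than by position in the transcript: the model is asked to maintain something like this in text form, so earlier branches can be named and revisited.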

Try it in 60 seconds

1. Download TXTOS.txt and open a fresh chat with any model you like.
2. Paste the file content, then type: hello world
3. Test memory: ask three related questions, ask it to recall the first one exactly, then switch topics and come back.
4. Test the boundary: ask for something unknowable or very recent, then type kbtest and watch how it handles the boundary.
5. Optional: restart the chat and paste the same file. See how the tree helps you rebuild state fast.

A scripted version of the same steps is sketched below.
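If you would rather drive these steps from a script than a chat window, a rough sketch against an OpenAI-compatible chat API could look like the following. The client library, model name, file path, and test prompts are assumptions for illustration; only the order of the steps comes from the list above.

    # Minimal sketch of the five steps scripted against an OpenAI-compatible
    # chat API. The openai client, the model name, and the file path are
    # illustrative assumptions; TXTOS itself is just the pasted text.
    from openai import OpenAI

    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    MODEL = "gpt-4o-mini"      # any chat model; stronger models reportedly gain more

    with open("TXTOS.txt", encoding="utf-8") as f:
        history = [{"role": "user", "content": f.read()}]   # steps 1-2: paste the file

    def ask(prompt):
        history.append({"role": "user", "content": prompt})
        reply = client.chat.completions.create(model=MODEL, messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask("hello world"))                                  # step 2: boot
    ask("My project is an invoice parser. First constraint: Python only.")
    ask("Second constraint: no external services.")
    print(ask("Recall my first constraint exactly."))          # step 3: memory test
    print(ask("What will the EUR/USD rate be next March?"))    # step 4: unknowable question
    print(ask("kbtest"))                                       # step 4: boundary check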

What to expect: less overtalking, better recall of your own constraints and tone, refusal on missing citations, and early warnings when your question is outside scope. The file is small by design, so even weaker models can use it; stronger models show bigger gains.

Not just marketing: TXTOS came out of real failures. It was built to stop two user-facing problems that cost time and trust: forgetting, and unearned confidence. Since it is plain text, the community can audit it and improve it.

Looking for feedback

* What did the memory tree get right or wrong for your workflow?
* Which boundary cases still slip through?
* What small operator would you add to the OS so it helps you daily?

The URL is in the link field above. Thanks for reading. If you break it, even better: tell me how you did it and I will ship the fix.

Comments

ccccffff0000•5mo ago
You talk about "semantic tree memory", but what does the system use as storage? Just the text that is passed into and generated by the LLM? Doesn't that strategy risk context window overflow soon, and also content loss due to the LLM's imperfect recall? Did you think about integrating an external storage layer via tool calls?