frontpage.
news | newest | ask | show | jobs

Made with ♥ by @iamnishanth

Open Source @Github


An Isolated Iran Finds China's Friendship Has Limits

https://www.wsj.com/world/an-isolated-iran-finds-chinas-friendship-has-limits-be947372
1•JumpCrisscross•26s ago•0 comments

Show HN: Respilens.com displays Flu, Covid-19 and RSV Forecasts in US States

https://www.respilens.com/?view=flu_projs
1•wosk•1m ago•0 comments

Car Allowance Rebate System

https://en.wikipedia.org/wiki/Car_Allowance_Rebate_System
1•cainxinth•3m ago•0 comments

Minimalist GitHub Actions: Your workflows should do less

https://terrateam.io/blog/github-actions-should-do-less
1•gmgn•3m ago•0 comments

FDA: Use of Bayesian Methodology in Clinical Trials

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/use-bayesian-methodology...
1•jerkstate•3m ago•0 comments

Iranians describe heavy security and scattered damage in calls to outside world

https://apnews.com/article/iran-protests-us-israel-war-nuclear-economy-1b2368e0804676d33d6aa06968...
1•mhb•4m ago•0 comments

Show HN: A Markdown Viewer for the LLM Era (Mermaid and LaTeX)

https://mdview.io/
1•Igor_Wiwi•4m ago•0 comments

Salesforce rolls out new Slackbot AI agent as it battles Microsoft and Google

https://venturebeat.com/technology/salesforce-rolls-out-new-slackbot-ai-agent-as-it-battles-micro...
1•prng2021•4m ago•0 comments

Creepy Link – URL Shortener

https://creepylink.com/
1•scapecast•5m ago•0 comments

Affordable housing site goes live with meme-laden test data

https://www.theregister.com/2026/01/13/housing_site_test_data/
1•Bender•6m ago•0 comments

Mandiant open sources tool to prevent leaky Salesforce misconfigs

https://www.theregister.com/2026/01/13/mandiant_salesforce_tool/
1•Bender•6m ago•0 comments

Iran Overview – Cloudflare Radar

https://radar.cloudflare.com/ir
1•merksittich•6m ago•0 comments

Federal agencies told to fix or ditch Gogs

https://www.theregister.com/2026/01/13/cisa_gogs_exploit/
1•Bender•7m ago•1 comment

The Palindromic Hat-Trick

https://aperiodical.com/2018/05/the-incredible-palindromic-hat-trick/
1•ColinWright•7m ago•0 comments

Why have death rates from accidental falls tripled?

https://usafacts.org/articles/why-have-death-rates-from-accidental-falls-tripled/
3•atlasunshrugged•7m ago•0 comments

How to Handle the Death of the Essay

https://blog.apaonline.org/2026/01/12/how-to-handle-the-death-of-the-essay/
1•jruohonen•8m ago•0 comments

Contra Dance as a Model for Post-AI Culture

https://www.jefftk.com/p/contra-dance-as-a-model-for-post-ai-culture
1•mhb•9m ago•0 comments

Show HN: I built a Finances app for Mac where you own the SQLite database

https://thefinances.app
1•steveharrison•10m ago•0 comments

FailHub – Issue #1 (Every week, three real failures. Three real lessons.)

https://failhub.substack.com/p/failhub-issue-1
1•khambir•11m ago•0 comments

Helping promote the Lax programming language

1•Mavox-ID•11m ago•0 comments

Tell HN: Viral Hit Made by AI, 10M listens on Spotify last few days

1•montebicyclelo•12m ago•0 comments

Former NYC Mayor Eric Adams' memecoin faces rug pull allegations

https://www.theblock.co/post/385222/eric-adams-floats-memecoin
1•zzzeek•13m ago•0 comments

Reversal of the Leloir pathway for galactose and tagatose synthesis from glucose

https://www.cell.com/cell-reports-physical-science/fulltext/S2666-3864(25)00592-2
1•thunderbong•13m ago•0 comments

Movies in the public domain without an attached video file

https://wikiflix.toolforge.org/#/candidates
1•bookofjoe•14m ago•0 comments

Stop Being Nice. Start Being Kind

https://velocitycurve.substack.com/p/stop-being-nice-start-being-kind
1•mooreds•14m ago•0 comments

We still need small language models – even in the age of frontier AI

https://www.turing.ac.uk/blog/why-we-still-need-small-language-models-even-age-frontier-ai
1•mooreds•16m ago•0 comments

A Landscape View of Robotic Skills, Agents, and the Architecture

https://medium.com/@telekinesis-ai/the-telekinesis-physical-ai-stack-a-landscape-view-of-robotic-...
1•CCB-TK•16m ago•1 comment

Context Engineering in Practice: How Atlassian Builds AI for Real Developer Work [video]

https://www.youtube.com/watch?v=FeuoB9aaHHk
1•mooreds•16m ago•0 comments

llms.py – Extensible OSS ChatGPT UI, RAG, Tool Calling, Image/Audio Gen

https://llmspy.org/docs/v3
1•mythz•20m ago•0 comments

Healthy dietary pattern and risk of rheumatoid arthritis: meta-analysis

https://pubmed.ncbi.nlm.nih.gov/40913838/
2•RickJWagner•21m ago•1 comment

Show HN: Theus – I built a framework to make AI-generated code safe to run

https://github.com/dohuyhoang93/theus
1•dohuyhoangvn93•2h ago
Hi HN,

AI is writing a lot of our code now, but here’s what keeps me up at night: AI is great at logic, but terrible at state safety. An LLM can write a perfect-looking function that accidentally nukes your global state or creates a race condition you'll spend a week debugging.

I built Theus because I wanted to stop worrying.

The philosophy is simple: Data is the Asset. Code is the Liability. Theus acts like a "safety container" for your logic (especially code written by AI). It enforces a few strict rules:

Zero-Trust: A process can’t see anything it didn't explicitly ask for in its contract.

Shadow Copies: Code never touches your "real" data directly. It works on copies. If the logic fails or breaks a rule, Theus just throws the changes away.

Audit Gates: You define the "red lines" (like balance can't be negative) in a simple YAML file. The framework blocks any commit that crosses them (a rough sketch of the pattern follows this list).
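
To make the shadow-copy and audit-gate ideas concrete, here's a minimal sketch of the general pattern in plain Python. It's heavily simplified and not the actual Theus API (the names here are just placeholders); the real contracts and YAML rule syntax live in the repo.

    # Illustrative only -- not the actual Theus API. A rough sketch of the
    # shadow-copy / audit-gate pattern described above.
    import copy

    class AuditViolation(Exception):
        pass

    def run_in_shadow(state, process, audit_rules):
        """Run `process` on a deep copy of `state`; commit only if every
        audit rule passes, otherwise throw the shadow copy away."""
        shadow = copy.deepcopy(state)          # code never touches the real data
        process(shadow)                        # the (possibly AI-written) logic
        for rule, message in audit_rules:
            if not rule(shadow):               # a "red line" was crossed
                raise AuditViolation(message)  # real state was never modified
        state.clear()
        state.update(shadow)                   # commit the shadow copy
        return state

    # The "balance can't be negative" red line mentioned above.
    rules = [(lambda s: s["balance"] >= 0, "balance must stay non-negative")]

    account = {"balance": 100}
    run_in_shadow(account, lambda s: s.update(balance=s["balance"] - 30), rules)    # commits; balance is 70
    # run_in_shadow(account, lambda s: s.update(balance=s["balance"] - 500), rules) # raises; account untouched

The real framework layers contracts, audit logs, and the YAML-defined rules on top of this, but commit-or-discard is the core guarantee.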

I’ve been using it to build AI agents that I can actually trust with "write" access. It’s not about making code faster; it’s about making it right, and being able to sleep at night.

I'd love to hear what you think about this "Process-Oriented" approach. Thanks!

Comments

dohuyhoangvn93•1h ago
Looking for feedback on a core design dilemma: To strict_mode or not?

Thanks for checking out Theus! I’m currently at a crossroads regarding one specific feature and would love to hear your thoughts.

In Theus, the default behavior is Full Transactional Integrity: every mutation happens on a 'Shadow Copy' so we can roll back instantly if an Audit Rule is violated. This is great for safety but can be expensive for high-frequency loops like reinforcement learning or processing large tensors (a rough illustration of the cost follows the list below).

To solve this, I’ve implemented a strict_mode=False toggle. When disabled:

Shadow Copying is bypassed: Reading/Writing happens directly on the real object.

Zero Overhead: No transaction objects or audit logs are created.

Trade-off: You lose all safety—no rollbacks, no contract enforcement, and crashes leave the state 'dirty'.
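
To give a feel for the cost I'm worried about, here's a rough, framework-free illustration: copying a large asset on every iteration versus mutating it in place. The numbers and names are arbitrary; nothing here is Theus code.

    # Rough, framework-free illustration of the cost behind the toggle:
    # copying a large asset every iteration vs. mutating it in place.
    # (Deliberately pessimistic: a plain deepcopy of a big list.)
    import copy
    import time

    state = {"weights": [0.0] * 100_000}    # stand-in for a large tensor

    def step(s):
        s["weights"][0] += 1.0              # tiny mutation per iteration

    t0 = time.perf_counter()
    for _ in range(20):
        shadow = copy.deepcopy(state)       # strict path: full shadow copy
        step(shadow)
        state = shadow                      # commit
    strict_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(20):
        step(state)                         # strict_mode=False path: in place
    fast_time = time.perf_counter() - t0

    print(f"shadow-copy loop: {strict_time:.3f}s  in-place loop: {fast_time:.3f}s")

The gap grows with state size and iteration count, which is why the escape hatch is so tempting for RL-style loops.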

My dilemma: Is providing a 'Strict Mode Toggle' a pragmatic necessity for performance, or does it defeat the entire purpose of a framework built for safety?

Should I keep this global toggle, or should I force developers to use more granular optimizations (like my heavy_ prefix for specific large assets) to keep the 'Safety-First' philosophy intact?
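
For contrast, here's a hypothetical sketch of what the granular route could look like: shadow-copy everything by default and only pass through assets the developer explicitly marks with the heavy_ prefix. Again, this is a simplified illustration of the idea, not the actual implementation.

    # Hypothetical sketch of the granular alternative (simplified, not the
    # real implementation): everything is shadow-copied by default, except
    # assets explicitly marked with the heavy_ prefix, which are passed
    # through by reference and therefore not rolled back.
    import copy

    def transact(state, process, audit_rules):
        shadow = {
            key: value if key.startswith("heavy_") else copy.deepcopy(value)
            for key, value in state.items()
        }
        process(shadow)
        for rule, message in audit_rules:
            if not rule(shadow):
                # Non-heavy fields roll back cleanly; heavy_ assets were
                # shared by reference, so dirty state there was opted into.
                raise ValueError(message)
        state.update(shadow)
        return state

The appeal is that the unsafe surface stays opt-in per asset and visible in the data model, instead of one global switch that silently turns off every guarantee.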

I'd appreciate any architectural insights from those who have built similar state-heavy systems!