frontpage.

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•2m ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•3m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•5m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•7m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•9m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•21m ago•1 comment

Deeper into the sharing of one air conditioner for two rooms

1•ozzysnaps•23m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•23m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•25m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•29m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•34m ago•1 comment

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•36m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•42m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•46m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•48m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•53m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•54m ago•1 comment

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•57m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•57m ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•58m ago•1 comment

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comment

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comment

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comment

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

Show HN: Tansive – AI Agents that won't accidentally restart your prod database

https://github.com/tansive/tansive
3•anand-tan•7mo ago
While experimenting with LLM-driven agents to automate DevOps tasks, I realized how easily a bad prompt could mess up a cluster so badly you’d have to redeploy everything.

That's when I started building Tansive, an open-source platform to help teams securely integrate AI agents into real workflows.

I've been impressed with what AI agents can do, especially on routine tasks where the human toil is real and the probability of human error is higher. But taking them to production raises real problems.

For example:

- How do you prevent an agent from accidentally restarting production pods?

- How do you audit what it actually did when something goes wrong?

- When a workflow achieves an undesirable outcome, was it a bug in the tool, an incorrect prompt, a runaway agent, or a prompt injection attack?

- How do you verifiably make sure the agent didn't access Alice's records when responding to Bob's health question?

- How do you integrate agents with existing security policies and compliance requirements?

While DevOps scenarios gone wrong make for dramatic examples, most automated business processes need controls and guardrails.

I built Tansive to address these problems.

Here’s what Tansive enables:

- Runtime focus – Rather than helping you build agents, Tansive focuses on their runtime execution: what they access, which tools they call, which actions they take, and who triggered them.

- Declarative Catalog – A repository of agents, tools, and their context and resources, partitioned by environment and segmented by namespaces so policy rules can be defined over them. Written in YAML (GitOps-friendly).

- Runtime policy enforcement – For example, "this agent can restart pods, but only in dev," or "this finance agent can only reconcile certain accounts."

- Session pinning – Transform or restrict sensitive data via user-defined functions (e.g., "Bob's session cannot access Alice's data," or "if feature flag X is set, inject a WHERE clause into every SQL query the agent makes").

- Tamper-evident, hash-linked logs

- Write tools in any language - whatever your team already uses - to integrate agent workflows into your system.
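
As a rough illustration of what a declarative, environment-scoped catalog with policy rules might look like, here is a sketch in YAML. Every field name here is invented for illustration; it is not Tansive's actual schema (see the docs for the real one):

```yaml
# Hypothetical catalog sketch - field names are illustrative only,
# not Tansive's actual schema.
catalog:
  environments:
    - name: dev
    - name: prod
  namespaces:
    - name: ops
      tools:
        - name: restart-pods
          runner: ./tools/restart_pods.sh   # tools can be plain scripts
  agents:
    - name: ops-agent
      policies:
        - allow: [restart-pods]
          environments: [dev]   # can restart pods in dev, but not in prod
```

Because the catalog is plain YAML, it can live in Git and go through the same review and rollback flow as the rest of your infrastructure config.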

Demo video: https://vimeo.com/1099257866?share=copy - a real example of policy enforcement and session pinning in action.

(Agent can restart pods in dev but not in prod; A Health Bot pinned to one patient's ID cannot access another patient's record)
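
The "tamper-evident, hash-linked logs" bullet can be sketched in a few lines: each audit record's hash covers the previous record's hash, so retroactively editing any earlier record breaks every later link. This is a minimal sketch of the general technique, not Tansive's actual log format:

```python
import hashlib
import json


def append_entry(log, entry):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"entry": entry, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log


def verify(log):
    """Recompute every link; returns False if any record was altered."""
    prev_hash = "0" * 64
    for record in log:
        if record["prev_hash"] != prev_hash:
            return False
        body = {"entry": record["entry"], "prev_hash": record["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True


log = []
append_entry(log, {"actor": "ops-agent", "action": "restart-pods", "env": "dev"})
append_entry(log, {"actor": "ops-agent", "action": "list-pods", "env": "dev"})
print(verify(log))   # True: intact chain verifies
log[0]["entry"]["env"] = "prod"
print(verify(log))   # False: tampering with an early entry breaks the chain
```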

I also spent time thinking about how to get teams to adopt AI-based automation. The biggest blocker I had faced was that every tool had to be written in Python using specific SDKs - a non-starter for teams already working in other languages.

I realized that a generic agent runtime that handles the LLM and tool calls, with the functionality in language-agnostic tools, would work much better. Teams can write tools in whatever they already use - Go or Java for services, JavaScript for support, Bash for ops - and this fits well into any of today's popular agent frameworks.
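
A language-agnostic tool only needs to speak a simple wire contract, e.g. JSON in, JSON out. The contract sketched below (and the tool itself) is an assumption for illustration, not Tansive's actual interface; in a real tool the arguments would arrive on stdin and the result would go to stdout:

```python
import json


def run_tool(args):
    """A trivial, hypothetical tool body: report which pods it would
    restart. Any language that can parse and emit JSON can play this role."""
    return {"restarted": args.get("pods", []), "env": args.get("env", "dev")}


# In a real tool: args = json.load(sys.stdin); json.dump(result, sys.stdout)
raw = '{"env": "dev", "pods": ["web-1", "web-2"]}'
result = run_tool(json.loads(raw))
print(json.dumps(result))  # {"restarted": ["web-1", "web-2"], "env": "dev"}
```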

Transforms came from asking: "How do I keep using my existing scripts, but adapt the LLM's input into a format my scripts understand?"
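
A transform in that spirit is a small function that sits between the LLM's tool call and the script, rewriting arguments before they reach the tool. This sketch (function and field names invented for illustration) pins a health-bot session to one patient, in the spirit of the demo:

```python
def pin_session(tool_args, session):
    """Hypothetical transform: force the query to the session's own
    patient ID, overriding whatever ID the LLM supplied."""
    args = dict(tool_args)
    args["patient_id"] = session["patient_id"]  # LLM-supplied ID is discarded
    return args


session = {"patient_id": "bob-123"}
llm_args = {"patient_id": "alice-456", "fields": ["allergies"]}
print(pin_session(llm_args, session))
# {'patient_id': 'bob-123', 'fields': ['allergies']}
```

The key property is that the pin is enforced outside the model: even a prompt-injected request for Alice's record arrives at the tool already rewritten to Bob's ID.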

Why this matters:

AI agents are amazing, but the boring stuff - security boundaries, compliance, predictable behavior - is what determines their adoption. Tansive seeks to address that gap.

Tansive is in early alpha (v0.1.0) - intended for preview, but functional enough to try in real workflows in non-prod.

This field is nascent, and my goal is to tackle the easiest but most pressing problems first and build from there.

I'd love feedback from anyone in infra, anyone exploring AI agent security, integration, and compliance - or anyone just curious to kick the tires.

Happy to answer questions and hear what you think!

GitHub: https://github.com/tansive/tansive

Docs: https://docs.tansive.io