What Functional Programmers Get Wrong About Systems

https://www.iankduncan.com/engineering/2026-02-09-what-functional-programmers-get-wrong-about-sys...
1•subset•35s ago•0 comments

Review-for-agent: A local, PR-style UI for reviewing AI agent code changes

https://github.com/Waraq-Labs/review-for-agent
1•asadjb•2m ago•1 comments

Show HN: Insurance AI Benchmark – 510 scenarios from production

https://huggingface.co/datasets/pashas/insurance-ai-reliability-benchmark
1•pavelsukhachev•3m ago•0 comments

Show HN: Hybrid Orchestrator – Reliable AI agents for finance

https://github.com/pavelsukhachev/hybrid-orchestrator
1•pavelsukhachev•4m ago•0 comments

Searches for Learn Python up 150%

https://trends.google.com/trends/explore?date=today%205-y&geo=US&q=learn%20python&hl=en
1•wagslane•6m ago•1 comments

Data Modeling Is Changing

https://www.ssp.sh/brain/data-modeling-is-changing/
1•tanelpoder•6m ago•0 comments

G's Last Exam

https://twitter.com/rauchg/status/2020616857561284848
1•gmays•7m ago•0 comments

Ads Don't Work That Way (2014)

https://meltingasphalt.com/ads-dont-work-that-way/
1•Ariarule•9m ago•0 comments

Noam Chomsky Became the Establishment's Favorite Radical

https://www.josealnino.org/p/how-noam-chomsky-became-the-establishments
1•pcvarmint•12m ago•1 comments

DFlash: Block Diffusion for Flash Speculative Decoding

https://z-lab.ai/projects/dflash/
2•gmays•13m ago•0 comments

Show HN: Blogator – AI platform that generates structured, SEO-ready blog articles

https://www.blogator.app/
1•arlindb•14m ago•0 comments

Google AI Tools Start Blocking Disney-Related Prompts

https://deadline.com/2026/02/google-disney-ai-block-legal-threat-1236713206/
1•geox•16m ago•0 comments

Loopy Particle Math (2019)

https://www.scientificamerican.com/article/loopy-particle-math/
1•mellosouls•18m ago•0 comments

Show HN: PostgreSQL extension to add compatibility to Oracle UTL_SMTP package

https://github.com/HexaCluster/pg_utl_smtp
1•avivallssa•20m ago•0 comments

Show HN: Moltdb.io – The Database for AI Agents

https://moltdb.io/
1•ronreiter•22m ago•1 comments

Show HN: Reef – Bash compatibility layer for Fish shell, written in Rust

https://github.com/ZStud/reef
1•xbuben•23m ago•0 comments

The Potential of RLMs

https://www.dbreunig.com/2026/02/09/the-potential-of-rlms.html
1•dbreunig•23m ago•0 comments

Mesa 26.0's RADV RT improvements

https://pixelcluster.github.io/Mesa-26/
2•forbiddenlake•25m ago•0 comments

AI Is a High Pass Filter

https://bryanfinster.substack.com/p/ai-is-a-high-pass-filter-for-software
2•hackerthemonkey•27m ago•0 comments

Secrets Don't Belong in a Sandbox

https://vault.oshu.dev/
1•iacguy•28m ago•0 comments

Ask Ethan: Where are all the blueshifted galaxies?

https://bigthink.com/starts-with-a-bang/where-are-blueshifted-galaxies/
1•PaulHoule•28m ago•0 comments

Astuto is now officially unmaintained

https://github.com/astuto/astuto/issues/487
1•pil0u•30m ago•0 comments

Trump Accounts

https://trumpaccounts.gov
2•cdrnsf•32m ago•0 comments

Reliability of LLMs as medical assistants for the general public

https://www.nature.com/articles/s41591-025-04074-y
3•0in•35m ago•0 comments

After 6 decades, Steve's Music to close most locations in Ontario, Quebec

https://www.cbc.ca/news/canada/ottawa/after-6-decades-steve-s-music-to-close-most-locations-in-on...
1•LouisLazaris•35m ago•0 comments

Is Particle Physics Dead, Dying, or Just Hard?

https://www.quantamagazine.org/is-particle-physics-dead-dying-or-just-hard-20260126/
12•mellosouls•37m ago•2 comments

Tenure Eliminated at Oklahoma Colleges

https://www.insidehighered.com/news/faculty-issues/tenure/2026/02/05/tenure-eliminated-oklahoma-c...
3•bikenaga•38m ago•1 comments

Coffee and Tea Intake, Dementia Risk, and Cognitive Function

https://jamanetwork.com/journals/jama/article-abstract/2844764
3•bookofjoe•38m ago•0 comments

Megatech photos – 100 GB free cloud storage, private and ad-free

https://www.megatechphotos.com/
2•slavavechir•38m ago•1 comments

When Models Examine Themselves: Vocabulary-Activation Correspondence in LLMs

https://zenodo.org/records/18568344
1•patternmatcher•44m ago•1 comments

Show HN: AgentShield SDK – Runtime security for agentic AI applications

https://pypi.org/project/agentshield-sdk/
2•iamsanjayk•9mo ago
Hi HN,

We built AgentShield, a Python SDK and CLI that adds a security checkpoint for AI agents before they perform potentially risky actions, such as calling external APIs or executing generated code.

Problem: Agents that call arbitrary URLs or run unchecked code can cause data leaks, SSRF, system damage, and more.

Solution: AgentShield intercepts these actions:

- guarded_get(url=...): checks the URL against policies (blocking internal IPs, plain HTTP, etc.) before making the request.

- safe_execute(code_snippet=...): checks the code for risky patterns (os imports, eval, file access, etc.) before execution.

Each check works via a simple API call that evaluates the action against configurable security policies; default policies for common risks are included.
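For illustration, here's a minimal sketch of that checkpoint pattern in an agent's tool loop. Only AgentShield, guarded_get, and safe_execute come from the SDK; the dispatch_action helper and the action dict shape are hypothetical, and we're assuming a blocked action raises an exception (see the README for the actual failure behavior):

    import asyncio
    from agentshield_sdk import AgentShield

    shield = AgentShield(api_key="...")  # key from `agentshield keys create`

    async def dispatch_action(action: dict):
        # Hypothetical router: every agent-proposed action passes through
        # a guard before it touches the network or the interpreter.
        if action["type"] == "http_get":
            # Policy check (internal IPs, plain HTTP, etc.), then the request.
            return await shield.guarded_get(url=action["url"])
        if action["type"] == "run_code":
            # Pattern check (os imports, eval, file access, etc.), then execution.
            return await shield.safe_execute(code_snippet=action["code"])
        raise ValueError(f"unknown action type: {action['type']}")

    asyncio.run(dispatch_action({"type": "http_get", "url": "https://example.com"}))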

Get Started:

Install: pip install agentshield-sdk

Get API Key (CLI): agentshield keys create

Use in Python:

    from agentshield_sdk import AgentShield

    shield = AgentShield(api_key=...)
    await shield.guarded_get(url=...)
    await shield.safe_execute(code_snippet=...)

Full details, documentation, and the complete README are at <https://pypi.org/project/agentshield-sdk/>

We built this because securing agent interactions felt crucial as agents become more capable. It's still early days, and we'd love your feedback on the approach, usability, and policies.

Comments

subhampramanik•9mo ago
Looks interesting -- Does it work like a wrapper on top of OpenAI specs? Like, can we just replace the OpenAI package with this, and it's fully integrated?
iamsanjayk•9mo ago
Hey, thanks for asking! Good question.

AgentShield isn't a wrapper around the OpenAI package, so you wouldn't replace openai with it. Think of AgentShield as a separate safety check you call just before your agent actually tries to run a specific risky action.

So, you'd still use the openai library as normal to get your response (like a URL to call or code to run). Then, before you actually use httpx/requests to call that URL, or exec() to run the code, you'd quickly check it with shield.guarded_get(the_url) or shield.safe_execute(the_code).

Currently, it focuses on securing the action itself (the URL, the code snippet) rather than wrapping the LLM call that generated it.
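Concretely, that pattern looks something like this. The model name and prompt are placeholders, the openai calls are just the standard v1 client, and we're assuming guarded_get raises on a policy violation (check the README for the real failure mode):

    import asyncio
    from openai import OpenAI
    from agentshield_sdk import AgentShield

    client = OpenAI()                      # normal OpenAI usage, unchanged
    shield = AgentShield(api_key="...")

    async def main():
        # 1. Ask the LLM for an action as usual, e.g. a URL to fetch.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": "Reply with one URL, URL only."}],
        )
        url = resp.choices[0].message.content.strip()

        # 2. Instead of passing the URL straight to httpx/requests, route it
        #    through AgentShield, which applies its policies and then fetches.
        page = await shield.guarded_get(url=url)
        print(page)

    asyncio.run(main())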