
Courtney Love Does the Math

https://www.salon.com/2000/06/14/love_7/
1•sebg•1m ago•0 comments

Ammobia says it has reinvented a century-old technology

https://techcrunch.com/2026/01/13/ammobia-says-it-has-reinvented-a-century-old-technology/
1•PaulHoule•3m ago•0 comments

Show HN: JustNotifs – Push notifications for teams, flat $29/mo instead of SMS

https://justnotifs.com/
1•acaronlex•4m ago•0 comments

Thoughts on AI/LLM usage from a 25 year industry vet

1•hutchplusplus•5m ago•0 comments

Apple Human Interface Guidelines (1987)

https://archive.org/details/applehumaninterf00appl
1•wizardforhire•7m ago•0 comments

Using World Models for Consistent AI Filmmaking

https://getartcraft.com/news/world-models-for-film
1•echelon•8m ago•0 comments

Belkin's Wemo smart devices will go offline on Saturday

https://www.theverge.com/tech/870890/belkin-wemo-cloud-services-shut-down
1•bookofjoe•8m ago•1 comment

Pages from Ceefax: Today's news at yesterday's pace

https://pagesfromceefax.azurewebsites.net/
1•xk3•8m ago•1 comment

Claude Code's GitHub page auto closes issues after 60 days

https://github.com/anthropics/claude-code/issues/16497
1•dcreater•9m ago•1 comment

Ask HN: Routing LLM queries to respective best model

1•nemath•10m ago•0 comments

Making Workflows Work Right in Golang

https://www.dbos.dev/blog/how-we-built-golang-native-durable-execution
1•KraftyOne•12m ago•0 comments

The imminent risk of vibe coding

https://basta.substack.com/p/the-imminent-risk-of-vibe-coding
1•feifan•12m ago•0 comments

Former Google engineer found guilty of espionage and theft of AI tech

https://www.cnbc.com/2026/01/30/former-google-engineer-found-guilty-of-espionage-and-theft-of-ai-...
1•rmason•15m ago•0 comments

Ingress Nginx: Statement from Kubernetes Committees

https://kubernetes.io/blog/2026/01/29/ingress-nginx-statement/
2•sibellavia•15m ago•0 comments

Linux kernel mailing list: [RFC] AI review prompt updates

https://lore.kernel.org/lkml/b187e0c1-1df8-4529-bfe4-0a1d65221adc@meta.com/
1•speckx•15m ago•0 comments

The Influence of Anxiety

https://thepointmag.com/examined-life/the-influence-of-anxiety/
2•sternmere•17m ago•0 comments

Wojtek (Bear)

https://en.wikipedia.org/wiki/Wojtek_(bear)
1•gynecologist•18m ago•0 comments

Polymarket, 'privileged' users made millions betting on war strikes

https://www.theguardian.com/society/ng-interactive/2026/jan/30/polymarket-prediction-markets-betting
1•paulpauper•18m ago•1 comment

Show HN: I Made MCP to Make Claude Code Genius Email Marketer

https://docs.sequenzy.com/concepts/mcp
2•nikpolale•18m ago•1 comment

Show HN: Jobstocks.ai – 6 months in, showing some interesting signals

https://jobstocks.ai/
1•TalO•19m ago•0 comments

Signals: Toward a Self-Improving Agent

https://factory.ai/news/factory-signals
1•janpio•20m ago•0 comments

Surfel-based global illumination on the web

https://juretriglav.si/surfel-based-global-illumination-on-the-web/
1•iamwil•21m ago•0 comments

P vs. NP and the Difficulty of Computation: A ruliological approach

https://writings.stephenwolfram.com/2026/01/p-vs-np-and-the-difficulty-of-computation-a-ruliologi...
2•tzury•22m ago•1 comment

Hypergrowth isn't always easy

https://tailscale.com/blog/hypergrowth-isnt-always-easy
2•usrme•22m ago•0 comments

Alternative to Claudebot/Moltbot, but secure, with control and capabilities

https://twitter.com/Chi_Wang_/status/2017067935601426833
2•Kn1026•23m ago•2 comments

How I built my own secure version of Clawdbot

https://medium.com/ai-native-enterprise/how-i-built-my-own-enterprise-grade-clawdbot-without-the-...
5•cliffly•24m ago•0 comments

Don Lemon Arrested

https://www.nbcnews.com/news/us-news/don-lemon-arrested-federal-authorities-attorney-says-rcna256680
3•Extropy_•24m ago•3 comments

Steve Jobs' son says he can help end cancer deaths – and he's raised $$$$

https://www.sfchronicle.com/health/article/reed-jobs-cancer-fund-21324598.php
3•aanet•26m ago•3 comments

Bill Gates asked Epstein for "antibiotics" for an STD from "Russian girls."

https://twitter.com/LeadingReport/status/2017297448197103947
6•sergiotapia•28m ago•3 comments

Wikipedia: Sandbox

https://en.wikipedia.org/wiki/Wikipedia:Sandbox
2•zaptrem•28m ago•0 comments

Show HN: AgentShield SDK – Runtime security for agentic AI applications

https://pypi.org/project/agentshield-sdk/
2•iamsanjayk•9mo ago
Hi HN,

We built AgentShield, a Python SDK and CLI that adds a security checkpoint for AI agents before they perform potentially risky actions, such as calling external APIs or executing generated code.

Problem: Agents calling arbitrary URLs or running unchecked code can lead to data leaks, SSRF, system damage, etc.

Solution: AgentShield intercepts these actions:

- guarded_get(url=...): Checks URL against policies (block internal IPs, HTTP, etc.) before making the request.

- safe_execute(code_snippet=...): Checks code for risky patterns (os import, eval, file access, etc.) before execution.

It works via a simple API call to evaluate the action against configurable security policies. It includes default policies for common risks.

Get Started:

Install: pip install agentshield-sdk

Get API Key (CLI): agentshield keys create

Use in Python:

    from agentshield_sdk import AgentShield

    shield = AgentShield(api_key=...)
    await shield.guarded_get(url=...)
    await shield.safe_execute(code_snippet=...)
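
A slightly fuller sketch of the intended flow. How the SDK signals a blocked action (return value vs. exception) is an assumption here; see the README for the actual behavior:

    import asyncio
    from agentshield_sdk import AgentShield

    shield = AgentShield(api_key="YOUR_API_KEY")

    async def main():
        # Policy check (internal IPs, plain HTTP, etc.) runs before the request.
        try:
            await shield.guarded_get(url="https://example.com/data")
        except Exception as exc:  # placeholder for the SDK's policy error
            print(f"URL blocked: {exc}")

        # Pattern check (os import, eval, file access, etc.) runs before execution.
        try:
            await shield.safe_execute(code_snippet="print('hello')")
        except Exception as exc:  # placeholder for the SDK's policy error
            print(f"Code blocked: {exc}")

    asyncio.run(main())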

Full details, documentation, and the complete README are at <https://pypi.org/project/agentshield-sdk/>

We built this because securing agent interactions felt crucial as they become more capable. It's still early days, and we'd love to get your feedback on the approach, usability, and policies.

Comments

subhampramanik•9mo ago
Looks interesting. Does it work like a wrapper on top of the OpenAI specs? Like, can we just replace the OpenAI package with this and have it fully integrated?
iamsanjayk•9mo ago
Hey, thanks for asking! Good question.

AgentShield isn't a wrapper around the OpenAI package, so you wouldn't replace openai with it. Think of AgentShield as a separate safety check you call just before your agent actually tries to run a specific risky action.

So, you'd still use the openai library as normal to get your response (like a URL to call or code to run). Then, before you actually use httpx/requests to call that URL, or exec() to run the code, you'd quickly check it with shield.guarded_get(the_url) or shield.safe_execute(the_code).

Currently, it focuses on securing the action itself (the URL, the code snippet) rather than wrapping the LLM call that generated it.
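
So the pattern looks roughly like this (variable names are illustrative, and error handling is omitted):

    from agentshield_sdk import AgentShield

    shield = AgentShield(api_key="YOUR_API_KEY")

    async def act_on_agent_output(the_url: str, the_code: str):
        # the_url and the_code came back from a normal openai call.
        # Vet the URL via AgentShield rather than calling httpx directly;
        # guarded_get checks it against policy before making the request.
        await shield.guarded_get(url=the_url)
        # Vet the generated snippet before it ever reaches exec().
        await shield.safe_execute(code_snippet=the_code)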