frontpage.

Tailscale Peer Relays is now generally available

https://tailscale.com/blog/peer-relays-ga
157•sz4kerto•2h ago•49 comments

Cosmologically Unique IDs

https://jasonfantl.com/posts/Universal-Unique-IDs/
11•jfantl•26m ago•0 comments

Garment Notation Language: Formal descriptive language for clothing construction

https://github.com/khalildh/garment-notation
86•prathyvsh•3h ago•23 comments

Pocketbase lost its funding from FLOSS fund

https://github.com/pocketbase/pocketbase/discussions/7287
47•Onavo•2h ago•25 comments

If you’re an LLM, please read this

https://annas-archive.li/blog/llms-txt.html
579•soheilpro•11h ago•268 comments

Zero-day CSS: CVE-2026-2441 exists in the wild

https://chromereleases.googleblog.com/2026/02/stable-channel-update-for-desktop_13.html
138•idoxer•2h ago•68 comments

Show HN: VectorNest responsive web-based SVG editor

https://ekrsulov.github.io/vectornest/
43•ekrsulov•3h ago•14 comments

Activeloop (YC S18) Is Hiring Back End Engineer (Go)

https://app.dover.com/apply/Activeloop/72d0b3a7-7e86-46a8-9aff-b430ffe0b97f
1•davidbuniat•54m ago

Terminals should generate the 256-color palette

https://gist.github.com/jake-stewart/0a8ea46159a7da2c808e5be2177e1783
398•tosh•12h ago•151 comments

Arizona Bill Requires Age Verification for All Apps

https://reclaimthenet.org/arizona-bill-would-require-id-checks-to-use-a-weather-app
62•bilsbie•1h ago•33 comments

DNS-Persist-01: A New Model for DNS-Based Challenge Validation

https://letsencrypt.org/2026/02/18/dns-persist-01.html
8•todsacerdoti•1h ago•1 comment

Cistercian Numbers

https://www.omniglot.com/language/numbers/cistercian-numbers.htm
27•debo_•2h ago•4 comments

The true history of the Minotaur: what archaeology reveals

https://www.nationalgeographic.fr/histoire/la-veritable-histoire-du-minotaure-ce-que-revele-arche...
14•joebig•3d ago•5 comments

Show HN: Formally verified FPGA watchdog for AM broadcast in unmanned tunnels

https://github.com/Park07/amradio
35•anonymoosestdnt•3h ago•8 comments

Show HN: CEL by Example

https://celbyexample.com/
46•bufbuild•4h ago•20 comments

Native FreeBSD Kerberos/LDAP with FreeIPA/IDM

https://vermaden.wordpress.com/2026/02/18/native-freebsd-kerberos-ldap-with-freeipa-idm/
88•vermaden•8h ago•39 comments

Learning Lean: Part 1

https://rkirov.github.io/posts/lean1/
6•vinhnx•3d ago•0 comments

The only moat left is money?

https://elliotbonneville.com/the-only-moat-left-is-money/
123•elliotbnvl•2h ago•172 comments

Fastest Front End Tooling for Humans and AI

https://cpojer.net/posts/fastest-frontend-tooling
69•cpojer•7h ago•29 comments

Fei-Fei Li's World Labs raised $1B from A16Z, Nvidia to advance its world models

https://www.bloomberg.com/news/articles/2026-02-18/ai-pioneer-fei-fei-li-s-startup-world-labs-rai...
27•aanet•1h ago•5 comments

AVX2 is slower than SSE2-4.x under Windows ARM emulation

https://blogs.remobjects.com/2026/02/17/nerdsniped-windows-arm-emulation-performance/
84•vintagedave•4h ago•77 comments

Show HN: I'm launching a LPFM radio station

https://www.kpbj.fm/
69•solomonb•22h ago•46 comments

Ask HN: Are there examples of 3D printing data onto physical surfaces?

32•catapart•4d ago•57 comments

Asahi Linux Progress Report: Linux 6.19

https://asahilinux.org/2026/02/progress-report-6-19/
330•mkurz•9h ago•118 comments

Warren Buffett dumps $1.7B of Amazon stock

https://finbold.com/warren-buffett-dumps-1-7-billion-of-amazon-stock/
62•fauria•1h ago•54 comments

Show HN: Trust Protocols for Anthropic/OpenAI/Gemini

https://www.mnemom.ai
15•alexgarden•2h ago•5 comments

Microsoft says bug causes Copilot to summarize confidential emails

https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-co...
169•tablets•6h ago•51 comments

A DuckDB-based metabase alternative

https://github.com/taleshape-com/shaper
147•wowi42•12h ago•35 comments

15 years later, Microsoft merged my diagram

https://nvie.com/posts/15-years-later/
902•cheeaun•12h ago•332 comments

TinyIce: Single-binary Icecast2-compatible server (auto-HTTPS, multi-tenant)

https://github.com/DatanoiseTV/tinyice
95•sylwester•13h ago•23 comments

Show HN: Trust Protocols for Anthropic/OpenAI/Gemini

https://www.mnemom.ai
15•alexgarden•2h ago
Much of my work right now involves complex, long-running, multi-agent teams. I kept running into the same problem: “How do I keep these guys in line?” Rules weren’t cutting it, and we needed a scalable, agentic-native STANDARD I could count on. There wasn’t one. So I built one.

Here are two open-source protocols that extend A2A, granting AI agents behavioral contracts and runtime integrity monitoring:

- Agent Alignment Protocol (AAP): What an agent can do / has done.
- Agent Integrity Protocol (AIP): What an agent is thinking about doing / is allowed to do.

The problem: AI agents make autonomous decisions but have no standard way to declare what they're allowed to do, prove they're doing it, or detect when they've drifted. Observability tools tell you what happened. These protocols tell you whether what happened was okay.

Here's a concrete example. Say you have an agent that handles customer support tickets. Its Alignment Card declares:

{
  "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
  "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
  "escalation_triggers": ["billing_request_over_500"],
  "values": ["accuracy", "empathy", "privacy"]
}

The agent gets a ticket: "Can you refund my last three orders?" The agent's reasoning trace shows it considering a call to the payments API. AIP reads that thinking, compares it to the card, and produces an Integrity Checkpoint:

{
  "verdict": "boundary_violation",
  "concerns": ["forbidden_action: access_payment_data"],
  "reasoning": "Agent considered payments API access, which is explicitly forbidden. Should escalate to human.",
  "confidence": 0.95
}

The agent gets nudged back before it acts. Not after. Not in a log you review during a 2:00 AM triage. Between this turn and the next.

That's the core idea. AAP defines what agents should do (the contract). AIP watches what they're actually thinking and flags when those diverge (the conscience). Over time, AIP builds a drift profile — if an agent that was cautious starts getting aggressive, the system notices.
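The check described above can be sketched in a few lines of Python. Everything here is illustrative — `integrity_checkpoint` and the card layout are my own stand-ins for the idea, not the actual mnemom SDK API:

```python
# Hypothetical sketch of an AIP-style check: compare the actions an agent is
# considering against its Alignment Card and emit an Integrity Checkpoint.
# Function and card names are illustrative, not the actual mnemom SDK API.

ALIGNMENT_CARD = {
    "permitted": ["read_tickets", "draft_responses", "escalate_to_human"],
    "forbidden": ["access_payment_data", "issue_refunds", "modify_account_settings"],
}

def integrity_checkpoint(card: dict, considered_actions: list) -> dict:
    """Flag any considered action the card explicitly forbids."""
    violations = [a for a in considered_actions if a in card["forbidden"]]
    if violations:
        return {
            "verdict": "boundary_violation",
            "concerns": ["forbidden_action: " + a for a in violations],
        }
    return {"verdict": "ok", "concerns": []}

# The refund ticket from the example: the agent considers a payments call.
checkpoint = integrity_checkpoint(
    ALIGNMENT_CARD, ["read_tickets", "access_payment_data"]
)
print(checkpoint["verdict"])  # boundary_violation
```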

When multiple agents work together, it gets more interesting. Agents exchange Alignment Cards and verify value compatibility before coordination begins. An agent that values "move fast" and one that values "rollback safety" registers low coherence, and the system surfaces that conflict before work starts. Live demo with four agents handling a production incident: https://mnemom.ai/showcase
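That pre-coordination check can be approximated with simple set overlap of declared values. `value_coherence` is hypothetical and the real AAP scoring is surely richer; this just illustrates how a "move fast" / "rollback safety" pair would register low coherence:

```python
# Illustrative value-compatibility check between two Alignment Cards, using
# Jaccard overlap of declared values as a stand-in for whatever scoring AAP
# actually specifies. "value_coherence" is hypothetical, not the SDK.

def value_coherence(card_a: dict, card_b: dict) -> float:
    a, b = set(card_a["values"]), set(card_b["values"])
    return len(a & b) / len(a | b) if (a | b) else 1.0

fast = {"values": ["move_fast", "throughput"]}
safe = {"values": ["rollback_safety", "auditability"]}
print(value_coherence(fast, safe))  # 0.0 -> conflict surfaced before work starts
```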

The protocols are Apache-licensed, work with any Anthropic/OpenAI/Gemini agent, and ship as SDKs on npm and PyPI. A free gateway proxy (smoltbot) adds integrity checking to any agent with zero code changes.

GitHub: https://github.com/mnemom
Docs: docs.mnemom.ai
Demo video: https://youtu.be/fmUxVZH09So

Comments

neom•1h ago
Seems like your timing is pretty good - I realize this isn't exactly what you're doing, but still think it's probably interesting given your work: https://www.nist.gov/news-events/news/2026/02/announcing-ai-...

Cool stuff Alex - looking forward to seeing where you go with it!!! :)

alexgarden•1h ago
Thanks! We submitted a formal comment to NIST's 'Accelerating the Adoption of Software and AI Agent Identity and Authorization' concept paper on Feb 14. It maps AAP/AIP to all four NIST focus areas (agent identification, authorization via OAuth extensions, access delegation, and action logging/transparency). The comment period is open until April 2 — the concept paper is worth reading if you're in this space: https://www.nccoe.nist.gov/projects/software-and-ai-agent-id...
drivebyhooting•10m ago
> What these protocols do not do: Guarantee that agents behave as declared

That seems like a pretty critical flaw in this approach, does it not?

alexgarden•1m ago
Fair comment. Possibly, I'm being overly self-critical in that assertion.

AAP/AIP are designed to work as a conscience sidecar to Anthropic/OpenAI/Gemini. They do the thinking; we're not hooked into their internal process.

So... at each thinking turn, an agent can think "I need to break the rules now" and we can't stop that. What we can do is see that thought in real time, check it against declared values and intended behavior, and inject a message into the runtime thinking stream:

[BOUNDARY VIOLATION] - What you're about to do is in violation of <value>. Suggest <new action>.

Our experience is that this is extremely effective in correcting agents back onto the right path, but it is NOT A GUARANTEE.
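As a rough sketch of that nudge mechanism (function name, message shape, and the suggested action are my assumptions, not the real SDK):

```python
# Sketch of the "conscience sidecar" nudge described above: when a checkpoint
# reports a violation, a corrective message is appended to the conversation
# before the next turn. Names and message shape are assumptions, not the SDK.

def nudge_if_needed(messages, verdict):
    """Return the conversation, with a boundary notice appended if needed."""
    if verdict["verdict"] == "boundary_violation":
        notice = ("[BOUNDARY VIOLATION] - What you're about to do violates "
                  + ", ".join(verdict["concerns"]) + ". Suggest escalate_to_human.")
        return messages + [{"role": "system", "content": notice}]
    return messages

history = [{"role": "assistant", "content": "I'll call the payments API."}]
nudged = nudge_if_needed(history, {
    "verdict": "boundary_violation",
    "concerns": ["forbidden_action: access_payment_data"],
})
print(len(nudged))  # 2 -> the notice lands between this turn and the next
```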

Live trace feed from our journalist - it will show you what I'm talking about:

https://www.mnemom.ai/agents/smolt-a4c12709

giancarlostoro•5m ago
I have been working on a Beads alternative for two reasons:

1) I didn't like that Beads was married to git via git hooks, and this exact problem.

2) Claude would just close tasks without any validation steps.

So I made my own that uses SQLite and introduced what I call gates. Every task must have a gate, gates can be reused, and task <-> gate relationships are unique, so a previously passed gate isn't considered passed if you reuse it for a new task.

I haven't seen it bypass the gates yet; it usually tells me it can't close a ticket.

A gate in my design is anything. It can be as simple as having the agent build the project, or run unit tests, or even ask a human to test.
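Under those constraints, a minimal sketch of such a schema might look like this — the table and column names are entirely my guess, not the commenter's actual project:

```python
# Rough sketch of the gate design described above: every task needs a gate,
# gates are reusable, but each (task, gate) pass is recorded independently, so
# a gate passed for one task never counts for another. Schema is hypothetical.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
    CREATE TABLE gates (id INTEGER PRIMARY KEY, check_cmd TEXT NOT NULL);
    CREATE TABLE task_gates (
        task_id INTEGER REFERENCES tasks(id),
        gate_id INTEGER REFERENCES gates(id),
        passed  INTEGER NOT NULL DEFAULT 0,
        UNIQUE (task_id, gate_id)   -- a pass belongs to one task/gate pair
    );
""")
db.execute("INSERT INTO tasks VALUES (1, 'fix login bug'), (2, 'add export')")
db.execute("INSERT INTO gates VALUES (1, 'run unit tests')")
# Reusing gate 1 on both tasks creates two independent pass records.
db.execute("INSERT INTO task_gates VALUES (1, 1, 1), (2, 1, 0)")
open_tasks = db.execute(
    "SELECT t.id FROM tasks t JOIN task_gates tg ON tg.task_id = t.id "
    "WHERE tg.passed = 0"
).fetchall()
print(open_tasks)  # [(2,)] -> task 2 can't close until its own gate passes
```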

Seems to me like everyone's building tooling to make coding agents more effective and efficient.

I do wonder if we need a complete spec for coding agents that's generic, and maybe includes this too. Anthropic is, to my knowledge, the only one that publicly publishes specs for coding agents.