Show HN: CSL-Core – Formally Verified Neuro-Symbolic Safety Engine for AI

https://github.com/Chimera-Protocol/csl-core
2•aytuakarlar•1h ago
Hi HN, I'm Aytug, the creator of CSL-Core.

We built this because we realized that "prompt engineering" isn't enough for critical AI systems (like in finance or governance). You can't just ask an LLM nicely not to delete a database—you need a runtime guarantee.

CSL-Core is a policy language designed to bring "Policy-as-Code" to AI agents.

Instead of relying on the model's probabilistic nature, CSL enforces constraints that are:

1. Formally Verified: Policies are compiled into Z3 constraints to mathematically prove they have no logical conflicts or loopholes (see the sketch after this list).

2. Deterministic: The checks happen in a separate runtime engine, independent of the LLM's context window.

3. Model Agnostic: It acts as a firewall between the LLM and your tools/API.
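To make point 1 concrete, here's a minimal sketch of the kind of consistency check Z3 enables, assuming a simplified encoding (illustrative only, not CSL-Core's actual compiler): each policy becomes an implication over the declared domain, and the solver is asked whether any valid state satisfies all of them at once.

# Minimal sketch (illustrative, not CSL-Core's actual compiler): encode
# two policies as implications and ask Z3 whether any valid state
# satisfies both. "unsat" would mean the policy set is self-contradictory.
from z3 import Int, Bool, Solver, Implies, Not, sat

amount = Int("amount")
is_vip = Bool("is_vip")

s = Solver()
s.add(amount >= 0, amount <= 100000)         # domain bounds
s.add(Implies(Not(is_vip), amount <= 1000))  # non-VIP transfer cap
s.add(Implies(is_vip, amount <= 10000))      # VIP transfer cap

if s.check() == sat:
    print("policies are consistent, e.g.:", s.model())
else:
    print("conflict: no valid state satisfies all policies")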

It's currently in Alpha (v0.2). We're working on TLA+ specifications for the dual formal verification engine and the governance architecture, because we believe AI safety needs mathematical rigor.

I'd appreciate any feedback on the DSL syntax and our verification approach.

Comments

aytuakarlar•1h ago
OP here! You can try it out immediately via `pip install csl-core`.

It allows you to verify policies using the CLI without writing any Python code. I'd really appreciate any feedback on the DSL syntax or the verification approach. Thanks!

GahLak•1h ago
This addresses a real pain point—runtime guarantees vs probabilistic hopes. A few questions from someone who's dealt with LLM guardrails in production:

1. How does CSL handle the gap between what an LLM intends to do (based on its reasoning) and what constraints allow? For example, if a policy forbids "database modifications" but an agent legitimately needs to write logs—does the DSL let you express intent-aware exceptions, or do you end up with overly broad rules?

2. Z3 constraint solving can be slow at scale. What's your performance profile when policies are deeply nested or involve many symbolic variables? Have you profiled latency on, say, 100+ concurrent agent requests?

The formal verification angle is solid, but I'd be curious whether you've stress-tested the actual bottleneck: not the policy logic itself, but the interaction between agent reasoning and constraint checking when policies need to be permissive enough to be useful.

aytuakarlar•1h ago
Great questions! Those are exactly the trade-offs we're navigating.

Re: Intent-aware exceptions: CSL uses hierarchical policy composition for this. Here's an example from our banking case study:

DOMAIN BankingGuard {

  VARIABLES {
    action: {"TRANSFER", "WITHDRAW", "DEPOSIT"}
    amount: 0..100000
    country: {"TR", "US", "EU", "NK"}
    is_vip: {"TRUE", "FALSE"}
    kyc_level: 0..5
    risk_score: 0..1
    device_trust: 0..1
  }

  // Hard boundary: never flexible. The tautological guard below
  // (country == country is always true) makes this apply unconditionally.
  STATE_CONSTRAINT no_sanctioned_country {
    WHEN country == country
    THEN country MUST NOT BE "NK"
  }
  
  // Soft boundaries: context-dependent
  STATE_CONSTRAINT transfer_limit_non_vip {
    WHEN action == "TRANSFER" AND is_vip == "FALSE"
    THEN amount <= 1000
  }
  
  STATE_CONSTRAINT transfer_limit_vip {
    WHEN action == "TRANSFER" AND is_vip == "TRUE"
    THEN amount <= 10000
  }
  
  // Multi-dimensional guards (amount + device trust)
  STATE_CONSTRAINT device_trust_for_medium_transfer {
    WHEN action == "TRANSFER" AND amount > 300
    THEN device_trust >= 0.7
  }
}

Variables like is_vip, risk_score, device_trust are injected at runtime by your application logic, not inferred by the LLM. The LangChain integration looks like:

safe_tools = guard_tools(
    tools=[transfer_tool],
    guard=guard,
    inject={
        "is_vip": current_user.tier == "VIP",      # From auth
        "risk_score": fraud_model.score(context),  # From ML model
        "device_trust": session.device_score,      # From fingerprinting
        "country": geoip.lookup(ip),               # From GeoIP lookup
    },
)

So the agent can't "decide" it's VIP or that the device is trusted. Those values come from external systems; the policy just enforces the combinations.

For your database/logging example, you'd add a purpose variable and carve out an exception:

STATE_CONSTRAINT no_user_table_writes {
  WHEN action == "WRITE" AND table == "users"
  THEN purpose MUST BE "AUDIT_LOG"
}

If you don't inject enough context, rules become binary (allow/deny). If you inject too much, the policy becomes a replica of your business logic. We're finding the sweet spot is 6-10 context variables that encode the "security-critical dimensions" (user tier, risk, trust, geography).

Re: Z3 performance: Z3 runs at compile time, not runtime. The workflow is:

1. Policy compilation (once): Z3 proves logical consistency, then generates pure Python functors.

2. Runtime (per request): functor evaluation only, no symbolic solver. A typical policy (<20 constraints) evaluates in <1ms.

We haven't stress-tested 100+ concurrent requests yet (Alpha), but since the runtime is stateless Python, it should scale horizontally. The bottleneck would be the LangChain overhead, not CSL.

Your concern about permissiveness is spot-on. We're addressing this in Phase 2 (TLA+) by adding temporal logic: instead of "block all DB writes," you can express "allow writes if preceded by a read within 5 actions." This gives you state-aware permissions without making rules combinatorially complex. The current Z3 engine is intentionally conservative; TLA+ will add the flexibility production systems need.

Appreciate the pushback—this is exactly the feedback we need at the Alpha stage. If you have a specific use case in mind, I'd love to test CSL against it.
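To make the compile-then-evaluate split concrete, here's a hypothetical sketch of what a generated functor for transfer_limit_non_vip could look like (an assumption for illustration; CSL-Core's actual generated code may differ):

# Hypothetical shape of a compiled functor (assumption, not CSL-Core's
# actual output): plain Python in the hot path, no solver involved.
def transfer_limit_non_vip(state: dict) -> bool:
    # WHEN action == "TRANSFER" AND is_vip == "FALSE" THEN amount <= 1000
    if state["action"] == "TRANSFER" and state["is_vip"] == "FALSE":
        return state["amount"] <= 1000
    return True  # guard didn't fire; constraint is vacuously satisfied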

westurner•1h ago
How does this work with WASM sandboxing for agents?

Should a (formally verified) policy engine run within the same WASM runtime, or should it be enforced by the WASM runtime, or by the VM or Container that the WASM runtime runs within?

"Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents" (2026) https://news.ycombinator.com/item?id=46825026 re: eWASM and costed opcodes for agent efficiency

How do these userspace policies compare to MAC and DAC implementations like SELinux AVC, AppArmor, Systemd SyscallFilter, and seccomp with containers for example?

aytuakarlar•40m ago
Hi westurner, I just spent some time going through the Amla and agentvm threads you linked; fascinating stuff. I see you've been tracking this intersection, specifically around resource isolation.

This is the core of our Phase 3 roadmap. To answer where CSL fits in that stack:

We distinguish between Layer 7 (Business Logic) and Layer 3/4 (Resource Isolation).

Re: WASM Integration

You're right to separate them. Our mental model for the future (Q2) looks like this:

1. Wasmtime/Host (Layer 4) -> Handles "Can I access socket?" (Capabilities)

2. CSL Engine (Layer 7) -> Handles "Should I transfer $10k?" (Policy)

Ideally, CSL acts as a Host Function Guard—basically intercepting tool calls before they hit actual syscalls. We're currently looking at two paths:

OPTION A: Host-Side Enforcement

The host intercepts tool calls. CSL (compiled to WASM) runs in the host context to verify the payload before execution.

Pros: Guest can't bypass. Cons: Policy updates need host restart.

OPTION B: Component Model

Using `wit` interfaces where the agent imports a policy-guard capability.

Pros: It's part of the contract. Cons: More complex to compose.

Starting with A, migrating to B as Component Model matures makes sense to us.
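As a rough sketch of what Option A's host-side interception could look like (illustrative; the guard.check interface here is an assumption, not CSL-Core's published API):

# Rough sketch of Option A, host-side enforcement. The guard.check
# interface is illustrative, not CSL-Core's published API: the host
# validates each tool call against the compiled policy before the
# sandboxed guest's request ever reaches a real tool or syscall.
def host_dispatch(guard, tools, name, payload):
    state = {"action": name, **payload}
    if not guard.check(state):        # evaluates the compiled functors
        raise PermissionError(f"policy denied {name}({payload})")
    return tools[name](**payload)     # only now invoke the real tool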

Re: SELinux/seccomp vs CSL

The example from your Amla work actually illustrates this perfectly. Even with a perfect sandbox, an agent can call `transfer_funds(dest="attacker")` if it has the capability. seccomp can't reason about "attacker" vs "legitimate_user"—it just sees a valid syscall.

- seccomp stops the agent from hacking the kernel

- CSL stops the agent from making bad business decisions

You need both layers for actual safety.
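To illustrate the split with hypothetical names (none of this is a real API): the syscall layer sees only the mechanics of a write or connect, while the semantic layer sees the business payload.

# Hypothetical contrast, illustrative names only. seccomp decides on
# mechanics ("is this syscall allowed?"); a semantic guard decides on
# the payload ("is this recipient legitimate?"), which seccomp cannot see.
APPROVED_RECIPIENTS = {"legitimate_user"}

def semantic_guard(call: dict) -> bool:
    if call["tool"] == "transfer_funds":
        return call["dest"] in APPROVED_RECIPIENTS
    return True  # other tools pass through to lower-layer checks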

Re: eWASM / Costed Opcodes

This is something we're thinking about for resource metering. Treating gas/budget as policy variables:

  WHEN gas_used > 800000
  THEN complexity_score <= 50  // throttle expensive ops

It's closer to metered execution than sandboxing, but fits the same formal verification approach.

Current status:

We're Python-only right now (Alpha v0.2). WASM compilation is Q2-Q3. Planning to:

1. Compile CSL to .wasm (no Python runtime needed)

2. Integrate as Wasmtime host function

3. Expose via Component Model interfaces

If you're open to it, I'd love to pick your brain on the Component Model side when we get there. Your syscall isolation work + our semantic policy layer seem pretty complementary.

Jeffrey Epstein's digital cleanup crew

https://www.theverge.com/report/876081/jeffrey-epstein-files-seo-google-digital-footprint-emails
2•imartin2k•1m ago•0 comments

Real-time Reddit sentiment tracker for stock trading

https://www.wsbsentiment.com/
1•shawnmfarnum•1m ago•1 comments

Trump's War on History

https://www.motherjones.com/politics/2026/02/america-freedom-task-force-250-trump-anniversary-his...
1•leotravis10•1m ago•0 comments

Quitting .NET after 22 years

https://www.thatsoftwaredude.com/content/14253/quitting-dot-net-after-22-years
1•Waltz1•2m ago•0 comments

Is human collaboration the answer to the skill formation risks by AI?

https://www.gethopp.app/blog/pair-prompting
1•iparaskev•6m ago•0 comments

Microsoft Should Watch the Expanse

https://idiallo.com/blog/microsoft-should-watch-the-expanse
1•nomdep•6m ago•0 comments

Show HN: Cosmic CLI – Build, deploy, and manage apps from your terminal with AI

https://github.com/cosmicjs/cli
1•tonyspiro•6m ago•0 comments

AgentLogs: Open-source observability for AI coding agents

https://github.com/agentlogs/agentlogs
1•tosh•7m ago•0 comments

WordCatcher

https://wordwalker.ca/games/word-catcher/
1•petedrinnan•7m ago•0 comments

Breakthrough pancreatic cancer therapy blocks tumor resistance in mice

https://www.pnas.org/doi/10.1073/pnas.2523039122
1•DpdC•8m ago•0 comments

Show HN: Multimodal perception system for real-time conversation

https://raven.tavuslabs.org
2•mert_gerdan•9m ago•1 comments

Heuristics for lab robotics, and where its future may go

https://www.owlposting.com/p/heuristics-for-lab-robotics-and-where
1•abhishaike•10m ago•0 comments

Show HN: Traction – Security readiness framework for scaling SaaS teams

https://traction.fyi
1•ERROR_0x06•11m ago•0 comments

Crossview v3.5.0 – New auth modes (header / none), no DB required for proxy auth

https://github.com/corpobit/crossview
1•moeidheidari•11m ago•1 comments

Show HN: Tasty A.F. – Turn Any Online Recipe into a 3x5 Notecard

https://tastyaf.recipes
1•adammfrank•12m ago•0 comments

Photoswitching for chromocontrol of TRPC4/5 channel functions in live tissues

https://www.nature.com/articles/s41589-025-02085-x
2•PaulHoule•12m ago•0 comments

This feels so reminiscent of the whimsical times in tech

https://www.tryroro.com/code
2•songqipu•14m ago•1 comments

Hello, Dada

https://smallcultfollowing.com/babysteps/blog/2026/02/09/hello-dada/
2•ibobev•14m ago•0 comments

Expectation and Copysets

https://buttondown.com/jaffray/archive/expectation-and-copysets/
2•ibobev•15m ago•0 comments

LLMCode Lab – Compare up to 5 LLMs side-by-side, then fuse the best answers

https://LLMCode.ai
2•cmeshare•15m ago•2 comments

BurgerDisk Tests

https://www.colino.net/wordpress/archives/2026/02/08/burgerdisk-tests/
2•ibobev•15m ago•0 comments

In praise of the dad joke (2023)

https://wit.substack.com/p/the-familiar-patter-of-the-paterfamilias
2•NaOH•16m ago•0 comments

Looking for feedback from someone who hired technical freelancers earlier

2•yusufhgmail•17m ago•0 comments

Update on Update [video]

https://www.youtube.com/watch?v=M-ZLz8Wg34s
2•tosh•17m ago•0 comments

USDA's reputation suffers after revisions in US corn acres

https://www.reuters.com/business/usdas-reputation-suffers-after-massive-revisions-us-corn-acres-2...
3•DustinEchoes•18m ago•0 comments

Updating the Expiring Secure Boot Certificates Is Sure to Go Without a Hitch

https://pcper.com/2026/02/updating-the-expiring-secure-boot-certificates-is-sure-to-go-without-a-...
2•speckx•18m ago•0 comments

'We feel it in our bones': Can a machine ever love you?

https://www.bbc.com/future/article/20260209-can-a-machine-ever-love-you
4•devonnull•20m ago•0 comments

Google hit by European publishers' complaint to EU over AI Overviews

https://www.reuters.com/world/european-publishers-council-files-eu-antitrust-complaint-about-goog...
3•thm•21m ago•0 comments

Writing RSS reader in 80 lines of bash

https://yobibyte.github.io/yr.html
3•sharjeelsayed•21m ago•0 comments

Simulated phishing test f#%k off

https://github.com/orsifrancesco/simulated-phishing-test-list
2•orsifrancesco•21m ago•1 comments