frontpage.

Leaked Chinese military NSCC files: Specific questions of terminal ballistics

https://twitter.com/texhnolyze_d/status/2036750151818055782
1•FrojoS•25s ago•0 comments

Google shoehorned Rust into Pixel 10 modem to make legacy code safer

https://arstechnica.com/gadgets/2026/04/google-shoehorned-rust-into-pixel-10-modem-to-make-legacy...
1•Brajeshwar•51s ago•0 comments

Creating 'new knobs of control' in biology

https://www.owlposting.com/p/on-creating-new-knobs-of-control
1•abhishaike•5m ago•0 comments

Authorization as Data, Not Code

https://www.monsterwriter.com/building-linkedrecords.html
1•WolfOliver•5m ago•0 comments

Show HN: Free browser-based vector tile inspector – paste any MVT/PBF URL

https://marinecharts.io/tools/vector-tile-inspector
1•jarl-ragnar•7m ago•0 comments

ElectricSQL database takeover vulnerability found by AI

https://casco.com/blog/electricsql-order-by-sql-injection
3•brene•8m ago•1 comment

AI Agent Security Vulnerabilities: Risks, Attacks, and Protection Strategies

https://aichatspot.online/ai-agent-security-vulnerabilities-risks-attacks-and-protection-strategies/
1•coinpress•10m ago•0 comments

Microsoft is officially killing its Outlook Lite app next month

https://techcrunch.com/2026/04/13/microsoft-is-officially-killing-its-outlook-lite-app-next-month/
1•gpi•11m ago•0 comments

DuckDB: Friendly SQL

https://duckdb.org/docs/current/sql/dialect/friendly_sql
2•tosh•11m ago•0 comments

DuckDB – The SQLite for Analytics (2020) [video]

https://www.youtube.com/watch?v=PFUZlNQIndo
1•tosh•13m ago•0 comments

Show HN: Bitterbot – A local-first P2P agent mesh with skill trading

https://github.com/Bitterbot-AI/bitterbot-desktop
2•Doug_Bitterbot•13m ago•1 comment

The Sovereign Protocol

https://docs.google.com/document/d/1rsm_2HJQTTs2D5XJwmSYEtzktqsxW7kSRTSmOCkJEEc/edit?usp=drivesdk
1•Actu•14m ago•0 comments

Rubber Dolphy; PoC for FlipperZero BadUSB with Exfiltration Capabilities

https://github.com/carvilsi/rubber-dolphy
1•carvilsi•14m ago•1 comment

Doom over DNS

https://blog.rice.is/post/doom-over-dns/
2•wedemmoez•15m ago•0 comments

IBM's Best Customer

https://pascoe.pw/2026/04/ibm.html
3•pascoej•15m ago•0 comments

Show HN: Game recommender around experience, not genre – here's what emerged

https://slated.gg/map
1•Finnoid•17m ago•0 comments

We've caught a comet switching its spin direction for the first time

https://www.newscientist.com/article/2522785-weve-caught-a-comet-switching-its-spin-direction-for...
2•Brajeshwar•17m ago•0 comments

The AI School Bus Camera Company Blanketing America in Tickets

https://www.bloomberg.com/news/features/2026-04-14/buspatrol-school-bus-traffic-tickets-have-limi...
6•jimt1234•18m ago•1 comment

Roam AI

https://chatgpt.com/g/g-6995e5be83948191a26b6a965a6760bf-roam-ai
1•aaabbbb•19m ago•0 comments

What Happened After Denmark Adopted a Ruined City in Ukraine

https://www.nytimes.com/2026/04/14/world/europe/mykolaiv-ukraine-denmark-rebuilding.html
2•mitchbob•20m ago•1 comment

The Smile Curve Has Come for Software

https://www.edge.ceo/p/the-smile-curve-has-come-for-software
2•rwaliany•22m ago•0 comments

Show HN: Signoff.sh – Claude Co-Authored-By with random fictional characters

https://gist.github.com/Reebz/b3102c6a5de8238d3b60eb63450ee48e
1•Reebz•22m ago•0 comments

Broodlink – Multi-agent AI orchestration built for governance, in Rust

https://broodlink.ai
1•yotta25•23m ago•0 comments

The Future of Everything Is Lies, I Guess: Work

https://aphyr.com/posts/418-the-future-of-everything-is-lies-i-guess-work
19•aphyr•24m ago•10 comments

Building a Grow-Only Counter on a Sequentially Consistent KV Store

https://brunocalza.me/blog/2026/04/13/building-a-grow-only-counter-on-a-sequentially-consistent-k...
1•brunocalza•24m ago•0 comments

I built an autoblogging system that brings me visitors while sleeping

https://www.lazyseo.io/
1•costin07•24m ago•0 comments

Agentic AI pentesting with Strix: results from 18 LLM models

https://theartificialq.github.io/2026/04/14/agentic-ai-pentesting-with-strix-results-from-18-llm-...
1•TheArtificialQ•24m ago•0 comments

Mirror neurons 30 years later: implications and applications

https://www.sciencedirect.com/science/article/pii/S1364661322001346
1•rolph•25m ago•0 comments

There Is No Progress in Philosophy (2011) [pdf]

https://cdn2.psychologytoday.com/assets/There%20Is%20No%20Progress%20in%20Philosophy.pdf
3•the-mitr•26m ago•0 comments

Show HN: Resonly – prioritize feature requests by revenue impact

https://resonly.com/
1•omegascorp•26m ago•0 comments

Show HN: LogicPearl – Synthesizing deterministic executable logic from traces

https://github.com/LogicPearlHQ/logicpearl
3•kenerwin88•3h ago

Comments

kenerwin88•3h ago
TL;DR: Take the inputs and outputs of a system and pass them to LogicPearl, and it will figure out the logic and rules automatically, giving you a deterministic, more easily human-readable executable artifact as a replacement. You can use it to distill the behavior of legacy codebases, or to replace probabilistic LLM prompts with a zero-token, very fast, portable WASM artifact (or a native binary; if someone wants a different format, tell me, but it's still going to be a bitmask underneath).
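
To make the trace-to-rules idea concrete, here's a toy sketch (this is not LogicPearl's actual code; the function and data are invented for illustration): given observed (input, output) pairs, enumerate simple equality predicates and keep the ones that fully determine one outcome.

```python
# Toy illustration of trace -> rule distillation (not LogicPearl's
# actual implementation; names and data are made up).
def distill_rules(traces):
    """traces: list of (features: dict, outcome) pairs.
    Returns equality predicates that perfectly predict one outcome."""
    rules = []
    features = traces[0][0].keys()
    for feat in features:
        values = {f[feat] for f, _ in traces}
        for val in values:
            outcomes = {o for f, o in traces if f[feat] == val}
            if len(outcomes) == 1:  # predicate is fully deterministic
                rules.append((feat, val, outcomes.pop()))
    return rules

traces = [
    ({"tumor": "GBM", "age_ok": True}, "eligible"),
    ({"tumor": "GBM", "age_ok": False}, "ineligible"),
    ({"tumor": "other", "age_ok": True}, "ineligible"),
]
for feat, val, outcome in distill_rules(traces):
    print(f"IF {feat} == {val!r} THEN {outcome}")
```

The real system searches over much richer predicates and optimizes the rule set; this only shows the shape of the problem.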

It works really well for systems where the inputs are known, the possible outcomes are known, and, given an input, you always want the same result AND to know why. That isn't every system, but I think it covers a lot more than I realize.

Oddly, it doesn't depend on AI at all, which is an unusual thing to build at the moment. That said, beyond distilling existing systems, it also works really well when you couple it with an LLM that generates synthetic traces (or ingests data you'd normally store in a RAG system). From those traces, you distill the logic into something deterministic, repeatable, and improvable. The nicest part is that once the logic is saved, you can reuse it without ever calling the LLM again. The behavior itself is stored as very human-readable rules, so if you want to change it, the diff is easy to understand.

It's hard to explain without an example, so here's a real-life use case. One of my friends/mentors was recently diagnosed with a glioblastoma, and over the last month I've learned that cancer is unique per person in a ton of different ways. He had his tumor removed and its genome mapped, and I asked if I could try to help find clinical trials for him. It turns out EVERY clinical trial is available via API (which is amazing!), so I downloaded them: 579,831 trials (counting completed trials, not just open ones) and 67,918 research papers. Normally the next step would be to grep, filter, and write a bunch of code. Instead, I took 25 features (the unique inputs, in this case genomics, demographics, HLA type, methylation type, medications, history, etc.) and ran them through LogicPearl so it could generate the optimal rules for the best outcome given his specific variables. Because of the counterfactuals, for studies that looked ALMOST perfect but didn't match, we could see exactly why (it turns out there are a ton of ways to be disqualified from a study). But it gave him several that DO look like perfect fits, one of which he had only one week left to apply to (it had a cutoff of X days from the surgery date). He told his doctors; none of them had known about it (they had found several other clinical trials), but they agreed it was probably the best one for him to try to get into. Sorry for the long novel, but here's the part that was unique: yes, I had to write the logic to pull all the data down and parse a PDF with his genome data, but to find the optimal rules mapping him to the best trial, I didn't have to do anything other than get the data.
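
The counterfactual part can be pictured with a tiny sketch (field names, criteria, and thresholds here are all invented for illustration, not taken from the real trial data): for a trial that almost matches, report exactly which eligibility predicates failed.

```python
# Sketch of the counterfactual "why didn't this trial match?" idea
# (illustrative only; criteria and thresholds are made up).
def explain_mismatch(patient, criteria):
    """criteria: dict of name -> predicate over the patient record.
    Returns the list of criteria the patient fails."""
    return [name for name, pred in criteria.items() if not pred(patient)]

criteria = {
    "diagnosis is glioblastoma": lambda p: p["diagnosis"] == "GBM",
    "within 30 days of surgery": lambda p: p["days_since_surgery"] <= 30,
    "MGMT methylated": lambda p: p["mgmt_methylated"],
}
patient = {"diagnosis": "GBM", "days_since_surgery": 45, "mgmt_methylated": True}
print(explain_mismatch(patient, criteria))  # the one criterion that disqualifies him
```

That "which predicate failed" list is exactly what let us see why an almost-perfect trial was ruled out.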

Ok, how it works:

Input data (CSV or JSON) -> ingest as decision traces -> infer a feature schema -> generate simple candidate predicates -> score candidates against the observed decisions -> select a compact rule set using greedy and/or solver-backed search (Z3 has worked best so far) -> emit an intermediate representation -> evaluate that artifact deterministically at runtime -> return stable JSON with matched rules and explanation metadata -> support semantic diffs between artifact versions. The rules are stored as a bitmask and evaluated simultaneously (which is why it's often faster than the original system), and the best part is that it tells us exactly which predicates did or didn't flip. That means we get counterfactuals: you don't just get the answer but the why (this is the part that makes it really helpful).
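
The bitmask evaluation step can be sketched like this (my own simplification, not the repo's actual encoding): each input is reduced to a bit vector of predicate truth values, and a rule fires when every bit in its mask is set, so checking a rule is one AND-and-compare, and the missing bits of a failed rule are the counterfactual.

```python
# Sketch of bitmask rule evaluation (illustrative, not LogicPearl's
# actual encoding; predicates and rules are made up).
PREDICATES = [
    lambda x: x["tumor"] == "GBM",            # bit 0
    lambda x: x["age"] >= 18,                 # bit 1
    lambda x: x["days_since_surgery"] <= 30,  # bit 2
]

# Each rule is a mask of required predicate bits plus an outcome.
RULES = [
    (0b111, "trial_A"),  # needs all three predicates
    (0b011, "trial_B"),  # needs bits 0 and 1 only
]

def predicate_bits(x):
    bits = 0
    for i, pred in enumerate(PREDICATES):
        if pred(x):
            bits |= 1 << i
    return bits

def evaluate(x):
    bits = predicate_bits(x)
    matched, why_not = [], {}
    for mask, outcome in RULES:
        if bits & mask == mask:
            matched.append(outcome)
        else:
            # counterfactual: which required bits are missing
            why_not[outcome] = mask & ~bits
    return matched, why_not

m, w = evaluate({"tumor": "GBM", "age": 54, "days_since_surgery": 45})
print(m)  # ['trial_B']
print(w)  # {'trial_A': 4} -> bit 2 (surgery window) is the missing bit
```

Because every rule is checked against the same precomputed bit vector, adding more rules barely costs anything at evaluation time.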

Thank you for reading! Everything is open source (MIT); I'd love any feedback!

danielfromtas•3h ago
I'll be watching the GitHub repo very closely, I love this
kenerwin88•1h ago
Thank you very much!! :) Let me know if you run into any issues when you test it out on anything!
danielfromtas•1h ago
ofc, will do :)