frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Huntarr – Your passwords and your ARR stack's API keys are exposed

https://old.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_s...
1•pavel_lishin•35s ago•0 comments

Bareclaw: Claude Code Is All You Need

https://elliotbonneville.com/claude-code-is-all-you-need/
1•elliotbnvl•45s ago•1 comment

Show HN: Bruce – AI signal radar for Reddit/HN that learns what matters to you

https://smartbruce.com/
1•rklosowski•1m ago•0 comments

The Prisoner's Dilemma: Why Rational Choices Can Lead to the Worst Outcomes

https://twitter.com/Riazi_Cafe_en/status/2025621049082089548
1•ibobev•2m ago•0 comments

We Shouldn't Fight Automation

https://www.update.news/p/why-we-shouldnt-fight-automation
1•StefanSchubert•2m ago•0 comments

First-of-a-kind stem-cell therapies set for approval in Japan

https://www.nature.com/articles/d41586-026-00585-x
1•Brajeshwar•2m ago•0 comments

Bhutan's crypto experiment shows how hard digital money is in the real world

https://restofworld.org/2026/bhutan-bitcoin-tourism-payment-adoption-failure/
1•Brajeshwar•2m ago•0 comments

AI 2027 and the Shrinking of Understanding

https://nader.io/posts/ai-2027/
1•nader•2m ago•0 comments

OpenClaw Meets Healthcare

https://evestel.substack.com/p/how-i-build-my-personal-openclaw
1•brandonb•2m ago•0 comments

I'm a 15-year-old girl. Here's the vile misogyny I face daily on social media

https://www.theguardian.com/commentisfree/2026/feb/23/15-year-old-girl-misogyny-social-media-onli...
1•randycupertino•2m ago•0 comments

Female Reproductive Tract-on-a-Chip for selecting healthier sperm

https://www.nature.com/articles/s41378-026-01165-9
1•TEHERET•2m ago•0 comments

Covert DEI Design Techniques for Earthly Survival in Hostile Contexts

https://dl.acm.org/doi/10.1145/3750069.3755946
1•tokai•3m ago•0 comments

LFM2-24B-A2B: Scaling Up the LFM2 Architecture

https://www.liquid.ai/blog/lfm2-24b-a2b
1•salkahfi•3m ago•0 comments

SQL history lesson with Oracle V2

https://databaseblog.myname.nl/2026/02/some-sql-history-with-oracle-v2.html
1•dveeden2•3m ago•0 comments

Metabolism, not cells or genetics, may have begun life on Earth

https://bigthink.com/starts-with-a-bang/metabolism-begun-life-earth/
1•Brajeshwar•3m ago•0 comments

Walkman.land

https://walkman.land/
1•ohjeez•3m ago•0 comments

Show HN: DoNotify – Google Calendar reminders as phone calls (not notifications)

https://donotifys.com
1•micahele•3m ago•0 comments

There's software, and then there's promptware

https://kelvinfichter.com/pages/thoughts/promptware/
1•kfichter•5m ago•0 comments

EDRi Open Letter: We say no to Big Tech mass snooping on our messages

https://edri.org/our-work/open-letter-we-say-no-to-big-tech-mass-snooping-on-our-messages/
1•robtherobber•6m ago•0 comments

Tim Cook Warned by CIA That China Could Move on Taiwan by 2027

https://www.macrumors.com/2026/02/24/tim-cook-warned-by-cia-china-taiwan-2027/
1•stalfosknight•6m ago•1 comment

IBM stock tumbles 10% after Anthropic launches COBOL AI tool

https://finance.yahoo.com/news/ibm-stock-tumbles-10-anthropic-194042677.html
2•jspdown•8m ago•0 comments

Data center builders thought farmers would willingly sell land, learn otherwise

https://arstechnica.com/tech-policy/2026/02/im-not-for-sale-farmers-refuse-to-take-millions-in-da...
3•stalfosknight•9m ago•0 comments

Towards a Science of AI Agent Reliability

https://arxiv.org/abs/2602.16666
1•smartmic•9m ago•0 comments

How we made Docker builds 193x faster across AI agent sessions

https://blog.helix.ml/p/how-we-made-docker-builds-193x-faster
1•quesobob•11m ago•0 comments

Ask HN: Did your client ever replace you with a more junior freelancer?

1•goingbananas•12m ago•0 comments

Addressing your questions about the Cyber Resilience Act

https://fsfe.org/news/2026/news-20260224-01.html
2•Tomte•12m ago•0 comments

I don't care what tools you use. But – and this is a big but

https://come-from.mad-scientist.club/@algernon/statuses/01KHYGWT17C1HNKRCVBMYTZVHQ
2•latexr•13m ago•0 comments

Show HN: StarkZap – Gasless Bitcoin Payments SDK for TypeScript

https://github.com/keep-starknet-strange/starkzap
1•starkience•13m ago•2 comments

Mercury 2: Diffusion Reasoning Model

https://www.inceptionlabs.ai/blog/introducing-mercury-2
2•zof3•14m ago•0 comments

SpacetimeDB 2.0 [video]

https://www.youtube.com/watch?v=C7gJ_UxVnSk
9•aleasoni•14m ago•1 comment

Show HN: Tessera – An open protocol for AI-to-AI knowledge transfer

https://github.com/incocreativedev/tessera-core
3•kirkmaddocks•1h ago
Tessera is an activation-based protocol that lets trained ML models transfer knowledge to other models across architectures. Instead of dumping weight tensors, it encodes what a model has learnt — activations, feature representations, behavioural patterns — into self-describing tokens that a receiving model can decode into its own architecture.
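For readers unfamiliar with the idea, a "self-describing token" could be as simple as a metadata header plus an activation summary, so any receiver knows how to interpret the payload without seeing the source architecture. This is a hypothetical illustration of the concept, not Tessera's actual wire format (the field names and structure here are invented):

```python
import json
import torch

def make_token(model_name: str, layer: str, acts: torch.Tensor) -> dict:
    """Pack a batch of activations into a self-describing token:
    the metadata header tells the receiver how to decode the payload."""
    summary = acts.mean(dim=0)          # one vector summarising the batch
    return {
        "meta": {
            "source": model_name,       # which model produced this knowledge
            "layer": layer,             # where in the model it came from
            "dim": summary.numel(),     # payload width, so any decoder can sanity-check
        },
        "payload": summary.tolist(),    # JSON-serialisable, portable across stacks
    }

token = make_token("demo-cnn", "conv3", torch.randn(32, 128))
wire = json.dumps(token)                # tokens are plain data on the wire
decoded = json.loads(wire)
```

The point of the metadata block is that decoding never requires knowledge of the sender's weight layout — only the token's own description of itself.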

The reference implementation (tessera-core) is a Python/PyTorch library. Current benchmarks show positive transfer across CNN, Transformer, and LSTM pairs. It runs on CPU and the demo finishes in under 60 seconds.

Happy to answer questions about the protocol design, the wire format, or the benchmark methodology.

Comments

0xecro1•1h ago
Interesting approach. I work in embedded Linux/edge AI where we constantly struggle to move knowledge from large training models down to quantized INT8 models on constrained hardware (ARM Cortex-A class). Have you tested transfer to quantized or pruned targets? If the behavioural encoding survives that compression, this could be a much cleaner path than classical distillation for on-device deployment.
kirkmaddocks•1h ago
We haven't built quantisation-aware transfer yet, but the architecture lends itself to it better than you might expect.

Mode A (activation transfer) operates at the representation level, not the parameter level. The source model's knowledge gets projected into a 2048-dim hub space — the receiving model doesn't need to match architecturally or in precision. A 200M FP32 training model and a 5M INT8 edge model can both have UHS encoders/decoders. The hub space is agnostic to what's underneath.
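Conceptually, Mode A only requires that each model own a projection into and out of the shared hub space. A minimal sketch of that contract — the 2048-dim width comes from the comment above, but the class and layer choices here are assumptions, not the real UHS encoder/decoder API:

```python
import torch
import torch.nn as nn

HUB_DIM = 2048  # shared hub-space width cited in the thread

class HubCodec(nn.Module):
    """Per-model encoder/decoder pair; the hub space is the only
    shared contract between otherwise unrelated architectures."""
    def __init__(self, native_dim: int):
        super().__init__()
        self.enc = nn.Linear(native_dim, HUB_DIM)   # native features -> hub
        self.dec = nn.Linear(HUB_DIM, native_dim)   # hub -> native features

big = HubCodec(native_dim=1024)   # stand-in for a large FP32 training model
edge = HubCodec(native_dim=64)    # stand-in for a small edge model

feats = torch.randn(8, 1024)
hub_vec = big.enc(feats)          # source projects its knowledge into the hub
received = edge.dec(hub_vec)      # any model with a decoder can pull it back out
```

Neither codec ever sees the other's parameters or precision, which is what makes the hub agnostic to what's underneath.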

Mode B (behavioural) is probably the most interesting path for your use case. It transfers decision boundaries rather than activations or weights. If the quantised model can reproduce the input-output mapping, internal precision is irrelevant.
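A behavioural-transfer objective can be sketched as matching the teacher's input-output mapping directly, in the style of classical distillation — internal precision never appears in the loss. This is an illustrative loss, not Tessera's Mode B implementation:

```python
import torch
import torch.nn.functional as F

def behavioural_loss(teacher_logits, student_logits, temp: float = 2.0):
    """Penalise the student for disagreeing with the teacher's
    input->output mapping. Only logits are compared, so an INT8
    student is scored on behaviour, never on internal representation."""
    t = F.softmax(teacher_logits / temp, dim=-1)
    s = F.log_softmax(student_logits / temp, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temp ** 2

teacher = torch.randn(16, 10)
loss_same = behavioural_loss(teacher, teacher)            # identical behaviour -> ~0
loss_diff = behavioural_loss(teacher, torch.randn(16, 10))  # divergent behaviour -> positive
```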

It's similar in spirit to distillation but decoupled through the hub space — teacher and student don't need to be online simultaneously, and you get a full audit trail of what knowledge went where (which matters if you're shipping medical/industrial edge models under EU AI Act).

The gap today is the decoder side. DecoderMLP outputs FP32. We'd need a quantisation-aware variant that respects the INT8 grid — straight-through estimator at minimum, learned quantisation boundaries ideally. We'd also want empirical drift characterisation across FP32→FP16→INT8→INT4 so you'd know your expected fidelity floor for a given target.
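The straight-through estimator mentioned above is a standard trick: quantise in the forward pass, but let gradients flow through as if the rounding never happened. A minimal sketch for an INT8 grid (the scale value is arbitrary; a real quantisation-aware decoder would learn or calibrate it):

```python
import torch

def fake_quant_int8(x: torch.Tensor, scale: float) -> torch.Tensor:
    """Forward: snap values to the INT8 grid (256 levels at `scale` spacing).
    Backward: straight-through — the rounding detour is detached, so
    d(out)/d(x) = 1 and training signal passes through unchanged."""
    q = torch.clamp(torch.round(x / scale), -128, 127) * scale
    return x + (q - x).detach()

x = torch.randn(4, requires_grad=True)
y = fake_quant_int8(x, scale=0.05)   # values now lie on the INT8 grid
y.sum().backward()                   # but gradients behave as identity
```

Swapping the hard rounding for learned quantisation boundaries would keep the same backward structure while letting the grid itself adapt.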

The swarm angle is where it gets genuinely useful for edge fleets. If you've got N devices training locally on on-site data, they contribute quantised-model tokens back to a full-precision aggregator. The robust aggregation strategy (Huber-style cosine clipping) handles quantisation noise across heterogeneous devices naturally.
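The Huber-style cosine clipping can be sketched as weighting each device's contribution by how well its direction agrees with a reference: full weight inside a trust region, scaled-down weight outside it. This is an illustration of the general idea with assumed parameters, not Tessera's actual aggregation rule:

```python
import torch

def robust_aggregate(updates: list[torch.Tensor], tau: float = 0.5) -> torch.Tensor:
    """Huber-style cosine clipping: updates well-aligned with the
    reference direction (cosine >= tau) get full weight; misaligned
    ones (e.g. heavy quantisation noise) are scaled down or zeroed."""
    ref = torch.stack(updates).mean(dim=0)
    ref = ref / (ref.norm() + 1e-8)                      # reference direction
    out = torch.zeros_like(updates[0])
    total = 0.0
    for u in updates:
        cos = torch.dot(u / (u.norm() + 1e-8), ref).item()
        w = 1.0 if cos >= tau else max(cos, 0.0) / tau   # clip low-agreement updates
        out += w * u
        total += w
    return out / max(total, 1e-8)

good = [torch.tensor([1.0, 0.0]) + 0.01 * torch.randn(2) for _ in range(5)]
noisy = [torch.tensor([-1.0, 0.0])]          # an outlier pointing the wrong way
agg = robust_aggregate(good + noisy)         # outlier contributes ~zero weight
```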

We're planning a quantisation-aware transfer module next. If you're interested in testing against real Cortex-A INT8 workloads, we'd welcome the collaboration — repo is at github.com/incocreativedev/tessera-core.