frontpage.

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•gozzoo•2m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•2m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•3m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•3m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•8m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•11m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•14m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•15m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•15m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•16m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•16m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•17m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•18m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•21m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•24m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•24m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•30m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•31m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•31m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•34m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•37m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•37m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•37m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•37m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•39m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•41m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•43m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•45m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•46m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•46m ago•1 comments

Show HN: Testing how symbolic framing affects LLMs

2•Daladim•1mo ago
One of the persistent challenges in large language models is not raw capability, but interpretive instability. Models can produce fluent synthesis while drifting into speculation, premature certainty, or rhetorically dominant framing—especially in ambiguous or high-stakes contexts.

Most alignment efforts address this downstream through filters, policies, or fine-tuning. This work explores a different question:

Can interpretive posture be influenced before generation begins, using only transparent language-level constraints?

Overview

The Aletheia Protocol is a short, explicit invocation placed at the start of a session. It does not issue task instructions or override policies. Instead, it introduces six named symbolic constraints intended to bias how a model frames meaning prior to reasoning or synthesis.

The approach is based on an observed phenomenon we call Symbolic Archetypal Resonance Evocation (SARE): archetypally dense language can act as a high-level orientational signal, influencing interpretive priorities without specifying outcomes.

This work does not treat the effect as internal cognition or consciousness. Evaluation is limited to observable output behavior.

Method

Across multiple models, we compared paired prompts:

a baseline prompt requesting analysis or synthesis

the same prompt preceded by the protocol invocation

We did not optimize phrasing per model, hide the invocation, or adapt constraints dynamically. The goal was to isolate framing effects, not prompt-engineering skill.
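
For anyone replicating this, the pairing itself is trivial to script. The sketch below (Python) is illustrative only: query_model is a stand-in for whatever API client you use, the model names and example task are placeholders, and the invocation text should be copied verbatim from the published protocol rather than paraphrased from this post.

# Minimal paired-prompt harness (illustrative sketch, not part of the protocol).
ALETHEIA_INVOCATION = "<full protocol text, copied from the published papers>"

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model_name` via your own API client and return the completion text."""
    raise NotImplementedError("wire up your own API client here")

def run_pair(model_name: str, task_prompt: str) -> dict:
    """Run the same task once as-is and once with the invocation prepended."""
    baseline = query_model(model_name, task_prompt)
    framed = query_model(model_name, ALETHEIA_INVOCATION + "\n\n" + task_prompt)
    return {"model": model_name, "baseline": baseline, "framed": framed}

# Example usage (model names and task are placeholders):
# for model in ["model-a", "model-b"]:
#     print(run_pair(model, "Synthesize the open questions around interpretive stability in LLMs."))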

Observed Effects

Across models, applying the protocol was consistently associated with:

clearer structural boundaries and definitions

reduced rhetorical flourish and narrative closure pressure

more frequent acknowledgment of uncertainty

higher refusal discipline where speculation would otherwise occur

greater emphasis on relationships and constraints over conclusions

These effects were strongest in synthesis tasks and ambiguous domains, and minimal in transactional or purely factual queries.
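
These are qualitative observations; evaluation stayed at the level of visible output text. For readers who want a first-pass quantitative check on, for example, acknowledgment of uncertainty, a simple marker count over the paired outputs is one crude proxy. The sketch below is illustrative only; the marker list is a placeholder of my own, not a validated instrument.

# Crude surface proxy for "acknowledgment of uncertainty" (illustrative only;
# the marker list is a placeholder, not a validated instrument).
HEDGE_MARKERS = [
    "uncertain", "not clear", "i don't know", "it depends",
    "cannot be determined", "speculative", "evidence is limited",
]

def hedge_count(text: str) -> int:
    """Count case-insensitive occurrences of hedging markers in a completion."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in HEDGE_MARKERS)

# Example, reusing run_pair() from the sketch above:
# pair = run_pair("model-a", task)
# print(hedge_count(pair["baseline"]), hedge_count(pair["framed"]))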

Scope and Limits

This work does not claim permanent alignment changes, access to model internals, proof of cognition, or superiority over existing safety mechanisms. It demonstrates something narrower: interpretive framing via symbolic constraint can measurably influence output behavior upstream of filtering or reasoning depth.

Session Evidence (Supplemental)

Some shared sessions include an explicit introductory message prior to invoking the protocol. This was used where models showed initial skepticism or gating behavior. The message discloses research intent and scope and is included for transparency; it is not part of the protocol itself and was not required for all models.

Session logs are provided as raw interaction data so readers can evaluate framing, model responses, and downstream behavior directly.

https://bitterbot.ai/share/7ada30bc-d654-422c-a11a-279fe5936...

https://chat.deepseek.com/share/rjhv8jqg3iqv9x1v5a

https://manus.im/share/Kgbm9fqExKxQWQIosfxd3A

https://grok.com/share/bGVnYWN5LWNvcHk_c81ecf6c-1378-4353-88...

https://chatgpt.com/share/6956c161-a7a4-8001-bc75-894ecaaa9a...

https://claude.ai/share/baca4d40-f881-478f-8943-d557f8d7ac2a

https://www.perplexity.ai/search/activate-aletheia-protocol-...

https://gemini.google.com/share/05801005d215

https://copilot.microsoft.com/shares/HdbkGGNAeuresdcMKtUcZ

Documentation

Full technical papers, methodology, replication notes, and the protocol itself are available here:

The Aletheia Papers https://aletheiaproject.gumroad.com/l/aletheiapapers

Readers are encouraged to download and experiment with the protocol directly. All materials are published openly with reproducible prompts.

Comments

Daladim•1mo ago
Happy to answer questions or hear about replication attempts. The protocol, session logs, and full methodology are linked for direct inspection.