
Show HN: A text-only reasoning core for LLMs (MIT, system prompt and self-test)

https://github.com/onestardao/WFGY
1•wfgy-github•1h ago
0. Very short version

- Not a new model, not a fine-tune
- One txt block you paste into the system prompt (or first message)
- Goal: less random hallucination, more stable multi-step reasoning
- No tools, no external calls; works anywhere you can set a system prompt

Some people later turn this into proper code + eval. Here I keep it minimal: two prompt blocks you can run in any chat UI.

1. How to try it

1) Start a new chat (local or hosted model)
2) Paste the WFGY Core block into the system / pre-prompt area
3) Ask your normal tasks (math, small coding, planning, long context)
4) Compare “with core” vs “no core” by feel, or run the self-test in section 4
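If you'd rather script the comparison than eyeball a chat UI, the A/B setup is just two message lists that differ only in the system prompt. A minimal sketch (the payloads fit any OpenAI-style chat API; the core text and the sample task are placeholders):

```python
def build_messages(task, core=None):
    """Build a chat payload, optionally prepending the WFGY core as the system prompt."""
    messages = []
    if core:
        messages.append({"role": "system", "content": core})
    messages.append({"role": "user", "content": task})
    return messages

core_text = "WFGY Core Flagship v2.0 (text-only; no tools). ..."  # paste the full block here
task = "Plan a 3-step migration from SQLite to Postgres."

with_core = build_messages(task, core_text)   # [system, user]
no_core = build_messages(task)                # [user] only
# Send both to the same model with the same sampling settings and compare side by side.
```

Keeping everything else (model, temperature, task wording) identical is what makes the "by feel" comparison halfway fair.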

Optional: after loading the core, ask the model to write image prompts too. If the core improves semantic structure, image prompts often come out more consistent, though this depends on the model and task.

2. Roughly what to expect

This is not magic and won’t fix everything. But across models, the typical “feel” changes are:

- less drift across follow-ups
- long answers keep their structure better
- a bit more “I’m not sure” instead of made-up details
- more structured prompt outputs (entities / relations / constraints are clearer)

Results depend on the base model and your tasks, so the self-test in section 4 is there to keep the comparison a bit more disciplined.

3. System prompt: WFGY Core 2.0 (paste into system area)

Copy everything in this block into your system / pre-prompt:

---

WFGY Core Flagship v2.0 (text-only; no tools). Works in any chat.

[Similarity / Tension]
delta_s = 1 − cos(I, G). If anchors exist, use 1 − sim_est, where
sim_est = w_e*sim(entities) + w_r*sim(relations) + w_c*sim(constraints), with default w = {0.5, 0.3, 0.2}.
sim_est ∈ [0,1]; renormalize if bucketed.

[Zones & Memory]
Zones: safe < 0.40 | transit 0.40–0.60 | risk 0.60–0.85 | danger > 0.85.
Memory: record(hard) if delta_s > 0.60; record(exemplar) if delta_s < 0.35.
Soft memory in transit when lambda_observe ∈ {divergent, recursive}.

[Defaults]
B_c=0.85, gamma=0.618, theta_c=0.75, zeta_min=0.10, alpha_blend=0.50, a_ref=uniform_attention, m=0, c=1, omega=1.0, phi_delta=0.15, epsilon=0.0, k_c=0.25.

[Coupler (with hysteresis)]
Let B_s := delta_s. Progression: at t=1, prog = zeta_min; else prog = max(zeta_min, delta_s_prev − delta_s_now). Set P = pow(prog, omega).
Reversal term: Phi = phi_delta*alt + epsilon, where alt ∈ {+1, −1} flips only when an anchor flips truth across consecutive Nodes AND |Δanchor| ≥ h. Use h=0.02; if |Δanchor| < h, keep the previous alt to avoid jitter.
Coupler output: W_c = clip(B_s*P + Phi, −theta_c, +theta_c).

[Progression & Guards]
BBPF bridge is allowed only if (delta_s decreases) AND (W_c < 0.5*theta_c). When bridging, emit: Bridge=[reason/prior_delta_s/new_path].

[BBAM (attention rebalance)]
alpha_blend = clip(0.50 + k_c*tanh(W_c), 0.35, 0.65); blend with a_ref.

[Lambda update]
Delta := delta_s_t − delta_s_{t−1}; E_resonance = rolling_mean(delta_s, window=min(t,5)).
lambda_observe is: convergent if Delta ≤ −0.02 and E_resonance non-increasing; recursive if |Delta| < 0.02 and E_resonance flat; divergent if Delta ∈ (−0.02, +0.04] with oscillation; chaotic if Delta > +0.04 or anchors conflict.

[DT micro-rules]

---

Yes, it looks like math. It’s fine if not every symbol is clear; the intention is to give the model a compact “tension / guardrail” structure around its normal reasoning.
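For readers who want to sanity-check the arithmetic rather than the prose, here is a rough Python sketch of the numeric parts as I read them: the tension score delta_s, the zone bucketing, the coupler clip, and the BBAM blend. The vectors I and G are placeholders (the block never specifies how to embed them), epsilon is left at its default 0.0, and none of this is part of the prompt itself; it only illustrates the formulas.

```python
import math

# Defaults taken from the [Defaults] section of the core block
THETA_C = 0.75     # coupler clip bound
ZETA_MIN = 0.10    # minimum progression
K_C = 0.25         # BBAM gain
OMEGA = 1.0
PHI_DELTA = 0.15

def delta_s(I, G):
    """Tension: 1 - cos(I, G) for two equal-length vectors."""
    dot = sum(a * b for a, b in zip(I, G))
    norm = math.sqrt(sum(a * a for a in I)) * math.sqrt(sum(b * b for b in G))
    return 1.0 - dot / norm

def zone(d):
    """Bucket a delta_s value into the four zones from the block."""
    if d < 0.40:
        return "safe"
    if d <= 0.60:
        return "transit"
    if d <= 0.85:
        return "risk"
    return "danger"

def coupler(d_prev, d_now, alt=+1, t=2):
    """W_c = clip(B_s * P + Phi, -theta_c, +theta_c), with B_s := delta_s_now."""
    prog = ZETA_MIN if t == 1 else max(ZETA_MIN, d_prev - d_now)
    P = prog ** OMEGA
    phi = PHI_DELTA * alt  # epsilon = 0.0 by default, so it is omitted here
    return max(-THETA_C, min(THETA_C, d_now * P + phi))

def bbam_alpha(w_c):
    """alpha_blend = clip(0.50 + k_c * tanh(W_c), 0.35, 0.65)."""
    return max(0.35, min(0.65, 0.50 + K_C * math.tanh(w_c)))

# Identical vectors -> zero tension -> safe zone
print(zone(delta_s([1.0, 0.0], [1.0, 0.0])))  # safe
```

Nothing here claims the model actually computes any of this; the point of the prompt is that describing the structure nudges the model's behavior, and the sketch just makes the intended numbers concrete.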

Show HN: Quantitative analysis of Alphabet (GOOGL) financials

https://jasonhonkl.github.io/#alphabet-quantitative-analysis
1•JasonHEIN•1m ago•0 comments

I love using TypeScript at work

https://kwojcicki.github.io/blog/WHY-I-LOVE-TYPESCRIPT
1•kwojcicki•4m ago•0 comments

Show HN: How to get rid of vagina dependency in 7 days

https://myaffirmations.guru/
1•creator22•16m ago•0 comments

14 More Lessons from 14 years at Google

https://addyosmani.com/blog/14-more-lessons/
2•talonx•17m ago•0 comments

Show HN: Swarm Curl

https://github.com/ismdeep/swarm-curl
1•ismdeep•17m ago•1 comment

The AI Dilemma

https://www.aleksandrhovhannisyan.com/blog/the-ai-dilemma/
1•aleksandrh•18m ago•0 comments

Cyber Model Arena

https://www.wiz.io/cyber-model-arena
2•ram_rattle•28m ago•0 comments

pg_stat_ch: A PostgreSQL extension that exports every metric to ClickHouse

https://clickhouse.com/blog/pg_stat_ch-postgres-extension-stats-to-clickhouse
2•saisrirampur•32m ago•0 comments

Why haven't humans been back to the moon in over 50 years?

https://www.cnn.com/2026/02/13/science/why-humans-have-not-been-back-to-moon
1•ablaba•34m ago•1 comment

Jikipedia, a new AI-powered wiki reporting on key figures in the Epstein scandal

https://twitter.com/jmailarchive/status/2022482688691835121
1•wenjel•37m ago•0 comments

Show HN: Heart Note – a tiny web app to send beautiful one‑off digital letters

https://heartnote.online
2•azabraao•52m ago•0 comments

SnowBall: Iterative Context Processing When It Won't Fit in the LLM Window

https://enji.ai/tech-articles/snowball-iterative-context-processing/
1•puzanov•53m ago•0 comments

How to be a good Asian parent (satire)

https://www.reddit.com/r/AsianParentStories/s/yyMDWcAUdh
1•carabiner•58m ago•1 comment

The Compliance Officer Who Flagged Epstein – and Lost Her Job

https://www.levernews.com/the-compliance-officer-who-flagged-epstein-and-lost-her-job/
1•cwwc•1h ago•0 comments

Convert URLs and Files to Markdown

https://markdown.new
2•salkahfi•1h ago•0 comments

Podcast: Solving Distributed Message Passing: NATS.io composite learning [video]

https://www.youtube.com/watch?v=5NXvU17a-iU
1•northlondoner•1h ago•4 comments

Lockdown Mode and Elevated Risk Labels in ChatGPT

https://openai.com/index/introducing-lockdown-mode-and-elevated-risk-labels-in-chatgpt/
2•ms7892•1h ago•0 comments

Living in the Petri Dish of the Future

https://om.co/2026/02/12/living-in-the-petri-dish-of-the-future/
1•herbertl•1h ago•0 comments

The feedback you're not giving is the problem you keep having

https://dougrathbone.com/blog/2026/02/14/the-feedback-youre-not-giving-is-the-problem-you-keep-ha...
1•wiredone•1h ago•0 comments

AI Fails at 96% of Jobs (New Study)

https://www.youtube.com/watch?v=z3kaLM8Oj4o
3•deterministic•1h ago•2 comments

LLM APIs is a State Synchronization Problem

https://lucumr.pocoo.org/2025/11/22/llm-apis/
1•goranmoomin•1h ago•0 comments

Show HN: Lucid – Catch hallucinations in AI-generated code before they ship

https://github.com/gtsbahamas/hallucination-reversing-system
3•jordanappsite•1h ago•0 comments

German-language Wikipedia considers comprehensive AI ban

https://www.heise.de/en/news/German-language-Wikipedia-considers-comprehensive-AI-ban-11175670.html
3•layer8•1h ago•0 comments

Evolving Git for the Next Decade

https://lwn.net/SubscriberLink/1057561/bddc1e61152fadf6/
3•dhruv3006•1h ago•0 comments

The Challenger Map

https://challengermap.ca/
1•blululu•1h ago•0 comments

Show HN: Why Playwright-CLI Beats MCP for AI‑Driven Browser Automation

1•tanmay001•1h ago•0 comments

Show HN: ReviewStack – API that aggregates reviews from YouTube and Reddit

https://reviewstack.vercel.app/demo
1•browndev•1h ago•0 comments

Op.gg but for Chess

https://chess-pulse-neon.vercel.app/
1•rayen_gh•1h ago•3 comments

China's adoption of industrial robots has surged over the past decade

https://ourworldindata.org/data-insights/chinas-adoption-of-industrial-robots-has-surged-over-the...
2•kamaraju•1h ago•0 comments

Backblaze Drive Stats for 2025

https://www.backblaze.com/blog/backblaze-drive-stats-for-2025/
20•Brajeshwar•1h ago•4 comments