
Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•4m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•6m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•11m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
1•mooreds•11m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•12m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•14m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•18m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•20m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•20m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•21m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•23m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•23m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•24m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•24m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•29m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•31m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•31m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•33m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•34m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•34m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•36m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•37m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•37m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•37m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•39m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•41m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•41m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•42m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•43m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
2•mooreds•43m ago•0 comments

Show HN: WFGY – A reasoning engine that repairs LLM logic without retraining

https://github.com/onestardao/WFGY
11•WFGY•7mo ago
WFGY introduces a PDF-based semantic protocol designed to correct projection collapse, contradiction loops, and ambiguous inference chains in LLMs.

No retraining. No system calls. When parsed, the logic patterns alter reasoning trajectories directly.

Prompt evaluation benchmarks show: ‣ +42.1% reasoning success ‣ +22.4% semantic alignment ‣ 3.6× stability in interpretive tasks

The repo contains formal theory, prompt suites, and reproducible results. Zero dependencies. Fully open-source.

Feedback from those working in alignment, interpretability, and logic-based scaffolding would be especially valuable.

Comments

ultimateking•7mo ago
Skimmed through it briefly — seems like a lot of thought went into the structure. Downloaded the PDF, will give it a deeper read tonight.
WFGY•7mo ago
Thanks for the reply, hope you enjoy it!
ultimateking•7mo ago
I went through the structure and found the semantic correction idea pretty intriguing.

Can you explain a bit more about how WFGY actually achieves such improvements in reasoning and stability? Specifically, what makes it different from just engineering better prompts or using more advanced LLMs?

WFGY•7mo ago
Great question—and I totally get the skepticism. WFGY isn’t just another prompt hack, and it’s definitely not about making the prompts longer or more “creative.” Here’s the real trick:

- It’s a logic protocol, not just words: The core of WFGY is a semantic “kernel” (documented as a PDF protocol) that inserts logic checks into the model’s reasoning process. Every major step—like inference, contradiction detection, or “projection collapse”—is made explicit and then evaluated by the LLM itself.

- Why not just use a bigger model? Even top-tier models like GPT-4 or Llama-3 are surprisingly easy to derail with ambiguity, loops, or context drift—especially on complex reasoning. WFGY gives you a portable, model-agnostic way to stabilize any model’s outputs by structuring the logic path directly in the prompt.

- Empirical results, not just vibes: On standard tasks, we saw over 40% improvement in multi-hop reasoning and a big drop in contradictions and instability—even when running on smaller models. All evaluation code and sample runs are included, so you can check or replicate the claims.

So the big difference: WFGY makes “meaning” and logical repair part of the prompt process itself, rather than hoping the model guesses right. If you’re curious about specific edge cases or want to try it on your own workflow, happy to walk you through it!
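As a rough illustration of the scaffolding idea described above, here is a minimal sketch of wrapping a question with explicit logic-check steps before sending it to a model. The step names and the `scaffold_prompt` helper are hypothetical, invented for this example; they are not taken from the WFGY protocol itself.

```python
# Hypothetical sketch: force the model through explicit reasoning checks
# (premises, inference steps, contradiction scan, consistency pass) by
# embedding them in the prompt, rather than hoping it reasons soundly.
LOGIC_STEPS = [
    "State the premises you are relying on.",
    "Derive each inference step explicitly, one per line.",
    "Scan the steps for contradictions; flag and repair any you find.",
    "Re-read the question and confirm the conclusion still answers it.",
]

def scaffold_prompt(question: str) -> str:
    """Build a prompt that makes each reasoning step explicit."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(LOGIC_STEPS, 1))
    return (
        f"Question: {question}\n\n"
        f"Before answering, work through these checks:\n{steps}\n\n"
        "Then give the final answer on its own line."
    )

print(scaffold_prompt(
    "If all A are B and some B are C, are some A necessarily C?"
))
```

The point of the pattern is that the model must emit its intermediate steps, so failures like contradiction loops become visible (and correctable) in the output instead of silently producing a wrong answer.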
ultimateking•7mo ago
Great information!
WFGY•7mo ago
If anyone has questions, you're welcome to ask here. I'm happy to answer them.