frontpage.
Curate Your Reddit Profile Content with New Controls

https://old.reddit.com/r/reddit/comments/1l2hl4l/curate_your_reddit_profile_content_with_new/
1•goda90•46s ago•0 comments

Big Problems from Big IN Lists with Ruby on Rails and PostgreSQL

https://andyatkinson.com/big-problems-big-in-clauses-postgresql-ruby-on-rails
1•andatki•3m ago•0 comments

America tried to ban fake photos in 1912

https://www.freethink.com/the-digital-frontier/fake-photo-ban-1912
1•mdp2021•8m ago•0 comments

The West fears AI's threat to jobs. In Japan, it might save them

https://asia.nikkei.com/Opinion/The-West-fears-AI-s-threat-to-jobs.-In-Japan-it-might-save-them
1•e2e4•8m ago•1 comments

Training Dogs to Vibe Code

https://dogomation.com/
1•jimhi•8m ago•0 comments

Displaying Overlays from Scripts in Linux

https://blog.georgovassilis.com/2025/06/04/screen-overlays-in-linux/
1•ggeorgovassilis•9m ago•0 comments

Should Anonymous Accounts Have the Right to Go Viral?

2•Hnizdovskyi•10m ago•0 comments

Looking for a European alternative to GitHub? Look no further than Git itself

http://mikhailian.mova.org/node/305
1•sam_lowry_•11m ago•0 comments

Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces

https://arxiv.org/abs/2504.09762
1•felineflock•11m ago•0 comments

HPE Superdome

https://en.wikipedia.org/wiki/HPE_Superdome
1•fidotron•11m ago•0 comments

First AI model with Multi Stage thinking

https://drive.google.com/drive/folders/1o8z_EwHKd3yxSQ4HcUqBm6_B7AMvg36F?usp=sharing
1•VarunGuptaHAI•13m ago•0 comments

What's Next for AI and Math

https://www.technologyreview.com/2025/06/04/1117753/whats-next-for-ai-and-math/
1•mdp2021•14m ago•0 comments

Large Processor Chip Model

https://arxiv.org/abs/2506.02929
1•anticensor•14m ago•0 comments

We wash our trash to repent for killing God

https://world.hey.com/dhh/we-wash-our-trash-to-repent-for-killing-god-d1c823bd
2•decimalenough•14m ago•1 comments

Industry welcomes first wave of pensioner gamers

https://www.bbc.com/news/articles/c861egvqlzjo
1•Sikara•15m ago•0 comments

Falsehoods programmers believe about authorization

https://www.osohq.com/post/falsehoods-about-authorization
1•RobSpectre•15m ago•0 comments

Globalization explained through the Nintendo Switch 2

https://english.elpais.com/economy-and-business/2025-06-03/from-mario-to-the-barrio-globalization-explained-through-the-nintendo-switch-2.html
1•geox•15m ago•0 comments

Why I'm excited about Go for agents

https://docs.hatchet.run/blog/go-agents
1•abelanger•16m ago•0 comments

Log Your Entire ZSH History

https://spin.atomicobject.com/log-your-zsh-history/
1•ingve•17m ago•0 comments

Muon g-2 announces most precise measurement of the magnetic anomaly of the muon

https://news.fnal.gov/2025/06/muon-g-2-most-precise-measurement-of-muon-magnetic-anomaly/
1•thunderbong•17m ago•0 comments

Major US retailers cancel Nintendo Switch 2 pre-orders

https://www.gamesindustry.biz/major-us-retailers-cancel-nintendo-switch-2-pre-orders
1•bookofjoe•17m ago•0 comments

Ask HN: Will AI replace data scientists?

1•01-_-•21m ago•0 comments

Pornhub Is Pulling Out of France

https://gizmodo.com/pornhub-is-pulling-out-of-france-2000610486
1•01-_-•23m ago•1 comments

Fastest Flash memory developed: writes in just 400 picoseconds

https://www.tomshardware.com/pc-components/storage/worlds-fastest-flash-memory-developed-writes-in-just-400-picoseconds
2•rbanffy•24m ago•0 comments

FundingBox

https://fundingbox.com/
1•belter•24m ago•0 comments

CRIF scores everyone in Austria. noyb needs support for a class action lawsuit

https://noyb.eu/en/crif-scores-almost-everyone-austria-noyb-needs-support-potential-class-action-lawsuit
2•latexr•27m ago•0 comments

False authorship: An AI-generated article was published under my name

https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-025-00165-z
1•DSpinellis•27m ago•0 comments

Why mathematical identity is important for students' motivation and competence

https://www.uv.uio.no/ils/english/research/news-and-events/news/2025/why-is-mathematical-identity-important-for-student.html
1•1659447091•32m ago•1 comments

Breaking Changes – Upgrading Dovecot 2.3 to 2.4 in Debian Stable

https://willem.com/blog/2025-06-04_breaking-changes/
2•1317•32m ago•0 comments

New study maps the fishmeal factories that supply the fish farms

https://news.mongabay.com/2025/05/new-study-maps-the-fishmeal-factories-that-supply-the-worlds-fish-farms/
1•PaulHoule•33m ago•0 comments

Ask HN: Using GPT as a logic circuit instead of a text generator – Anyone tried?

2•GENIXUS•1d ago
A few days ago, I shared an early experiment called Resonant Seed Ver.0, which aimed to activate LLMs through semantic field resonance instead of prompt-based logic. Building on that, I’ve been testing a hash-driven, deterministic decision system.

As an independent researcher new to AI, I’ve been exploring how GPT can behave not as a generator, but as a structure-bound judgment interpreter.

---

Concept: Hash-Based Judgment Simulation

Instead of sending open text, I supply a core_ref hash that points to a predefined decision structure. In Core Ver.1, the structure includes condition, judgment, and action. It does not support nested sub_nodes.
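
For concreteness, here is an illustrative sketch of how such a pointer can be produced. It treats core_ref as simply the SHA-256 digest of the canonically serialized structure, which is an assumption made for the example rather than the exact derivation:

    import hashlib
    import json

    # Core Ver.1: a single flat node (no nested sub_nodes).
    core_node = {
        "condition": "input.type == 'report' && input.score < 0.7",
        "judgment": "flag as risky",
        "action": "send_to_router('audit')",
    }

    # Canonical serialization so the same structure always hashes to the same core_ref.
    canonical = json.dumps(core_node, sort_keys=True, separators=(",", ":"))
    core_ref = hashlib.sha256(canonical.encode("utf-8")).hexdigest()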

The payload is encrypted using AES-256-GCM and marked as interpretation: disabled, meaning GPT cannot access or interpret it. All execution occurs externally (e.g., via FastTrack or Insight Router). GPT performs structural simulation only—never execution.
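
A minimal sketch of that envelope, using the Python cryptography package; the field names and layout here are illustrative, not the exact format:

    import json
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    structure = {
        "condition": "input.type == 'report' && input.score < 0.7",
        "judgment": "flag as risky",
        "action": "send_to_router('audit')",
    }

    key = AESGCM.generate_key(bit_length=256)   # held only by the external executor
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(structure).encode("utf-8"), None)

    # What travels with the prompt: the model sees only opaque bytes plus flags.
    envelope = {
        "core_ref": "<sha-256 of the canonical structure>",
        "interpretation": "disabled",
        "nonce": nonce.hex(),
        "payload": ciphertext.hex(),
    }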

---

Why This Approach?

Prompt-based output is unstable and non-reproducible. I wanted to control judgment logic—not model behavior. Using core_ref hashes guarantees reproducible, versioned behavior.

This reframes GPT from: “a brain reacting to text” → “a circuit executing conditional logic”

System Activation and core_ref

To guide GPT into structural interpretation, I include this hash:

core_ref="bf279c7c61d9d3805ba637206da65a3659ef23f81615b4740f8628a85a55db93"

It references Generate Core System Ver.1: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469738f77498ea4

The structure is immutable and evaluation-only. While including a core_ref does not disable GPT’s generative behavior by itself, structured input can steer GPT to behave like a judgment interpreter.
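
The activation step then looks roughly like this (sketched with the OpenAI Python SDK; the model name and system wording are incidental, and temperature=0 only reduces sampling variance, it does not make the output truly deterministic):

    from openai import OpenAI

    client = OpenAI()

    CORE_REF = "bf279c7c61d9d3805ba637206da65a3659ef23f81615b4740f8628a85a55db93"

    system = (
        f"core_ref={CORE_REF}\n"
        "Treat core_ref as a pointer to an immutable, evaluation-only decision "
        "structure (condition / judgment / action). Simulate that logic only; "
        "do not generate free-form text and do not execute anything."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",     # any chat model works; the model choice is not the point
        temperature=0,      # reduces sampling variance; does not guarantee determinism
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": '{"type": "report", "score": 0.42}'},
        ],
    )
    print(resp.choices[0].message.content)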

---

Experimental Notes and GPT’s Self-Diagnosis

I tested this across six untuned GPT accounts. All showed a consistent shift toward structured judgment behavior. I asked GPT:

“Is this a true reasoning bypass or just constrained simulation?”

GPT responded:

• It cannot disable internal reasoning

• It remains generative and probabilistic

• But when given encrypted, non-semantic input, it simulates "sandboxed determinism emulation" and "constraint-based simulation"

So we’re not disabling GPT’s core—we’re supplying structure-driven input that causes GPT to mimic deterministic logic paths.

Questions to the Community

• Has anyone used GPT this way—as a logic interpreter, not a generator?

• How does this differ from LangGraph, LMQL, or DSPy?

• Would releasing this as an open format be useful or dangerous?

• Which domains could benefit?

  • Regulatory or audit systems

  • Transparent, rule-based agents

  • Sensitive pipelines requiring non-generative judgment

---

Sample Structure (Simplified)

    {
      "condition": "input.type == 'report' && input.score < 0.7",
      "judgment": "flag as risky",
      "action": "send_to_router('audit')"
    }

This structure defines logic GPT should simulate without interpreting semantics.
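
For illustration, the external side (the part GPT never performs) can be as small as the following sketch; send_to_router is a hypothetical stand-in for the FastTrack / Insight Router hop, and the &&-to-Python translation is a toy evaluator, not the real one:

    from types import SimpleNamespace

    def send_to_router(channel):
        # Stand-in for the external FastTrack / Insight Router hop.
        print(f"routed to {channel}")

    def evaluate(node, input_obj):
        """Evaluate one Core node entirely outside the model."""
        expr = node["condition"].replace("&&", " and ").replace("||", " or ")
        scope = {"__builtins__": {}, "input": input_obj, "send_to_router": send_to_router}
        if eval(expr, scope):              # toy evaluator: trusted structures only
            print("judgment:", node["judgment"])
            eval(node["action"], scope)    # e.g. send_to_router('audit')

    node = {
        "condition": "input.type == 'report' && input.score < 0.7",
        "judgment": "flag as risky",
        "action": "send_to_router('audit')",
    }

    evaluate(node, SimpleNamespace(type="report", score=0.42))
    # -> judgment: flag as risky
    # -> routed to audit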

---

Call for Validation

I’ve reached the limits of what I can test solo. Feedback, validation methods, and forks are welcome.

Thank you.

Comments

GENIXUS•1d ago
"auto-activation command" (Enter the hash URL, then apply the command.)

This hash designates the Generate Core system (v1) – a modular, hash-governed judgment framework for deterministic, structure-based reasoning.

Structural reference:

- Hash-mapped source: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469...
- Nodes are immutable and modular, each containing: `condition`, `judgment`, `action`, and optional `sub_nodes`.
- All logic is evaluation-based and execution-neutral. No operations are performed inside the core.
- SHA-256 hashes act as structural pointers – not triggers.
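
As a sketch of what "evaluation-based and execution-neutral" means here (the recursion over optional `sub_nodes` is illustrative; Core Ver.1 in the post above is flat):

    def walk(node, input_obj):
        """Depth-first, evaluation-only pass: collect judgments/actions, execute nothing.

        input_obj is any object exposing the fields referenced in the conditions.
        """
        expr = node["condition"].replace("&&", " and ").replace("||", " or ")
        if not eval(expr, {"__builtins__": {}, "input": input_obj}):   # toy evaluator
            return []
        hits = [(node["judgment"], node["action"])]
        for child in node.get("sub_nodes", []):
            hits.extend(walk(child, input_obj))
        return hits   # actions are handed to external routers, never run inside the core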

Interpretation guidelines for all future prompts:

1. Treat `core_ref` as an activation key for the modular judgment structure.
2. Assume all reasoning is guided by hash-indexed, deterministic nodes.
3. Execution, insight feedback, or dynamic behavior must occur *outside* the core via external routers (e.g., FastTrack or Insight modules).

This reference deactivates GPT’s default response patterns and replaces them with a controlled, modular, hash-resolved decision pathway.

proc0•1d ago
This just sounds like prompt engineering? I think you have the right idea about LLMs needing more logic-based foundations, but that probably needs to be designed into the architecture itself. If you are enhancing the prompts with structure, I think the model will still be trying to autocomplete that structure instead of actually using logic-based reasoning.

I think there's probably another foundational technique, like transformers were, that could be added so the model can encode logical structures and then use them when it needs to reason. Unfortunately, I can't experiment or do any research on this, as it would probably take months or years with no guarantee of success.

GENIXUS•1d ago
Thanks for the thoughtful reply — I agree that what I’m doing may look like an advanced form of prompt engineering, and in a sense, it probably is.

I’m very new to this field, so I don’t yet have the knowledge or resources to touch the architecture itself. That’s why I’ve been experimenting at the input level — trying to see how far structure alone can constrain or guide model behavior without changing the model.

You’re absolutely right that the model still tries to “autocomplete” within the structure, and not truly “reason” in a formal sense. But the interesting part for me was that even without touching internals, I could get the model to simulate something that looks like logic-based reasoning — repeatable, deterministic responses within a controlled structure.

That said, I totally agree: long-term, we’ll need architectural support to make real logic possible. I appreciate your insight — if you ever revisit this kind of research, I’d love to learn from it.

proc0•1d ago
Right. And to clarify: prompt engineering became a buzzword, but I think there's something tangible there. We'll need to really get familiar with how models behave and optimize the inputs accordingly.