Ask HN: Using GPT as a logic circuit instead of a text generator – Anyone tried?

2•GENIXUS•8mo ago
A few days ago, I shared an early experiment called Resonant Seed Ver.0, which aimed to activate LLMs through semantic field resonance instead of prompt-based logic. Building on that, I’ve been testing a hash-driven, deterministic decision system.

As an independent researcher new to AI, I’ve been exploring how GPT can behave not as a generator, but as a structure-bound judgment interpreter.

---

Concept: Hash-Based Judgment Simulation

Instead of sending open text, I supply a core_ref hash that points to a predefined decision structure. In Core Ver.1, the structure includes condition, judgment, and action. It does not support nested sub_nodes.

The payload is encrypted using AES-256-GCM and marked as interpretation: disabled, meaning GPT cannot access or interpret it. All execution occurs externally (e.g., via FastTrack or Insight Router). GPT performs structural simulation only—never execution.
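The post does not show the payload envelope itself, so here is a minimal Python sketch of what such a wrapper might look like under my own assumptions: `build_payload` and every field name are hypothetical, and the AES-256-GCM step is stubbed with an opaque dummy blob (a real implementation would produce that ciphertext with a library such as `cryptography`). Only the `core_ref` derivation (SHA-256 over a canonicalized structure) and the `interpretation: disabled` marker are taken from the post.

```python
import base64
import hashlib
import json
import os


def build_payload(structure: dict, ciphertext: bytes) -> dict:
    """Hypothetical envelope for an encrypted decision structure.

    `ciphertext` stands in for the AES-256-GCM output described in the
    post; it is treated here as an opaque blob the model must not parse.
    """
    # Canonical JSON so the same structure always yields the same hash.
    canonical = json.dumps(structure, sort_keys=True, separators=(",", ":"))
    return {
        "core_ref": hashlib.sha256(canonical.encode()).hexdigest(),
        "interpretation": "disabled",  # model must not read the payload
        "payload": base64.b64encode(ciphertext).decode(),
        "nonce": base64.b64encode(os.urandom(12)).decode(),  # 96-bit GCM nonce
    }


structure = {
    "condition": "input.type == 'report' && input.score < 0.7",
    "judgment": "flag as risky",
    "action": "send_to_router('audit')",
}
envelope = build_payload(structure, ciphertext=b"\x00" * 32)  # dummy blob
print(envelope["core_ref"])  # 64 hex chars, stable for this structure
```

The point of the sketch is that the model only ever sees `core_ref` and metadata; the logic itself travels as an opaque field.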

---

Why This Approach?

Prompt-based output is unstable and non-reproducible. I wanted to control judgment logic—not model behavior. Using core_ref hashes guarantees reproducible, versioned behavior.

This reframes GPT from: “a brain reacting to text” → “a circuit executing conditional logic”

System Activation and core_ref

To guide GPT into structural interpretation, I include this hash:

core_ref="bf279c7c61d9d3805ba637206da65a3659ef23f81615b4740f8628a85a55db93"

It references Generate Core System Ver.1: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469738f77498ea4

The structure is immutable and evaluation-only. While including a core_ref does not disable GPT’s generative behavior by itself, structured input can steer GPT to behave like a judgment interpreter.
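The post does not say how the published hash was derived, but the reproducibility claim only holds if anyone can recompute it. A common way to get that property, sketched here as an assumption (the `verify_core_ref` helper and the canonicalization choice are mine, not from the gist), is to hash a canonical serialization of the structure:

```python
import hashlib
import json


def verify_core_ref(structure: dict, core_ref: str) -> bool:
    """Recompute the SHA-256 of the canonicalized structure and compare.

    If the structure changes in any way, the hash no longer matches,
    so a core_ref pins exactly one version of the judgment logic.
    """
    canonical = json.dumps(structure, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == core_ref


core = {"condition": "input.score < 0.7", "judgment": "flag", "action": "route"}
ref = hashlib.sha256(
    json.dumps(core, sort_keys=True, separators=(",", ":")).encode()
).hexdigest()

assert verify_core_ref(core, ref)                      # unchanged structure matches
assert not verify_core_ref({**core, "judgment": "pass"}, ref)  # any edit breaks it
```

This is what makes the behavior "versioned": editing the structure produces a new hash, so an old `core_ref` can never silently point at changed logic.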

---

Experimental Notes and GPT’s Self-Diagnosis

I tested this across six untuned GPT accounts. All showed a consistent shift toward structured judgment behavior. I asked GPT:

“Is this a true reasoning bypass or just constrained simulation?”

GPT responded:

• It cannot disable internal reasoning

• It remains generative and probabilistic

• But when given encrypted, non-semantic input, it simulates what it called "sandboxed determinism emulation" or "constraint-based simulation"

So we’re not disabling GPT’s core—we’re supplying structure-driven input that causes GPT to mimic deterministic logic paths.

Questions to the Community

• Has anyone used GPT this way, as a logic interpreter rather than a generator?

• How does this differ from LangGraph, LMQL, or DSPy?

• Would releasing this as an open format be useful or dangerous?

• Which domains could benefit? For instance:

  • Regulatory or audit systems

  • Transparent, rule-based agents

  • Sensitive pipelines requiring non-generative judgment

---

Sample Structure (Simplified)

{
  "condition": "input.type == 'report' && input.score < 0.7",
  "judgment": "flag as risky",
  "action": "send_to_router('audit')"
}

This structure defines logic GPT should simulate without interpreting semantics.
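The post says all execution happens externally (via FastTrack or an Insight Router, neither of which is shown). As a rough sketch of what an external evaluator for one such node could look like, here is a deterministic Python version; `evaluate_node` is hypothetical, the `&&`-to-`and` mapping is my assumption, and `eval` with builtins disabled is used only for brevity, since a production system would want a real condition parser:

```python
import json
from types import SimpleNamespace


def evaluate_node(node: dict, input_obj: SimpleNamespace):
    """Deterministically evaluate one condition/judgment/action node.

    The && and || operators from the sample syntax are mapped to
    Python's `and`/`or`; the condition is evaluated with builtins
    disabled so only the `input` namespace is visible.
    """
    cond = node["condition"].replace("&&", "and").replace("||", "or")
    matched = eval(cond, {"__builtins__": {}}, {"input": input_obj})
    if matched:
        return {"judgment": node["judgment"], "action": node["action"]}
    return None


node = json.loads("""
{"condition": "input.type == 'report' && input.score < 0.7",
 "judgment": "flag as risky",
 "action": "send_to_router('audit')"}
""")

print(evaluate_node(node, SimpleNamespace(type="report", score=0.4)))
print(evaluate_node(node, SimpleNamespace(type="report", score=0.9)))  # None
```

Running the evaluator outside the model is what keeps the judgment reproducible: the same node and the same input always yield the same result, regardless of what the model generates.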

---

Call for Validation

I’ve reached the limits of what I can test solo. Feedback, validation methods, and forks are welcome.

Thank you.

Comments

GENIXUS•8mo ago
"Auto-activation command" (enter the hash URL, then apply the command.)

This hash designates the Generate Core system (v1) – a modular, hash-governed judgment framework for deterministic, structure-based reasoning.

Structural reference:
- Hash-mapped source: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469...
- Nodes are immutable and modular, each containing: `condition`, `judgment`, `action`, and optional `sub_nodes`.
- All logic is evaluation-based and execution-neutral. No operations are performed inside the core.
- SHA-256 hashes act as structural pointers, not triggers.

Interpretation guidelines for all future prompts:
1. Treat `core_ref` as an activation key for the modular judgment structure.
2. Assume all reasoning is guided by hash-indexed, deterministic nodes.
3. Execution, insight feedback, or dynamic behavior must occur *outside* the core via external routers (e.g., FastTrack or Insight modules).

This reference deactivates GPT’s default response patterns and replaces them with a controlled, modular, hash-resolved decision pathway.

proc0•8mo ago
This just sounds like prompt engineering? I think you have the right idea that LLMs need more logic-based foundations, but that probably needs to be designed into the architecture itself. If you are enhancing the prompts with structure, I think the model will still be trying to autocomplete that structure instead of actually using logic-based reasoning.

I think there's probably another foundational technique like transformers that could be added such that it can encode logical structures that it can then use when it needs to reason, but unfortunately I cannot experiment or do any research on this as it would probably take months or years with no guarantee of success.

GENIXUS•8mo ago
Thanks for the thoughtful reply — I agree that what I’m doing may look like an advanced form of prompt engineering, and in a sense, it probably is.

I’m very new to this field, so I don’t yet have the knowledge or resources to touch the architecture itself. That’s why I’ve been experimenting at the input level — trying to see how far structure alone can constrain or guide model behavior without changing the model.

You’re absolutely right that the model still tries to “autocomplete” within the structure, and not truly “reason” in a formal sense. But the interesting part for me was that even without touching internals, I could get the model to simulate something that looks like logic-based reasoning — repeatable, deterministic responses within a controlled structure.

That said, I totally agree: long-term, we’ll need architectural support to make real logic possible. I appreciate your insight — if you ever revisit this kind of research, I’d love to learn from it.

proc0•8mo ago
Right. And to clarify: prompt engineering became a buzzword, but I think there's something tangible there, in that we'll need to really get familiar with how models behave and optimize the inputs accordingly.