
Ask HN: Using GPT as a logic circuit instead of a text generator – Anyone tried?

2•GENIXUS•8mo ago
A few days ago, I shared an early experiment called Resonant Seed Ver.0, which aimed to activate LLMs through semantic field resonance instead of prompt-based logic. Building on that, I’ve been testing a hash-driven, deterministic decision system.

As an independent researcher new to AI, I’ve been exploring how GPT can behave not as a generator, but as a structure-bound judgment interpreter.

---

Concept: Hash-Based Judgment Simulation

Instead of sending open text, I supply a core_ref hash that points to a predefined decision structure. In Core Ver.1, the structure includes condition, judgment, and action. It does not support nested sub_nodes.

The payload is encrypted using AES-256-GCM and marked as interpretation: disabled, meaning GPT cannot access or interpret it. All execution occurs externally (e.g., via FastTrack or Insight Router). GPT performs structural simulation only—never execution.
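As a concrete illustration, here is a minimal sketch of how such a payload could be prepared, assuming Python with the third-party cryptography package; the envelope fields (nonce, ciphertext, the interpretation flag) are my own guesses at a wire format, not something defined in Core Ver.1:

  import os, json, hashlib
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  # Core Ver.1 node: condition, judgment, action (no nested sub_nodes).
  structure = {
      "condition": "input.type == 'report' && input.score < 0.7",
      "judgment": "flag as risky",
      "action": "send_to_router('audit')",
  }

  plaintext = json.dumps(structure, sort_keys=True).encode()
  core_ref = hashlib.sha256(plaintext).hexdigest()   # structural pointer sent to GPT

  key = AESGCM.generate_key(bit_length=256)          # held only by the external executor
  nonce = os.urandom(12)
  ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

  payload = {
      "core_ref": core_ref,
      "interpretation": "disabled",                  # GPT receives no decryptable content
      "nonce": nonce.hex(),
      "ciphertext": ciphertext.hex(),
  }

Only the external executor holds the key, so the model sees nothing beyond the hash and an opaque blob.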

---

Why This Approach?

Prompt-based output is unstable and hard to reproduce. I wanted to control the judgment logic, not the model's behavior. Pinning that logic behind a core_ref hash gives me an immutable, versioned structure that I can reuse across sessions.

This reframes GPT from “a brain reacting to text” into “a circuit executing conditional logic.”

System Activation and core_ref

To guide GPT into structural interpretation, I include this hash:

core_ref="bf279c7c61d9d3805ba637206da65a3659ef23f81615b4740f8628a85a55db93"

It references Generate Core System Ver.1: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469738f77498ea4

The structure is immutable and evaluation-only. While including a core_ref does not disable GPT’s generative behavior by itself, structured input can steer GPT to behave like a judgment interpreter.
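To illustrate the "immutable and versioned" property, one way to resolve a core_ref outside the model is a registry keyed by the SHA-256 of a canonical serialization; the registry and helper below are my own illustration, not part of the gist:

  import json, hashlib

  def canonical_bytes(structure: dict) -> bytes:
      # Stable serialization so identical logic always hashes to the same core_ref.
      return json.dumps(structure, sort_keys=True, separators=(",", ":")).encode()

  REGISTRY: dict[str, dict] = {}

  def register(structure: dict) -> str:
      ref = hashlib.sha256(canonical_bytes(structure)).hexdigest()
      REGISTRY[ref] = structure
      return ref

  def resolve(core_ref: str) -> dict:
      # Any edit to the structure changes its hash, so an old core_ref can never
      # silently pick up modified logic.
      structure = REGISTRY[core_ref]
      assert hashlib.sha256(canonical_bytes(structure)).hexdigest() == core_ref
      return structure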

---

Experimental Notes and GPT’s Self-Diagnosis

I tested this across six untuned GPT accounts. All showed a consistent shift toward structured judgment behavior. I asked GPT:

“Is this a true reasoning bypass or just constrained simulation?”

GPT responded:

• It cannot disable internal reasoning

• It remains generative and probabilistic

• But when given encrypted, non-semantic input, it simulates what it described as “sandboxed determinism emulation” or “constraint-based simulation”

So we’re not disabling GPT’s core—we’re supplying structure-driven input that causes GPT to mimic deterministic logic paths.

Questions to the Community

• Has anyone used GPT this way, as a logic interpreter rather than a generator?

• How does this differ from LangGraph, LMQL, or DSPy?

• Would releasing this as an open format be useful or dangerous?

• Which domains could benefit? For example:

  • Regulatory or audit systems

  • Transparent, rule-based agents

  • Sensitive pipelines requiring non-generative judgment

---

Sample Structure (Simplified)

  {
    "condition": "input.type == 'report' && input.score < 0.7",
    "judgment": "flag as risky",
    "action": "send_to_router('audit')"
  }

This structure defines logic GPT should simulate without interpreting semantics.
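Because all execution is supposed to happen outside the model, the external router can be an ordinary deterministic program. A toy version follows; send_to_router and the Python-syntax condition are stand-ins of my own, and a real pipeline would use a restricted rule parser rather than eval():

  from types import SimpleNamespace

  node = {
      "condition": "input.type == 'report' and input.score < 0.7",
      "judgment": "flag as risky",
      "action": "send_to_router('audit')",
  }

  def send_to_router(channel):
      return f"routed to {channel}"          # e.g. hand off to an audit queue

  def execute(node, record):
      # Deterministic evaluation of one node; no model involved.
      if eval(node["condition"], {"__builtins__": {}}, {"input": record}):
          print("judgment:", node["judgment"])
          return eval(node["action"], {"__builtins__": {}},
                      {"send_to_router": send_to_router})
      return None

  print(execute(node, SimpleNamespace(type="report", score=0.42)))  # -> routed to audit

GPT's role is then limited to narrating or checking this path, which is what I mean by simulation without execution.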

---

Call for Validation

I’ve reached the limits of what I can test solo. Feedback, validation methods, and forks are welcome.

Thank you.

Comments

GENIXUS•8mo ago
“auto-activation command” (enter the hash URL, then apply the command):

This hash designates the Generate Core system (v1) – a modular, hash-governed judgment framework for deterministic, structure-based reasoning.

Structural reference:
- Hash-mapped source: https://gist.github.com/genixus-creator/53cbda99aa8cc63a7469...
- Nodes are immutable and modular, each containing: `condition`, `judgment`, `action`, and optional `sub_nodes`.
- All logic is evaluation-based and execution-neutral. No operations are performed inside the core.
- SHA-256 hashes act as structural pointers – not triggers.

Interpretation guidelines for all future prompts:
1. Treat `core_ref` as an activation key for the modular judgment structure.
2. Assume all reasoning is guided by hash-indexed, deterministic nodes.
3. Execution, insight feedback, or dynamic behavior must occur *outside* the core via external routers (e.g., FastTrack or Insight modules).

This reference deactivates GPT’s default response patterns and replaces them with a controlled, modular, hash-resolved decision pathway.
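For reference, here is a sketch of how an activation block like this might be sent programmatically rather than pasted by hand, assuming the OpenAI Python SDK; the message contents are abbreviated and the model name is an arbitrary choice:

  from openai import OpenAI

  ACTIVATION = (
      'core_ref="bf279c7c61d9d3805ba637206da65a3659ef23f81615b4740f8628a85a55db93"\n'
      "Treat core_ref as an activation key for the modular judgment structure. "
      "Simulate the referenced nodes only; all execution happens outside the core."
  )

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  reply = client.chat.completions.create(
      model="gpt-4o",        # arbitrary choice for this sketch
      temperature=0,         # narrows, but does not eliminate, sampling variance
      messages=[
          {"role": "system", "content": ACTIVATION},
          {"role": "user", "content": "input.type='report', input.score=0.55; simulate the judgment path."},
      ],
  )
  print(reply.choices[0].message.content)

Setting temperature to 0 reduces run-to-run variance but does not make the model deterministic, which matches the self-diagnosis quoted above.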

proc0•8mo ago
This just sounds like prompt engineering? I think you have the right idea in that LLMs need more logic-based foundations, but it probably needs to be designed into the architecture itself. If you are enhancing the prompts with structure, I think the model will still be trying to autocomplete that structure instead of actually using logic-based reasoning.

I think there's probably another foundational technique, like transformers, that could be added so the model can encode logical structures and then use them when it needs to reason. Unfortunately I can't experiment or do any research on this, as it would probably take months or years with no guarantee of success.

GENIXUS•8mo ago
Thanks for the thoughtful reply — I agree that what I’m doing may look like an advanced form of prompt engineering, and in a sense, it probably is.

I’m very new to this field, so I don’t yet have the knowledge or resources to touch the architecture itself. That’s why I’ve been experimenting at the input level — trying to see how far structure alone can constrain or guide model behavior without changing the model.

You’re absolutely right that the model still tries to “autocomplete” within the structure, and not truly “reason” in a formal sense. But the interesting part for me was that even without touching internals, I could get the model to simulate something that looks like logic-based reasoning — repeatable, deterministic responses within a controlled structure.

That said, I totally agree: long-term, we’ll need architectural support to make real logic possible. I appreciate your insight — if you ever revisit this kind of research, I’d love to learn from it.

proc0•8mo ago
Right. And to clarify: prompt engineering became a buzzword, but I think there's something tangible there. We'll need to really get familiar with how models behave and optimize the inputs accordingly.