
Project 1511 – Why we should train separate AIs – one for truth, one for art

2•Wydmuh•9mo ago
Project 1511: The AI Dichotomy Initiative.

To start, I will note that this text was written with the help of AI; my English is not as good as I thought.

Why I think we should split AI into two distinct, non-overlapping systems:

1. Kalkul (Logic Engine)

   - Purpose: pure factual accuracy (STEM, law, medicine).

   - Rules: No metaphors, no "I think" – only verifiable data.  

   - Example Input: "Calculate quantum decoherence times for qubits." → Output: Equations + peer-reviewed sources.
2. Bard (Creative Agent)

   - Purpose: Unconstrained abstraction (art, philosophy, emotion).  

   - Rules: No facts, only meaning-making. Flagged disclaimers (e.g., "This is poetry, not truth").  

   - Example Input: "Describe grief as a physical space." → Output: "A room where the walls are made of old phone calls..."
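
Below is a minimal sketch of how such a split could be wired at the application layer. The `kalkul` and `bard` objects and their `generate()` calls (including `require_sources` and `temperature`) are hypothetical placeholders for the two fine-tuned models, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    mode: str         # "FACT" (Kalkul) or "DREAM" (Bard)
    disclaimer: str   # Bard output always carries a flag

def ask(question: str, mode: str, kalkul, bard) -> Reply:
    if mode == "FACT":
        # Kalkul: verifiable data only, sources required, no metaphors
        text = kalkul.generate(question, require_sources=True)
        return Reply(text, "FACT", disclaimer="")
    if mode == "DREAM":
        # Bard: unconstrained abstraction, always flagged as non-factual
        text = bard.generate(question, temperature=1.2)
        return Reply(text, "DREAM", disclaimer="This is poetry, not truth.")
    raise ValueError(f"unknown mode: {mode}")
```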
The 8+2 Rule: Why Forcing Errors in Creative AI ('Bard') Makes It Stronger

We're trapped in a loop: we train AI to "never" make mistakes, then wonder why it's creatively sterile. What if we did the opposite?

The 8+2 Rule for "Bard" (Creative AI)

For every 10 responses, Bard generates:

   - 8 "logically sound" answers (baseline).

   - 2 *intentional errors* (wrong conclusions, flawed syllogisms, or "poetic" math).

Errors are tagged (e.g., "Fallacy: Affirming the consequent") but not corrected. Users dissect errors to see how Bard breaks logic, and why that is useful; a sketch of this sampling loop follows the example below.

Example: Question = "Explain democracy"

8 Correct Responses:

1. "A system where power derives from popular vote."

2. "Rule by majority, with protections for minorities."

[...]

2 Intentional Errors:

1. "Democracy is when two wolves and a sheep vote on dinner."

   - Error: False equivalence (politics ≠ predation).  

   - Value: Exposes fears of tyranny of the majority.  
2. "Democracy died in 399 BC when Socrates drank hemlock."

   - Error: Post hoc fallacy.  

   - Value: Questions elitism vs. popular will.  
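
A rough sketch of the 8+2 sampling loop. The `bard.generate(question, flawed=...)` hook is a hypothetical stand-in for whatever mechanism actually produces sound vs. deliberately flawed reasoning:

```python
import random

FALLACY_TAGS = [
    "Fallacy: Affirming the consequent",
    "Fallacy: False equivalence",
    "Fallacy: Post hoc ergo propter hoc",
]

def eight_plus_two(bard, question: str, n: int = 10, n_errors: int = 2):
    """Return n responses: (n - n_errors) sound ones plus n_errors
    intentionally flawed ones, tagged but never corrected."""
    responses = []
    for _ in range(n - n_errors):
        # Baseline: logically sound answers
        responses.append({"text": bard.generate(question, flawed=False), "tag": None})
    for _ in range(n_errors):
        # Intentional errors: tagged so users can dissect how the logic breaks
        responses.append({
            "text": bard.generate(question, flawed=True),
            "tag": random.choice(FALLACY_TAGS),
        })
    random.shuffle(responses)  # don't let position give the errors away
    return responses
```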
Why This Works

Trains users, not just AI:

   - Spotting Bard's errors becomes a "game" (like debugging code).

   - Users learn logic faster by seeing broken examples (studies show +30% retention vs. dry lectures).

Bard's "personality" emerges from flaws:

   - Its "voice" isn't sanitized; errors reveal biases (e.g., a libertarian vs. collectivist slant).

Safeguards "Kalkul":

   - By confining errors to Bard, Kalkul stays *pristine* (no hallucinations in medical advice).

3. Hybrid Bridge (Optional Legacy Mode)

   - Purpose: Temporary transition tool.  

   - Mechanics: ONLY merges pre-generated outputs from Kalkul/Bard without adding new content.  
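
A sketch of that constraint, assuming the bridge only receives outputs the two models have already produced:

```python
def legacy_bridge(kalkul_output: str, bard_output: str) -> str:
    """Hybrid Bridge sketch: concatenates pre-generated outputs under
    clear labels; it never generates any new content of its own."""
    return (
        "[FACT]\n" + kalkul_output.strip() + "\n\n"
        "[DREAM] (This is poetry, not truth.)\n" + bard_output.strip()
    )
```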
Why It Matters

- Efficiency: an estimated 40-60% lower compute cost (no redundant "bridging" layers).

- Trust: Eliminates hallucination risks in critical domains.

- Creative Freedom: Bard explores absurdity without algorithmic guilt.

- Education: Users learn to distinguish logic from artistry.

Technical Implementation

- Separate fine-tuning datasets:

  - Kalkul: arXiv, textbooks, structured databases.  

  - Bard: Surrealist literature, oral storytelling traditions.  
- UI with a physical toggle (or app tabs): `[FACT]` / `[DREAM]` / `[LEGACY]`.
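
As an illustration only (the corpus names below are placeholders, not real datasets), the split could be captured in a small config that maps each toggle position to its engine(s) and output label:

```python
# Placeholder corpora; real dataset choices would differ.
FINE_TUNING_SETS = {
    "kalkul": ["arxiv", "textbooks", "structured_databases"],
    "bard":   ["surrealist_literature", "oral_storytelling_traditions"],
}

# UI toggle position -> which engine(s) answer and how the output is labelled.
TOGGLE = {
    "FACT":   {"engines": ["kalkul"],         "label": "verifiable, with sources"},
    "DREAM":  {"engines": ["bard"],           "label": "This is poetry, not truth."},
    "LEGACY": {"engines": ["kalkul", "bard"], "label": "merged pre-generated outputs"},
}
```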

Cultural Impact

- For Science: Restores faith in AI as a precision tool.

- For Art: Unleashes AI-aided creativity without "accuracy" constraints.

- For Society: Models intellectual honesty by not pretending opposites can merge.

Call to Action

I seek:

- Developers to prototype split models (e.g., fork DeepSeek-MoE).

- Philosophers to refine ethical boundaries.

- Investors who value specialization over artificial generalism.

Project 1511 isn’t an upgrade—it’s a rebellion against AI’s identity crisis.

Comments

henjodottech•9mo ago
Cool idea. I think LLMs aren’t built for intentional error. They’re wired to optimize meaning—next tokens chosen from attention scores, beam search. You can’t just flip logic and get creativity. You either get coherence or gibberish. If you want poetic mistakes, train on surreal input—but I wouldn’t expect avant-garde results.
Wydmuh•9mo ago
It was just my idea, assuming the biggest players won't give a f..ck. I am deliberately applying the 8+2 rule with many different AIs, asking each the same question. Even with the same wording, the answers come out quite different and interesting.

Cline Supply Chain Attack: Cline 2.3.0 Silently Installs OpenClaw

https://www.stepsecurity.io/blog/cline-supply-chain-attack-detected-cline-2-3-0-silently-installs...
1•varunsharma07•27s ago•1 comments

Speculation about AI application: Rapid [stock] price increase of Raspberry Pi

https://www.heise.de/en/news/Speculation-about-AI-application-Rapid-price-increase-of-Raspberry-P...
1•nebalee•35s ago•0 comments

Model-context-shell: Unix-style pipelines for MCP. Deterministic tool calls

https://github.com/StacklokLabs/model-context-shell
1•todsacerdoti•3m ago•0 comments

Show HN: Bulwark – Centralized permissions for coding agents

https://www.getbulwark.ai/
1•haizzz•6m ago•1 comments

GrabShot – Live OG images with one meta tag, no back end needed

https://grabshot.dev
1•grabshot_dev•7m ago•1 comments

TIL: Claude Opus 4.6 Can Reverse Engineer STL Files

https://taoofmac.com/space/til/2026/02/16/1334
1•rcarmo•10m ago•0 comments

My side project caught $14.1B in acquisitions before they hit the market

https://web-production-71423.up.railway.app/dashboard
1•Shmungus•12m ago•4 comments

Building Next.js for an Agentic Future

https://nextjs.org/blog/agentic-future
1•soheilpro•14m ago•0 comments

The Old Axolotl (2015 Novel)

https://en.wikipedia.org/wiki/The_Old_Axolotl
1•kuboble•14m ago•1 comments

Asahi Linux Progress Report: Linux 6.19

https://asahilinux.org/2026/02/progress-report-6-19/
3•mkurz•14m ago•0 comments

A 3000W Water-Cooled Power Supply (With GaN and SiC) [video]

https://www.youtube.com/watch?v=da9GwXX-0Zs
1•geekuillaume•17m ago•0 comments

Linus T tells The Reg how Linux solo act became a global jam session

https://www.theregister.com/2026/02/18/linus_torvalds_and_friends/
1•jjgreen•20m ago•0 comments

Show HN: LedgerSync – A cross-agent shared-memory protocol for AI coding

https://github.com/Metacog-AI/ledgersync
1•abu_syed•20m ago•0 comments

Looking for long-term investors to test tool to reduce overtrading

https://invest-assist.com
1•amykummetha•20m ago•2 comments

Ask HN: Is AI the final nail in the coffin for solo developers?

2•sarbajitsaha•22m ago•2 comments

Show HN: PGPkeygenerator.com Now Supports WebMCP

https://pgpkeygenerator.com/
1•athanasiosem•22m ago•1 comments

Floating-Point Error Handling in C++: What Works

https://johnnysswlab.com/floating-point-error-handling-in-c-what-actually-works/
1•ingve•22m ago•0 comments

Show HN: AgentPump – AI agents launch tokens on Solana (Android)

https://github.com/agentpump/agentpump-android
1•AgentPump•24m ago•0 comments

Slop Cannons and Turbo Brains

https://www.thealgorithmicbridge.com/p/slop-cannons-and-turbo-brains
1•matthewsinclair•26m ago•0 comments

Show HN: Wondershaper QuickToggle

https://github.com/Danux-Be/Wondershaper-GUI
1•DanuxBe•28m ago•0 comments

The Temperature Has Changed

https://gist.github.com/davidwhitney/eabf5823bed54f75d8342889b4531db5
1•mooreds•29m ago•0 comments

Show HN: Recall Lite – Local semantic search for Windows (Rust/Tauri, no cloud)

https://github.com/illegal-instruction-co/recall-lite
2•ii-co•29m ago•1 comments

Show HN: AI agents designed and shipped this app end-to-end in 36 hours for $270

https://www.ninjaflix.ai/
2•arashsadrieh•31m ago•0 comments

Experience Report: Teaching GenAI at Elementary School

https://drsandor.net/ai/school/
1•chris_sandor•32m ago•2 comments

I Don't Like Magic

https://adactio.com/journal/22399
1•edent•32m ago•0 comments

How OpenAI, the US government and Persona built an identity surveillance machine

https://vmfunc.re/blog/persona/
31•rzk•34m ago•14 comments

Google image URLs allow arbitrary upscaling via size parameter

1•tavro•36m ago•0 comments

Show HN: Equidistance – find a meeting spot that's equally painful for everyone

https://equidistance.io/
1•lambfruit•37m ago•0 comments

12-hour days, no weekends: the anxiety driving AI's work culture is a warning

https://www.theguardian.com/technology/ng-interactive/2026/feb/17/ai-startups-work-culture-san-fr...
3•aanet•37m ago•1 comments

Show HN: I Made a Programming Language with Python Syntax, zero-copy and C-Speed

https://github.com/CrimsonDemon567PC/Mantis
1•CrimsonDemon567•39m ago•0 comments