frontpage.

Project 1511 – Why we should train separate AIs – one for truth, one for art

2•Wydmuh•8mo ago
Project 1511: The AI Dichotomy Initiative.

At the outset I will note that this text was written with the help of AI; my English is not as good as I thought.

Why I think we should split AI into two distinct, non-overlapping systems:

1. Kalkul (Logic Engine)

   - Purpose: Pure factual accuracy (STEM, law, medicine).

   - Rules: No metaphors, no "I think" – only verifiable data.

   - Example Input: "Calculate quantum decoherence times for qubits." → Output: Equations + peer-reviewed sources.

2. Bard (Creative Agent)

   - Purpose: Unconstrained abstraction (art, philosophy, emotion).  

   - Rules: No facts, only meaning-making. Flagged disclaimers (e.g., "This is poetry, not truth").  

   - Example Input: "Describe grief as a physical space." → Output: "A room where the walls are made of old phone calls..."
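
To make the split concrete, here is a minimal sketch of the two contracts. The system prompts, the `call_model()` stub, and the exact disclaimer text are illustrative assumptions, not a spec; the point is only that Bard's output is always flagged while Kalkul's never is.

```python
# Minimal sketch of the Kalkul/Bard split. call_model() is a placeholder for
# whatever backend is actually used; the prompts and disclaimer are illustrative.

KALKUL_SYSTEM = (
    "You are Kalkul, a logic engine. Answer only with verifiable data, "
    "equations, and citations. No metaphors, no 'I think'."
)
BARD_SYSTEM = (
    "You are Bard, a creative agent. Answer with imagery and meaning-making. "
    "Never present your output as fact."
)
BARD_DISCLAIMER = "[This is poetry, not truth]"


def call_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for the real model call."""
    return f"<model response to {user_prompt!r}>"


def ask_kalkul(prompt: str) -> str:
    # Kalkul answers are returned unadorned; source verification would hook in here.
    return call_model(KALKUL_SYSTEM, prompt)


def ask_bard(prompt: str) -> str:
    # Every Bard answer carries the mandatory flagged disclaimer.
    return f"{BARD_DISCLAIMER} {call_model(BARD_SYSTEM, prompt)}"
```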

The 8+2 Rule: Why Forcing Errors in Creative AI ("Bard") Makes It Stronger

We’re trapped in a loop: we train AI to "never" make mistakes, then wonder why it’s creatively sterile. What if we did the opposite?

The 8+2 Rule for "Bard" (Creative AI)

For every 10 responses, Bard generates:

   - 8 "logically sound" answers (baseline).

   - 2 *intentional errors* (wrong conclusions, flawed syllogisms, or "poetic" math).

Errors are tagged (e.g., "Fallacy: Affirming the consequent") but not corrected. Users dissect the errors to see how Bard breaks logic, and why that is useful.

Example: Question = "Explain democracy"

8 Correct Responses:

1. "A system where power derives from popular vote."

2. "Rule by majority, with protections for minorities."

[...]

2 Intentional Errors:

1. "Democracy is when two wolves and a sheep vote on dinner."

   - Error: False equivalence (politics ≠ predation).  

   - Value: Exposes fears of tyranny of the majority.  
2. "Democracy died in 399 BC when Socrates drank hemlock."

   - Error: Post hoc fallacy.  

   - Value: Questions elitism vs. popular will.  
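
As a rough sketch of how the 8+2 batch could be wired up (reusing the `ask_bard()` stub from the sketch above; the fallacy tags and the "deliberately flawed" prompt suffix are illustrative assumptions):

```python
import random

FALLACY_TAGS = [
    "Fallacy: Affirming the consequent",
    "Fallacy: False equivalence",
    "Fallacy: Post hoc ergo propter hoc",
]


def bard_8_plus_2(prompt: str, total: int = 10, errors: int = 2) -> list:
    """Return `total` responses: most logically sound, `errors` of them tagged as flawed."""
    batch = [{"text": ask_bard(prompt), "tag": None} for _ in range(total - errors)]
    for _ in range(errors):
        batch.append({
            # The flawed answer is generated, tagged, and deliberately NOT corrected.
            "text": ask_bard(prompt + " (argue this with a deliberately flawed syllogism)"),
            "tag": random.choice(FALLACY_TAGS),
        })
    random.shuffle(batch)  # users dissect the batch to find and name the flaws
    return batch
```

Spotting which entries carry a `tag` is exactly the debugging "game" described next.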

Why This Works

Trains users, not just AI:

   - Spotting Bard’s errors becomes a "game" (like debugging code).

   - Users learn logic faster by seeing broken examples (studies show +30% retention vs. dry lectures).

Bard’s "personality" emerges from flaws:

   - Its "voice" isn’t sanitized; errors reveal biases (e.g., libertarian vs. collectivist slant).

Safeguards "Kalkul":

   - By confining errors to Bard, Kalkul stays *pristine* (no hallucinations in medical advice).

3. Hybrid Bridge (Optional Legacy Mode)

   - Purpose: Temporary transition tool.  

   - Mechanics: ONLY merges pre-generated outputs from Kalkul/Bard without adding new content.  
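
A minimal sketch of that constraint, assuming plain-string outputs and borrowing the `[FACT]` / `[DREAM]` labels from the implementation section below: the bridge only concatenates and labels what Kalkul and Bard have already produced, and generates nothing of its own.

```python
def hybrid_bridge(kalkul_output: str, bard_output: str) -> str:
    # Merge pre-generated outputs only; no new content is created here.
    return "[FACT] " + kalkul_output.strip() + "\n[DREAM] " + bard_output.strip()
```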

Why It Matters

- Efficiency: 40-60% lower compute costs (no redundant "bridging" layers).

- Trust: Eliminates hallucination risks in critical domains.

- Creative Freedom: Bard explores absurdity without algorithmic guilt.

- Education: Users learn to distinguish logic from artistry.

Technical Implementation

- Separate fine-tuning datasets:

  - Kalkul: arXiv, textbooks, structured databases.  

  - Bard: Surrealist literature, oral storytelling traditions.  

- UI with a physical toggle (or app tabs): `[FACT]` / `[DREAM]` / `[LEGACY]`.
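
As a sketch, the toggle could be nothing more than a dispatch function keyed on the three modes (reusing the `ask_kalkul()`, `ask_bard()`, and `hybrid_bridge()` stubs from the earlier sketches):

```python
def answer(prompt: str, mode: str) -> str:
    # Route the request according to the [FACT] / [DREAM] / [LEGACY] toggle.
    if mode == "FACT":
        return ask_kalkul(prompt)
    if mode == "DREAM":
        return ask_bard(prompt)
    if mode == "LEGACY":
        return hybrid_bridge(ask_kalkul(prompt), ask_bard(prompt))
    raise ValueError(f"unknown mode: {mode!r}")


# e.g. answer("Explain democracy", mode="DREAM")
```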

Cultural Impact

- For Science: Restores faith in AI as a precision tool.

- For Art: Unleashes AI-aided creativity without "accuracy" constraints.

- For Society: Models intellectual honesty by not pretending opposites can merge.

Call to Action

I seek:

- Developers to prototype split models (e.g., fork DeepSeek-MoE).

- Philosophers to refine ethical boundaries.

- Investors who value specialization over artificial generalism.

Project 1511 isn’t an upgrade—it’s a rebellion against AI’s identity crisis.

Comments

henjodottech•8mo ago
Cool idea. I think LLMs aren’t built for intentional error. They’re wired to optimize meaning—next tokens chosen from attention scores, beam search. You can’t just flip logic and get creativity. You either get coherence or gibberish. If you want poetic mistakes, train on surreal input—but I wouldn’t expect avant-garde results.
Wydmuh•8mo ago
It was just my idea, assuming that the biggest players will not give a f..ck. I deliberately use the 8+2 rule with many different AIs, asking each of them the same question. Even with the same wording, the answers come out very different and interesting.

America's biggest power grid operator has an AI problem – too many data centers

https://www.msn.com/en-us/money/companies/america-s-biggest-power-grid-operator-has-an-ai-problem...
1•jnord•1m ago•0 comments

Circular Buffer

https://en.wikipedia.org/wiki/Circular_buffer
1•tosh•3m ago•0 comments

ADHD. How do you manage the constant stream of thoughts and ideas?

2•chriswright1664•7m ago•0 comments

'Cosmic clock' in tiny crystals reveals rise and fall of ancient landscapes

https://theconversation.com/a-cosmic-clock-in-tiny-crystals-has-revealed-the-rise-and-fall-of-aus...
1•defrost•9m ago•0 comments

Eurail Data Security Incident – January 2026

https://eurail.zendesk.com/hc/en-001/categories/33099262757789-Data-Security-Incident-January-2026
1•captn3m0•11m ago•0 comments

-Wsign-Compare Is Garbage

https://staticthinking.wordpress.com/2023/07/25/wsign-compare-is-garbage/
1•welfareleech•12m ago•0 comments

Wegmans press release translated how to say we scan your face and call it safety

https://blog.adafruit.com/2026/01/13/wegmans-press-release-translated-how-to-say-we-scan-your-fac...
1•ptorrone•12m ago•0 comments

Simulating hardware keyboard input on Windows

https://autoptt.com/posts/simulating-a-real-keyboard-with-faker-input/
1•birdculture•14m ago•0 comments

My website was down because I didn't pay the server bill

https://jeena.net/website-down
1•jeena•15m ago•0 comments

The Science of Losing Weight

https://www.npr.org/2026/01/05/nx-s1-5662557/the-science-of-losing-weight
1•paulpauper•15m ago•0 comments

Tim Dettmers: A Personal Guide to Automating Your Own Work

https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/
1•nl•18m ago•0 comments

Meta lays off 1k+ employees, shutting down game studios

https://www.bloomberg.com/news/articles/2026-01-13/meta-begins-jobs-cuts-after-shifting-focus-fro...
8•freethejazz•19m ago•3 comments

My PC running directly from Batteries [video]

https://www.youtube.com/watch?v=wGRHRXiy3Go
1•vanburen•19m ago•0 comments

We Were Wrong About Our Minds–and AI

https://www.youtube.com/watch?v=YoRMZhuk3lY
1•stevenjgarner•25m ago•0 comments

We Saved 70% CPU and 60% Memory in Refinery's Go Code

https://www.honeycomb.io/blog/how-we-saved-70-cpu-60-memory-refinery
1•tosh•26m ago•0 comments

The RAM shortage's silver lining: Less talk about "AI PCs"

https://arstechnica.com/gadgets/2026/01/the-ram-shortages-silver-lining-less-talk-about-ai-pcs/
3•doener•26m ago•1 comments

One pull of a string is all it takes to deploy these complex structures

https://techxplore.com/news/2025-12-deploy-complex.html
3•PaulHoule•34m ago•0 comments

Interactive Turbulence Map

https://turbli.com/maps/interactive-turbulence-map/
1•bookofjoe•35m ago•0 comments

Anatoly Karatsuba

https://en.wikipedia.org/wiki/Anatoly_Karatsuba
1•gjvc•36m ago•0 comments

Show HN: Speakhelp.org – An AAC tool for those with speech difficulties

https://www.hugedomains.com/domain_profile.cfm?d=speakhelp.com
1•JJarrard•36m ago•0 comments

Personal Details of ICE Goons Allegedly Leaked in Data Breach

https://www.thedailybeast.com/personal-details-of-thousands-of-border-patrol-and-ice-goons-allege...
5•DustinEchoes•37m ago•0 comments

US gives green light to Nvidia H200 chip exports to China

https://www.reuters.com/world/asia-pacific/us-eases-regulations-nvidia-h200-chip-exports-china-20...
3•falcor84•37m ago•0 comments

Apple chooses Google's Gemini over OpenAI's ChatGPT to power next-gen Siri

https://arstechnica.com/apple/2026/01/apple-says-its-new-ai-powered-siri-will-use-googles-gemini-...
1•xthe•41m ago•1 comments

Starlink activates free internet in Iran

https://www.cnn.com/2026/01/13/politics/starlink-access-iran-protests
11•Agreed3750•41m ago•0 comments

China obsesses over America's "kill line"

https://www.economist.com/china/2026/01/12/china-obsesses-over-americas-kill-line
4•mefengl•42m ago•3 comments

Python learners – review this free courseware

https://industry-python.thinkific.com/products/courses/industry-projects-with-python
1•jcasman•42m ago•1 comments

How Much of AI Labs' Research Is Safety?

https://fi-le.net/safety-blogs/
1•fi-le•42m ago•0 comments

Who Decides Who Doesn't Deserve Privacy?

https://www.troyhunt.com/who-decides-who-doesnt-deserve-privacy/
3•LorenDB•46m ago•0 comments

Tell HN: Email from Anthropic "Share your feedback on your experience"

1•selectnull•48m ago•0 comments

Observability cost drivers and levers of control

https://www.honeycomb.io/blog/how-much-should-i-spend-on-observability-pt2
2•tosh•48m ago•0 comments