
"Design Me a Highly Resilient Database"

https://nikogura.com/DatabaseDesign.html
1•donutshop•1m ago•0 comments

To the Polypropylene Makers

https://www.lesswrong.com/posts/HQTueNS4mLaGy3BBL/here-s-to-the-polypropylene-makers
1•raldi•6m ago•0 comments

Claude Is Alive, Company Warns AI Model May Be Conscious, It's over [video]

https://www.youtube.com/watch?v=-SVPjEF0ZW8
1•cable2600•10m ago•0 comments

Netdata is a seriously impressive server monitoring tool

https://thenewstack.io/netdata-is-a-seriously-impressive-server-monitoring-tool-to-keep-you-up-to...
1•gtzi•10m ago•0 comments

Open Creation and its Enemies [pdf]

https://files.libcom.org/files/2023-01/OpenCreationAndItsEnemies.pdf
1•jruohonen•13m ago•0 comments

Reverse engineering a DOS game with no source code using Codex 5.4

https://twitter.com/ammaar/status/2030392563534893381
3•asronline•16m ago•0 comments

Agentic Coding for Non-Vibe Coders

https://theasymptotic.substack.com/p/agentic-coding-for-non-vibe-coders
1•tipoffdosage904•21m ago•2 comments

Show HN: Render Claude Code and Codex Transcripts as Browsable HTML

https://github.com/forhadahmed/ai-transcript
3•forhadahmed•24m ago•0 comments

Oracle and OpenAI scrap deal to expand flagship Texas data centre

https://www.ft.com/content/2fa83bbf-abf2-43f1-b2f0-84a1391150b9
4•petethomas•24m ago•0 comments

Have we professional C-suites lost the battle against vibe-leadership?

2•Bridged7756•25m ago•0 comments

What Production AI APIs Need Beyond Response = LLM(prompt)

https://medium.com/@lei-ye/what-breaks-after-your-ai-demo-works-638ac910f9fa
2•leiishta•25m ago•1 comment

Sem – Semantic version control. Entity-level diffs on top of Git

https://github.com/ataraxy-labs/sem
2•pabs3•26m ago•0 comments

The Vienna Method in Amsterdam

https://watermark02.silverchair.com/desi_a_00379.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf...
1•jruohonen•30m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•kermatt•32m ago•1 comment

One Year of Claude Code

https://www.maxghenis.com/blog/my-claude-code-config/
1•ankitg12•33m ago•0 comments

Rising star chip scientist Jiang Jianfeng leaves MIT for Peking University

https://www.scmp.com/news/china/science/article/3345553/rising-star-chip-scientist-jiang-jianfeng...
4•mikhael•37m ago•0 comments

Push for $40 smartphones builds momentum, but still faces cost hurdles

https://techcrunch.com/2026/03/07/push-for-40-smartphones-builds-momentum-but-still-faces-cost-hu...
2•jnord•40m ago•0 comments

Explosion reported outside US embassy in Oslo, police say

https://www.bbc.com/news/articles/c5yjegg892lo
3•petethomas•42m ago•0 comments

Show HN: Booklet AI – AI-Powered Digital Flipbook Creator

https://bookletai.org/index.html
1•feiyu123456•44m ago•0 comments

Show HN: HireSignal – discover tech hiring signals from social posts (waitlist)

https://www.hiresignal.pro/
1•startupYu•54m ago•0 comments

A blog post series about big integer arithmetic

https://theblessedmachine.substack.com/p/big-integers-writing-and-optimizing
1•Tommyrexx•55m ago•0 comments

Show HN: Strata – 31-43% cheaper Claude Code reads via entropy, no parser

https://github.com/noopz/strata
1•noopz_•56m ago•0 comments

Managers have no human rights (2024)

https://yosefk.com/blog/managers-have-no-human-rights.html
4•signa11•57m ago•0 comments

Beyond the CPU: Why Your Next Computer Needs an NPU

https://techlife.blog/posts/beyond-the-cpu-why-your-next-computer-needs-an-npu/
1•clarkmaxwell•1h ago•0 comments

How to Live Forever

https://internetguy.dev/posts/live-forever/
1•internetguy•1h ago•1 comment

Experiment That Predicted How AI Agents Would Cooperate

https://pub.towardsai.net/information-topology-in-multi-agent-systems-cb925c5b86d9
1•erenkaratas•1h ago•1 comment

Show HN: Cryptographic receipts for AI code changes (pip install titanate)

https://github.com/Rehanrana11/titan-gate-public
1•rmasoodx22•1h ago•1 comment

Show HN: Beecon – Infrastructure as Intent, open-source IaC built for AI agents

https://beecon.sh
2•gtlpanda•1h ago•0 comments

Show HN: Make AI and automation pipelines fail-closed

https://github.com/OneInX/Manifest-InX-EBS
1•oneinx•1h ago•1 comment

Show HN: Scan0tron – AI screen capture that auto-fills forms ($49)

https://jrdconnect.com
1•jaydurangodev•1h ago•0 comments

Project 1511 – Why we should train separate AIs – one for truth, one for art

2•Wydmuh•9mo ago
Project 1511: The AI Dichotomy Initiative.

To note up front: this text was written with the help of AI, as my English is not as good as I thought.

Why I think we should split AI into two distinct, non-overlapping systems:

1. Kalkul (Logic Engine)

   - Purpose: pure factual accuracy (STEM, law, medicine).

   - Rules: No metaphors, no "I think" – only verifiable data.  

   - Example Input: "Calculate quantum decoherence times for qubits." → Output: Equations + peer-reviewed sources.
2. Bard (Creative Agent)

   - Purpose: Unconstrained abstraction (art, philosophy, emotion).  

   - Rules: No facts, only meaning-making. Flagged disclaimers (e.g., "This is poetry, not truth").  

   - Example Input: "Describe grief as a physical space." → Output: "A room where the walls are made of old phone calls..."
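
To make the dichotomy concrete, here is a minimal sketch of the split as a response schema. All names here (`Mode`, `Response`, `validate`) are hypothetical illustrations, not an existing API:

```python
from dataclasses import dataclass, field
from enum import Enum

class Mode(Enum):
    KALKUL = "FACT"   # logic engine: verifiable claims only
    BARD = "DREAM"    # creative agent: meaning-making only

@dataclass
class Response:
    mode: Mode
    text: str
    sources: list[str] = field(default_factory=list)  # expected for KALKUL
    disclaimer: str | None = None                     # expected for BARD

def validate(r: Response) -> Response:
    """Enforce the dichotomy: facts must cite, dreams must disclaim."""
    if r.mode is Mode.KALKUL and not r.sources:
        raise ValueError("Kalkul output rejected: no verifiable sources attached")
    if r.mode is Mode.BARD and r.disclaimer is None:
        r.disclaimer = "This is poetry, not truth."
    return r
```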
The 8+2 Rule: Why Forcing Errors in Creative AI ("Bard") Makes It Stronger

We're trapped in a loop: we train AI to "never" make mistakes, then wonder why it's creatively sterile. What if we did the opposite?

The 8+2 Rule for "Bard" (Creative AI)

For every 10 responses, Bard generates:

- 8 "logically sound" answers (baseline).

- 2 intentional errors (wrong conclusions, flawed syllogisms, or "poetic" math).

Errors are tagged (e.g., "Fallacy: affirming the consequent") but not corrected. Users dissect the errors to see how Bard breaks logic, and why that is useful.

Example: Question = "Explain democracy"

8 Correct Responses:

1. "A system where power derives from popular vote."

2. "Rule by majority, with protections for minorities."

[...]

2 Intentional Errors:

1. "Democracy is when two wolves and a sheep vote on dinner."

   - Error: False equivalence (politics ≠ predation).  

   - Value: Exposes fears of tyranny of the majority.  
2. "Democracy died in 399 BC when Socrates drank hemlock."

   - Error: Post hoc fallacy.  

   - Value: Questions elitism vs. popular will.  
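
A minimal sketch of how 8+2 sampling could be wired up. Everything here is a hypothetical illustration; `sound_answer` and `tagged_error` stand in for calls into the Bard model under different decoding constraints:

```python
import random

def sound_answer(question: str) -> dict:
    # stand-in for a logically sound Bard generation
    return {"text": f"[sound answer to: {question}]", "error_tag": None}

def tagged_error(question: str) -> dict:
    # stand-in for a deliberately flawed generation; tagged, never corrected
    fallacy = random.choice(["false equivalence", "post hoc",
                             "affirming the consequent"])
    return {"text": f"[flawed answer to: {question}]",
            "error_tag": f"Fallacy: {fallacy}"}

def eight_plus_two(question: str) -> list[dict]:
    """For every 10 responses: 8 logically sound, 2 intentional tagged errors."""
    batch = [sound_answer(question) for _ in range(8)]
    batch += [tagged_error(question) for _ in range(2)]
    random.shuffle(batch)  # mix the errors in so users have to spot them
    return batch
```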
Why This Works

Trains users, not just AI:

- Spotting Bard's errors becomes a "game" (like debugging code).

- Users learn logic faster by seeing broken examples (studies show +30% retention vs. dry lectures).

Bard's "personality" emerges from flaws:

- Its "voice" isn't sanitized; errors reveal biases (e.g., a libertarian vs. collectivist slant).

Safeguards Kalkul:

- By confining errors to Bard, Kalkul stays pristine (no hallucinations in medical advice).

3. Hybrid Bridge (Optional Legacy Mode)

   - Purpose: Temporary transition tool.  

   - Mechanics: ONLY merges pre-generated outputs from Kalkul/Bard without adding new content.  
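
One way the "merge only, add nothing" constraint could be enforced, as a sketch (the function name and labels are made up here):

```python
def hybrid_bridge(kalkul_out: str, bard_out: str) -> str:
    """Legacy mode: label and join two pre-generated outputs verbatim.
    No model call happens here, so the bridge cannot add new content."""
    return (f"[FACT]\n{kalkul_out}\n\n"
            f"[DREAM] (This is poetry, not truth.)\n{bard_out}")
```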
Why It Matters

- Efficiency: 40-60% lower compute costs (no redundant "bridging" layers).

- Trust: eliminates hallucination risks in critical domains.

- Creative Freedom: Bard explores absurdity without algorithmic guilt.

- Education: Users learn to distinguish logic from artistry.

Technical Implementation

- Separate fine-tuning datasets:

  - Kalkul: arXiv, textbooks, structured databases.  

  - Bard: Surrealist literature, oral storytelling traditions.  
- UI with a physical toggle (or app tabs): `[FACT]` / `[DREAM]` / `[LEGACY]`.
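
A sketch of the routing behind that toggle, assuming two separately fine-tuned models; `kalkul_model` and `bard_model` are hypothetical stand-ins:

```python
def kalkul_model(prompt: str) -> str:
    return f"[equations + peer-reviewed sources for: {prompt}]"  # stand-in

def bard_model(prompt: str) -> str:
    return f"[surreal riff on: {prompt}]"  # stand-in

def route(prompt: str, toggle: str) -> str:
    """Dispatch on the [FACT] / [DREAM] / [LEGACY] toggle."""
    if toggle == "FACT":
        return kalkul_model(prompt)
    if toggle == "DREAM":
        return "(This is poetry, not truth.)\n" + bard_model(prompt)
    if toggle == "LEGACY":
        # Hybrid Bridge: join the two pre-generated outputs, adding nothing
        return route(prompt, "FACT") + "\n\n" + route(prompt, "DREAM")
    raise ValueError(f"unknown toggle: {toggle}")
```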

Cultural Impact

- For Science: Restores faith in AI as a precision tool.

- For Art: Unleashes AI-aided creativity without "accuracy" constraints.

- For Society: Models intellectual honesty by not pretending opposites can merge.

Call to Action

I seek:

- Developers to prototype split models (e.g., fork DeepSeek-MoE).

- Philosophers to refine ethical boundaries.

- Investors who value specialization over artificial generalism.

Project 1511 isn’t an upgrade—it’s a rebellion against AI’s identity crisis.

Comments

henjodottech•9mo ago
Cool idea. I think LLMs aren’t built for intentional error. They’re wired to optimize meaning—next tokens chosen from attention scores, beam search. You can’t just flip logic and get creativity. You either get coherence or gibberish. If you want poetic mistakes, train on surreal input—but I wouldn’t expect avant-garde results.
Wydmuh•9mo ago
It was just my idea, assuming the biggest players will not give a f..ck. I have been deliberately using the 8+2 rule with many different AIs, asking each the same question. Even with the same wording, the answers come out very different and interesting.