
Project 1511 – Why we should train separate AIs – one for truth, one for art

2•Wydmuh•4mo ago
Project 1511: The AI Dichotomy Initiative.

To note at the outset: this text was written with the help of AI, as my English is not as good as I thought.

Why I think we should split AI into two distinct, non-overlapping systems (a rough dispatch sketch follows the two definitions):

1. Kalkul (Logic Engine)

   - Purpose: pure factual accuracy (STEM, law, medicine).

   - Rules: No metaphors, no "I think" – only verifiable data.  

   - Example Input: "Calculate quantum decoherence times for qubits." → Output: Equations + peer-reviewed sources.
2. Bard (Creative Agent)

   - Purpose: Unconstrained abstraction (art, philosophy, emotion).  

   - Rules: No facts, only meaning-making. Flagged disclaimers (e.g., "This is poetry, not truth").  

   - Example Input: "Describe grief as a physical space." → Output: "A room where the walls are made of old phone calls..."
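
To make the split concrete, here is a minimal dispatch sketch in Python. Everything in it is hypothetical: the `Mode` names, the prompt texts, and the `generate` callable stand in for whatever two separately trained models would actually expose.

```python
from enum import Enum

class Mode(Enum):
    FACT = "kalkul"   # logic engine: verifiable answers only
    DREAM = "bard"    # creative agent: meaning-making, flagged as non-factual

# Hypothetical system prompts. In the full proposal these would be two
# separately trained models, not one model steered by two prompts.
SYSTEM_PROMPTS = {
    Mode.FACT: "Answer only with verifiable data and cited sources. No metaphors, no opinions.",
    Mode.DREAM: "Answer with unconstrained abstraction. Every reply is poetry, not truth.",
}

def dispatch(query: str, mode: Mode, generate) -> str:
    """Route a query to the selected system and tag its output.

    `generate` stands in for whichever inference call the chosen backend exposes;
    it is assumed to accept a system prompt and a user prompt and return text.
    """
    reply = generate(system=SYSTEM_PROMPTS[mode], prompt=query)
    tag = "[FACT]" if mode is Mode.FACT else "[DREAM] This is poetry, not truth."
    return f"{tag} {reply}"
```

A UI toggle would simply flip `mode` between `Mode.FACT` and `Mode.DREAM` before calling `dispatch`.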
The 8+2 Rule: Why Forcing Errors in Creative AI ("Bard") Makes It Stronger

We’re trapped in a loop: we train AI to "never" make mistakes, then wonder why it’s creatively sterile. What if we did the opposite?

The 8+2 Rule for "Bard" (Creative AI)

For every 10 responses, Bard generates:

   - 8 "logically sound" answers (baseline).

   - 2 *intentional errors* (wrong conclusions, flawed syllogisms, or "poetic" math).

Errors are tagged (e.g., "Fallacy: Affirming the consequent") but not corrected. Users dissect errors to see how Bard breaks logic, and why that is useful; a rough sampling sketch follows the example below.

Example: Question = "Explain democracy"

8 Correct Responses:

1. "A system where power derives from popular vote."

2. "Rule by majority, with protections for minorities."

[...]

2 Intentional Errors:

1. "Democracy is when two wolves and a sheep vote on dinner."

   - Error: False equivalence (politics ≠ predation).  

   - Value: Exposes fears of tyranny of the majority.  
2. "Democracy died in 399 BC when Socrates drank hemlock."

   - Error: Post hoc fallacy.  

   - Value: Questions elitism vs. popular will.  
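
As a rough illustration of the mechanics (not a claim about how any real model samples), this Python sketch assembles one 8+2 batch from two candidate pools, tagging the planted errors with their fallacy label but leaving them uncorrected:

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    text: str
    fallacy: Optional[str] = None  # None = logically sound; otherwise the tagged error type

def eight_plus_two(sound, flawed):
    """Assemble one 8+2 batch: 8 sound answers plus 2 tagged, uncorrected errors.

    `sound` is a list of strings; `flawed` is a list of (text, fallacy_label) pairs.
    Both pools are assumed to come from separate sampling passes of the creative
    model; producing them is out of scope for this sketch.
    """
    batch = [Response(t) for t in random.sample(sound, 8)]
    batch += [Response(t, fallacy=f) for t, f in random.sample(flawed, 2)]
    random.shuffle(batch)  # don't always put the planted errors at the end
    return batch

def render(batch):
    """List each answer, appending the fallacy tag where one exists."""
    lines = []
    for r in batch:
        tag = f' [Fallacy: {r.fallacy}]' if r.fallacy else ""
        lines.append(f"- {r.text}{tag}")
    return "\n".join(lines)

# Toy usage with the democracy example above
print(render(eight_plus_two(
    sound=["A system where power derives from popular vote."] * 8,
    flawed=[
        ("Democracy is when two wolves and a sheep vote on dinner.", "False equivalence"),
        ("Democracy died in 399 BC when Socrates drank hemlock.", "Post hoc"),
    ],
)))
```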
Why This Works

Trains users, not just AI:

   - Spotting Bard’s errors becomes a "game" (like debugging code).

   - Users learn logic faster by seeing broken examples (studies show +30% retention vs. dry lectures).

Bard’s "personality" emerges from flaws:

   - Its "voice" isn’t sanitized; errors reveal biases (e.g., libertarian vs. collectivist slant).

Safeguards "Kalkul":

   - By confining errors to Bard, Kalkul stays *pristine* (no hallucinations in medical advice).

3. Hybrid Bridge (Optional Legacy Mode)

   - Purpose: Temporary transition tool.  

   - Mechanics: ONLY merges pre-generated outputs from Kalkul/Bard without adding new content (a minimal merge sketch follows).
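
A minimal sketch of what "merge without adding new content" could mean in practice; the labels and layout are assumptions, the point is only that the bridge contains no generative step of its own:

```python
def hybrid_bridge(kalkul_output: str, bard_output: str) -> str:
    """Legacy-mode merge: juxtapose the two pre-generated outputs verbatim.

    The bridge deliberately has no model call inside it; it only labels and
    concatenates what Kalkul and Bard already produced.
    """
    return (
        "[FACT / Kalkul]\n" + kalkul_output.strip() + "\n\n"
        "[DREAM / Bard] This is poetry, not truth.\n" + bard_output.strip()
    )
```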
Why It Matters

- Efficiency: 40-60% lower compute costs (no redundant "bridging" layers).

- Trust: Eliminates hallucination risks in critical domains.

- Creative Freedom: Bard explores absurdity without algorithmic guilt.

- Education: Users learn to distinguish logic from artistry.

Technical Implementation

- Separate fine-tuning datasets:

  - Kalkul: arXiv, textbooks, structured databases.  

  - Bard: Surrealist literature, oral storytelling traditions.  
- UI with a physical toggle (or app tabs): `[FACT]` / `[DREAM]` / `[LEGACY]` (a hypothetical configuration sketch follows).
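
To tie the pieces together, a hypothetical configuration sketch; the corpus names are placeholders for the sources listed above, and the decoding settings are just one plausible way to bias each system toward its role:

```python
# Hypothetical per-system configuration; corpus names are placeholders,
# not real dataset identifiers.
SPLIT_CONFIG = {
    "kalkul": {
        "finetune_corpora": ["arxiv_papers", "stem_textbooks", "structured_databases"],
        "decoding": {"temperature": 0.1},  # favor determinism for factual answers
        "ui_tab": "[FACT]",
    },
    "bard": {
        "finetune_corpora": ["surrealist_literature", "oral_storytelling_traditions"],
        "decoding": {"temperature": 1.2},  # favor diversity for creative answers
        "ui_tab": "[DREAM]",
    },
    "legacy_bridge": {
        "finetune_corpora": [],            # merge-only, never trained to generate
        "ui_tab": "[LEGACY]",
    },
}
```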

Cultural Impact

- For Science: Restores faith in AI as a precision tool.

- For Art: Unleashes AI-aided creativity without "accuracy" constraints.

- For Society: Models intellectual honesty by not pretending opposites can merge.

Call to Action

I seek:

- Developers to prototype split models (e.g., fork DeepSeek-MoE).

- Philosophers to refine ethical boundaries.

- Investors who value specialization over artificial generalism.

Project 1511 isn’t an upgrade—it’s a rebellion against AI’s identity crisis.

Comments

henjodottech•4mo ago
Cool idea. I think LLMs aren’t built for intentional error. They’re wired to optimize meaning—next tokens chosen from attention scores, beam search. You can’t just flip logic and get creativity. You either get coherence or gibberish. If you want poetic mistakes, train on surreal input—but I wouldn’t expect avant-garde results.
Wydmuh•3mo ago
It was just my idea, assuming that the biggest players will not give a f..ck. I deliberately use the 8+2 rule with many different AIs, asking each the same question. Even with the same wording, the answers come out very different and interesting.