
Project 1511 – Why we should train separate AIs – one for truth, one for art

2•Wydmuh•8mo ago
Project 1511: The AI Dichotomy Initiative.

Up front I should note that this text was written with the help of AI; my English is not as good as I thought.

Why I think we should split AI into two distinct, non-overlapping systems:

1. Kalkul (Logic Engine)

   - Purpose: pure factual accuracy (STEM, law, medicine).  

   - Rules: no metaphors, no "I think" – only verifiable data.  

   - Example Input: "Calculate quantum decoherence times for qubits." → Output: equations + peer-reviewed sources.  
2. Bard (Creative Agent)

   - Purpose: Unconstrained abstraction (art, philosophy, emotion).  

   - Rules: no facts, only meaning-making, with flagged disclaimers (e.g., "This is poetry, not truth").  

   - Example Input: "Describe grief as a physical space." → Output: "A room where the walls are made of old phone calls..."
The 8+2 Rule: Why Forcing Errors in Creative AI ("Bard") Makes It Stronger

We're trapped in a loop: we train AI to never make mistakes, then wonder why it's creatively sterile. What if we did the opposite?

The 8+2 Rule for "Bard" (Creative AI)

For every 10 responses, Bard generates:

   - 8 logically sound answers (baseline).  

   - 2 intentional errors (wrong conclusions, flawed syllogisms, or "poetic" math).  

Errors are tagged (e.g., "Fallacy: affirming the consequent") but not corrected. Users dissect the errors to see how Bard breaks logic – and why that is useful.

Example: Question = "Explain democracy"

8 Correct Responses:

1. "A system where power derives from popular vote."

2. "Rule by majority, with protections for minorities."

[...]

2 Intentional Errors:

1. "Democracy is when two wolves and a sheep vote on dinner."

   - Error: False equivalence (politics ≠ predation).  

   - Value: Exposes fears of tyranny of the majority.  
2. "Democracy died in 399 BC when Socrates drank hemlock."

   - Error: Post hoc fallacy.  

   - Value: Questions elitism vs. popular will.  
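The 8+2 mechanic above can be sketched in a few lines of Python. Everything here is hypothetical: `BardResponse`, `eight_plus_two`, and the fallacy tags are illustrative names for the proposal, not an existing API; the answer strings just paraphrase the democracy example.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class BardResponse:
    text: str
    error_tag: Optional[str] = None  # None means logically sound

    @property
    def is_intentional_error(self) -> bool:
        return self.error_tag is not None

def eight_plus_two(sound, flawed):
    """Mix 8 sound answers with 2 tagged-but-uncorrected errors per batch of 10."""
    batch = [BardResponse(text) for text in random.sample(sound, 8)]
    batch += [BardResponse(text, tag) for text, tag in random.sample(flawed, 2)]
    random.shuffle(batch)  # errors are interleaved, not appended at the end
    return batch

# Illustrative inputs, paraphrasing the "Explain democracy" example above.
sound_answers = [f"Correct answer #{i}" for i in range(1, 9)]
flawed_answers = [
    ("Democracy is when two wolves and a sheep vote on dinner.", "False equivalence"),
    ("Democracy died in 399 BC when Socrates drank hemlock.", "Post hoc fallacy"),
]
batch = eight_plus_two(sound_answers, flawed_answers)
```

The key design point is that the error tag travels with the response instead of being stripped out: the user sees the flawed answer and its label side by side, which is what makes the dissection "game" possible.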
Why This Works

Trains users, not just AI:

   - Spotting Bard's errors becomes a game (like debugging code).  

   - Users learn logic faster by seeing broken examples (studies show +30% retention vs. dry lectures).  

Bard's "personality" emerges from its flaws:

   - Its voice isn't sanitized – errors reveal biases (e.g., a libertarian vs. collectivist slant).  

Safeguards Kalkul:

   - By confining errors to Bard, Kalkul stays pristine (no hallucinations in medical advice).  

3. Hybrid Bridge (Optional Legacy Mode)

   - Purpose: temporary transition tool.  

   - Mechanics: only merges pre-generated outputs from Kalkul/Bard without adding new content.  
Why It Matters

- Efficiency: 40-60% lower compute costs (no redundant "bridging" layers).

- Trust: eliminates hallucination risks in critical domains.

- Creative Freedom: Bard explores absurdity without algorithmic guilt.

- Education: Users learn to distinguish logic from artistry.

Technical Implementation

- Separate fine-tuning datasets:

  - Kalkul: arXiv, textbooks, structured databases.  

  - Bard: surrealist literature, oral storytelling traditions.  

- UI with a physical toggle (or app tabs): `[FACT]` / `[DREAM]` / `[LEGACY]`.
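The toggle could sit in front of a trivial dispatcher. A minimal sketch, assuming two stand-in functions for the real models (the names `kalkul`, `bard`, `respond`, and the output strings are all hypothetical placeholders for the proposal):

```python
from enum import Enum

class Mode(Enum):
    FACT = "kalkul"    # logic engine only
    DREAM = "bard"     # creative agent only
    LEGACY = "hybrid"  # bridge: merges pre-generated outputs, adds nothing

def kalkul(prompt: str) -> str:
    # Stand-in for the fact-only model: verifiable data, no metaphors.
    return f"[FACT] {prompt} -> equations + peer-reviewed sources"

def bard(prompt: str) -> str:
    # Stand-in for the creative model, with its mandatory disclaimer.
    return f"[DREAM] {prompt} -> (this is poetry, not truth)"

def respond(prompt: str, mode: Mode) -> str:
    if mode is Mode.FACT:
        return kalkul(prompt)
    if mode is Mode.DREAM:
        return bard(prompt)
    # LEGACY: concatenate the two pre-generated outputs without new content,
    # matching the Hybrid Bridge's "merge only" rule.
    return kalkul(prompt) + "\n" + bard(prompt)
```

Note that the legacy branch never generates anything of its own, which is the whole point of the bridge: it cannot introduce a hallucination that neither source model produced.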

Cultural Impact

- For Science: Restores faith in AI as a precision tool.

- For Art: Unleashes AI-aided creativity without "accuracy" constraints.

- For Society: Models intellectual honesty by not pretending opposites can merge.

Call to Action

I seek:

- Developers to prototype split models (e.g., fork DeepSeek-MoE).

- Philosophers to refine ethical boundaries.

- Investors who value specialization over artificial generalism.

Project 1511 isn’t an upgrade—it’s a rebellion against AI’s identity crisis.

Comments

henjodottech•8mo ago
Cool idea. I think LLMs aren’t built for intentional error. They’re wired to optimize meaning—next tokens chosen from attention scores, beam search. You can’t just flip logic and get creativity. You either get coherence or gibberish. If you want poetic mistakes, train on surreal input—but I wouldn’t expect avant-garde results.
Wydmuh•8mo ago
It was just my idea, assuming the biggest players won't give a f..ck. I've been deliberately applying the 8+2 rule with many different AIs, asking each of them the same question. Even with the same prompt, the answers come out very different and interesting.