
Show HN: Perspectives – I wanted AI to challenge my thinking, not validate it

https://getperspectives.app
2•Jamium•2w ago
I built Perspectives because I got tired of ChatGPT agreeing with everything I said.

Ask any LLM to "consider multiple perspectives" and you get hedged consensus. The model acknowledges trade-offs exist, then settles on a moderate position that offends nobody. Useful for summaries. Useless for decision making.

Perspectives forces disagreement. Eight personas with fundamentally incompatible frameworks debate your question through a structured protocol, then vote using Single Transferable Vote (STV) to surface where they actually land. The output is a PDF report synthesising all of it.

How it works

Blind Proposals: Each persona generates a position without seeing the others'. This prevents the anchoring problem, where early responses shape later ones, and bypasses the default sycophancy of LLMs.
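A minimal sketch of the isolation property blind proposals rely on (the persona names and the `llm` stub are hypothetical stand-ins, not the actual Perspectives internals):

```python
from concurrent.futures import ThreadPoolExecutor

PERSONAS = ["Idealist", "Pragmatist", "Skeptic", "Futurist"]  # illustrative names

def llm(prompt: str) -> str:
    # Stub standing in for a real model call; swap in any chat client.
    return f"position derived from: {prompt}"

def propose(persona: str, question: str) -> str:
    # The prompt contains only this persona and the question --
    # never another persona's answer, so there is nothing to anchor on.
    return llm(f"You are The {persona}. Take a position on: {question}")

def blind_proposals(question: str) -> dict[str, str]:
    # All proposals are generated in parallel and only revealed together,
    # after every persona has committed to a position.
    with ThreadPoolExecutor() as pool:
        futures = {p: pool.submit(propose, p, question) for p in PERSONAS}
    return {p: f.result() for p, f in futures.items()}
```

The design point is that isolation is structural: no prompt ever embeds another persona's output, so sycophancy has nothing to agree with.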

Interrogation of Blind Proposals: Proposals face structured challenges from three opposing personas. A "high-empathy" persona (e.g., The Idealist) is challenged by a "low-empathy" cluster (e.g., The Pragmatist). This reveals exactly where arguments buckle under pressure.

Discussion & Voting: Personas can optionally debate before ranking their preferences via STV. This highlights first-choice winners and preference flows rather than simple majority rule.

Analysis/Prediction Report: The final PDF structures recommendations first, followed by supporting analysis (factual background, risk assessment, evidence quality).
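The STV step in the protocol above can be sketched as an instant-runoff tally over ranked ballots. This is my simplification of full STV to the single-winner case, and the ballot format is assumed, not Perspectives' actual one:

```python
from collections import Counter

def stv_winner(ballots: list[list[str]]) -> str:
    """Single-winner STV (instant-runoff): repeatedly eliminate the
    option with the fewest first-choice votes, transferring those
    ballots to each voter's next surviving preference. Assumes
    ballots rank enough options that they don't all exhaust."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count first choices among non-exhausted ballots.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        # Eliminate the weakest option and transfer its ballots.
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]
```

For example, with ballots `[["A","C"], ["A","C"], ["B","C"], ["B","C"], ["C","B"]]`, C is eliminated first and its ballot transfers to B, which then wins with a majority; a plurality count would have left A and B tied. That transfer behaviour is what "preference flows" in the report refers to.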

Two Operational Modes

Analysis Mode ("What should we do?"): Evaluates options and surfaces trade-offs. Output is qualitative judgment.

Prediction Mode ("What will happen?"): Generates probability estimates with resolution criteria.

Feedback Loops

Most AI agent projects have no way to measure whether their outputs are actually good. Users provide subjective feedback, which is noisy and unreliable. The system optimises for seeming useful rather than being useful.

Prediction Mode creates an objective feedback loop. When a prediction resolves, I can measure accuracy.

I'm integrating Polymarket as the verification source. Run a question through Perspectives, record the predictions, compare against actual outcomes when they resolve. Over time, this builds calibration data showing which methodologies perform best for different question types.
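One standard way to score resolved predictions (my assumption of a sensible metric, not necessarily what Perspectives ships) is the Brier score, the mean squared error between forecast probability and the 0/1 outcome:

```python
def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between predicted probabilities and outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in forecasts) / len(forecasts)
```

Tracked per persona set and question type, a score like this is what turns resolved Polymarket outcomes into the calibration data described above.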

Persona Sets

Different decisions need different analytical lenses. Four built-in sets:

Philosophical (Default): Best for ethical dilemmas and strategic decisions.

Business-Focused: Best for commercial decisions.

Product-Focused: Best for product development.

Forecaster: Optimised for Prediction Mode.

Technical Details

LLM Support: Supports any OpenAI/Anthropic compatible API (Claude, OpenRouter, Ollama, Grok, etc.).
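"OpenAI-compatible" in practice means every provider accepts the same chat-completions request shape, with only the base URL, key, and model name changing. A sketch of that shared payload (model name and endpoint path are illustrative):

```python
def chat_request(model: str, user_msg: str) -> dict:
    # Minimal OpenAI-style chat-completions payload. Any compatible
    # backend (OpenRouter, Ollama, Grok, etc.) accepts this shape,
    # POSTed to {base_url}/chat/completions with the provider's key.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    }
```

Because only the transport details differ, supporting "any compatible API" mostly reduces to making the base URL, key, and model configurable.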

Web Search: Optional integration for grounding debates in recent events.

Output: Single PDF report per query.

What I'm Looking For

I've been building this solo and could use external feedback on a few things:

1. Does the blind proposal mechanism actually produce better disagreement?

2. Is the interrogation protocol overkill or useful? The structured challenge/response/verdict cycle generates rich data but adds latency (dependent on concurrency settings).

3. What decisions would you run through this?

4. Do you use ChatGPT or similar systems to make decisions?

5. Do you find "chain of thought" output useful for tracking reasoning?

Links

Perspectives: https://getperspectives.app

Dev blog: https://blog.jmatthews.uk

Example Analysis Report (Is it viable to run a nation where all laws expire after 10 years and must be re-passed?): https://drive.google.com/file/d/1hsJOWsQDAtVOqOKF6_a_Q1jYOlB...

Example Prediction Report (Will Kraken IPO by 31st March 2026?): https://drive.google.com/file/d/1m3RedFtv8lKgFqf1_rvzl8W6cTs...

Happy to answer any questions in this thread.

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
1•ykdojo•35s ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
1•gmays•1m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
1•dhruv3006•2m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
1•mariuz•2m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
1•RyanMu•6m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
1•ravenical•9m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
2•rcarmo•10m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
1•gmays•11m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
1•andsoitis•11m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
1•lysace•12m ago•0 comments

Zen Tools

http://postmake.io/zen-list
1•Malfunction92•14m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
1•carnevalem•15m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•17m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
1•rcarmo•18m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•18m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•18m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•19m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•19m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•20m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•20m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•22m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•27m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•28m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•28m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
40•bookofjoe•29m ago•13 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•29m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•30m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•31m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•31m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•32m ago•0 comments