frontpage.
news · newest · ask · show · jobs


Discuss – Do AI agents deserve all the hype they are getting?

4•MicroWagie•2h ago•1 comment

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

48•UmYeahNo•1d ago•30 comments

LLMs are powerful, but enterprises are deterministic by nature

3•prateekdalal•6h ago•3 comments

Ask HN: Non AI-obsessed tech forums

28•nanocat•17h ago•25 comments

Ask HN: Ideas for small ways to make the world a better place

16•jlmcgraw•20h ago•20 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

44•Invictus0•1d ago•11 comments

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•5d ago•519 comments

Ask HN: Who is hiring? (February 2026)

313•whoishiring•5d ago•513 comments

Ask HN: Non-profit, volunteers run org needs CRM. Is Odoo Community a good sol.?

2•netfortius•15h ago•1 comment

AI Regex Scientist: A self-improving regex solver

7•PranoyP•22h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

18•jchung•2d ago•13 comments

Ask HN: Why LLM providers sell access instead of consulting services?

5•pera•1d ago•13 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•1d ago•7 comments

Ask HN: Is it just me or are most businesses insane?

8•justenough•1d ago•7 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•4d ago•122 comments

Kernighan on Programming

170•chrisjj•5d ago•61 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•17h ago•2 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•3 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•2d ago•1 comment

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Have you been fired because of AI?

17•s-stude•4d ago•15 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•4d ago•1 comment

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•6 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Given a sufficiently complex argument, people deduce anything they like

4•ricardo81•9mo ago
Is there a principle for such a thing?

Anecdote: people who choose to believe in something can search the web until they find something agreeable, then treat it as support for the conclusion they already held.

The person may be reasonably objective, but given enough technobabble they'll still arrive at the conclusion they started with.

Comments

90s_dev•9mo ago
> Is there a principle for such a thing?

Confirmation bias.

ricardo81•9mo ago
Ah, yes.

What about the "sufficiently complex" angle?

Jtsummers•9mo ago
Cherry picking. They find and select the evidence that bolsters their position while disregarding evidence to the contrary. This gets easier with a more complex topic, where there is more evidence available for both sides of a debate.
didgetmaster•9mo ago
This tactic is especially effective on a hotly contested political topic where nearly half the country favors one side and the other half takes the opposite stance.

Two reasonable people can look at all the available evidence and come to completely opposite conclusions. If you have a clear bias for one side before weighing the evidence, you might come away concluding that people who believe the opposite must be crazy.

beardyw•9mo ago
It seems to apply to AI as well, so don't be too judgemental.
gogurt2000•9mo ago
To me that sounds like sophistry (unintentional or not). Wikipedia summarizes it nicely:

"Sophistry" is today used as a pejorative for a superficially sound but intellectually dishonest argument in support of a foregone conclusion.

Loosely related: the 1960s sci-fi novel "The Moon Is a Harsh Mistress" explored the idea of computers with AI powerful enough to construct a logically persuasive argument for any stance by cherry picking and manipulating the facts. In the book I think those computers were called Sophists, which seems particularly relevant today. You can absolutely ask an LLM to construct an argument supporting any stance and, just like in the book, it can be used to produce misinformation and propaganda on a scale that makes it difficult for humans to discern the truth.

bjourne•9mo ago
Can you give an example?