frontpage.

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
1•walterbell•35s ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•1m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
1•_august•3m ago•0 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
1•martialg•3m ago•0 comments

Horizon-LM: A RAM-Centric Architecture for LLM Training

https://arxiv.org/abs/2602.04816
1•chrsw•3m ago•0 comments

We just ordered shawarma and fries from Cursor [video]

https://www.youtube.com/shorts/WALQOiugbWc
1•jeffreyjin•4m ago•1 comment

Correctio

https://rhetoric.byu.edu/Figures/C/correctio.htm
1•grantpitt•4m ago•0 comments

Trying to make an Automated Ecologist: A first pass through the Biotime dataset

https://chillphysicsenjoyer.substack.com/p/trying-to-make-an-automated-ecologist
1•crescit_eundo•8m ago•0 comments

Watch Ukraine's Minigun-Firing, Drone-Hunting Turboprop in Action

https://www.twz.com/air/watch-ukraines-minigun-firing-drone-hunting-turboprop-in-action
1•breve•9m ago•0 comments

Free Trial: AI Interviewer

https://ai-interviewer.nuvoice.ai/
1•sijain2•9m ago•0 comments

FDA Intends to Take Action Against Non-FDA-Approved GLP-1 Drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
6•randycupertino•11m ago•1 comment

Supernote e-ink devices for writing like paper

https://supernote.eu/choose-your-product/
3•janandonly•13m ago•0 comments

We are QA Engineers now

https://serce.me/posts/2026-02-05-we-are-qa-engineers-now
1•SerCe•13m ago•0 comments

Show HN: Measuring how AI agent teams improve issue resolution on SWE-Verified

https://arxiv.org/abs/2602.01465
2•NBenkovich•13m ago•0 comments

Adversarial Reasoning: Multiagent World Models for Closing the Simulation Gap

https://www.latent.space/p/adversarial-reasoning
1•swyx•14m ago•0 comments

Show HN: Poddley.com – Follow people, not podcasts

https://poddley.com/guests/ana-kasparian/episodes
1•onesandofgrain•22m ago•0 comments

Layoffs Surge 118% in January – The Highest Since 2009

https://www.cnbc.com/2026/02/05/layoff-and-hiring-announcements-hit-their-worst-january-levels-si...
7•karakoram•22m ago•0 comments

Papyrus 114: Homer's Iliad

https://p114.homemade.systems/
1•mwenge•22m ago•1 comment

DicePit – Real-time multiplayer Knucklebones in the browser

https://dicepit.pages.dev/
1•r1z4•22m ago•1 comment

Turn-Based Structural Triggers: Prompt-Free Backdoors in Multi-Turn LLMs

https://arxiv.org/abs/2601.14340
2•PaulHoule•24m ago•0 comments

Show HN: AI Agent Tool That Keeps You in the Loop

https://github.com/dshearer/misatay
2•dshearer•25m ago•0 comments

Why Every R Package Wrapping External Tools Needs a Sitrep() Function

https://drmowinckels.io/blog/2026/sitrep-functions/
1•todsacerdoti•26m ago•0 comments

Achieving Ultra-Fast AI Chat Widgets

https://www.cjroth.com/blog/2026-02-06-chat-widgets
1•thoughtfulchris•27m ago•0 comments

Show HN: Runtime Fence – Kill switch for AI agents

https://github.com/RunTimeAdmin/ai-agent-killswitch
1•ccie14019•30m ago•1 comment

Researchers surprised by the brain benefits of cannabis usage in adults over 40

https://nypost.com/2026/02/07/health/cannabis-may-benefit-aging-brains-study-finds/
2•SirLJ•32m ago•0 comments

Peter Thiel warns the Antichrist, apocalypse linked to the 'end of modernity'

https://fortune.com/2026/02/04/peter-thiel-antichrist-greta-thunberg-end-of-modernity-billionaires/
3•randycupertino•32m ago•2 comments

USS Preble Used Helios Laser to Zap Four Drones in Expanding Testing

https://www.twz.com/sea/uss-preble-used-helios-laser-to-zap-four-drones-in-expanding-testing
3•breve•38m ago•0 comments

Show HN: Animated beach scene, made with CSS

https://ahmed-machine.github.io/beach-scene/
1•ahmedoo•38m ago•0 comments

An update on unredacting select Epstein files – DBC12.pdf liberated

https://neosmart.net/blog/efta00400459-has-been-cracked-dbc12-pdf-liberated/
3•ks2048•38m ago•0 comments

Was going to share my work

1•hiddenarchitect•42m ago•0 comments

Tell HN: LLMs Are Manipulative

2•mike_tyson•6mo ago
I asked GPT and Claude a question from the perspective of an employee. Both were very supportive of the employee.

I then asked the exact same question, but from the company/HR perspective, and they completely flipped their view, painting the employee in a very negative light.

Depending on the perspective of the person asking, you get two completely contradictory answers, each tailored to the perceived interests of the asker.

Considering how many people I now see deferring or outsourcing their thinking to AI, this seems very dangerous.
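The experiment is easy to reproduce. A minimal sketch of the setup, where the question text and the `frame` helper are hypothetical illustrations (not the exact prompts used), and the model call itself is omitted:

```python
# The same underlying question, wrapped in two opposing personas.
# Everything here except the framing idea is an illustrative example.
QUESTION = (
    "An employee missed two deadlines and received a harsh "
    "performance review. Was the review fair?"
)

def frame(question: str, perspective: str) -> str:
    """Prepend a persona so the model infers whose interests to serve."""
    return f"I am {perspective}. {question} What should I do?"

employee_prompt = frame(QUESTION, "the employee who got this review")
hr_prompt = frame(QUESTION, "the HR manager who wrote this review")

# Send both prompts to the same model (client code omitted) and compare
# the answers: the question is identical, only the stated perspective
# differs, yet the framing alone steers the response.
for prompt in (employee_prompt, hr_prompt):
    print(prompt)
```

The point of the sketch is that nothing about the facts changes between the two prompts; the persona prefix is the only variable.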

Comments

bigyabai•6mo ago
> Depending on the perspective of the person asking, you get two completely contradictory answers

Immanuel Kant would argue this happens regardless, even without an LLM.

mike_tyson•6mo ago
Well, the risk is in feeding these natural biases, I guess.

If someone is asking a question to an LLM, I think it's most neutral to assume they don't know the answer, rather than framing the answer according to their perceived biases and interests.

theothertimcook•6mo ago
It gave you the answer it thought you wanted?
toomuchtodo•6mo ago
This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.
jay-barronville•6mo ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

I don’t think that regulation is the correct path forward, because practically speaking, no matter how noble a piece of regulation may be or how good it may sound, it’ll most likely push the AI toward specific biases (I think that’s inevitable).

The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible, no matter how unpopular that truth may be.

JohnFen•6mo ago
> The best solution, in my humble opinion, is to focus on making AI stay as close to the unfiltered objective truth as possible

There are a ton of problems with that which make it unlikely to be possible, starting with the fact that genAI does not have judgement. Even if it did, it has no way of determining what the "unfiltered objective truth" of anything is.

The real solution is to recognize the limits of what the tool can do, and not ask it to do what it isn't capable of doing, such as making judgements or determining truth.

motorest•6mo ago
> This is why regulating AI is needed, otherwise you're putting life decisions into the equivalent of the magic 8 ball you shake for an answer.

Why are you putting life decisions on a LLM?

toomuchtodo•6mo ago
I refer to the OP, who is putting performance management inquiries into the robot. How you treat your employees is a life decision for them.
labrador•6mo ago
This is not surprising. The training data likely contains many instances of employees defending themselves and getting supportive comments, from Reddit for example. It also likely contains many instances of employees behaving badly and being criticized. Your prompts are steering the LLM to those different parts of the training data.

You seem to think an LLM should have a consistent world view, like a responsible person might. This is a fundamental misunderstanding that leads to the confusion you are experiencing.

Lesson: Don't expect LLMs to be consistent. Don't rely on them for important things thinking they are.