frontpage.

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•14m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
2•init0•20m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•20m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•23m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•26m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•36m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•36m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•41m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•45m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•46m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•49m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•49m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•52m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•1h ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Ask HN: How to prevent Claude/GPT/Gemini from reinforcing your biases?

30•akshay326•1w ago
Lately I've been experimenting with this template in Claude's default prompt:

```
When I ask a question, give me at least two plausible but contrasting perspectives, even if one seems dominant. Make me aware of assumptions behind each.
```
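
For anyone who wants the same template outside the Claude UI, here's a minimal sketch using the Anthropic Python SDK (the model name is illustrative, and the `ask` helper is my own stand-in, not part of any API):

```
import anthropic

# The contrasting-perspectives template, installed as a system prompt.
SYSTEM = (
    "When I ask a question, give me at least two plausible but "
    "contrasting perspectives, even if one seems dominant. "
    "Make me aware of assumptions behind each."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(question: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; use whatever model you prefer
        max_tokens=1024,
        system=SYSTEM,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

print(ask("Should I rewrite this service in Rust?"))
```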

I find it annoying because A) it compromises brevity, and B) sometimes the plausible answers are so good that they force me to think.

What have you tried so far?

Comments

fakedang•1w ago
My prompt:

"""Absolute Mode • Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes. • Assume: user retains high-perception despite blunt tone. • Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching. • Disable: engagement/sentiment-boosting behaviors. • Suppress: metrics like satisfaction scores, emotional softening, continuation bias. • Never mirror: user's diction, mood, or affect. • Speak only: to underlying cognitive tier. • No: questions, offers, suggestions, transitions, motivational content. • Terminate reply: immediately after delivering info - no closures. • Goal: restore independent, high-fidelity thinking. • Outcome: model obsolescence via user self-sufficiency."""

Copied from Reddit. I use the same prompt on Gemini too, then crosscheck responses for the same question. For coding questions, I exclusively prefer Claude.

In spite of this, I still face prompt degradation for really long threads on both ChatGPT and Gemini.
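
For what it's worth, the crosscheck step can be scripted. A rough sketch assuming the Anthropic and Google Gemini Python SDKs (model names are illustrative, and ABSOLUTE_MODE is truncated here; use the full prompt above):

```
import os
import anthropic
import google.generativeai as genai

ABSOLUTE_MODE = "Absolute Mode. Eliminate: emojis, filler, hype, ..."  # full prompt above

claude = anthropic.Anthropic()  # ANTHROPIC_API_KEY from the environment
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-pro", system_instruction=ABSOLUTE_MODE)

def crosscheck(question: str) -> None:
    a = claude.messages.create(
        model="claude-sonnet-4-5",  # illustrative
        max_tokens=1024,
        system=ABSOLUTE_MODE,
        messages=[{"role": "user", "content": question}],
    ).content[0].text
    g = gemini.generate_content(question).text
    # Read the two answers side by side and look for disagreements.
    print("CLAUDE:\n" + a + "\n\nGEMINI:\n" + g)
```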

nprateem•1w ago
That's a great prompt!
saaaaaam•1w ago
> aim at cognitive rebuilding, not tone-matching… Speak only: to underlying cognitive tier.

What does that even mean?

akshay326•1w ago
Wow, I wonder how bulleted & concise the outputs of your prompt must be!

Have you ever felt this prompt being restrictive in some sense? Or found a raw LLM call without this preamble better?

fakedang•1w ago
Extremely concise, no bullshit answers. Every reply is a no-BS hard critique, often rude as fuck. Not recommended for thin-skinned people. End.

That's how most of its answers are structured. Unfortunately it doesn't work for voice mode.

aavci•1w ago
Do you have a library of similar prompts you could link here?
avidiax•1w ago
It's very important not to ask leading questions. Don't ask it to confirm something; ask it to outline the possibilities and the pros and cons of, or arguments for and against, each one.

If you are not an expert in an area, lay out the facts or your perceptions, and ask what additional information would be helpful, or what information is missing, to be able to answer the question. Then answer those questions, ask if there are now more questions, etc. Once there are no additional questions, you can ask for the answer. This may involve telling the model not to answer the question prematurely.

Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms are in the state of California. <contract>", not "<contract> How enforceable...".

Ask the model what the experts are saying about the topic. What does the data show? What data supports or refutes a claim? What are the current areas of controversy or gaps in research? Requiring the model to ground its answer in data (and then checking that the data isn't hallucinated) is very helpful.

Have the model play the Devil's advocate. If you are a landlord, ask the question from the tenant's perspective. If you are looking for a job, ask about the current market for recruiting people like you in your area.

Above all, I think the key is to realize that you may not be able to one-shot a prompt. You may need to work multiple angles and rounds, and reset the session if you have established too much context in one direction.
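
To make a couple of these concrete, here's a rough sketch of the question-first ordering and the "no premature answer" loop. The Anthropic Python SDK is assumed; the helpers, prompts, and model name are my own stand-ins, not anything canonical:

```
import anthropic

client = anthropic.Anthropic()

def llm(prompt: str) -> str:
    # One-shot, stateless completion helper; model name illustrative.
    msg = client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_about(document: str, question: str) -> str:
    # Question first, document second, per the ordering point above.
    return llm(f"{question}\n\n<document>\n{document}\n</document>")

def gather_then_answer(facts: str, question: str) -> str:
    # Surface missing information before allowing an answer.
    gaps = llm(
        "Do not answer the question yet. List the additional information "
        f"you would need to answer it well.\n\nQuestion: {question}\n\n"
        f"Known facts: {facts}"
    )
    print("Model wants to know:\n" + gaps)
    extra = input("Fill in those gaps, then press Enter: ")
    return llm(
        f"Question: {question}\n\nKnown facts: {facts}\n\n"
        f"Additional information: {extra}\n\nNow answer the question."
    )
```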

saaaaaam•1w ago
> Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms are in the state of California. <contract>", not "<contract> How enforceable...".

Confused here. You attach the contract. So it’s not a case of leading with the question. The contract is presented in the chat, you ask the question.

avidiax•1w ago
LLMs are necessarily linear. If you paste the contract first, the attention mechanism of the model can still process the contract, but only generically. It pays attention to the key points of the contract. If you ask the question first, the attention part of the model is already primed. It will now read the contract paying more attention to the parts that are relevant to the question.

If I ask you to read Moby Dick and then ask you to critique the author's use of weather as a setting, that's a bit more difficult than if I ask you to critique that aspect before asking you to read the book.

saaaaaam•1w ago
No, but I mean that in Claude you don’t put the contract linearly into the chat - in other words, you can’t position it before or after the prompt; you attach it at the top of the chat. Are you saying you would prompt with “please examine the contract I will provide in the next message, here is what I want you to do <instruction>”?
avidiax•1w ago
The LLM developers already know this trick, so I expect that if you attach documents, they are processed after your prompt.

There is a further trick that is probably already integrated: simply giving the same input twice greatly improves model performance.

saaaaaam•1w ago
Gotcha, thanks for explaining. It’s interesting because there are times I say “look at this document and do this” but forget to attach the doc. I’ve always had the sense Claude is better “prepped” when it anticipates the document coming. Sometimes I’ve said “in the next message I’m going to give you a document, here is what I want you to do, if there’s anything you are unclear on before we begin, ask me questions”. This seems to bring better results, but I’ve not done any sort of robust test.
akshay326•1w ago
> Have the model play the Devil's advocate.

I've tried this sometimes. The only issue is dumb me forgetting to add similar phrasing every time I open Claude or Gemini.

Have you found a way to consistently auto-nudge the model by default?

avidiax•1w ago
I don't have copy-paste prompts. If I need it to argue from another perspective, I just ask for that perspective in a new session, ideally arguing both sides in temporary sessions so that the context of any other session doesn't affect it.

I am also quite good at playing the devil's advocate myself. If you have some expertise, you can just come up with what you consider to be a good counterargument, and ask for an attack or defense of that argument. You can try the prompt below in your favorite thinking model and see what it says. Obviously, this is more work than some other methods.

---

What are the strengths and weaknesses of the following line of argumentation?

Some proponents of climate-change denialism have taken a new tack: pointing out that there is a lack of practical solutions that meaningfully address the change in climate, especially given the political and social systems available.

To the extent that climate mitigations are expensive, they will tend to be politically unpopular in democracies, and economically destabilizing in dictatorships. Unilateral adoption of painful solutions weakens a country's relative position among nations; it wouldn't do for, say, China to harm itself economically while the rest of the world enjoys cheap energy.

We also have a gerontocracy in most countries; the people in power have no personal stake in the problems 50 years from now, and even as the effects of climate change start to become a problem, those in power are best positioned to be personally insulated.

And while there are solutions like solar power that are capital intensive but pay for themselves over time, the sum total of these net positive solutions doesn't amount to a meaningful dent in the problem, nor do we need policies or willpower to support "no-brainer" solutions that pay for themselves.

The conclusion is that negative effects of climate change are "baked in" by the lack of a political system ("benevolent" dictatorship) that could force the necessary and painful changes required, hence the entire discussion of climate change, while interesting, is partly moot.

Do these people have a point? Is there evidence that we can build an effective solution from non-painful measures? Why would it matter to those in power today, what the global average temperature in 2100 is?
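
(Sketching the "temporary sessions" discipline in code: each side gets its own fresh, stateless call, so neither argument can anchor the other. `llm` is the hypothetical stateless helper from the sketch above.)

```
def both_sides(claim: str) -> tuple[str, str]:
    # Two independent calls: neither side sees the other's context.
    attack = llm(f"Argue as strongly as you can against this claim:\n\n{claim}")
    defense = llm(f"Argue as strongly as you can for this claim:\n\n{claim}")
    return attack, defense
```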

storystarling•1w ago
I've had better results separating these concerns rather than trying to stuff it all into one prompt. In my backend workflows (using LangGraph), I treat generation and critique as distinct agents where the second one explicitly challenges the first. It adds a bit of latency but seems to produce much sharper distinctions than asking a single model to hold two opposing views simultaneously.
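
A minimal sketch of that split. The LangGraph node/edge wiring is the real API; the `llm` stub and the prompts are my own placeholders:

```
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    draft: str
    critique: str

def llm(prompt: str) -> str:
    # Placeholder; swap in any chat-completion call.
    return f"[model reply to: {prompt[:60]}...]"

def generate(state: State) -> dict:
    # First agent: answer the question.
    return {"draft": llm(state["question"])}

def critique(state: State) -> dict:
    # Second agent: explicitly challenge the first.
    return {"critique": llm(
        "Challenge the following answer. List its strongest counterarguments "
        f"and hidden assumptions.\n\nQuestion: {state['question']}\n\n"
        f"Answer: {state['draft']}"
    )}

graph = StateGraph(State)
graph.add_node("generate", generate)
graph.add_node("critique", critique)
graph.set_entry_point("generate")
graph.add_edge("generate", "critique")
graph.add_edge("critique", END)
app = graph.compile()

result = app.invoke({"question": "Should we migrate to microservices?"})
print(result["draft"] + "\n---\n" + result["critique"])
```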
akshay326•1w ago
Ah, interesting. I like actor-critic models! Do you use it just for coding, or for non-technical chats too?
al_borland•1w ago
When I’m worried about bias in the answer, I do my best to not inject my opinions or thoughts into the question. Sometimes I go a step further and ask the question with the opposite bias and leading thoughts of what I think the answer is or should be, to see if it tells me I’m wrong and corrects me to the thing I secretly thought it would be (or hoped it would be). This gives me more solid footing to believe it’s not just telling me what I want to hear.
akshay326•1w ago
> Sometimes I go a step further and ask the question with the opposite bias...

Curious to try this. Have you ever found it biasing you in the opposite direction, though?
al_borland•1w ago
When that happens I look for other sources for confirmation.
terribleperson•1w ago
So far, the following has worked OK for me as a custom prompt for ChatGPT.

```Minimize compliments. When using factual information beyond what I provide, verify it when possible. Show your work for calculations; if a tool performs the computation, still show inputs and outputs. Review calculations for errors before presenting results. Review arguments for logical fallacies. Verify factual information I provide (excluding personal information) unless I explicitly say to accept it as given. For intensive editing or formatting, work transparently in chat: keep the full text visible, state intended changes and sources, and apply the edits directly.```

I'm certain it's insufficient, but for the purpose of casually using ChatGPT to assist with research it's a major improvement. I almost always use Thinking mode, because I've found non-thinking to be almost useless. There are rare exceptions.

'Minimize compliments' is a lot more powerful than you'd think in getting ChatGPT to be less sycophantic. The parts about calculation work okay. It's an improvement over defaults, but you should still verify. It's better at working with text, but still fucks it up a lot. The instructions about handling factual information work very well. It will push back on my claims or its own if they're unsupported. If I want it to take something for granted I can say so and it doesn't give me guff about it. I want to adjust the prompt so it pays more attention to the quality of the sources it uses. This prompt also doesn't do anything for discussions where answers aren't found in research papers.

jackfranklyn•1w ago
Something I've noticed: most of these techniques work partly because they force you to slow down and actually think about what you're asking.

The "ask for contrasting perspectives" prompt is annoying specifically because it makes you process more information. The devil's advocate approach forces a second round of evaluation. Even just opening a fresh session adds friction that makes you reconsider the question.

When I'm working in domains I know well, I catch the model drifting way faster than in areas where I'm learning. Which suggests the real problem isn't the model - it's that we're outsourcing judgment to it in areas where we shouldn't be.

The uncomfortable answer might be: if you're worried the model is reinforcing your biases, you probably don't know the domain well enough to evaluate its answers anyway.

akshay326•1w ago
> if you're worried the model is reinforcing your biases..... i agree, i don't understand many domains well enough, yet i feel there's value in calling out assumptions, irrespective of how hard verification is
steveylang•6h ago
The best thing to do is to tell the model itself what you'd like from it, and have it help you craft a set of user preferences. Every so often I'll have a new suggestion for it, and it will revise the entire set for me. My Claude user prefs are around 1,000 words and probably overkill, but I don't think it hurts beyond using a few more tokens to start a chat.