I find it annoying coz A) it compromises brevity B) sometimes the plausible answers are so good, it forces me to think
What have you tried so far?
If you are not an expert in an area, lay out the facts or your perceptions and ask what additional information would be helpful, or what information is missing, to answer the question. Then answer those questions, ask whether there are more, and repeat. Once there are no further questions, ask for the answer. This may mean explicitly telling the model not to answer the question prematurely.
Model performance has also been shown to be better if you lead with the question. That is, prompt "Given the following contract, review how enforceable and legal each of the terms is in the state of California. <contract>", not "<contract> How enforceable...".
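The question-first ordering above is easy to enforce mechanically if you assemble prompts in code. A minimal sketch (the helper name and tags are illustrative, not from any particular library):

```python
def build_prompt(question: str, document: str) -> str:
    """Put the question before the document so the model knows what
    to look for while reading, rather than after the fact."""
    return f"{question}\n\n<document>\n{document}\n</document>"

prompt = build_prompt(
    "Review how enforceable and legal each of the terms is "
    "in the state of California.",
    "Term 1: Tenant waives all rights to ...",
)
```

The same helper guarantees every prompt in a batch follows the question-first pattern, instead of relying on each caller to remember the ordering.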
Ask the model for what the experts are saying about the topic. What does the data show? What data supports or refutes a claim? What are the current areas of controversy or gaps in research? Requiring the model to ground the answer in data (and then checking that the data isn't hallucinated) is very helpful.
Have the model play the Devil's advocate. If you are a landlord, ask the question from the tenant's perspective. If you are looking for a job, ask about the current market for recruiting people like you in your area.
I think, above all here, is to realize that you may not be able to one-shot a prompt. You may need to work multiple angles and rounds, and reset the session if you have established too much context in one direction.
Confused here. You attach the contract. So it’s not a case of leading with the question. The contract is presented in the chat, you ask the question.
If I ask you to read Moby Dick and then ask you to critique the author's use of weather as a setting, that's a bit more difficult than if I ask you to critique that aspect before asking you to read the book.
have you found a way to consistently auto-nudge the model by default?
I am also quite good at playing the devil's advocate myself. If you have some expertise, you can come up with what you consider to be a good counterargument yourself, and then ask for an attack or defense of that argument. You can try the prompt below in your favorite thinking model and see what it says. Obviously, this is more work than some other methods.
---
What are the strengths and weaknesses of the following line of argumentation?
Some proponents of climate-change denialism have taken a new tack: pointing out that there is a lack of practical solutions that meaningfully address the change in climate, especially given the political and social systems available.
To the extent that climate mitigations are expensive, they will tend to be politically unpopular in democracies, and economically destabilizing in dictatorships. Unilateral adoption of painful solutions weakens a country's relative position among nations; it wouldn't do for, say, China to harm itself economically while the rest of the world enjoys cheap energy.
We also have a gerontocracy in most countries; the people in power have no personal stake in the problems 50 years from now, and even as the effects of climate change start to become a problem, those in power are best positioned to be personally insulated.
And while there are solutions like solar power that are capital intensive but pay for themselves over time, the sum total of these net-positive solutions doesn't amount to a meaningful dent in the problem; nor do such "no-brainer" solutions require any policy or willpower, since they pay for themselves anyway.
The conclusion is that negative effects of climate change are "baked in" by the lack of a political system ("benevolent" dictatorship) that could force the necessary and painful changes required, hence the entire discussion of climate change, while interesting, is partly moot.
Do these people have a point? Is there evidence that we can build an effective solution from non-painful measures? Why would it matter to those in power today, what the global average temperature in 2100 is?
fakedang•7h ago
"""Absolute Mode • Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes. • Assume: user retains high-perception despite blunt tone. • Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching. • Disable: engagement/sentiment-boosting behaviors. • Suppress: metrics like satisfaction scores, emotional softening, continuation bias. • Never mirror: user's diction, mood, or affect. • Speak only: to underlying cognitive tier. • No: questions, offers, suggestions, transitions, motivational content. • Terminate reply: immediately after delivering info - no closures. • Goal: restore independent, high-fidelity thinking. • Outcome: model obsolescence via user self-sufficiency."""
Copied from Reddit. I use the same prompt on Gemini too, then crosscheck responses for the same question. For coding questions, I exclusively prefer Claude.
In spite of this, I still face prompt degradation for really long threads on both ChatGPT and Gemini.
nprateem•6h ago
saaaaaam•4h ago
What does that even mean?
akshay326•3h ago
have you ever found this prompt restrictive in some sense? or found a raw LLM call without this preamble better?