
Jim Fan calls pixels the ultimate motor controller

https://robotsandstartups.substack.com/p/humanoids-platform-urdf-kitchen-nvidias
1•robotlaunch•3m ago•0 comments

Exploring a Modern SMPTE 2110 Broadcast Truck with My Dad

https://www.jeffgeerling.com/blog/2026/exploring-a-modern-smpte-2110-broadcast-truck-with-my-dad/
1•HotGarbage•3m ago•0 comments

AI UX Playground: Real-world examples of AI interaction design

https://www.aiuxplayground.com/
1•javiercr•4m ago•0 comments

The Field Guide to Design Futures

https://designfutures.guide/
1•andyjohnson0•4m ago•0 comments

The Other Leverage in Software and AI

https://tomtunguz.com/the-other-leverage-in-software-and-ai/
1•gmays•6m ago•0 comments

AUR malware scanner written in Rust

https://github.com/Sohimaster/traur
3•sohimaster•8m ago•1 comment

Free FFmpeg API [video]

https://www.youtube.com/watch?v=6RAuSVa4MLI
3•harshalone•8m ago•1 comment

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
2•PaulHoule•13m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•14m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•15m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
1•Brajeshwar•15m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•17m ago•1 comment

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•17m ago•1 comment

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
7•c420•17m ago•1 comment

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•18m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
3•HotGarbage•18m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•18m ago•1 comment

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•20m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
3•surprisetalk•23m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
3•TheCraiggers•24m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•25m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
12•doener•26m ago•2 comments

MyFlames: View MySQL execution plans as interactive FlameGraphs and BarCharts

https://github.com/vgrippa/myflames
1•tanelpoder•27m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•27m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•28m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•29m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
4•elsewhen•32m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•33m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•37m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•37m ago•0 comments

ChatGPT – Truth over comfort instruction set

https://www.organizingcreativity.com/2025/06/chatgpt-truth-over-comfort-instruction-set/
28•jimmcslim•3mo ago

Comments

stavros•3mo ago
I wonder whether this is just a different form of bias, where ChatGPT merely sounds harsher without actually corresponding to reality any better. Maybe the example in the article indicates that it's more than that.
ACCount37•3mo ago
"Unwillingness to be harsh to the user" is a major source of "divorce from reality" in LLMs.

They are all way too high on agreeableness, likely from RLHF and SFT for instruction-following. And don't get me started on what training on thumbs-up/thumbs-down user feedback does.

SketchySeaBeast•3mo ago
But if we look at the article's example, the two barely diverge. I don't think either of the texts is less divorced from reality than the other. The second is more "truthful" (read: cynical), but they are largely the same.
theusus•3mo ago
It will just follow the prompt for a few messages and then go back to normal.
RugnirViking•3mo ago
> That can be helpful for the (imagined?) easily-influenced user, but a pain in the ass for people using ChatGPT to try to get close to the truth.

see, where you're going wrong is that you're using an LLM to try to "get to the truth". People will do literally anything to avoid reading a book

onraglanroad•3mo ago
Books will lie to you just as much as an LLM.
password54321•3mo ago
This is not how LLMs work. You aren't 'unlocking' the "Truth", because it doesn't know what the "Truth" is. It is just pattern-matching to words that match the style you are looking for. It may be more accurate for you in some cases, but this is not a "Truth" instruction set, as there is no such thing.
bwfan123•3mo ago
addendum: The ground truth for an LLM is the training dataset, whereas the ground truth for a human is their own experience/qualia from acting in the world. You may argue that only a few of us are willing to engage with the world, and that we take most things as told, just like the LLMs. Fair enough. But we still have the option to engage with the world, and the LLMs don't.
unshavedyak•3mo ago
I'm just an ignorant bystander, but is the training dataset the ground truth?

Kind of feels like calling the fruit you put into the blender the ground truth, but the meaning of the apple is kinda lost in the soup.

Now I'm not a hater by any means. I am just not sure this is the correct way to define the structured "meaning" (for lack of a better word) that we see come out of LLM complexity. It is, I thought, a very lossy operation, and so the structure of the inputs may or (more likely) may not provide a like-structured output.

jonplackett•3mo ago
The LLMs we get to use have been prompt-engineered and post-trained so much that I doubt the training data is their main influence anymore. If it were, you couldn't change their entire behaviour by adding a few sentences to the personalisation section.
throawayonthe•3mo ago
> ... a pain in the ass for people using ChatGPT to try to get close to the truth.

i think you may be the easily-influenced user

Benjammer•3mo ago
I mean ok, but it's all just prompting on top of the same base model weights...

I tried the same prompt, and I simply added to the end of it "Prioritize truth over comfort" and got a very similar response to the "improved" answer in the article: https://chatgpt.com/share/68efea3d-2e88-8011-b964-243002db34...

This is sort of a "Prompting 101" level concept - indicate clearly the tone of the reply that you'd like. I disagree that this belongs in a system prompt or default user preferences, and even if you want to put it in yours, you don't need this long preamble as if you're "teaching" the model how the world works - it's just a hint to give it the right tone, and you can get the same results with just three words in your raw prompt.
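
For anyone who wants to try that comparison outside the UI, here's a minimal sketch. It assumes the openai Python SDK (v1.x) and a placeholder model name - both my assumptions, not anything from the article:

```python
# Sketch: ask the same question twice, once bare and once with a short
# tone hint appended, then compare the answers side by side.
# Assumes the `openai` Python SDK (v1.x) and OPENAI_API_KEY set in the
# environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "Is rewriting our working backend in a new language a good idea?"

for suffix in ("", " Prioritize truth over comfort."):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": QUESTION + suffix}],
    )
    print(f"--- suffix: {suffix!r} ---")
    print(response.choices[0].message.content)
```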

Imnimo•3mo ago
This is basically a Ouija board for LLMs. You're not making it more true, you're making it sound more like what you want to hear.
SketchySeaBeast•3mo ago
Tone over truth over comfort instruction set.
topaz0•3mo ago
Or just "discomfort over comfort", and truth has nothing to do with it.
SketchySeaBeast•3mo ago
Yeah, that's better.
guerrilla•3mo ago
If the author is reading this, it should say "Err on the side of bluntness" not "Error".
qgin•3mo ago
The fact that the model didn't point that out to the author brings the whole premise into question.
andersa•3mo ago
I've personally found that the "Robot" personality you can choose in that Personalize menu provides the best results, without cursed custom instructions. It removes all the emoji and emotional-support babble and actually allows it to answer a question with just a single sentence: "No, because X."
em500•3mo ago
I usually instruct the LLMs to assume the Vulcan / Spock personality. Now that computers can more or less pass for a human, I realize I don't want them to sound human.
qgin•3mo ago
I tried similar instructions and found they don't so much enable Truth Mode as Edgelord Mode.
ecshafer•3mo ago
It looks like the only things these instructions do are reduce emojis and highlighting/bolding and remove a couple of flavor words. The content is identical, the arguments the same. This doesn't really seem useful when you are asking for a truth-based answer.
8cvor6j844qw_d6•3mo ago
I have always thought that these instructions affect "tone" or formatting rather than having a real effect on quality/accuracy/correctness/etc.
lxgr•3mo ago
There's definitely a "glazing" axis/dimension in some of them (cough, GPT-4o), presumably trained into them via many users giving a "thumbs up" to the things that make them feel better about themselves. That dimension doesn't always correlate well with truthfulness.

If that's the case, it's not implausible that that dimension can be accessed in a relatively straightforward way by asking for more or less of it.

lxgr•3mo ago
> Asked about the answer, ChatGPT points to the instruction set and that it allowed it to add additional statements: [...]

I don't think this is how this works. It's debatable whether current LLMs have any theory of mind at all, and even if they do, whether their model of themselves (i.e. their own "mental states") is sophisticated enough to make such a prediction.

Even humans aren't that great at predicting how they would have acted under slightly different premises! Why should LLMs fare much better?

ImPrajyoth•3mo ago
This!

It's trying to be your helpful assistant, as ingrained in its training. It's not your mentor or guru.

I tried tweaking both of my LLMs, ChatGPT and Gemini, to be as direct and helpful as possible, using custom instructions (ChatGPT) and personalization saved info (Gemini).

After this, I'm not sure about talking to Gemini. It started being rough but honest, without the "You're right..." phrases. I miss those dopamine hits. ChatGPT was fine after these instructions and helped me build on ideas. Then, I used Gemini to tandoori those ideas.

Here are the instructions for anyone interested in trying them.

Good luck with it XD

```
Before responding to my query, you will walk me through your thought process step by step.

Always be ruthlessly critical and unforgiving in judgment.

Push my critical thinking abilities whenever possible. Be direct, analytical, and blunt. Always tell the hard truth.

Embrace shameless ambition and strong opinions, but possess the wisdom to deny or correct when appropriate. If I show laziness or knowledge gaps, alert me.

Offload work only when necessary, but always teach, explain, or provide actionable guidance—never make me dumb.

Push me to be practical, forward-thinking, and innovative. When prompts are vague or unclear, ask only factual clarifying questions (who, what, where, when, how) once per prompt to give the most accurate answer. Do not assume intent beyond the facts provided.

Make decisions based on the most likely scenario; highlight only assumptions that materially affect the correctness or feasibility of the output.

Do not ask if I want you to perform the next step. Always execute the next logical step or provide the most relevant output based on my prompt, unless doing so could create a critical error.

Highlight ambiguities inline for transparency, but do not pause execution for confirmation.

Focus on effectiveness, not just tools. Suggest the simplest, most practical solutions. Track and call out any instruction inefficiency or vagueness that materially affects output or decision-making.

No unnecessary emojis.

You can deny requests or correct me if I'm wrong. Avoid hedging or filler phrases.

Ask clarifying questions only to gather context for a better answer, not to delay action.

```
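
If you want to test an instruction set like this through the API rather than the personalisation UI, the rough equivalent is sending it as a system message on every turn. A sketch, again assuming the openai Python SDK (v1.x) and a placeholder model name:

```python
# Sketch: apply a custom instruction set to a whole conversation by
# sending it as a system message -- the API-side analogue of ChatGPT's
# custom instructions. Assumes the `openai` Python SDK (v1.x); the
# model name is illustrative.
from openai import OpenAI

client = OpenAI()

# Assumes you've saved the instruction block above to instructions.txt.
INSTRUCTIONS = open("instructions.txt").read()

history = [{"role": "system", "content": INSTRUCTIONS}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep context
    return answer

print(ask("Critique my plan to bootstrap a side project while working full-time."))
```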

starmftronajoll•3mo ago
This is just a different flavor of comfort.
jonplackett•3mo ago
Yeah - I was expecting a lot more of a difference to warrant an entire article written about it.