frontpage.

OpenClaw Is Changing My Life

https://reorx.com/blog/openclaw-is-changing-my-life/
1•novoreorx•2m ago•0 comments

Everything you need to know about lasers in one photo

https://commons.wikimedia.org/wiki/File:Commercial_laser_lines.svg
1•mahirsaid•4m ago•0 comments

SCOTUS to decide if 1988 video tape privacy law applies to internet users

https://www.jurist.org/news/2026/01/us-supreme-court-to-decide-if-1988-video-tape-privacy-law-app...
1•voxadam•5m ago•0 comments

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
1•XzetaU8•12m ago•0 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•19m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
1•saiyampathak•23m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
1•tywells•25m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•29m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•29m ago•1 comments

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•30m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•31m ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•41m ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•43m ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•47m ago•1 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
1•pentagrama•50m ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•51m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
3•lostlogin•51m ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•54m ago•0 comments

Is anyone interested in a creator economy startup?

1•Nejana•55m ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•56m ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•56m ago•0 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
1•monero-xmr•58m ago•0 comments

Busy Months in KDE Linux

https://pointieststick.com/2026/02/06/busy-months-in-kde-linux/
1•todsacerdoti•58m ago•0 comments

Zram as Swap

https://wiki.archlinux.org/title/Zram#Usage_as_swap
1•seansh•1h ago•1 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
1•mxfh•1h ago•0 comments

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•1h ago•3 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•1h ago•0 comments

Part 1, The Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•1h ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•1h ago•1 comments

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
2•mitchbob•1h ago•1 comments

GPT-5 System Prompt?

https://github.com/Wyattwalls/system_prompts/blob/main/OpenAI/gpt-5-thinking-20250809
36•georgehill•6mo ago

Comments

TZubiri•6mo ago
These are always so embarrassing
NewsaHackO•6mo ago
It's because they always put in things that seem way too specific to certain issues, like riddles and arithmetic. Also, I am not a WS, but the mention of "proud boys" is the kind of thing that can be used as fodder for claims of LLM bias. I wonder why they even have to use a system prompt; why can't they have a separate fine-tuned model for ChatGPT specifically, so that they don't need one?
TZubiri•6mo ago
Also because we have this image of super-scientist mathematicians who fight for a better world, reject $1M salaries, and raise billions in funding.

And their work is literally "DON'T do this, DO that in these situations"

sellmesoap•6mo ago
"Dear computer, I'm writing to you today to tell you to make sure you really check your math sums!" I find it amusing so much emphasis is put on a computer to get math correct.
TZubiri•5mo ago
And then they get offered 1M salaries for that.
dgreensp•6mo ago
> Never place rich UI elements within a table, list, or other markdown element.

> Place rich UI elements within tables, lists, or other markdown elements when appropriate.

mdaniel•6mo ago
It's a good thing people were enamored of how inexpensive GPT-5 is, given that the system prompt is (allegedly) 54 KB. I don't know how many tokens that is offhand, but that's a lot of them to burn just on setting the thing up.
btdmaster•6mo ago
I might be wrong, but can't you checkpoint the post-system prompt model and restore from there, trading memory for compute? Or is that too much extra state?
mdaniel•6mo ago
My mental model is that the system prompt isn't one thing, and that seems even more apparent with line 6 telling the model what today's date is. I have no insider information, but system prompts could undergo A/B testing just like any other change, to find the optimal one for some population of users.

Which is to say you wouldn't want to bake such a thing too deeply into a multi-terabyte bunch of floating points because it makes operating things harder

reitzensteinm•6mo ago
OpenAI automatically caches prompt prefixes on the API. Caching an infrequently changing internally controlled system prompt is trivial by comparison.
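
You can actually see this from the outside: the API reports how much of a prompt was served from the prefix cache. A minimal sketch with the official Python SDK (the model name is a placeholder, and the exact usage-field path follows the prompt-caching docs linked further down, so treat it as an assumption; the details field may be absent on some models):

    # Sketch: send the same long, stable prefix twice and compare cached token counts.
    # Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is just a placeholder model name.
    from openai import OpenAI

    client = OpenAI()
    stable_prefix = "You are a helpful assistant. " * 200  # long, unchanging system text

    for question in ("What is a KV cache?", "Why cache prompt prefixes?"):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": stable_prefix},  # identical prefix each call
                {"role": "user", "content": question},         # only the suffix changes
            ],
        )
        # On the second call, cached_tokens should cover most of the shared prefix.
        print(resp.usage.prompt_tokens_details.cached_tokens)
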
Tadpole9181•6mo ago
54,000 bytes, one byte per character. 4 characters per token (more or less). Around 13,000 tokens.

These are NOT included in the model context size for pricing.
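
A quick back-of-the-envelope version of that estimate (the 4-characters-per-token ratio is only a rule of thumb; for an exact count you would run the actual file through a tokenizer such as tiktoken):

    # Rough token estimate for a 54 KB plain-text prompt.
    prompt_bytes = 54_000        # reported size of the leaked prompt file
    chars_per_token = 4          # rule-of-thumb average for English text
    print(prompt_bytes / chars_per_token)  # ~13,500 tokens

    # For an exact figure (assuming the linked file is saved locally):
    # import tiktoken
    # enc = tiktoken.get_encoding("o200k_base")
    # print(len(enc.encode(open("gpt-5-thinking-20250809").read())))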

crazygringo•6mo ago
How does a prompt this long affect resource usage?

Does inference need to process this whole thing from scratch at the start of every chat?

Or is there some way to cache the state of the LLM after processing this prompt, before the first user token is received, and every request starts from this cached state?

mdaniel•6mo ago
My understanding is that's what the KV cache does in model serving. I would imagine they'd want to prime any such KV cache with common tokens but retain a per-session cache to avoid leaks. It seems HF agrees with the concept, at least https://huggingface.co/docs/transformers/kv_cache#prefill-a-...
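
A minimal sketch of that prefill idea, following the pattern in the linked transformers docs (a small stand-in model is used here; real serving stacks do this at much larger scale and with per-session isolation):

    # Prefill a KV cache with the system prompt once, then reuse a copy per request.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

    model_id = "gpt2"  # stand-in model; any causal LM works the same way
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    system_prompt = "You are a helpful assistant. Today's date is 2025-08-09.\n"
    sys_inputs = tok(system_prompt, return_tensors="pt")

    # Run the system prompt once; its keys/values land in prefix_cache.
    prefix_cache = DynamicCache()
    with torch.no_grad():
        model(**sys_inputs, past_key_values=prefix_cache, use_cache=True)

    def answer(user_text: str) -> str:
        # Copy the prefilled cache so sessions don't leak into each other,
        # then generate; the cached prefix tokens are not recomputed.
        cache = copy.deepcopy(prefix_cache)
        inputs = tok(system_prompt + user_text, return_tensors="pt")
        out = model.generate(**inputs, past_key_values=cache, max_new_tokens=40)
        return tok.decode(out[0], skip_special_tokens=True)

    print(answer("What is a KV cache?"))
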
kingstnap•6mo ago
OpenAI has docs about how it works.

https://platform.openai.com/docs/guides/prompt-caching

It's fairly simple actually. Each machine stores the KV cache in blocks of 128 tokens.

That's stored in a prefix-tree-like structure, probably with some sort of LRU eviction policy.

If you ask a machine to generate, it does so starting from the longest matching sequence in the cache.

They route between racks using a hash of the prefix.

Therefore the system prompt, being frequently used and at the beginning of the context, will always be in the prefix cache.
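
A toy version of that scheme, just to make the moving parts concrete (block size, eviction policy, and data structures here are illustrative, not OpenAI's actual implementation; a flat dict of block-aligned prefixes stands in for the real prefix tree):

    # Toy prefix cache: KV state stored per 128-token block, looked up by
    # longest matching prefix, with least-recently-used eviction.
    from collections import OrderedDict

    BLOCK = 128

    class PrefixCache:
        def __init__(self, max_blocks: int = 1024):
            self.blocks = OrderedDict()   # key: tuple of token ids up to a block boundary
            self.max_blocks = max_blocks

        def put(self, tokens: list[int], kv_state) -> None:
            # Store KV state for every full 128-token prefix of `tokens`.
            # kv_state is assumed to be indexable per token (illustrative only).
            for end in range(BLOCK, len(tokens) + 1, BLOCK):
                key = tuple(tokens[:end])
                self.blocks[key] = kv_state[:end]
                self.blocks.move_to_end(key)
            while len(self.blocks) > self.max_blocks:    # LRU eviction
                self.blocks.popitem(last=False)

        def longest_prefix(self, tokens: list[int]):
            # Walk from the longest block-aligned prefix down to the shortest.
            for end in range((len(tokens) // BLOCK) * BLOCK, 0, -BLOCK):
                key = tuple(tokens[:end])
                if key in self.blocks:
                    self.blocks.move_to_end(key)         # refresh LRU position
                    return end, self.blocks[key]
            return 0, None                               # nothing cached: full prefill needed

Generation then only has to prefill tokens[end:] on top of the returned state, which is why a shared system prompt sitting at position zero is effectively always warm.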

crazygringo•6mo ago
Fascinating, exactly what I was wondering about. Thank you! Turns out it's very sophisticated, and also explains why the current date is always at the very end of the system prompt.