frontpage.

Origin of the rule that swap size should be 2x of the physical memory

https://retrocomputing.stackexchange.com/questions/32492/origin-of-the-rule-that-swap-size-should-be-2x-of-the-physical-memory
17•SeenNotHeard•1h ago

Comments

LowLevelKernel•44m ago
Curious: how much swap have you personally allocated on your own setup?
void-star•34m ago
Why was this downvoted? I'm genuinely curious what current recommendations for swap are too!

Edit: oh, and I no longer have a personal system with a swap configuration on it to give my own answer either.

pezezin•31m ago
Zero. My office workstation has 48 GB of RAM, my home computer has 64 (I went a bit overboard). I have very bad memories of swap thrashing and the computer becoming totally unresponsive until I forced a reset; if I manage to fill up so much RAM, I very much prefer the offending process to die instead of killing the whole computer.
quotemstr•19m ago
It's funny how people think they're disabling swapping just because they don't have a swap file. Where do you think mmap()-ed file pages go? Your machine can still reclaim resident file-backed pages (either by discarding them if they're clean or writing them to their backing file if dirty) and reload them later. That's... swap.

Instead of achieving responsiveness by disabling swap entirely (which is silly, because everyone has some very cold pages that don't deserve to be stuck in memory), people should mlockall essential processes, adjust the kernel's VM swap propensity, and so on.

Also, I wish we'd just do away with the separation between the anonymous-memory and file-backed memory subsystems entirely. The only special thing about MAP_ANONYMOUS should be that its backing file is the swap file.
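The "adjust the kernel's VM swap propensity" part maps to the `vm.swappiness` sysctl on Linux; a minimal sketch as a config fragment, with an illustrative value rather than a recommendation from the thread:

```shell
# /etc/sysctl.d/99-swappiness.conf -- illustrative value, not from the thread.
# Lowers the kernel's bias toward swapping out anonymous pages versus
# reclaiming file-backed page cache (most distros default to 60).
vm.swappiness = 10
```

Pinning an essential process is a separate step: the process itself calls `mlockall(MCL_CURRENT | MCL_FUTURE)`, subject to its `RLIMIT_MEMLOCK` limit.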

linsomniac•15m ago
I did similar with my 32GB laptop, but it was fairly flaky for ~4 years, and I just recently put 48GB of swap on and it's been so much better. It's using over 20GB of the swap. There are cases in Linux where running without swap results in situations very similar to swapping too much.
AnyTimeTraveler•25m ago
My work laptop currently has 96GB of RAM. 32GB of it is allocated to the graphics portion of the APU. I have 128GB (2x) of swap allocated, since I sometimes do big FPGA synthesis runs, which take up 50GB of RAM on their own. Add another two IDEs and a browser, and my 64GB of remaining RAM is full.
drnick1•25m ago
64GB RAM, zero swap. Until recently RAM was cheap so swap made little sense when you could simply buy more RAM.
stock_toaster•6m ago
On systems with 32/64/128 GB of ram, I'll typically have a 1GB or 2GB swap. Just so that the system can page out here and there to run optimally. Depending on the system, swap is typically either empty or just has a couple hundred MB kicking around.
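Setting up a small swap area like that can be sketched as below; the path and size are illustrative, and the steps that change the running system require root, so they are left commented out:

```shell
# Sketch: create a small swap file (path and size are illustrative).
SWAPFILE=/tmp/swapfile.demo      # a real setup would use e.g. /swapfile
fallocate -l 64M "$SWAPFILE"     # preallocate the space (no holes)
chmod 600 "$SWAPFILE"            # swap files must not be world-readable
mkswap "$SWAPFILE"               # write the swap signature
# Enabling it requires root:
#   swapon "$SWAPFILE"
# To persist across reboots, /etc/fstab would get a line like:
#   /swapfile none swap sw 0 0
```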
petcat•42m ago
The OP clearly states that he wants to know the earliest origin of the rule, and the only answers he gets are people giving their own opinions on how much swap space you should have.

Too bad because it's an interesting question that I would also like to know the answer to.

void-star•24m ago
Nope. Those are not the only answers I am seeing. I'm still curious though. 2x was nice because nobody really questioned it. Now that we have, there doesn't seem to be one "answer". This is a fun/interesting question that comes up every now and then, here and elsewhere :-) I suspect someone smarter than me about system tuning will have a much smarter and more nuanced answer than "just use 2x".
kgwxd•11m ago
I thought the modern advice was that you don't need it at all. No more spinning disks, so there's no speed gain from using the innermost ring, and modern OSes manage memory in more advanced and dynamic ways. That's what I choose to believe anyway; I don't need any more hard choices when setting up Linux :)
xen2xen1•29m ago
It's old enough that I'd put money on DEC. Any takers on that?
dirk94018•20m ago
Early BSD VM pre-allocated swap backing for every anonymous page — you couldn't allocate virtual memory without a swap slot reserved for it, even if the page was never paged out.

When a process forked, the child needed swap reservations for the parent's entire address space (before exec replaced it). A large process forking therefore temporarily needed double its swap allocation. If your working set was roughly equal to physical RAM, fork alone got you to 2x.

This was the practical bottleneck people actually hit. Your system had enough RAM, swap wasn't full, but fork() failed because there wasn't enough contiguous swap to reserve. 2x was the number that made fork() stop failing on a reasonably loaded system.

The later overcommit/copy-on-write changes made this less relevant, but the rule of thumb outlived the technical reason. Most people repeating "2x RAM" today are running systems where anonymous pages aren't swap-backed until actually paged out.

Today swap is no longer about extending your address space, it's about giving the kernel room to page out cold anonymous pages so that RAM can be used for disk cache.

A little swap makes the system faster even when you're nowhere near running out of memory, because the kernel can evict pages it hasn't touched in hours and use that RAM for hot file data instead.

The exception is hibernation — you need swap >= RAM for that, which is why Ubuntu's recommendations are higher than Red Hat's 20% of RAM.
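The shift from reservation-based swap to overcommit that this comment describes is visible in the kernel's commit accounting; a read-only, Linux-specific check:

```shell
# 0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# Under strict accounting, CommitLimit = swap + overcommit_ratio% of RAM,
# and an allocation (or fork) that would push Committed_AS past it fails
# with ENOMEM -- the modern echo of the old "not enough swap to fork" error.
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```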

quotemstr•17m ago
TBF, I think overcommit was and remains an ugliness in how we manage memory. I wish we'd solved the fork commit-charge-spike issue by encouraging vfork (and later, posix_spawn) more heavily, not by making the OS lie about the availability of memory.

The ship's long sailed though, so even I run with overcommit enabled and only grumble about what might have been.

bandrami•16m ago
I've had arguments with people about this for 20 years now and the most compelling case I heard involved the price of storage vs the price of RAM in the mid to late 1990s, and that this 2x represented an optimal use of money in designing a system at that point in time.
Sohcahtoa82•15m ago
None of the answers are satisfying to me, tbh.

I install more RAM so I can swap less. If I have 8 GB, the 2x rule says I should have a 16 GB swap file, giving me 24 GB of total memory to work with. If I then stumble upon a good deal on RAM and upgrade to 32 GB, and I never had memory problems at 24 GB, I should be able to completely disable paging and not have a problem. But instead, the advice would be to increase my paging file to 64 GB!?

It doesn't make any sense. At all.

Bender•6m ago
Managed over 50k servers with zero swap. Simply set the overcommit ratio to 0, configured min_free based on a Red Hat formula, and had application teams keep some memory free. Servers ranged from 144GB of RAM to 3TB. On servers meant to be stateless app and web servers, panic was set to 2 to reboot on OOM, which mostly occurred in the performance team that was constantly load testing hardware and apps, and on a few dev machines where developers were not sharing nicely. Engineered correctly, OOM will be very rare, and this only gets better with time as applications gain more control over memory allocation. Java will always leak; just leave more room for it.
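The knobs described above map roughly onto these sysctls; a hedged reconstruction, since the comment gives no exact values and "overcommit ratio to 0" is ambiguous (a ratio of 0 with no swap would make CommitLimit zero, so the intent was likely strict accounting):

```shell
# /etc/sysctl.d/99-no-swap.conf -- reconstruction; values are illustrative,
# not the commenter's actual config.
vm.overcommit_memory = 2     # strict commit accounting (no overcommit)
vm.overcommit_ratio = 100    # CommitLimit = swap (none here) + ratio% of RAM
vm.min_free_kbytes = 262144  # reclaim headroom; sized per a Red Hat formula
vm.panic_on_oom = 2          # always panic on OOM ("panic was set to 2")
kernel.panic = 10            # then reboot this many seconds after the panic
```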

Postgres Jsonb Columns and Toast: A Performance Guide

https://www.snowflake.com/en/engineering-blog/postgres-jsonb-columns-and-toast/
1•craigkerstiens•1m ago•0 comments

(paper money) Hedge Fund staffed by AI Employees (experiment)

https://platypi.empla.io
1•pokot0•5m ago•1 comments

Show HN: Bloomfilter – A service for AI agents to register and manage domains

https://bloomfilter.xyz/
1•eronmmer•7m ago•0 comments

Examining Bias and AI in Latin America

https://elpais.com/america/lideresas-de-latinoamerica/2026-02-25/genero-racismo-y-xenofobia-asi-s...
1•shakiness3383•7m ago•0 comments

Show HN: WebMCP Core – AI agent tool definitions from any site

https://github.com/keak-ai/webmcp-core
1•eman11•7m ago•0 comments

Anthropic is dropping its signature safety pledge amid a heated AI race

https://www.businessinsider.com/anthropic-changing-safety-policy-2026-2
1•rahulskn86•9m ago•0 comments

Eleven Freedoms for Free AI

https://elevenfreedoms.org/
1•pabs3•12m ago•0 comments

Average Typing Speeds based on 221k user typing sessions

https://www.typequicker.com/average-typing-speed
1•absoluteunit1•14m ago•0 comments

WTF Happened in 2025?

https://wtfhappened2025.com/
3•swyx•17m ago•0 comments

Dead Internet Theory – A Win?

https://medium.com/@brandon_89699/4df2f34cba14
1•Fine-Palp-528•17m ago•0 comments

Open-Source Agent Operating System

https://github.com/RightNow-AI/openfang
3•OsamaJaber•19m ago•1 comments

RAG on a Budget: How I Replaced a $360/Month OpenSearch Cluster for $1.12/Month

https://stephaniespanjian.com/blog/rag-cost-reduction-replaced-opensearch-s3-in-memory-search
2•StephSpanjian•22m ago•1 comments

Tech Companies Shouldn't Be Bullied into Doing Surveillance

https://www.eff.org/deeplinks/2026/02/tech-companies-shouldnt-be-bullied-doing-surveillance
5•pseudolus•23m ago•0 comments

Honey Fraud as a Moving Analytical Target: Omics-Informed Authentication

https://www.mdpi.com/2304-8158/15/4/712
2•PaulHoule•23m ago•0 comments

Claude Code Video Toolkit

https://github.com/wilwaldon/Claude-Code-Video-Toolkit
1•stagezerowil•23m ago•0 comments

Show HN: Unix for the Commodore 64? Open Source

https://github.com/ascarola/c64ux/releases/tag/v0.7
1•ascarola•24m ago•0 comments

Which web frameworks are most token-efficient for AI agents?

https://martinalderson.com/posts/which-web-frameworks-are-most-token-efficient-for-ai-agents/
1•gmays•24m ago•0 comments

Show HN: Architect-Linter – Enforce architecture rules

https://crates.io/crates/architect-linter-pro
1•sergegriimm•26m ago•0 comments

Pete Hegseth and the AI Doomsday Machine

https://robertreich.substack.com/p/pete-hegseth-and-the-ai-doomsday
4•doener•28m ago•1 comments

Show HN: RubyLLM:Agents – A Rails engine for building and monitoring LLM agents

https://github.com/adham90/ruby_llm-agents
4•adham900•28m ago•0 comments

FBI raids of LAUSD Supt.'s home and office appear tied to AI chatbot probe

https://www.latimes.com/california/story/2026-02-25/fbi-raid-lausd-search-warrants
3•cdrnsf•29m ago•0 comments

Submitle – Submit, Share, and Discover Links Online

https://www.submitle.com/
1•exchangler•29m ago•0 comments

Show HN: OpenTrace – Self-hosted observability server with 75 MCP tools

https://github.com/adham90/opentrace
3•adham900•32m ago•0 comments

AT&T Acquires CenturyLink

https://old.reddit.com/r/Portland/comments/1reucu3/this_sucks_worse_than_you_may_yet_realize/
3•fullstacking•33m ago•1 comments

Automatic Discharges of Student Loans to Proceed After Dual Court Wins

https://www.forbes.com/sites/adamminsky/2026/02/25/automatic-discharges-of-student-loans-to-proce...
3•toomuchtodo•33m ago•1 comments

Multi-agent workflows often fail

https://github.blog/ai-and-ml/generative-ai/multi-agent-workflows-often-fail-heres-how-to-enginee...
1•e2e4•35m ago•0 comments

Show HN: Open-source MCP servers for self-hosted homelab AI

2•ai_engineering•35m ago•0 comments

Show HN: PixShot – Screenshot and OG Image API

https://pixshot.dev
1•juanjosegongi•36m ago•1 comments

Lawsuit could slow Micron DRAM chipmaking project in New York

https://www.syracuse.com/micron/2026/02/whos-behind-the-lawsuit-that-could-slow-microns-chipmakin...
1•walterbell•37m ago•1 comments

Nkmc – a virtual filesystem that lets AI agents call any API with ls, cat, grep

https://nkmc.ai/
1•guoyu•38m ago•1 comments