
How fast can the RPython GC allocate?

https://pypy.org/posts/2025/06/rpython-gc-allocation-speed.html
33•todsacerdoti•8h ago

Comments

kragen•6h ago
My summary is that it's about one or two allocations per nanosecond on CF Bolz's machine, an AMD Ryzen 7 PRO 7840U, presumably on one core, and it's about 11 instructions per allocation.

This is about 2–4× faster than my pointer-bumping arena allocator for C, kmregion†, which uses a similar number of instructions on its (inlined) fast path. But possibly that's because I was testing on slower hardware. I was also testing with 16-byte initialized objects, but without a GC. kmregion is about 10× the speed of malloc/free.

I don't know that I'd recommend using kmregion, since it's never been used for anything serious, but it should at least serve as a proof of concept.

______

† http://canonical.org/~kragen/sw/dev3/kmregion.h
http://canonical.org/~kragen/sw/dev3/kmregion.c
http://canonical.org/~kragen/sw/dev3/kmregion_example.c

runlaszlorun•5h ago
I don't know much about language internals or allocation, but I'm learning. Why could this be significantly faster than a bump/arena allocator?

And is the speedup over malloc/free due to allocating large blocks up front rather than calling malloc for each object?

kragen•5h ago
It is a bump allocator. I don't know why it's so much faster than mine, but my hypothesis was that CF Bolz was testing on a faster machine. The speedup over malloc/free is because bumping a pointer is much faster than calling a subroutine.
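
For concreteness, the fast path of a bump allocator looks roughly like this (a minimal sketch with made-up names, not kmregion's or PyPy's actual code):

    /* Bump-allocator sketch. The hot path is one subtract, one compare,
       and a store -- on the order of the ~11 instructions the post
       measures for the RPython nursery. Error handling omitted. */
    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char *ptr;    /* current position; bumps down toward limit */
        char *limit;  /* start of the current block */
    } arena;

    /* Cold path: grab a fresh block. Kept out of line so the fast path
       stays tiny. (A real arena would chain blocks so it can free them.) */
    static void *arena_alloc_slow(arena *a, size_t n) {
        size_t block = n > 65536 ? n : 65536;
        a->limit = malloc(block);
        a->ptr = a->limit + block - n;
        return a->ptr;
    }

    static inline void *arena_alloc(arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;            /* keep 16-byte alignment */
        if ((size_t)(a->ptr - a->limit) < n)   /* rare: block exhausted */
            return arena_alloc_slow(a, n);
        a->ptr -= n;                           /* fast path: bump down */
        return a->ptr;
    }

By contrast, a general-purpose malloc has to do a real function call plus free-list or size-class bookkeeping on every allocation.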
charleslmunger•2h ago
I simulated yours vs the UPB arena fast path:

https://godbolt.org/z/1oTrv1Y58

Messing with it a bit, it seems like yours has a slightly shorter dependency chain because it loads the two members separately, whereas UPB loads them as a pair (it needs both in order to determine how much space is available). Yours also seems to have less register pressure; I think that's because it bumps down, while UPB's supports in-place forward extension and so needs to bump up.

If you added branch hints to signal to the compiler that your slow path is not often hit, you might see some improvement (although if you have PGO it should already do this). These paths could also be good candidates for the `preserve_most` calling convention.

However, there is an unfortunate compiler behavior here for both implementations: the compiler doesn't track whether the slow path (which is not inlined, and clobbers the pointers) was actually hit, so it reloads the pointers on the hot path in both approaches. This means a sequence of allocations will store and load the arena pointers repeatedly, when ideally they'd keep the current position in a register on the hot path and refill that register only after the cold path clobbers it.
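
Concretely, the hints might look like this (a sketch using GCC/Clang extensions, reusing the arena struct from the sketch above; preserve_most is Clang-only, so it's guarded):

    /* Tell the compiler the refill branch is rarely taken, keep the
       cold path out of line, and (on Clang) use preserve_most so the
       slow call clobbers fewer registers. */
    #if defined(__clang__)
    #define SLOWPATH_CC __attribute__((preserve_most))
    #else
    #define SLOWPATH_CC
    #endif

    SLOWPATH_CC __attribute__((noinline, cold))
    void *arena_alloc_slow(arena *a, size_t n);

    static inline void *arena_alloc(arena *a, size_t n) {
        if (__builtin_expect((size_t)(a->ptr - a->limit) < n, 0))
            return arena_alloc_slow(a, n);  /* cold: refill the arena */
        a->ptr -= n;                        /* hot: just bump */
        return a->ptr;
    }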

kragen•2h ago
Thank you very much! I vaguely remember that it did that, and the failure to keep the pointer in registers might explain why PyPy's version is twice as fast (?).
pizlonator•6h ago
The reason their allocator is faster than Boehm's isn't conservative stack scanning.

You can move objects while using conservative stack scanning. This is a common approach. JavaScriptCore used to use it.

You can have super fast allocation in a non-moving collector, but that involves an algorithm that is vastly different from the Boehm one. I think the fastest non-moving collectors have similar allocation fast paths to the fastest moving collectors. JavaScriptCore has a fast non-moving allocator called bump'n'pop. In Fil-C, I use a different approach that I call SIMD turbosweep. There's also the Immix approach. And there are many others.

forrestthewoods•3h ago
> Every allocation takes 110116790943 / 10000000000 ≈ 11 instructions and 21074240395 / 10000000000 ≈ 2.1 cycles

I don’t believe this in the slightest. That is not a meaningful metric for literally any actual workload in the universe. It defies common sense.

A few years ago I ran some benchmarks on an old but vaguely reasonable workload. I came up with a p95 of just 25 nanoseconds, but a p99.9 on the order of tens of microseconds. https://www.forrestthewoods.com/blog/benchmarking-malloc-wit...

Of course “2% of time in GC” is doing a lot of heavy lifting here. But I’d really need to see a real workload before I start to believe it.

kragen•32m ago
You should believe it, because it's true.

You were measuring malloc, so of course you came up with numbers that were 20 times worse than PyPy's nursery allocator. That's because malloc is 20 times slower, whatever common sense may say.

Also, you're talking about tail latency, while CF Bolz was measuring throughput. Contrary to your assertion, throughput is indeed a meaningful metric, though for interactive UIs such as videogames, tail latency is often more important. For applications like compilers and SMT solvers, on the other hand, throughput matters more.

You're right that there are some other costs. A generational or incremental GC more or less requires write barriers, which slow down the parts of your code that mutate existing objects instead of allocating new ones. And a generational collector can move existing objects to a different address, which complicates FFIs that might try to retain pointers to them outside the GC's purview.
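
To make the write-barrier cost concrete, a generational barrier is typically something like this (a hypothetical card-marking sketch, not PyPy's actual barrier):

    /* Every pointer store into the heap also dirties the 512-byte
       "card" covering the written slot, so a minor collection scans
       only dirty cards for old-to-young pointers instead of the whole
       old generation. The extra store is the per-mutation cost. */
    #include <stdint.h>

    #define CARD_SHIFT 9                 /* 2^9 = 512-byte cards */
    extern uint8_t card_table[];         /* one byte per card */
    extern char *heap_base;              /* start of the GC'd heap */

    static inline void write_ref(void **slot, void *value) {
        *slot = value;                                   /* the store itself */
        uintptr_t card =
            (uintptr_t)((char *)slot - heap_base) >> CARD_SHIFT;
        card_table[card] = 1;                            /* dirty the card */
    }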

There are a lot of papers running captured allocation traces like you did against different memory allocators, some of them including GCed allocators. Ben Zorn did a lot of work like this in the 90s, coming to the conclusion that GC was somewhat faster than malloc/free. Both GCs and mallocs have improved substantially since then, but it would be easy to imagine that the results would be similar. Maybe someone has done it.

But to a significant extent we write performance-critical code according to the performance characteristics of our platforms. If mutation is fast but allocation is slow, you tend to write code that does less allocation and more mutation than you would if mutation was slow and allocation was fast, all else being equal. So you'd expect code written for platforms with fast allocation to allocate more, which makes faster allocation a bigger advantage for that code. Benchmarking captured allocation traces won't capture that effect.

Tail latency has always been one of the worst things about GCs. I hear amazing things about new real-time GC algorithms, including remarkable reports from Azul, but I don't know if they're true or what their price is.