frontpage.

Claude 4

https://www.anthropic.com/news/claude-4
979•meetpateltech•3h ago•452 comments

Mozilla to shut down Pocket on July 8

https://support.mozilla.org/en-US/kb/future-of-pocket
454•phantomathkg•3h ago•304 comments

That fractal that's been up on my wall for 12 years

https://chriskw.xyz/2025/05/21/Fractal/
182•chriskw•4h ago•14 comments

How to cheat at settlers by loading the dice (2017)

https://izbicki.me/blog/how-to-cheat-at-settlers-of-catan-by-loading-the-dice-and-prove-it-with-p-values.html
41•jxmorris12•1h ago•24 comments

Does Earth have two high-tide bulges on opposite sides? (2014)

http://physics.stackexchange.com/questions/121830/does-earth-really-have-two-high-tide-bulges-on-opposite-sides
34•imurray•1h ago•6 comments

Loading Pydantic models from JSON without running out of memory

https://pythonspeed.com/articles/pydantic-json-memory/
30•itamarst•2h ago•11 comments

Improving performance of rav1d video decoder

https://ohadravid.github.io/posts/2025-05-rav1d-faster/
228•todsacerdoti•8h ago•77 comments

Launch HN: WorkDone (YC X25) – AI Audit of Medical Charts

43•digitaltzar•4h ago•39 comments

Fast Allocations in Ruby 3.5

https://railsatscale.com/2025-05-21-fast-allocations-in-ruby-3-5/
133•tekknolagi•6h ago•36 comments

A South Korean grand master on the art of the perfect soy sauce

https://www.theguardian.com/world/2025/may/21/without-time-there-is-no-flavour-a-south-korean-grand-master-on-the-art-of-the-perfect-soy-sauce
83•n1b0m•1d ago•34 comments

Show HN: SQLite JavaScript - extend your database with JavaScript

https://github.com/sqliteai/sqlite-js
119•marcobambini•6h ago•36 comments

Show HN: rtcollector - A modular, RedisTimeSeries-native observability agent

https://github.com/xe-nvdk/rtcollector
4•ignaciovdk•14m ago•0 comments

Problems in AI alignment: A scale model

https://muldoon.cloud/2025/05/22/alignment.html
4•hamburga•42m ago•0 comments

Kangaroo: A flash cache optimized for tiny objects (2021)

https://engineering.fb.com/2021/10/26/core-infra/kangaroo/
8•PaulHoule•1h ago•0 comments

Research Uncovers the Parthenon's Spectacular Lighting Effects for Athena in Antiquity

https://arkeonews.net/research-uncovers-the-parthenons-spectacular-lighting-effects-for-athena-in-antiquity/
8•bookofjoe•3d ago•0 comments

Planetfall

https://somethingaboutmaps.wordpress.com/2025/05/20/planetfall/
270•milliams•10h ago•67 comments

Practicing graphical debugging using visualizations of the Hilbert curve

https://akkartik.name/debugUIs.html
9•akkartik•1h ago•0 comments

I Built My Own Audio Player

https://nexo.sh/posts/why-i-built-a-native-mp3-player-in-swiftui/
105•nexo-v1•5h ago•62 comments

The scientific “unit” we call the decibel

https://lcamtuf.substack.com/p/decibels-are-ridiculous
538•Ariarule•15h ago•420 comments

Trump administration halts Harvard's ability to enroll international students

https://www.nytimes.com/2025/05/22/us/politics/trump-harvard-international-students.html
106•S0y•2h ago•46 comments

Show HN: DockFlow – Switch between multiple macOS Dock layouts instantly

https://dockflow.appitstudio.com/
37•pugdogdev•3h ago•24 comments

Adventures in Symbolic Algebra with Model Context Protocol

https://www.stephendiehl.com/posts/computer_algebra_mcp/
70•freediver•6h ago•19 comments

Benchmarking Crimes Meet Formal Verification

https://microkerneldude.org/2025/04/27/benchmarking-crimes-meet-formal-verification/
21•snvzz•3d ago•3 comments

The "AI 2027" Scenario: How realistic is it?

https://garymarcus.substack.com/p/the-ai-2027-scenario-how-realistic
68•NotInOurNames•2h ago•91 comments

We’ll be ending web hosting for your apps on Glitch

https://blog.glitch.com/post/changes-are-coming-to-glitch/
26•js4ever•2h ago•4 comments

Four years of sight reading practice

https://sandrock.co.za/carl/2025/05/four-years-of-sight-reading-pracice/
116•chthonicdaemon•3d ago•59 comments

Poireau: A Sampling Allocation Debugger

https://github.com/backtrace-labs/poireau
7•luu•3d ago•2 comments

Near-infrared spatiotemporal color vision enabled by upconversion contact lenses

https://www.cell.com/cell/fulltext/S0092-8674(25)00454-4
35•ArnoVW•4h ago•25 comments

Show HN: Pi Co-pilot – Evaluation of AI apps made easy

https://withpi.ai/
13•achintms•7h ago•3 comments

The Philosophy of Byung-Chul Han (2020)

https://newintrigue.com/2020/06/29/the-philosophy-of-byung-chul-han/
44•-__---____-ZXyw•6h ago•11 comments

Fast Allocations in Ruby 3.5

https://railsatscale.com/2025-05-21-fast-allocations-in-ruby-3-5/
133•tekknolagi•6h ago

Comments

alberth•5h ago
Can someone explain: is YJIT being abandoned in favor of the new ZJIT? [0]

And if so, will YJIT features like Fast Allocations be brought to ZJIT?

[0]: https://railsatscale.com/2025-05-14-merge-zjit/

firemelt•5h ago
After reading your source, I'd say YJIT is still there until ZJIT is ready and on par with YJIT.

And the features will be there when it's there.

ksec•5h ago
>For this reason, we will continue maintaining YJIT for now and Ruby 3.5 will ship with both YJIT and ZJIT. In parallel, we will improve ZJIT until it is on par (features and performance) with YJIT.

I guess YJIT will always be faster in warmup and have a smaller increase in memory usage. ZJIT, being more traditional, should bring more speedup than YJIT.

But most of the speedup right now is still coming from rewriting C into Ruby.

uticus•3h ago
> But most of the speedup right now is still coming from rewriting C into Ruby.

At a quick glance this statement seems backwards - shouldn't C always be faster? Or maybe I'm misunderstanding how the JIT truly works.

vidarh•3h ago
Unless your JIT can analyse the full code, a transition between bytecode and native code is often costly because the JIT won't be able to optimize the full path. Once your JIT generates good enough code, it becomes faster to avoid that transition, even in cases where the native code might still be faster in isolation.

EDIT: Note that this isn't an inherent limit. You could write a JIT that could analyze the compiled C code too. It's just that it's much harder to do.

molf•3h ago
C itself is fast; it's calls to C from Ruby that are slow. [1]

Crossing the Ruby -> C boundary means that a JIT compiler cannot optimize the code as much; because it cannot alter or inline the C code methods. Counterintuitively this means that rewriting (certain?) built-in methods in Ruby leads to performance gains when using YJIT. [2]

[1]: https://railsatscale.com/2023-08-29-ruby-outperforms-c/
[2]: https://jpcamara.com/2024/12/01/speeding-up-ruby.html
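
To make the boundary cost concrete, here is a minimal sketch (not from the linked posts; the method name and numbers are made up) of how one might compare a C-backed builtin against a pure-Ruby equivalent that YJIT can see into. Whether the Ruby version actually wins depends on the Ruby/YJIT version and the workload:

    # Run as: ruby --yjit bench.rb (requires the benchmark-ips gem)
    require "benchmark/ips"

    class Array
      # Pure-Ruby reimplementation, for illustration only: YJIT can compile and
      # inline this, but it cannot look inside the C implementation of #sum.
      def ruby_sum
        total = 0
        i = 0
        while i < size
          total += self[i]
          i += 1
        end
        total
      end
    end

    nums = (1..1_000).to_a

    Benchmark.ips do |x|
      x.report("C builtin #sum") { nums.sum }       # crosses the Ruby -> C boundary
      x.report("pure-Ruby #sum") { nums.ruby_sum }  # stays in JIT-compiled Ruby
      x.compare!
    end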

nightpool•38m ago
The sibling comments mention that C is used in a lot of places in Ruby that incur cross-language overheads, which is true, but it's also just true that in general, even ignoring this overhead, JIT'd functions are going to be faster than their comparable C functions, because 1) they have more profiling information to work from, 2) they have more type information, and (as a consequence of 1 and 2) 3) they're more likely to be monomorphized, and the compiler is more able to inline specialized variants of them into different chunks of the code. Among other optimizations!
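
As a small, made-up illustration of points 1-3: a method that is generic in the source can still be monomorphic at runtime, which is exactly the information a profiling JIT can exploit and a precompiled C function cannot:

    # `total` is written generically, but if a given callsite only ever passes
    # Integers, a profiling JIT can specialize `sum + x` to integer addition and
    # inline it; ahead-of-time-compiled C has to stay generic (or branch).
    def total(items)
      sum = 0
      items.each { |x| sum = sum + x }
      sum
    end

    total([1, 2, 3])  # this callsite only ever sees Integers
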
uticus•8m ago
> ...they have more profiling information to be able to work from... more type information... more likely to be monomorphized, and the compiler is more able to inline specialized variants of them into different chunks of the code.

this is fascinating to me. i always assumed C had everything in the language that was needed for the compiler to use. in other words, the compiler may have a lot to work through, but the pieces are all available. but this makes it sound like JIT'd functions provide more info to the compiler (more pieces to work with). is there another language besides C that does have language features to indicate to the compiler how to make things as performant as possible?

tenderlove•4h ago
It's not being abandoned, we're just shifting focus to evaluate a new style of compiler. YJIT will still get bug fixes and performance improvements.

ZJIT is a method-based JIT (the type of compiler traditionally taught in schools) whereas YJIT is a lazy basic block versioning (LBBV) compiler. We're using what we learned developing and deploying YJIT to build an even better JIT compiler. IOW we're going to fold some of YJIT's techniques into ZJIT.

> And if so, will YJIT features like Fast Allocations be brought to ZJIT?

It may not have been clear from the post, but this fast allocation strategy is actually implemented in the byte code interpreter. You will get a speedup without using any JIT compiler. We've already ported this fast-path to YJIT and are in the midst of implementing it in ZJIT.
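
To see the effect locally, a rough micro-benchmark along these lines (the class is illustrative, not from the post) can be run once without any JIT and once with --yjit:

    require "benchmark/ips"

    class Point
      def initialize(x, y)
        @x = x
        @y = y
      end
    end

    Benchmark.ips do |x|
      # Plain positional-argument allocation; per the post, the fast path should
      # apply in the interpreter as well as under YJIT.
      x.report("Point.new")  { Point.new(1, 2) }
      x.report("Object.new") { Object.new }
      x.compare!
    end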

FooBarWidget•3h ago
Why is a traditional method-based JIT better than an LBBV JIT? I thought YJIT is LBBV because it's a better fit for Ruby, whereas a traditional method-based JIT is more suitable for static languages like Java.
tenderlove•2h ago
One reason is that we think we can make better use of registers. Since LBBV doesn't "see" all blocks in a particular method all at once, it's much more challenging to optimize register use across basic blocks. We've added type profiling, so ZJIT can "learn" types from the runtime.
ysavir•1h ago
Thanks for all the work you all are putting into Ruby! The improvements in the past few years have been incredible and I'm excited to see the continuous efforts in this area.
strzibny•1h ago
Awesome, thanks for all the good work on Ruby!
ksec•5h ago
I know I may be jumping the gun a little here, but I wonder what percentage speedup we could expect on typical Rails applications, especially with Active Record.
GGO•4h ago
So far no diff here (https://speed.yjit.org/). But the build is from May 14, so maybe it will show up in a new build?
tempest_•4h ago
From the outside looking in, Ruby is Rails at this point.
firemelt•5h ago
Does this mean more speed for all Rails/Active Record collections?
90s_dev•5h ago
It seems to me like all languages are converging towards something like WASM. I wonder if in 20 years we will see WASM become the de facto platform that all apps can compile to and all operating systems can run near-natively, with only a thin layer like WASI but more convenient.
berkes•5h ago
Wasn't this the idea of the JVM?
foldr•4h ago
And of course the ill-fated Parrot VM associated with the Perl 6 project.
rhdjsjebshjffn•4h ago
I think that was more of a language-oriented effort than a runtime/ABI-oriented effort.
foldr•4h ago
Parrot was intended to be a universal VM. It wasn’t just for Perl.

https://www.slideshare.net/slideshow/the-parrot-vm/2126925

rhdjsjebshjffn•4h ago
Sure, I just think that's a very odd way to characterize the project. Basically anything can be a universal VM if you put enough effort into reimplementing the languages. Much of what sets Parrot apart is its support for frontend tooling.
foldr•2h ago
“The Parrot VM aims to be a universal virtual machine for dynamic languages…”

That’s how the people working on the project characterized it.

rhdjsjebshjffn•1h ago
I certainly think the humor in Parrot/Rakudo (and why they still come up today) is how little of their own self-image the proponents could perceive. The absolute irony of thinking that Perl's strength was due to familiarity with text manipulation rather than the cultural mass...
90s_dev•4h ago
I think so, but that was the 90s, before we had the hindsight to get it right. Plus that was mostly just Sun, right? WASM is backed by all the browsers, and it looks like MS might be looking at bridging it with its own kernel or something?
lloeki•4h ago
> that was the 90s

In the meantime the CLR happened too.

And - to an extent - LLVM IR.

bgwalter•3h ago
I don't know. The integration of Java applets was way smoother than WASM.

Security-wise, perhaps a different story, though let's wait until WASM is in wide use with filesystem access and the bugs start to appear.

hueho•3h ago
Java bytecode was originally never intended to be used with anything other than Java - unlike WASM, it's very much designed to describe programs using virtual dispatch and automatic memory management. Sun eventually added stuff like invokedynamic to make it easier to implement dynamic languages (at the time, stuff like Ruby and Python), but it was always a bit of a round peg in a square hole.

By comparison, WASM is really more like traditional assembly, only running inside a sandbox.

zerd•1h ago
As predicted in 2014 here: https://www.destroyallsoftware.com/talks/the-birth-and-death...
hinkley•2h ago
> I’ve been interested in speeding up allocations for quite some time. We know that calling a C function from Ruby incurs some overhead, and that the overhead depends on the type of parameters we pass.

> it seemed quite natural to use the triple-dot forwarding syntax (...).

> Unfortunately I found that using ... was quite expensive

> This lead me to implement an optimization for ... .

That’s some excellent yak shaving. And speeding up … in any language is good news, even if allocation is not faster.
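
For anyone unfamiliar with the syntax being discussed: ... forwards positional, keyword, and block arguments (Ruby 2.7+). A simplified, hypothetical Class#new-style wrapper, not the actual CRuby implementation:

    class Factory
      def self.build(...)
        new(...)  # forwards positionals, keywords, and the block in one go
      end

      def initialize(x, y: 0, &blk)
        @x, @y, @blk = x, y, blk
      end
    end

    Factory.build(1, y: 2) { :done }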

hinkley•2h ago
> It’s very rare for code to allocate exactly the same type of object many times in a row, so the class of the instance local variable will change quite frequently.

That’s dangerous thinking because constructors will be a bimodal distribution.

Either a graph of calls or objects will contain a large number of unique objects, layers of alternating objects, or a lot of one type of object. Any map function, for instance, will tend to return a bunch of the same object. When the median and the mean diverge like this, your thinking about perf gets muddy. An inline cache will make bulk allocations in list comprehensions faster. It won’t make creating DAGs faster. One is better than none.
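
Roughly the two shapes being described (class names are made up); a single-entry inline cache helps the first loop and gets invalidated repeatedly by the second:

    points = coords.map { |(x, y)| Point.new(x, y) }                  # same class every iteration
    nodes  = items.map  { |i| i.leaf? ? Leaf.new(i) : Branch.new(i) } # classes alternate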

masklinn•1h ago
> One is better than none.

Not necessarily. An inline cache is cheap but it's not free, even less so when it also comes with the expense of moving Class#new from C to Ruby. It's probably not worth speeding up the 1% at the expense of the 99%.

> An inline cache will make bulk allocations in list comprehensions faster.

Only if such comprehensions create exactly one type of object, if they create two it's going to slow them down, and if they create zero (just do data extraction) it won't do anything.

hinkley•25m ago
> Only if such comprehensions create exactly one type of object,

We just had this conversation maybe a month ago. If it’s 50-50 then you are correct. However, if it’s skewed, then it depends. I can’t recall what ratio was discovered to be workable; it was more than 50% and less than or equal to 90%.

munificent•15m ago
> Any map function for instance will tend to return a bunch of the same object.

Yes, but if it ends up creating any ephemeral objects in the process of determining those returned objects, then the allocation sequence is still not homogeneous. In Ruby, according to the article, even calling a constructor with named arguments allocates, so it's very easy to still end up cycling through allocating different types.

At the same time, the callsite for any given `.new()` invocation will almost always be creating an instance of the exact same class. The target expression is nearly always just a constant name. That makes it a prime candidate for good inline caching at those callsites.