frontpage.

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
1•breve•2m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•4m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•4m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•5m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•10m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•16m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•17m ago•1 comments

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•22m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•24m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•30m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•34m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•34m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•38m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•39m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•41m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•43m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•46m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•47m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•49m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•50m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•52m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•55m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Julia 1.12 brings progress on standalone binaries and more

https://lwn.net/Articles/1044280/
47•leephillips•2mo ago

Comments

ekjhgkejhgk•2mo ago
This is so exciting.

Julia's multiple dispatch is really powerful. In my opinion it's a great user experience for implementing generic code.

It has a downside: since you don't know at build-time what types you'll shove inside your functions, you don't know what to compile.

But sometimes the user does know. What this does, according to my understanding, is let you tell the compiler which types you'll be using so it can compile everything ahead of time.
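
A minimal sketch of that idea (the function name and types are made up, not from the article): a generic method written once, plus precompile calls that pin down the concrete argument types so the specializations can be built ahead of time.

    # A generic method: multiple dispatch lets one definition serve any
    # numeric element type.
    function axpy!(y::AbstractVector{T}, a::T, x::AbstractVector{T}) where {T<:Number}
        @inbounds for i in eachindex(y, x)
            y[i] += a * x[i]
        end
        return y
    end

    # If the user knows up front which concrete types will actually flow
    # through, they can say so, and those specializations can be compiled
    # ahead of time.
    precompile(axpy!, (Vector{Float64}, Float64, Vector{Float64}))
    precompile(axpy!, (Vector{Int}, Int, Vector{Int}))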

mccoyb•2mo ago
> Julia's "secret sauce", the dynamic type system and method dispatch that endows it with its powers of composability, will never be a feature of languages such as Fortran. The tradeoff is a more complex compilation process and the necessity to have part of the Julia runtime available during execution.

> The main limitation is the prohibition of dynamic dispatch. This is a key feature of Julia, where methods can be selected at run time based on the types of function arguments encountered. The consequence is that most public packages don't work, as they may contain at least some instances of dynamic dispatch in contexts that are not performance-critical. Some of these packages can and will be rewritten so that they can be used in standalone binaries, but, in others, the dynamic dispatch is a necessary or desirable feature, so they will never be suitable for static compilation.

The problem (which the author didn't focus on, but which I believe to be the case) that Julia willingly foisted on itself in the pursuit of maximum performance is _invoking the compiler at runtime_ to specialize methods when type information is finally known.

Method dispatch can be done statically. For instance, what if I don't know what method to call via abstract interpretation? Well, use a bunch of branches. Okay, you say, but that's garbage for performance ... well, raise a compiler error or warning like JET.jl so someone knows that it is garbage for performance.

Now, my read on this work is that it has the noble goal of prying a different, more static version of Julia free from this compiler design decision.

But I think at the heart of this is an infrastructural problem ... does one really need to invoke the compiler at runtime? What space of programs is that serving that cannot be served statically, or with a bit of upfront user refactoring?

Open to be shown wrong, but I believe this is the key compiler issue.
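
A rough illustration of the "bunch of branches" idea (hypothetical types, not from the thread): instead of leaving an abstractly typed call site to dynamic dispatch, enumerate the possible concrete types by hand so every call target is known statically.

    abstract type Shape end
    struct Circle <: Shape; r::Float64 end
    struct Square <: Shape; s::Float64 end

    area(c::Circle) = pi * c.r^2
    area(s::Square) = s.s^2

    # Dynamic version: the element type is abstract, so each call to `area`
    # is a runtime dispatch.
    total_area_dynamic(shapes::Vector{Shape}) = sum(area, shapes)

    # "Branchy" version: enumerate the concrete types by hand. Every call
    # target is known at compile time; only the branch is decided at runtime.
    function total_area_static(shapes::Vector{Shape})
        acc = 0.0
        for s in shapes
            if s isa Circle
                acc += area(s)      # statically resolved to area(::Circle)
            elseif s isa Square
                acc += area(s)      # statically resolved to area(::Square)
            else
                error("unsupported shape: $(typeof(s))")
            end
        end
        return acc
    end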

DNF2•2mo ago
This is not how I understand the performance model. Allowing invocation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.

In performant Julia code, the compiler is not invoked, because types are statically inferred. In some cases you can have dynamic dispatch, but that doesn't necessarily mean that the compiler needs to run. Instead you can get runtime lookup of previously compiled methods. Dynamic dispatch does not necessitate running the compiler.
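
For example (hypothetical code): once the needed specializations exist, a type-unstable call site only needs a runtime method lookup; no new code has to be compiled for it.

    f(x::Int) = x + 1
    f(x::Float64) = 2x

    f(1); f(1.0)    # warm up: both specializations get compiled here

    # The container's element type is Any, so each call to `f` below is a
    # dynamic dispatch: the method is looked up at run time, but both targets
    # were already compiled, so no new specialization of `f` is generated.
    function total_of(xs)
        t = 0.0
        for x in xs
            t += f(x)
        end
        return t
    end

    total_of(Any[1, 2.0, 3, 4.0])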

mccoyb•2mo ago
I don't believe it; otherwise, why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

Perhaps there is something about subtyping which makes this answer ... not correct -- and if someone knows the real answer, I'd love to understand it.

I believe that this answer is because of performance -- if I can JIT at runtime, that's great -- I get dynamism and performance ... at the cost of a small blip at runtime.

And yes, "performant Julia code" -- that's the static subset of the language that I roughly equated to be the subset which is trying to be pried free from the dynamic "invoking the compiler again" part.

DNF2•2mo ago
I'm not exactly sure what you don't believe; your comment is hard to follow, or relies on premises I haven't detected. What you are describing in your first paragraph is somewhat reminiscent of dynamic dispatch, which Julia does use, but which generally hampers performance. It is something to avoid in most cases.

Anyway, performance in Julia relies heavily on statically inferring types and aggressive type specialization at compile time. Triggering the compiler later, during actual runtime, can happen, but is certainly not beneficial for performance, and it's quite unusual to claim that it's central to the performance model of Julia.

If you are asking why Julia allows recompiling code and has dynamic types, it's not for performance, but to allow an interactive workflow and user friendly dynamism. It is the central tradeoff in Julia to enable this while retaining performance. If performance was the only concern, the language would be very different.

mccoyb•2mo ago
I used Julia for 4 years. I'm not a moron: I'm familiar with how it works, I've written several packages in it, including some speculative compiler ones.

You claimed:

> Allowing invokation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.

I asked:

> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

Which can be done completely ahead of time, before runtime, and doesn't rely on re-invoking the compiler, thereby making this whole "ahead of time compilation only works for a subset of Julia code" problem disappear.

Do you understand now?

My original comment:

> The problem (which the author didn't focus on, but which I believe to be the case) that Julia willingly hoisted on itself in the pursuit of maximum performance is _invoking the compiler at runtime_ to specialize methods when type information is finally known.

is NOT a claim about the overall architecture of Julia -- it's a point about this specific problem (Julia's static ahead-of-time compilation) which is currently highly limited.

DNF2•2mo ago
First of all, I think this sort of aggressive tone is unwarranted.

Secondly, I think it's on you to clarify that you were talking specifically and exclusively about static compilation to standalone binaries. Re-reading your first post strongly gives the impression that you were talking about the compilation strategy in general.

I would also remind you that Julia always does just-ahead-of-time compilation.

Furthermore, my limited understanding of the static compiler (--trim feature), based on hearsay, is that it does pretty much what you are suggesting, supporting dynamic dispatch as long as one can enumerate all the types in advance (though requiring special implementation tricks). Open-ended type sets are not at all supported.

eigenspace•2mo ago
> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

This is exactly what the new AOT compiler (juliac) does. The original article is just a bit inaccurate.

The problem, though, is that if you have a truly dynamic call site where you have no idea which method body will be called, then the AOT compiler can't know whether the right method specializations will survive the trimming process, so you'll get errors or warnings when compiling with the --trim feature active (--trim is what is used to make the AOT-compiled binaries small).

However, there are still lots of cases where you can have a dynamic dispatch but can convince the compiler that there will be an already compiled method signature for every possible specialization. In that case --trim will work fine and do exactly what you described above.
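
A sketch of that distinction (hypothetical functions; exactly which call sites the current --trim implementation accepts may differ):

    describe(x::Int) = "int: $x"
    describe(x::Float64) = "float: $x"

    # Closed set: the element type is a small Union, so every specialization
    # of `describe` reachable from this call can be enumerated in advance.
    # A call site like this can survive trimming.
    report(xs::Vector{Union{Int,Float64}}) = map(describe, xs)

    # Open-ended set: Vector{Any} puts no bound on the argument types, so the
    # trimming pass cannot know which specializations to keep, and a --trim
    # build will warn or error about this call site.
    report_any(xs::Vector{Any}) = map(describe, xs)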

nineteen999•2mo ago
> My experiments with a "hello world" program resulted in a 1.7MB binary and a directory of library files that occupied a further 91MB

lol

pjmlp•2mo ago
If you want to LOL even more, see Rust or Go with a fully statically linked binary without using something like UPX.

nineteen999•2mo ago
I have, and what's worse, those ecosystems full-on encourage static linking as a panacea; at least Rust used to.

ChrisRackauckas•2mo ago
The caveat to mention here is that this is with the standard IO interface. If you use Core.println instead of println you get something substantially smaller. A limitation of v1.12 is that the handling of IO is not that great, and this is something that IIUC is being addressed in the v1.13 updates.
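
For reference, a minimal sketch of the comparison (hypothetical file names, assuming the @main entry-point convention for standalone binaries; actual sizes will vary):

    # hello_full.jl -- goes through the standard IO machinery
    function (@main)(args)
        println("hello, world")
        return 0
    end

    # hello_core.jl -- Core.println bypasses most of the IO stack, which is
    # what is claimed to shrink the trimmed output substantially
    function (@main)(args)
        Core.println("hello, world")
        return 0
    end
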
leephillips•2mo ago
I just tried compiling using Core.println and still got a 92MB build directory.