frontpage.

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•50s ago•1 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•1m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
1•nick007•2m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•3m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•3m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•5m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•7m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
1•momciloo•7m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•7m ago•1 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•7m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•8m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•8m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•8m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•11m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•11m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•13m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•14m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•15m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
4•randycupertino•16m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•19m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
2•adammfrank•19m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•21m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•21m ago•1 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•21m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•22m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•24m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•25m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•29m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•30m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•31m ago•0 comments

Julia 1.12 brings progress on standalone binaries and more

https://lwn.net/Articles/1044280/
47•leephillips•2mo ago

Comments

ekjhgkejhgk•2mo ago
This is so exciting.

Julia's multiple dispatch is really powerful. In my opinion it makes for a great user experience when implementing generic code.

It has a downside: since you don't know at build-time what types you'll shove inside your functions, you don't know what to compile.

But sometimes the user does know. As I understand it, that is what this work enables: if you tell the compiler which types you'll be using, it can compile everything ahead of time.
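
A rough, invented sketch of that scenario -- not code from the article: once every call reachable from the entry point uses concrete types, there is nothing left to specialize at run time, which is the kind of program the new standalone-binary tooling is aimed at. The `@main` entry-point form is from recent Julia versions; `add` is made up.

    # hello.jl -- illustrative only
    add(x::Int, y::Int) = x + y

    function (@main)(args)
        # `add` is only ever called with Ints here, so its one specialization
        # is known before the program ever runs.
        println(add(1, 2))
        return 0
    end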

mccoyb•2mo ago
> Julia's "secret sauce", the dynamic type system and method dispatch that endows it with its powers of composability, will never be a feature of languages such as Fortran. The tradeoff is a more complex compilation process and the necessity to have part of the Julia runtime available during execution.

> The main limitation is the prohibition of dynamic dispatch. This is a key feature of Julia, where methods can be selected at run time based on the types of function arguments encountered. The consequence is that most public packages don't work, as they may contain at least some instances of dynamic dispatch in contexts that are not performance-critical. Some of these packages can and will be rewritten so that they can be used in standalone binaries, but, in others, the dynamic dispatch is a necessary or desirable feature, so they will never be suitable for static compilation.

The problem (which the author didn't focus on, but which I believe is the real issue) that Julia willingly foisted on itself in the pursuit of maximum performance is _invoking the compiler at runtime_ to specialize methods when type information is finally known.

Method dispatch can be done statically. For instance, what if I don't know what method to call via abstract interpretation? Well, use a bunch of branches. Okay, you say, but that's garbage for performance ... well, raise a compiler error or warning like JET.jl so someone knows that it is garbage for performance.

Now, my read on this work is that it has the noble goal of prying a different, more static version of Julia free from this compiler design decision.

But I think at the heart of this is an infrastructural problem ... does one really need to invoke the compiler at runtime? What space of programs is that serving that cannot be served statically, or with a bit of upfront user refactoring?

Open to be shown wrong, but I believe this is the key compiler issue.
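
To make the "bunch of branches" idea concrete, here is a hand-written, invented example (the shape types and `area` methods are not from any real package): every callee is a statically known method instance, and only the branch on the value's type tag happens at run time.

    struct Circle; r::Float64; end
    struct Square; side::Float64; end

    area(c::Circle) = pi * c.r^2
    area(s::Square) = s.side^2

    # Dispatch resolved by explicit branches on the type tag, so no compiler
    # invocation is needed at run time.
    function total_area(shapes::Vector{Any})
        total = 0.0
        for s in shapes
            if s isa Circle
                total += area(s::Circle)   # statically known specialization
            elseif s isa Square
                total += area(s::Square)   # statically known specialization
            else
                error("unsupported shape: $(typeof(s))")  # or a warning, as suggested
            end
        end
        return total
    end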

DNF2•2mo ago
This is not how I understand the performance model. Allowing invocation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.

In performant Julia code, the compiler is not invoked, because types are statically inferred. In some cases you can have dynamic dispatch, but that doesn't necessarily mean that the compiler needs to run. Instead you can get runtime lookup of previously compiled methods. Dynamic dispatch does not necessitate running the compiler.
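
As an invented illustration of that distinction: the loop below performs dynamic dispatch because the element type is Any, but once both `describe` methods have been compiled, each dispatch is only a method-table lookup -- the compiler is not invoked again.

    describe(x::Int)    = "an Int: $x"
    describe(x::String) = "a String: $x"

    # Compile both specializations up front.
    describe(1); describe("warm-up")

    xs = Any[1, "two", 3]        # element type Any: the call site can't be inferred
    for x in xs
        println(describe(x))     # method chosen at run time by lookup, not recompilation
    end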

mccoyb•2mo ago
I don't believe it; otherwise, why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

Perhaps there is something about subtyping which makes this answer ... not correct -- and if someone knows the real answer, I'd love to understand it.

I believe the answer is performance -- if I can JIT at runtime, that's great -- I get dynamism and performance ... at the cost of a small blip at runtime.

And yes, "performant Julia code" -- that's the static subset of the language that I roughly equated to be the subset which is trying to be pried free from the dynamic "invoking the compiler again" part.

DNF2•2mo ago
I'm not exactly sure what you don't believe; your comment is hard to follow, or relies on premises I haven't detected. What you are describing in your first paragraph is somewhat reminiscent of dynamic dispatch, which Julia does use, but which generally hampers performance. It is something to avoid in most cases.

Anyway, performance in Julia relies heavily on statically inferring types and aggressive type specialization at compile time. Triggering the compiler later, during actual runtime, can happen, but is certainly not beneficial for performance, and it's quite unusual to claim that it's central to the performance model of Julia.

If you are asking why Julia allows recompiling code and has dynamic types, it's not for performance, but to allow an interactive workflow and user friendly dynamism. It is the central tradeoff in Julia to enable this while retaining performance. If performance was the only concern, the language would be very different.

mccoyb•2mo ago
I used Julia for 4 years. I'm not a moron: I'm familiar with how it works, I've written several packages in it, including some speculative compiler ones.

You claimed:

> Allowing invokation of the compiler at runtime is definitely not something that is done for performance, but for dynamism, to allow some code to run that could not otherwise be run.

I asked:

> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

Which can be done completely ahead of time, before runtime, and doesn't rely on re-invoking the compiler, thereby making this whole "ahead of time compilation only works for a subset of Julia code" problem disappear.

Do you understand now?

My original comment:

> The problem (which the author didn't focus on, but which I believe to be the case) that Julia willingly hoisted on itself in the pursuit of maximum performance is _invoking the compiler at runtime_ to specialize methods when type information is finally known.

is NOT a claim about the overall architecture of Julia -- it's a point about this specific problem (Julia's static ahead-of-time compilation) which is currently highly limited.

DNF2•2mo ago
First of all, I think this sort of aggressive tone is unwarranted.

Secondly, I think it's on you to clarify that you were talking specifically and exclusively about static compilation to standalone binaries. Re-reading your first post strongly gives the impression that you were talking about the compilation strategy in general.

I would also remind you that Julia always does ahead-of-time compilation.

Furthermore, my limited understanding of the static compiler (--trim feature), based on hearsay, is that it does pretty much what you are suggesting, supporting dynamic dispatch as long as one can enumerate all the types in advance (though requiring special implementation tricks). Open-ended type sets are not at all supported.

eigenspace•2mo ago
> why not just compile a static but generic version of the method with branches based on the tags of values? ("Can't figure out the types, wait until runtime and then just branch to the specialized method instances which I do know the types for")

This is exactly what the new AOT compiler (juliac) does. The original article is just a bit inaccurate.

The problem though, is that if you have a truly dynamic call-site where you have no idea which method body will be called, then the AOT compiler can't know if the right method specializations will survive the trimming process, so you'll get errors or warnings when compiling with the --trim feature active (--trim is what is used to make the AOT compiled binaries small).

However, there are still lots of cases where you can have a dynamic dispatch but can convince the compiler that there will be an already compiled method signature for every possible specialization. In that case --trim will work fine and do exactly what you described above.
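
A sketch of the "enumerable set of types" case, with invented type names; whether trimming accepts a given pattern ultimately depends on what inference can prove, so this shows the idea rather than a guarantee:

    struct Click; x::Int; y::Int; end
    struct KeyPress; key::Char; end

    handle(e::Click)    = "click at ($(e.x), $(e.y))"
    handle(e::KeyPress) = "key: $(e.key)"

    # A small, closed Union lets the compiler union-split the dispatch into
    # static branches, so both `handle` specializations are visible ahead of time.
    process(events::Vector{Union{Click,KeyPress}}) = map(handle, events)

    # Vector{Any} leaves the call site open-ended -- the truly dynamic case
    # that static trimming can't prove safe.
    process_any(events::Vector{Any}) = map(handle, events)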

nineteen999•2mo ago
> My experiments with a "hello world" program resulted in a 1.7MB binary and a directory of library files that occupied a further 91MB

lol

pjmlp•2mo ago
If you want to LOL even more, see Rust or Go with a fully statically linked binary without using something like UPX.

nineteen999•2mo ago
I have, and what's worse, those ecosystems full-on encourage static linking as a panacea -- at least Rust used to.

ChrisRackauckas•2mo ago
The caveat to mention here is that this is with the standard IO interface. If you use Core.println instead of println you get something substantially smaller. A limitation of v1.12 is that the handling of IO is not that great, and this is something that IIUC is being addressed in the v1.13 updates.
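
For reference, the substitution being described, using the same toy entry point sketched earlier; Core.println avoids the standard IO machinery that plain println pulls in:

    function (@main)(args)
        Core.println("hello, world")   # bypasses the standard IO interface
        return 0
    end
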
leephillips•2mo ago
I just tried compiling using Core.println and still got a 92MB build directory.