
NativeJIT: A C++ expression -> x64 JIT (2018)

https://github.com/BitFunnel/NativeJIT
75•nateb2022•7mo ago

Comments

anon-3988•7mo ago
Interesting, this is very similar to llvmlite.Builder which is a wrapper over llvm. I am probably going to create something similar for my Python -> C -> assembly JIT.
Twirrim•7mo ago
There's also libgccjit, https://gcc.gnu.org/wiki/JIT, though all of the third party language bindings appear to be stale for it.
cout•7mo ago
Interesting, I didn't know about that one! Does it ship with GCC?
Twirrim•7mo ago
Yes, has done for a while
globalnode•7mo ago
that project sounds interesting as well, but what do you do with libraries in python.. have the generated C code translate back to python calls?
anon-3988•7mo ago
The point is not to compile entire Python programs; the point is to optimize the specific parts of Python that matter. To illustrate, consider calculating the sum of 1 to N in Python:

    def sum(N):
        x = 0
        for i in range(N):
            x += i
        return x

There's absolutely zero reason why this code has to involve pushing and popping values on the Python virtual stack. It should be compiled into assembly, with a small conversion layer between C and PyObject.

The goal is to get to a point where we can even do non-trivial things inside this optimized context.

Python will never be able to go down to assembly entirely, because Python supports doing "weird shit" like dynamically creating modules — hell, even creating a Python file, running eval on it, and loading it as a new module. How are you even going to transpile that to assembly?

So I approach the problem the same way numba does, but hopefully more modern and simpler (implementation-wise). I'm planning to do it in Rust, and the backend should be agnostic (GCC, Clang, whatever C compiler is available).

hayley-patton•7mo ago
> "weird shit" like dynamically creating modules, hell, even creating a Python file, running eval on that, and loading it as a new module.

Expect that you don't, and deoptimise when you do: https://bibliography.selflanguage.org/_static/dynamic-deopti...

It's really not that impossible.
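The speculate-and-deoptimise idea can be shown with a toy guard in plain Python (all names here are invented for illustration): run a specialised fast path while a guard holds, and fall back to the generic path permanently the first time it fails.

```python
def specialize(generic, fast, guard):
    """Call the fast path while the guard holds; deoptimise to the
    generic path (permanently) the first time the guard fails."""
    state = {"deopt": False}
    def wrapper(x):
        if not state["deopt"] and guard(x):
            return fast(x)
        state["deopt"] = True   # speculation failed once: stay generic
        return generic(x)
    return wrapper

# Generic path handles anything; fast path assumes ints only.
generic_double = lambda x: x + x
fast_double = lambda x: x << 1          # int-only trick
f = specialize(generic_double, fast_double, lambda x: isinstance(x, int))

assert f(21) == 42          # fast path
assert f("ab") == "abab"    # guard fails -> deoptimised
assert f(21) == 42          # still correct, now via the generic path
```

Real VMs deoptimise mid-function via on-stack replacement rather than at call boundaries, but the principle is the same: compile for the common case, keep a checked escape hatch.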

lhames•7mo ago
The LLVM ORC and Clang-REPL projects would be worth checking out if you haven't already: there's a healthy community of high performance computing folks working in this space over at https://compiler-research.org.

In particular, this talk might be interesting:

"Unlocking the Power of C++ as a Service: Uniting Python's Usability with C++'s Performance"

Video: https://www.youtube.com/watch?v=rdfBnGjyFrc Slides: https://llvm.org/devmtg/2023-10/slides/techtalks/Vassilev-Un...

almostgotcaught•7mo ago
it's mostly upstream now, no need to dig around in their repos

https://github.com/llvm/llvm-project/tree/main/clang/tools/c...

b0a04gl•7mo ago
how deterministic is the emit, really? if i feed the same expression tree twice (same node layout, same captures), do i get the exact same bytes out every time (ignoring relocs)? if the output is byte-stable across runs for the same input graph, that opens up memoized JIT paths. worth checking if the current impl already does this or needs a pass to normalise alloc order
jdnend•7mo ago
Why wouldn't it be deterministic?
xnacly•7mo ago
Several possible reasons:

- parallelism
- concurrent machine code gen
- different optimisations for different runs, producing differing machine code order, etc.
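Even without byte-stable output, the memoized-JIT idea from upthread works if the cache is keyed on a canonical serialisation of the expression tree rather than on the emitted bytes. A hypothetical sketch (nested tuples stand in for the expression tree, and a Python closure stands in for real codegen):

```python
import hashlib

def canonical(tree):
    """Deterministic serialisation of a nested-tuple expression tree,
    independent of object identity or allocation order."""
    if isinstance(tree, tuple):
        return "(" + " ".join(canonical(t) for t in tree) + ")"
    return str(tree)

def to_py(tree):
    """Lower the tree to a Python expression string (codegen stand-in)."""
    if isinstance(tree, tuple):
        op, a, b = tree
        return f"({to_py(a)} {op} {to_py(b)})"
    return str(tree)

_compiled = {}

def jit(tree):
    """'Compile' a tree at most once per canonical form."""
    key = hashlib.sha256(canonical(tree).encode()).hexdigest()
    if key not in _compiled:
        _compiled[key] = eval("lambda x: " + to_py(tree))
    return _compiled[key]

f = jit(("*", "x", ("+", "x", 1)))
g = jit(("*", "x", ("+", "x", 1)))   # structurally equal tree
assert f is g                        # cache hit: same compiled object
assert f(3) == 12
```

This sidesteps the nondeterminism question entirely: the emit only has to be correct, not byte-identical, because equality is decided before codegen.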
nurettin•7mo ago
It really sounds like a job for Java (Microsoft, I know, I know.)
adwn•7mo ago
> It really sounds like a job for Java

Why?

dontlaugh•7mo ago
Because the JVM's JIT does already specialise based on runtime values.
nurettin•7mo ago
Hotspot(TM) JIT compiles Java code to machine code when it detects hot code paths, speeding up execution at runtime — exactly the use case described in the article.
adwn•7mo ago
Doesn't Hotspot have notoriously long warm-up times? Have those been exaggerated, or have they recently improved?

If you know beforehand that you'll execute some piece of code many times, the most efficient approach is to JIT-compile it right away, and not only after a lot of time has passed.
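The distinction being drawn here — eager compilation versus Hotspot-style tier-up after a warm-up period — can be modelled with a toy counter (the class and thresholds are invented; real VMs also gather type and branch profiles during warm-up, not just call counts):

```python
class TieredFunction:
    """Interpret until a hot threshold is crossed, then switch to a
    'compiled' version. A Hotspot-style toy: eager compilation would
    simply start in the 'compiled' tier."""
    def __init__(self, interpreted, compiled, threshold=2):
        self.interpreted, self.compiled = interpreted, compiled
        self.threshold, self.calls = threshold, 0
        self.tier = "interpreter"

    def __call__(self, x):
        self.calls += 1
        if self.tier == "interpreter" and self.calls > self.threshold:
            self.tier = "compiled"      # tier-up after warm-up
        fn = self.compiled if self.tier == "compiled" else self.interpreted
        return fn(x)

square = TieredFunction(lambda x: x * x, lambda x: x * x, threshold=2)
square(2); square(3)
assert square.tier == "interpreter"     # still warming up
square(4)
assert square.tier == "compiled"        # hot: now runs the compiled path
```

For Bing's one-expression-per-query workload, every function is called exactly once per document batch and then discarded, so a warm-up tier would never pay off — which is the argument for compiling immediately.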

eddythompson80•7mo ago
They have not been exaggerated, and they have not improved (well, nothing beyond the standard keeping-up-with-inflation kind of improvement).

The JVM ecosystem has given up on jit warm up time for hotspot. There are other solutions like graal.

Whenever I have to deal with Java projects, I'm always astonished at how completely "normal" a 4-minute startup time is considered for a REST endpoint project on a single-core machine.

nurettin•7mo ago
I don't know the latest metrics, I remember java servers being slow at first, then gradually picking up momentum. Are you saying that Java doesn't do what the article describes?
adwn•7mo ago
> Are you saying that Java doesn't do what the article describes?

That's right, Java doesn't do what the NativeJIT library does: Hotspot starts in an interpreted mode and only later JIT-compiles frequently executed code; NativeJIT, in contrast, immediately compiles the generated code, at least according to this description:

> Bing formulates a custom expression for each query and then uses NativeJIT to compile the expression into x64 code that will be run on a large set of candidate documents spread across a cluster of machines.

I don't know the specifics, but my guess is that by the time Hotspot would even start compiling, the user has already received the results for their query. Ergo, Java – at least with the Hotspot VM – wouldn't be suitable for this task.

nurettin•7mo ago
Ah ok pedantism that's fine thanks.
adwn•7mo ago
That's not "pedantism". Java+Hotspot are literally unfit for the task at hand.
nurettin•7mo ago
They specifically mentioned compiling at runtime in the article. I cannot be convinced otherwise. Sorry.
adwn•7mo ago
I started typing a reply to explain that "compiling at runtime" does not mean "having a warm-up phase before JIT-compilation", but this:

> I cannot be convinced otherwise.

made me think "why bother?". If you insist on being wrong, go on being wrong.

nurettin•7mo ago
Well if you insist, this is what makes me think that there is a gradual process of compiling and running native jitted code as the project runs. Hence, runtime.

> The scoring process attempts to gauge how well each document matches the user's intent, and as such, depends on the specifics of how each query was phrased. Bing formulates a custom expression for each query and then uses NativeJIT to compile the expression into x64 code that will be run on a large set of candidate documents spread across a cluster of machines.

If this is "wrong" to you, then I would like to remain wrong. In fact I would like to have my mental faculties as acutely orthogonally aligned as possible compared to yours in every possible dimension.

adwn•7mo ago
The difference between Hotspot on one hand and Bing's use of NativeJIT on the other is that after code generation (the "formulates a custom expression" part), the code is immediately JIT-compiled to machine code, while Hotspot would first interpret the generated code for a while and collect execution metrics before it JIT-compiles the code. This delay between code generation and compilation to machine code would be unacceptable for Bing's use case, hence why Java+Hotspot aren't suitable for this task. Hope this clears it up.
nurettin•7mo ago
> the code is immediately JIT-compiled to machine code

As I said multiple times, we disagree on this point. There is a scoring process during runtime which mirrors what hotspot does in the non-literal sense. And again, I'm ok with being "wrong" so off you go.

adwn•7mo ago
The "scoring process" is the generated code, which scores a page's relevance to the user's query. It has absolutely nothing to do with gathering runtime metrics about the generated code, nor with optimizing the performance of the compiled machine code, which is what Hotspot does.
whizzter•7mo ago
Having written small compilers and other code generators targeting both the JVM and .NET runtimes, I can say that the .NET equivalents have some extra simple options for scenarios like this.

Both have always supported building libraries/assemblies and loading them (the ASM library+custom classloaders for Java and AssemblyBuilder in .NET are about equally capable).

However .NET also has DynamicMethod that is specifically built to quickly build just small functions that aren't tied to larger contexts (similar API to asm/assemblybuilder).

But an even easier option for stuff exactly like in the article, which people don't really seem to widely know about, is that Linq (yes, that "sql-like" stuff) actually contains parts for deferred method building that can easily be leveraged to quickly produce customized native-code methods.

The neat part about the Linq code generator is that you can just drop in plain C# code blocks to be translated by the C# compiler into Linq snippets, and then with some helpers transform everything into Linq trees that can then be compiled.

The benefit over Asm/AssemblyBuilder/DynamicMethod is that Linq nodes are basically a built-in AST that can be directly leveraged, whereas the other APIs require some mapping of your own ASTs to the respective stack machines.

https://asm.ow2.io/

https://learn.microsoft.com/en-us/dotnet/api/system.reflecti...

https://learn.microsoft.com/en-us/dotnet/api/system.reflecti...

https://learn.microsoft.com/en-us/dotnet/api/system.linq.exp...
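A rough Python analogue of the build-a-tree-then-compile workflow described above is the standard-library `ast` module: construct expression nodes programmatically, then hand them to `compile()`, much as one builds a Linq expression tree and calls `.Compile()`. The helper below is invented for illustration; it builds a one-argument polynomial via Horner's rule.

```python
import ast

def build_poly(coeffs):
    """Construct the AST for a polynomial in x from its coefficients
    (highest degree first) using Horner's rule, then compile it.
    Loosely analogous to building a Linq expression tree and compiling it."""
    body = ast.Constant(coeffs[0])
    for c in coeffs[1:]:
        # body = body * x + c
        body = ast.BinOp(
            ast.BinOp(body, ast.Mult(), ast.Name("x", ast.Load())),
            ast.Add(), ast.Constant(c))
    fn = ast.Expression(
        ast.Lambda(
            ast.arguments(posonlyargs=[], args=[ast.arg("x")], kwonlyargs=[],
                          kw_defaults=[], defaults=[]),
            body))
    ast.fix_missing_locations(fn)   # compile() needs line/col info
    return eval(compile(fn, "<jit>", "eval"))

f = build_poly([2, 0, 1])        # 2*x*x + 1
assert f(3) == 19
```

Unlike Linq trees or NativeJIT this only yields bytecode, not machine code, but the shape of the API — nodes as a built-in AST, a one-shot compile step, a callable result — is the same.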

pjmlp•7mo ago
Usual remark regarding the age of bytecode systems and JIT (aka dynamic compilation), predating Java.
kookamamie•7mo ago
> auto & rsquared = expression.Mul(expression.GetP1(), expression.GetP1());

This is C++, no? Why not use operator overloading for the project?

plq•7mo ago
This line is part of the code that creates an AST-like structure that is then fed into the compiler. The actual multiplication is done by calling the function handle returned from the Compile method.
OskarS•7mo ago
Yes, but what I suspect the commenter was saying is that you can build the expression using operator overloading as well, so you can type "a + b", not "a.Add(b)".

I love it when libraries like this do that. z3 in python is similar, you just build your constraints using normal syntax and it all just works. Great use of operator overloading.
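The z3-style trick is that the operators build a tree rather than computing a value. A minimal sketch in Python (the `Expr` class and its methods are invented; a real library would compile the tree instead of interpreting it):

```python
class Expr:
    """Tiny expression node; operators build a tree instead of computing."""
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

    def __add__(self, other): return Expr("+", self, _wrap(other))
    def __mul__(self, other): return Expr("*", self, _wrap(other))

    def eval(self, env):
        if self.op == "var":   return env[self.left]
        if self.op == "const": return self.left
        a, b = self.left.eval(env), self.right.eval(env)
        return a + b if self.op == "+" else a * b

def _wrap(v):
    """Coerce plain numbers into constant nodes, so `x * 2` just works."""
    return v if isinstance(v, Expr) else Expr("const", v)

x = Expr("var", "x")
area = x * x + 1           # reads like math, but builds a tree
assert area.op == "+"      # the root is an Add node, not a number
assert area.eval({"x": 3}) == 10
```

The `_wrap` coercion is what lets users write bare constants on the right-hand side, the same convenience later suggested downthread for NativeJIT's immediates.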

kookamamie•7mo ago
Yes, exactly. See Eigen as an example.
BearOso•7mo ago
Except that's not what's happening. expression.Mul isn't multiplying itself against something; it's adding a Mul instruction to its list. Maybe it would have been more obvious if the method name were insertMul instead.
whizzter•7mo ago
I think what GP was referring to is that there is nothing stopping the code from being designed so that:

    AST<float> p1 = exp.GetP1();
    AST<float> rsqr = p1 * p1; // AST<float> implements an operator* overload that produces an AST<float> object

Even if many frown upon operator overloading due to the annoying magicalness of the standard library's appropriation of the shift operators (<< and >>) for "piping", it's still what makes many people prefer C++ for vector math and similar tasks.

So whilst the result isn't a direct multiplication it should still be an acceptable use since the resulting code will actually be doing multiplications.

Considering what goes on under the hood, however, I guess there might be some compiler-optimization reasons to keep everything in the expression object from the example as the holder of the data, instead of spreading it out in an allocation tree with lots of pointer chasing.

plq•7mo ago
> So whilst the result isn't a direct multiplication it should still be an acceptable use since the resulting code will actually be doing multiplications.

First, nope, if it's not multiplication it should not be using the * operator, period. Operator overloading is already overused and it leads to so much problematic code that looks fine to the untrained eye (string concat using operator+ being a famous example).

That said, you may also want to pass more options to the Mul node in the future and operator*() can only accept two arguments.

As another example, run the following Python code to see how python represents its own AST:

    import ast;print(ast.dump(ast.parse("2*PI*r*r"),indent=2))
whizzter•7mo ago
So basically you're saying that if I want to add mathematical expressions to my JIT'ed code, I should obfuscate the purpose by writing multi-line operations to build them, instead of having the ability to plonk them in more or less verbatim?

As I said, I fully agree that operator overloading is horribly overused, but if the purpose of this JIT is to quickly create JIT code with customized expressions, then those expressions should be possible to write fairly verbatim, instead of following a rule with religious zeal (is even matrix multiplication allowed to overload?).

And yes, ASTs usually contain a lot more information, such as source location etc. for debugging (here it would be interesting whether an operator overload can take C++20 source_location as a default parameter), but again, this is for a specialized JIT.

As for passing more options to mul nodes: a mul node is just that and nothing more (the only extra interesting data in a restricted scenario like this would possibly be source_location, as noted above).

dontlaugh•7mo ago
I think they didn't find it useful.

They built this to translate a search query that is only known at runtime. Presumably they already have an AST or similar, so calling methods as it is being walked isn't any harder than operators.
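Walking an existing AST and calling builder methods is straightforward to picture. A hypothetical sketch, with nested tuples as the input AST and a `Builder` whose methods append instructions and return handles, loosely in the shape of NativeJIT's API:

```python
class Builder:
    """Stand-in for a NativeJIT-style builder: each method appends an
    instruction and returns a handle (here, just the node's index)."""
    def __init__(self):
        self.code = []
    def immediate(self, v):
        self.code.append(("imm", v)); return len(self.code) - 1
    def mul(self, a, b):
        self.code.append(("mul", a, b)); return len(self.code) - 1
    def add(self, a, b):
        self.code.append(("add", a, b)); return len(self.code) - 1

def lower(node, b):
    """Walk a nested-tuple AST, calling builder methods as we go."""
    if not isinstance(node, tuple):
        return b.immediate(node)
    op, lhs, rhs = node
    l, r = lower(lhs, b), lower(rhs, b)
    return b.mul(l, r) if op == "*" else b.add(l, r)

b = Builder()
root = lower(("+", ("*", 2, 3), 1), b)   # (2 * 3) + 1
assert b.code[root][0] == "add"
assert len(b.code) == 5   # three immediates, one mul, one add
```

When the caller already has a tree, the recursive walk is the natural place to emit nodes, and operator syntax adds nothing.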

izabera•7mo ago
this looks convenient to use from c++, but the example code it generates is rather suboptimal (see https://godbolt.org/z/3rWceeYoW; no normal compiler would set up and tear down a stack frame for that), so i'm guessing there isn't any support for optimisations? what's the advantage of this over just compiling + calling dlopen/LoadLibrary on the result?
rurban•7mo ago
I guess not for the first function call, but for subsequent calls, yes. They claim that register optimizations are properly done.
whizzter•7mo ago
For simple functions, a C compiler will generate code that is perhaps 50% faster than this standard prologue/epilogue (modern CPUs eat up most of the "bloat", whereas the branch to _any_ function causes some branch-predictor pressure); as the function grows, the gains shrink, as long as the code runs somewhat in a straight line and isn't subject to cache misses.

Compared to even an optimized interpreter, this will be somewhere between 4x and 20x faster (mainly due to having far, far smaller branch-predictor costs), so even if it doesn't generate optimal code, it will still be within an order of magnitude of optimal native code, whereas an interpreter will be much further behind.

dlopen/LoadLibrary, etc. come with far more memory pressure and OS bookkeeping.

rurban•7mo ago
(2018)

Bing uses internally a better version, but improvements are not merged back to github. See https://github.com/BitFunnel/NativeJIT/issues/84#issuecommen...

smartaz42•7mo ago
FWIW, for C only, I've used libtcc (repo.or.cz/w/tinycc.git) with great success. The API is a joy, as we all expect from a Bellard project. It focuses on compilation speed; the generated code is not at all optimized.
emmanueloga_•7mo ago
Looks similar to https://github.com/kobalicek/mathpresso
cout•7mo ago
It's interesting to see C++ expressions being used to create what is, I think, an AST that then gets compiled. I would love to see some syntactic sugar, though. For example, `expression.Mul(rsquared, expression.Immediate(PI))` could be `rsquared * expression.Immediate(PI)`. With overloading, anything that is not a recognized type could be converted to an immediate, so it could simply be `rsquared * PI`. Simple control structures could even be implemented with lambdas.

I did this for ruby-libjit, and it made the JIT compiler code much easier to read. Here's an example: https://github.com/cout/ruby-libjit/blob/master/sample/fib.r...

And a real-world example (one of the very earliest JIT compilers for ruby, written in ruby, dates back to the ruby 1.8.x days): https://github.com/cout/ludicrous/blob/master/lib/ludicrous/...