
Compiling a Lisp: Lambda lifting

https://bernsteinbear.com/blog/compiling-a-lisp-12/
162•azhenley•6mo ago

Comments

Jtsummers•6mo ago
If you like Ghuloum's paper, there are three fairly recent compiler books that are inspired by it:

https://nostarch.com/writing-c-compiler - Writing a C Compiler by Nora Sandler, language agnostic for the implementation.

https://mitpress.mit.edu/9780262047760/essentials-of-compila... - Essentials of Compilation (using Racket) by Jeremy Siek

https://mitpress.mit.edu/9780262048248/essentials-of-compila... - Essentials of Compilation (using Python) by Jeremy Siek

Those last two both have open access versions.

dang•6mo ago
The paper itself has been discussed a few times:

An Incremental Approach to Compiler Construction (2006) [pdf] - https://news.ycombinator.com/item?id=29123715 - Nov 2021 (10 comments)

An Incremental Approach to Compiler Construction (2006) [pdf] - https://news.ycombinator.com/item?id=20577660 - July 2019 (5 comments)

An Incremental Approach to Compiler Construction (2006) [pdf] - https://news.ycombinator.com/item?id=13207441 - Dec 2016 (19 comments)

An Incremental Approach to Compiler Construction (2006) [pdf] - https://news.ycombinator.com/item?id=10785164 - Dec 2015 (13 comments)

Writing a Compiler in 24 Small Steps [pdf] - https://news.ycombinator.com/item?id=1652623 - Sept 2010 (16 comments)

An Incremental Approach to Compiler Construction - https://news.ycombinator.com/item?id=1408241 - June 2010 (18 comments)

(and also in comments: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)

alhazrod•6mo ago
The Essentials of Compilation (using Racket) by Jeremy Siek links to this[0], which, when downloaded, is titled "An Incremental Approach in Python" and uses Python.

[0]: https://github.com/IUCompilerCourse/Essentials-of-Compilatio...

Jtsummers•6mo ago
https://github.com/IUCompilerCourse/Essentials-of-Compilatio...
MangoToupe•6mo ago
My man. I appreciate you, and your kindness to me.
WantonQuantum•6mo ago
The "lambda lifting" seems to be referring to section 3.11 "Complex Constants" in the linked Ghuloum PDF:

Scheme’s constants are not limited to the immediate objects. Using the quote form, lists, vectors, and strings can be turned into constants as well. The formal semantics of Scheme require that quoted constants always evaluate to the same object. The following example must always evaluate to true:

    (let ((f (lambda () (quote (1 . "H")))))
      (eq? (f) (f)))
So, in general, we cannot transform a quoted constant into an unquoted series of constructions as the following incorrect transformation demonstrates:

    (let ((f (lambda () (cons 1 (string #\H)))))
      (eq? (f) (f)))
One way of implementing complex constants is by lifting their construction to the top of the program. The example program can be transformed to an equivalent program containing no complex constants as follows:

    (let ((tmp0 (cons 1 (string #\H))))
      (let ((f (lambda () tmp0)))
        (eq? (f) (f))))
Performing this transformation before closure conversion makes the introduced temporaries occur as free variables in the enclosing lambdas. This increases the size of many closures, increasing heap consumption and slowing down the compiled programs. Another approach for implementing complex constants is by introducing global memory locations to hold the values of these constants. Every complex constant is assigned a label, denoting its location. All the complex constants are initialized at the start of the program. Our running example would be transformed to:

    (labels ((f0 (code () () (constant-ref t1)))
             (t1 (datum)))
      (constant-init t1 (cons 1 (string #\H)))
      (let ((f (closure f0)))
        (eq? (f) (f))))
The code generator should now be modified to handle the data labels as well as the two internal forms constant-ref and constant-init.
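The identity requirement in the quoted passage can be mimicked in a short Python sketch (editor's illustration, hypothetical names; Python `is` playing the role of Scheme's `eq?`): rebuilding the constant on each call loses identity, while lifting its construction to the top of the program preserves it.

```python
# Sketch: lifting a complex constant preserves object identity.
# In Scheme, (quote (1 . "H")) must return the *same* object every time,
# so (eq? (f) (f)) is true. Rebuilding the constant per call breaks that.

def fresh():
    return [1, "H"]          # incorrect transformation: new object per call

_tmp0 = [1, "H"]             # construction lifted to the top of the program

def lifted():
    return _tmp0             # always the same object, like a quoted constant

assert fresh() is not fresh()   # identity lost
assert lifted() is lifted()     # identity preserved
```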
JonChesterfield•6mo ago
The idea is to move variables from the body of the function to the argument list and rewrite the call sites to match.

That decreases the size of the closure (and increases the size of the code, and of however you're passing arguments).

Do it repeatedly though and you end up with no free variables, i.e. no closure to allocate. Hence the name, the lambda (closure) has been lifted (through the call tree) to the top level, where it is now a function (and not a lambda, if following the usual conflating of anonymous function with allocated closure).

Doesn't work in the general case because you can't find all the call sites.
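A minimal Python sketch of that rewrite (editor's illustration, hypothetical names): the free variable moves into the parameter list, every call site is rewritten to pass it, and what's left is a top-level function with no captured state.

```python
# Before lifting: the inner lambda closes over the free variable n,
# so a closure must be allocated to carry n.
def scale_all(xs, n):
    f = lambda x: x * n          # n is free in the lambda's body
    return [f(x) for x in xs]

# After lifting: n becomes a formal parameter and the call site is
# rewritten to pass it explicitly. f_lifted has no free variables, so it
# can live at top level with no closure allocation.
def f_lifted(x, n):
    return x * n

def scale_all_lifted(xs, n):
    return [f_lifted(x, n) for x in xs]

assert scale_all([1, 2, 3], 10) == scale_all_lifted([1, 2, 3], 10) == [10, 20, 30]
```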

zozbot234•6mo ago
I think the "no closure to allocate" is not quite right because the captured parameters of a first-class function still need to be stored somewhere. It just happens as part of the calling code, e.g. consider how a "function object" in C++/Java works: the operator() or .call() code does not need to allocate anything, but the allocation might occur as part of constructing the object itself.
mrkeen•6mo ago
Once they've been converted from free variables to formal parameters, it's assumed you can just stack-allocate them and pop them off when you return from your lambda (which is no longer a closure).
MangoToupe•6mo ago
> Using the quote form, lists, vectors, and strings can be turned into constants as well.

So all of these forms will be transformed to bss?

JonChesterfield•6mo ago
bss stores zeros but sure, this lot could end up in rodata if you were careful about the runtime representation, or data if you were a little less careful. Treat the elf (ro)data section as the longest lived region in the garbage collector and/or don't decrement refcounts found there. Good thing to do to the language standard library.
kazinator•6mo ago
In the TXR Lisp compiler, I did lambda lifting simply: lambda expressions that don't capture variables can move to the top via a code transformation that inserts them into a load-time form (very similar to ANSI Common Lisp's load-time-value).

E.g.

  (let ((fun (lambda (x) (+ x x))))
    ...)
That can just be turned into:

  (let ((fun (load-time (lambda (x) (+ x x)))))
    ...)
Then the compilation strategy for load-time takes care of it. I had load-time working and debugged at the time I started thinking about optimizing lambdas in this way, so it was obvious.

load-time creates a kind of pseudo-constant. The compiler arranges for the enclosed expression to be evaluated just once. The object is captured and it becomes a de-facto constant after that; each time the expression is evaluated it just refers to that object.

At the VM level, constants are represented by D registers. The only reason D registers are writable is to support load-time: load time will store a value into a D register, where it becomes indistinguishable from a constant. If I were so inclined, I could put in a VM feature that write-protects the D register file after static time has finished executing.
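The evaluate-once behavior just described can be sketched in Python (editor's illustration, hypothetical names; note the real load-time evaluates at load time, whereas this sketch defers evaluation to first reference): the expression runs a single time, its value lands in a writable slot, and every later evaluation just reads that slot, like a D register that becomes a de-facto constant.

```python
# Sketch of load-time semantics: evaluate the enclosed expression once,
# cache the result, and treat it as a constant thereafter.
_UNSET = object()

def load_time(thunk):
    slot = [_UNSET]              # the "D register": written exactly once
    def ref():
        if slot[0] is _UNSET:
            slot[0] = thunk()    # first evaluation stores the value
        return slot[0]           # later evaluations just read it
    return ref

calls = []
fun_ref = load_time(lambda: calls.append("built") or (lambda x: x + x))

f1, f2 = fun_ref(), fun_ref()
assert f1 is f2                  # same object every time, like a constant
assert calls == ["built"]        # the expression ran exactly once
assert f1(21) == 42
```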

If we compile the following expression, the d0 register is initially nil. The d1 register holds 3, which comes from the (+ 3 x) expression:

  1> (compile-toplevel '(lambda () (lambda (x) (+ 3 x))))
  #<sys:vm-desc: a32a150>
  2> (disassemble *1)
  data:
      0: nil
      1: 3
  syms:
      0: sys:b+
  code:
      0: 8C000009 close d0 0 4 9 1 1 nil t2
      1: 00000400
      2: 00010001
      3: 00000004
      4: 00000002
      5: 20020003 gcall t3 0 d1 t2
      6: 04010000
      7: 00000002
      8: 10000003 end t3
      9: 8C00000E close t2 0 2 14 0 0 nil
     10: 00000002
     11: 00000000
     12: 00000002
     13: 10000400 end d0
     14: 10000002 end t2
  instruction count:
      6
  #<sys:vm-desc: a32a150>
The close instruction has d0 as its destination register ("close d0 ..."). The 9 argument in it indicates the instruction offset to jump to after the closure is created: offset 9, where another "close ..." instruction is found; that one represents the outer (lambda () ...).

We have only compiled this top-level form and not yet executed any of the code. To execute it we can call it as if it were a function, with no arguments:

  3> [*1]
  #<vm fun: 0 param>
It returns the outer lambda produced at instruction 9, as expected. When we disassemble the compiled form again, register d0 is filled in, because the close instruction at 0 has executed:

  4> (disassemble *1)
  data:
      0: #<vm fun: 1 param>
      1: 3
  syms:
      0: sys:b+
  code:
      0: 8C000009 close d0 0 4 9 1 1 nil t2
  [... SNIP; all same]

d0 now holds a #<vm fun: 1 param>, which is the compiled (lambda (x) ...). We can call the #<vm fun: 0 param> returned at prompt 3 to get that inner lambda:

  5> [*3]
  #<vm fun: 1 param>
  6> [*5 4]
  7
We can disassemble the functions from prompts 3 and 5; we get the same assembly code, but different entry points. I.e. both lambdas reference the same VM description for their code and static data:

  7> (disassemble *3)   ; <-- outer (lambda () ...)
  data:
      0: #<vm fun: 1 param>
      1: 3
  [ SNIP same disassembly ]
      9: 8C00000E close t2 0 2 14 0 0 nil
     10: 00000002
     11: 00000000
     12: 00000002
     13: 10000400 end d0    <---
     14: 10000002 end t2
  instruction count:
      6
  entry point:
     13                     <---
  #<vm fun: 0 param>
The entry point for the outer-lambda is offset 13. And that just executes "end d0": terminate and return d0, which holds the compiled inner lambda.

If we disassemble that inner lambda:

  8> (disassemble *5) ; <--- inner lambda (lambda (x) (+ 3 x)) 
  data:
      0: #<vm fun: 1 param>  <--- also in here 
      1: 3
  syms:
      0: sys:b+
  code:
      0: 8C000009 close d0 0 4 9 1 1 nil t2
      1: 00000400
      2: 00010001
      3: 00000004
      4: 00000002    <---
      5: 20020003 gcall t3 0 d1 t2    <-- sys:b+ 3 x
      6: 04010000
      7: 00000002
      8: 10000003 end t3
  [ SNIP ... ]
  entry point:
      4             <---
  #<vm fun: 1 param>
The entry point is 4, referencing into the lifted lambda that got placed into d0. Entry point 4 falls within the close instruction, in the part that encodes the parameter mapping: the word there indicates that the argument value is to be put into register t2. Function 0 (sys:b+ from the syms table, a binary add) is then called with the t2 argument and d1 (which is 3). When it returns, its value is put into t3 and execution terminates with "end t3".
FrustratedMonky•6mo ago
Off topic, but anybody have a quick take why LISP isn't the primary language of all of these AI models, and why everybody defaulted to using Python.

I just remember 30 years ago everyone thought LISP would be the language of AI.

Was it just that some nice, easy Python libraries came out, and that was enough to win the mindshare market? More people can use Python glue?

Jtsummers•6mo ago
Lisp's market share was declining 30 years ago and only continued to decline. Python's has consistently risen in that same time. Also, Lisp offers little benefit, if any, when all the ANN implementations rely on C++ and CUDA code. You can write fast numerical code in Lisp, but it's not as straightforward and it's certainly not an idiomatic or common way to use Lisp. That could have let it compete with the C++ libraries, but wouldn't help with the GPU programming part. Lisp could have been the glue language like Python, but again, Python's popularity was on the rise and Lisp's on the decline.
cardanome•6mo ago
Lisp was more associated with the classical symbolic and logic-based AI approach, which doesn't share much in common with generative AI. It's questionable whether the latter should even be called "AI", but that battle was lost years ago.

Python is just a really good glue language and was good enough for the task.

spauldo•5mo ago
A lot of Lisp was funded by AI research. That research dried up in the 90s (see "AI Winter"). Regular computers of the time were underpowered for Lisp, which originated on mainframes and then moved onto purpose-built hardware. So the new generation of programmers learned Pascal, C, and C++ instead and were mostly disconnected from the AI research that had come before.

Modern AI focuses on numeric methods, whereas classic AI was heavily symbolic. That makes Lisp less well suited for it, since symbolic computation is where Lisp shines. Classic AI does have some useful bits to it, but insofar as AI development goes it's mostly considered a dead end.

rurban•5mo ago
Because worse is better. You cannot persuade dummies to use a better language; they'll always fall back to the currently worst language, whether VB back then or Python now.