frontpage.

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•3m ago•0 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•4m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•9m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•12m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•14m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•16m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•17m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•19m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•31m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•36m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
1•cwwc•41m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•49m ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•56m ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•59m ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
2•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•1h ago•0 comments

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•1h ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•1h ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•1h ago•0 comments

Parsing Advances

https://matklad.github.io/2025/12/28/parsing-advances.html
108•birdculture•1mo ago

Comments

kccqzy•1mo ago
How about another way, which is memoization: at each position in the source code we never attempt to parse the same production more than once. This solves infinite looping as discussed by the author because the “loop” will be downgraded by the memoization to execute once. Of course I wouldn't literally use a while loop in code to represent the production. I would use a higher-level abstraction to indicate one-or-more or zero-or-more in the production; indeed I would represent productions as data not code.

This also has another benefit of work sharing. A production like `A B | C B` will ensure that in case parsing A or C consumes the same number of characters, the work to parse B will be shared, despite not literally factoring the production into `(A | C) B`.
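
A minimal packrat-style sketch of that idea in TypeScript (the `Parser` shape and names are hypothetical, not the article's code): each production is memoized by start position, and the table is primed with a failure so a re-entrant call at the same position returns immediately instead of looping.

    interface Parser { pos: number /* plus tokens, helpers, ... */ }
    type Result<T> = { value: T; end: number } | undefined;

    // Wrap a production so it is attempted at most once per start position.
    function memoize<T>(rule: (p: Parser) => Result<T>): (p: Parser) => Result<T> {
      const table = new Map<number, Result<T>>(); // keyed by start position
      return (p: Parser) => {
        const start = p.pos;
        if (table.has(start)) {
          const hit = table.get(start);
          if (hit) p.pos = hit.end; // replay a cached success
          return hit;               // a cached failure returns immediately
        }
        // Prime with failure: a re-entrant call at this position (the "loop")
        // hits the entry above and terminates after a single attempt.
        table.set(start, undefined);
        const result = rule(p);
        table.set(start, result);
        return result;
      };
    }

This is also where the work sharing comes from: in `A B | C B`, if `A` and `C` end at the same position, the memoized `B` only runs once.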

smj-edison•1mo ago
That's a slick way of doing it. Would you essentially have a second counter that you'd set to the current cursor whenever you use `.currentToken()`, or something like that?
luizfelberti•1mo ago
I also find this to be an elegant way of doing it, and it is also how Thompson-VM-style regex engines work [0].

It's a bit harder to adapt the technique to parsers because the Thompson NFA always increments the sequence pointer by the same amount, while a parser's production usually has a variable size, making it harder to run several parsing heads in lockstep.

[0] https://swtch.com/~rsc/regexp/regexp2.html

Porygon•1mo ago
Memoization to handle left recursion is nicely described in Guido van Rossum's article here: https://medium.com/@gvanrossum_83706/left-recursive-peg-gram...

I recently tried that approach while simultaneously building an abstract syntax tree, but I dropped it in favor of a right-recursive grammar for now, since restoring the AST when backtracking got a bit complex.
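
For reference, the core of the seed-growing trick from that article looks roughly like this in TypeScript (the `mark`/`reset` parser shape and the per-rule cache are assumptions, not the article's code):

    interface P { mark(): number; reset(pos: number): void }
    type Memo = Map<number, { node: unknown; end: number }>; // one cache per rule

    function growLeftRec(p: P, rule: () => unknown, cache: Memo): unknown {
      const start = p.mark();
      const hit = cache.get(start);
      if (hit) {            // reuse an earlier answer (possibly the failure seed)
        p.reset(hit.end);
        return hit.node;
      }
      // Seed the cache with a failure so the left-recursive call bottoms out.
      let best = { node: undefined as unknown, end: start };
      cache.set(start, best);
      for (;;) {
        p.reset(start);
        const node = rule();
        const end = p.mark();
        if (node === undefined || end <= best.end) break; // the match stopped growing
        best = { node, end };
        cache.set(start, best); // grow the seed and try for a longer match
      }
      p.reset(best.end);
      return best.node;
    }

The repeated re-parse inside that loop is exactly where restoring a partially built AST gets awkward.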

kccqzy•1mo ago
You can look at the Earley parser. It handles left recursion well, using a method that's basically memoization.
smj-edison•1mo ago
Huh, that's a really interesting approach. I just wrote my first Pratt parser a month ago, and one of the most annoying things was debugging infinite loops in various places (I had both tokenizer bugs where no characters were consumed and parser bugs where a token was emitted but not advanced). It's doubly annoying in Zig, because the default test runner won't print out stdout at all, and won't print stderr unless the program terminates by itself (Ctrl + C doesn't print). I resorted to building the test and running it manually, or jumping into a debugger to figure out recursion issues. It's working now, but if (really when) I run into issues in the future I'll definitely add some helper functions to check emitting invariants.
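
One cheap invariant of that kind, sketched in TypeScript rather than Zig (the `pos`/`eof` cursor shape is hypothetical), is to assert that every iteration of a parsing loop consumed at least one token:

    interface Cursor { pos: number; eof(): boolean }

    // Run the item parser in a loop and fail loudly if it ever returns
    // without consuming a token, instead of spinning forever.
    function parseMany(p: Cursor, parseItem: (p: Cursor) => void): void {
      while (!p.eof()) {
        const before = p.pos;
        parseItem(p);
        if (p.pos === before) {
          throw new Error(`parser made no progress at token index ${before}`);
        }
      }
    }
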
someone_jain_•1mo ago
It's also very annoying that one can't have two test names where one is a substring of the other.
eru•1mo ago
Writing parsers by hand this way can be fun (and might be required for the highest performance ones, maybe?), but for robustness and ease of development you are generally better off using a parser combinator library.
tubs•1mo ago
Are you?

The majority of production compilers use hand-rolled parsers, ostensibly for better error reporting and panic-mode synchronization.

cipherself•1mo ago
One anecdote in the same vein: a couple of months ago I wanted to parse systemd-networkd INI files in Python, and the built-in ConfigParser [0] and pytest's iniconfig parser [1] couldn't handle multiple sections with the same name. I ended up writing two parsers, one using a parser-combinator library and one by hand, and went with the latter, since it was much simpler to understand and I didn't have to introduce an extra dependency.

Admittedly, INI is quite a simple format, hence I mention this as an anecdote.

[0] https://docs.python.org/3/library/configparser.html

[1] https://github.com/pytest-dev/iniconfig
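
The hand-rolled route really is small for this format. A sketch in TypeScript (not the Python code from the anecdote) that keeps duplicate section names by returning a list of sections instead of a map:

    type Section = { name: string; entries: [string, string][] };

    function parseIni(text: string): Section[] {
      const sections: Section[] = [{ name: "", entries: [] }]; // nameless preamble section
      for (const rawLine of text.split("\n")) {
        const line = rawLine.trim();
        if (line === "" || line.startsWith("#") || line.startsWith(";")) continue;
        const header = line.match(/^\[(.+)\]$/);
        if (header) {
          // A repeated [Network] section simply becomes another list entry.
          sections.push({ name: header[1], entries: [] });
          continue;
        }
        const eq = line.indexOf("=");
        if (eq === -1) throw new Error(`not a key=value line: ${line}`);
        sections[sections.length - 1].entries.push([line.slice(0, eq).trim(), line.slice(eq + 1).trim()]);
      }
      return sections;
    }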

thechao•1mo ago
As a project gets larger, the impedance mismatch between 3rd-party software and software customized to your project begins to outweigh the cost of owning that code directly.

I've got 10 full-time senior engineers on a project heading into its 15th year. We rewrite even extremely low-level code like std::vector or malloc to make sure it matches our requirements.

UNIX was written by a couple of dudes.

kccqzy•1mo ago
That’s because Python is a bad language for writing parser combinators and parsers based on them. Try Haskell.
cipherself•1mo ago
I have written parsers using parser combinators in Haskell and Clojure. I find that ML-like languages (Haskell, OCaml, Standard ML) are generally great for writing parsers; even hand-written ones are a superior experience in them.

In this case, this was a project at $EMPLOYER in an existing codebase with colleagues who have never seen Haskell code, using Haskell would've been a major error in judgement.

eru•1mo ago
I agree!

Haskell is a great language. It can even be a great language for beginners, especially if there's some senior help on hand.

But it's a terrible language to foist upon an unsuspecting and even unwilling victim.

tgv•1mo ago
So ... someone calls their parsing strategy "resilient LL parsing" without actually implementing LL parsing, a technique known since the 1970s, and then has an infinite recursion bug? Probably skipped Parsing 101.
sureglymop•1mo ago
In Rust I really like the grmtools set of tools: https://github.com/softdevteam/grmtools.

It is lex/yacc-style lexer and parser generation: it generates an LR(1) parser, but uses the CPCT+ algorithm for error recovery. IIRC the way it works is that when an error occurs, the nearest likely valid token is inserted, the error is recorded, and parsing continues.

I would use this for anything that is simple enough, and recursive descent for anything more complicated or where even more context is needed for errors. The same recovery style also ports to hand-written parsers, as sketched below.
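
A sketch of that recovery idea in TypeScript (hypothetical parser shape; grmtools itself is Rust and does this inside the generated LR machinery): when the expected token is missing, record an error and behave as if the token had been inserted, then keep parsing.

    interface RecoveringParser {
      at(kind: string): boolean;
      advance(): void;
      pos: number;
      errors: string[];
    }

    function expect(p: RecoveringParser, kind: string): void {
      if (p.at(kind)) {
        p.advance();
        return;
      }
      p.errors.push(`expected '${kind}' at token ${p.pos}`);
      // Recover by pretending the token was inserted: consume nothing and
      // let the caller continue with the rest of the production.
    }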

ratmice•1mo ago
I always feel that saying "lex/yacc-style tools" carries a lot of preconceived notions, namely that using the tools involves a slow development cycle with code generation and compilation steps.

What drew me to grmtools (and eventually to contributing to it) was that you can evaluate grammars basically like an interpreter, without going through that compilation process, which leads to fairly quick turnaround times during language development.

I hope this year I can work on porting my grmtools-based LSP to the browser/WASM.

sureglymop•1mo ago
I've seen your commits, thank you sincerely for your work!
dcrazy•1mo ago
I’m curious why the author chose to model this as an assertion stack. The developer must still remember to consume the assertion within the loop. Could the original example not be rewritten more simply as:

    const result: ast.Expression[] = [];
    p.expect("(");
    while (!p.eof() && !p.at(")")) {
      const subexpr = expression(p);
      assert(p !== undefined); // << here
      result.push(subexpr);
      if (!p.at(")")) p.expect(",");
    }
    p.expect(")");
    return result;
matklad•1mo ago
I assume you meant to write `assert(subexpr !== undefined)`?

This is resilient parsing --- we are parsing source code with syntax errors, but still want to produce a best-effort syntax tree. Although an expression is required by the grammar at that point, the `expression` function might still return nothing if the user typed garbage there instead of a valid expression.

However, even if we return nothing due to garbage, there are two possible behaviors:

* We can consume no tokens, making a guess that what looks like "garbage" from the perspective of the expression parser is actually the start of the next, larger syntax construct:

    function f() {
        let x = foo(1,
        let not_garbage = 92;
    }

In this example, it would be smart to _not_ consume `let` when parsing `foo(`'s arglist.

* Alternatively, we can consume some tokens, guessing that the user _meant_ to write an expression there:

    function f() {
        let x = foo(1, /);
    }

In the above example, it would be smart to skip over `/`.
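
A sketch of how those two behaviors might look at the call site (hypothetical helper names, not the article's actual API):

    interface RParser {
      at(kind: string): boolean;
      atExpressionStart(): boolean;
      advanceWithError(msg: string): void;
    }

    function argument(p: RParser, expression: (p: RParser) => unknown): unknown {
      if (p.atExpressionStart()) return expression(p);
      if (p.at("let")) {
        // Behavior 1: `let` looks like the start of the enclosing construct,
        // so consume nothing and let the caller close the arglist.
        return undefined;
      }
      // Behavior 2: plain garbage like `/`; skip it so the loop still makes progress.
      p.advanceWithError("expected expression");
      return undefined;
    }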