> The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool: it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.
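For concreteness, this is the kind of thing that stays in jq's court, computing values the input doesn't contain (illustrative input and field names):

    $ echo '{"price": 10, "qty": 3}' | jq '{total: (.price * .qty), label: "qty=\(.qty)"}'
    {
      "total": 30,
      "label": "qty=3"
    }

A search tool would only locate .price and .qty; it wouldn't derive total from them.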
Mind my asking what sorts of multi-TB JSON files you work with? That seems enormous.
I'm sure there are reasons against switching to something more efficient (we've all been there); I'm just surprised.
I'm not OP, but structured JSON logs can easily result in humongous ndjson files, even with a modest fleet of servers over a not-very-long period of time (e.g., 50 servers emitting 200 events/s at ~500 bytes each is ~5 MB/s, or roughly 430 GB a day).
I'd probably just shove it all into Postgres, but even a multi-terabyte SQLite database seems more reasonable.
The comment I was replying to implied this was something more regular.
EDIT: why is this being downvoted? I didn't think I was rude. The person I responded to made a good point, I was just clarifying that it wasn't quite the situation I was asking about.
Even if it's a one-off, some people handle a lot of one-offs; that's exactly where you need good CLI tooling.
Sure, jq isn't exactly slow, but I have avoided it in pipelines where I just needed higher throughput.
rg was insanely useful on a project I once joined that had about 5 GB of source files, many of them auto-generated, and you needed to find things in there. People were using Notepad++ and waiting minutes for a search to turn up something in the haystack; rg returned results in seconds.
You could probably do something similar for a faster jq.
But every now and then a well-optimised tool/page comes along with instant feedback and is a real pleasure to use.
I think some people are more affected by that than others.
Obligatory https://m.xkcd.com/1205
Usually, a perceptive user/technical mind can tweak their usage of a tool around its limitations, but if you can find a tool that doesn't have those limitations, it feels far superior.
The only place where ripgrep hasn't seeped into my workflow, for example, is after the pipe, and that's just out of (bad?) habit. So much so that sometimes I'll foolishly run rg "<term>" | grep <second filter>, then do a metaphorical facepalm. Let's see if jg can make me go jg <term> | jq <transformation> :)
Not implementing the same features but still trumpeting speed = marketing bullshit
Maybe I can use this as a pre-filter for jq when necessary.
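For instance, a sketch along these lines (field names and file are made up): because ndjson is line-oriented, a cheap text match throws away most lines before jq ever has to parse anything.

    rg '"level":"error"' app.ndjson | jq -r '.ts + " " + .msg'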
If the arm64 version were on Homebrew (I didn't check whether it is, but I assume not, since it's not mentioned on the page), I'd install it from there rather than via cargo.
I don't really manually install binaries from GitHub, but it's nice that the author provides binaries for several platforms for people who do like to install them that way.
To address the concern anyway: I'm sure it will soon be available in brew as an arm binary.
> Jq is a powerful tool, but its imperative filter syntax can be verbose for common path-matching tasks. jsongrep is declarative: you describe the shape of the paths you want, and the engine finds them.
IMO, this isn't a common use case. The comparison here is essentially like Java vs Python. Jq is perfectly fine for quick peeking. If you actually need better performance, there are always faster ways to parse JSON than using a CLI.
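To make the quoted claim concrete: even in jq, finding every value under a given key anywhere in a document is a short (if imperative-feeling) recursive-descent filter, something like this (hypothetical field name):

    jq '.. | .email? // empty' data.json

The pitch, as I read it, is that a declarative path pattern describes that shape directly instead.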
However, as someone who has always loved faster software and is an optimisation nerd, hats off!
If you don't mind me asking, which yq? There's a Go variant and a Python pass-through variant, the latter also including xq and tomlq.
Second, some comments on the presentation: the horizontal violin graphs are nice, but all tools have the same colours, so it's hard to even spot where jsongrep is. I'd recommend grouping by tool and colour-coding it. Besides, jq itself isn't in the graphs at all (even though the title of the post made me expect it would be!).
Last, xLarge is a 190 MiB file. That surprised me; it seems too small for "xLarge". I check 400 MiB JSON documents daily, and sometimes GiB-sized ones.
Sure, there are the 0.000001% edge cases where that MIGHT be the next big bottleneck.
I see the same thing repeated in various front-end tooling too. They all claim to be _much_ faster than their counterparts.
9/10 times, whatever tooling you're using now will be perfectly fine. Example: I use grep a lot in an ad hoc manner; on really large files I switch to rg. But that's only in a handful of cases.
Opencode, ClaudeCode, etc., feel slow. Whatever makes them faster is a win :)
Also, there are lots of charts without a comparison, so the numbers mean nothing...