Can convert between JSON<>EDN<>YAML<>Transit easily, plus includes a nifty little query language that is basically Clojure, so data transformations/extraction ends up really simple and concise.
I've always liked jq for simple things, but since I never sat down to actually work through the syntax, harder things tend to be too complicated to figure out quickly. I usually end up using Jet instead, since if you already know Clojure, you already know the query language Jet uses.
Please, don't do that!
1: If it speeds things up non-negligibly, there's almost always a way to get a similar speedup without setting safety to 0; e.g. if you check your types outside of your hot loops, the compiler is smart enough to omit type-checks inside the loop.
It's kind of like building in Debug mode in other languages. Internally and for testing, use (safety 3). If the code in question doesn't trigger any errors or warnings, then in most cases it's safe to switch to (safety 0) and get the tiny performance boost.
I wouldn't recommend (safety 0) globally, but it's probably fine locally in performance-critical code that's been well tested; that said, I agree it's probably not worth going to (safety 0) in most cases.
The best solution is a compiler whose (speed 3) optimization level is smart enough to optimize out the unnecessary safety checks from (safety 3). I think SBCL can do that in some cases (the safety checks get optimized for speed, at least).
This is trivially not true. Consider:
(defun foo (x)
  (declare (optimize (safety 0))
           (type (array fixnum (4)) x))
  [Lots of code that doesn't trigger any warnings])
Then, in a different source file, do e.g. (foo nil).

Nothing good will come of that.

> I wouldn't recommend (safety 0) globally, but it's probably fine locally in performance critical code that's been tested well, but I do agree it's probably not worth going to (safety 0) in most cases.
> The best solution is a compiler who's (speed 3) optimization level is smart enough to optimize out the unnecessary safety checks from (safety 3). I think SBCL can do that in some cases (the safety checks get optimized for speed, at least).
The only thing I can think of is that I communicated things poorly in my comment, because this is nearly exactly what I was saying in my comment.
I think we both agree that 99.9% of the time it's not worth using (safety 0), though.
A more apples-to-apples comparison would be to use `find … -exec jq … {} +` to pass multiple filenames to jq, and to tag the output using `input_filename`.
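A minimal sketch of that pattern (the directory and file names here are invented for illustration):

```shell
# Create two sample files, then let find hand both paths to a single jq
# invocation; input_filename tags each record with its source file.
mkdir -p /tmp/jqdemo
echo '{"a":1}' > /tmp/jqdemo/one.json
echo '{"a":2}' > /tmp/jqdemo/two.json
find /tmp/jqdemo -name '*.json' -exec jq -c '{file: input_filename, a: .a}' {} +
```

Because jq reads all the files in one process, this is both faster than one `jq` per file and keeps the provenance of each record.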
echo "$SOME_JSON" | jq '.[]' --raw-output --compact-output | while read -r LINE ; do ... ; done
...lets you process stuff "record by record" pretty consistently. (And `( xxx ; yyy ; zzz ) | jq --slurp '.'` lets you do the reverse, "absorbing" multiple records into an array.)

Don't forget `--argjson`:
echo "{}" | jq --argjson FOO "$( cat test.json )" '{ bar: $FOO }'
...lets you "load" JSON for merging, processing, formatting, etc. The leading "{}" is moderately necessary because `jq` technically _processes_ JSON rather than generating it.

Finally, it's a huge cheat code for string formatting!!
$ echo "{}" | jq \
--arg FOO "hello \$world" \
--arg BAR "complicated \| chars" \
--arg ONE 1 \
--arg TWO 2 \
'"aaa \( $FOO ) and \( $BAR ) and \( ($ONE | tonumber) + ($TWO | tonumber) ) bbb"'
"aaa hello $world and complicated \\| chars and 3 bbb"
...optionally with `--raw-output` (un-json-quoted), and it even supports some regex substitution in strings via `... | gsub(...)`.

Yes, yes... it's overly complicated compared to you and your fancy "programming languages", but sometimes with shell stuff, the ability to _CAPTURE_ arbitrary command output (eg: `--argjson LS_OUTPUT="$( ls -lart ... )"`) and then use JSON/jq to _safely_ marshal/deaden the data into JSON is really helpful!
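For instance, a small sketch of that `gsub` substitution (the sample data is made up):

```shell
# Replace every "o" in a JSON string field; gsub takes (regex; replacement).
echo '{"msg":"hello world"}' | jq -r '.msg | gsub("o"; "0")'
# hell0 w0rld
```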
The --null-input/-n option is the "out-of-the-box" way to achieve this, and avoids a pipe (usually not a big deal, but leaves stdin free and sometimes saves a fork).
This lets you rewrite your first "pattern":
jq -cnr --argjson SOME_JSON "$SOME_JSON" '$SOME_JSON[]' | while read ...
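A runnable sketch of that rewrite, with made-up sample data:

```shell
# -n: no stdin needed; the JSON arrives via --argjson instead of a pipe.
SOME_JSON='[{"id":1},{"id":2}]'
jq -cnr --argjson SOME_JSON "$SOME_JSON" '$SOME_JSON[]' | while read -r LINE ; do
  echo "got: $LINE"
done
# got: {"id":1}
# got: {"id":2}
```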
We also have a "useless use of cat": --slurpfile does that job better: jq -n --slurpfile FOO test.json '{bar: $FOO[]}'
(assuming you are assured that test.json contains one JSON value; --argjson will immediately fail if this is not the case, but with --slurpfile you may need to check that $FOO is a 1-item array.)

And of course, for exactly the single-file single-object case, you can just write:
jq '{bar: .}' test.json
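A tiny end-to-end sketch of that single-file case (the file path is invented):

```shell
# Write a one-object file, then wrap it -- no cat, no --argjson needed.
echo '{"x":1}' > /tmp/test.json
jq -c '{bar: .}' /tmp/test.json
# {"bar":{"x":1}}
```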
Pipelines allow consistent syntax, but thanks for pointing out all the different variations of file support in jq.
Had similar thoughts a couple years ago, and wrote jql[0] as a jq alternative with a lispy syntax (custom, not Common Lisp), and I’ve been using it for command-line JSON processing ever since!
$ echo "$json" | cljq '(? $ "root" * 1)'

...which I find more intuitive than the good ol' jq:

$ echo "$json" | jq '.root | map(.[1])'
Really, people should know by now that jq does point-free programming.

Personally, I probably would've written '[.root[][1]]' for that problem myself though—not a huge fan of map/1.
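Both spellings give the same result on a small made-up input:

```shell
# map/1 version vs. the point-free collect version.
echo '{"root": [["a",1],["b",2]]}' | jq -c '.root | map(.[1])'
echo '{"root": [["a",1],["b",2]]}' | jq -c '[.root[][1]]'
# both print: [1,2]
```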
1) I dislike that .[1] can be both an expression evaluated as a query and a "lambda". Really messes with my mind.
2) In my eyes, it's more intuitive because it looks like globbing and everybody knows globbing (this is the reason I use `**` too).
But yeah, this is a bit subjective. What isn't, though, is that I don't plan on adding much more than that; maybe merge, transform and an accessor using the same syntax. So if you know the host language, there's much less friction.
I really see this like Avisynth vs Vapoursynth.
* [lqn](https://github.com/inconvergent/lqn) - query language and terminal utility for querying and transforming Lisp, JSON and other text files.
(by this person doing nice generative art: https://inconvergent.net/)
[edit]
Removing "Unifont" from the font-family list fixes the problem, so I must have an issue with my unifont install?
Edit: Well, I just found out about `cat some.json | from json` in nushell. Pretty cool! The nested tables are nice.
open some.json
I like jq and gnuplot quite well. Makes me want to try CMake out ;)
This issue is almost negated today: I find myself no longer writing jq queries or regular expressions (both of which I'm quite proficient in), but having AI write them for me. This is exactly where so-called "vibe coding" shines, and why I no longer care about tool-specific DSLs.
But it does also seem like a place where LLMs are handy. Why learn jq or regex or AWK, if you use them infrequently, when you can just ask an LLM?
Edit: tutorial: https://earthly.dev/blog/jq-select/
As an example: any candidate for replacing jq needs to be either faster or easier. If it's only a faster implementation, why change the query language? If it's only a different query language but not faster, then why not transpile the new query language into one that works with the old engine? Doing both at the same time without sacrificing completeness/expressiveness in the query language may warrant fragmentation of effort/interest, but that's a very high bar, I would think.
For those that like that style, on Linux both Xfce and KDE have themes that replicate it for their window decorations (recommending the desktop environment would be a bit too much)