Really, I do not see the point of this. These configuration languages are just different syntaxes for expressing the same fundamental data, with the same semantics. It would be much more interesting to see a language that experiments with what is fundamentally representable, for example how the Nix language supports functional programming and treats functions as a first-class data type.
This is my complaint too. However, they do add a proper integer type, which, as far as I can tell, is the only change they make to the data model.
> It would be much more interesting to see a language which experiments with what is fundamentally representable
DER (and TER, a text format I made up that compiles into DER; TER is not really intended to be used directly in application programs, but it does have comments, hexadecimal numeric literals, and other syntax features) supports many more data types, such as arbitrarily long integers, ASCII, ISO 2022, etc. My own extension to the format adds some additional types, such as a key/value list type and a TRON string type; the key/value list type is the only nonstandard ASN.1 type needed (together with a few of the standard ASN.1 types: sequence, real, UTF-8 string, null, boolean) to represent the same data as JSON does.
> for example like how the Nix language supports functional programming and has functions as a first-class data type.
For some applications this is useful and good but in others it is undesirable, I think.
There are currently two items in the FAQ. While the first one seems to be formatted with AI (I don't know if the arguments are AI-generated, though; how do you tell?), the other certainly doesn't look AI-generated: https://github.com/maml-dev/maml/issues/3#issuecomment-33559...
You might check out my project, Confetti [1]. I conceived of it as Unix configuration files with the flexibility of S-expressions. I think the examples page on the website shows interesting use cases. It doesn't have a formal execution model, however; for that you might check out Tcl or Lua.
Or is there any reason to choose this over the others?
This is basically JSON for humans. YAML is harder to use due to significant indentation (easy to mess up in editors, and hard to identify the context), and TOML isn't great for hierarchical data.
It addresses all my complaints about JSON:
> Comments
> Multiline strings
> Optional commas
> Optional key quotes
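For illustration, a config exercising those four features might look roughly like this (a sketch based only on the feature names above; the exact comment marker, multiline-string delimiter, and other details are assumptions to check against the MAML spec):

```
# comments are allowed
server: {
  host: "example.com"
  port: 8080            # no comma needed at the end of a line
  motd: """
    Welcome to the server.
    This message spans multiple lines.
  """
}
```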
I wish it were a superset of JSON (so that any valid JSON document would also be valid MAML), but that doesn't seem to be the case.
EDIT: As I understand, HCL is very similar in terms of goals, and has been around for a while. It looks great too. https://github.com/hashicorp/hcl/
My experience is different: TOML stops being obvious once there's an array that sits far from the leaf data. Maybe that's what you experienced with the hierarchical data?
In my usage of it (where we use base and override config layers), arrays are the enemy. Overrides can only delete the array, not merge data in. TOML merely makes this code smell more smelly, so it's perfect for us.
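To illustrate the array problem with layered configs, here's a minimal sketch (plain Python, not any particular tool's merge logic): a typical recursive merge descends into nested dicts but treats lists as atomic values, so an override layer can only replace an array wholesale, never add to it.

```python
def merge(base, override):
    """Recursively merge override into base: dicts merge, everything else (including lists) is replaced."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)
        else:
            result[key] = value  # lists are swapped wholesale, not appended to
    return result

base = {"service": {"hosts": ["a.example", "b.example"], "retries": 3}}
override = {"service": {"hosts": ["c.example"]}}  # wipes out a/b instead of adding c
print(merge(base, override))
# {'service': {'hosts': ['c.example'], 'retries': 3}}
```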
What valid JSON would be invalid MAML?
Why are they optional? Why not just make them mandatory, so I don't need to guess which characters need quotes?
Edit: What most languages also lack: semantics on de-serialization. In the best case, I want to preserve formatting and stuff when the config is changed/re-committed programmatically.
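For what it's worth, Python's tomlkit is one example of a parser with round-trip semantics: comments and layout survive a load/modify/dump cycle. A minimal sketch (the config content here is made up):

```python
import tomlkit

source = """\
# primary server
[server]
host = "example.com"  # public hostname
port = 8080
"""

doc = tomlkit.parse(source)
doc["server"]["port"] = 9090       # programmatic change
print(tomlkit.dumps(doc), end="")  # comments and layout survive the round trip
# # primary server
# [server]
# host = "example.com"  # public hostname
# port = 9090
```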
There are dozens of us!
TOML fixes some of YAML's issues for shallow data, but sucks at deeply nested data.
MAML looks nice at a cursory glance. It seems to do nesting, numerics, comments, and strings right.
We're really close to having a great format. I'd like to see more attempts before accepting what we have as permanent.
I've been using it for niri recently, and it's quite nice.
It seems like we will be forced to use both forever, though.
YAML is also often abused as a DSL and for very large documents (Ansible, k8s, GH Actions, etc.), which makes it a pain to work with.
It's not so much that liking all of this is controversial. It's just a bad opinion. :p
When I was a teen I made something called Nabla:
* XML-like syntax
* Schema language
* Compact binary representation
* Trivial parser for binary representation
* Optionally, simple dynamic programming language on top
Initially I made it for my 3D engine's scene serialization format, but then used it everywhere some non-trivial data format was needed (e.g. anything with nested data structures).

(To be fair, I’m in favor of that.)
Edit: Oh, no commas.
I’ll just stick to environment vars or something in code.
This time around, I worked with Claude Code and we basically filled in each other's knowledge gaps to finish implementing every feature I was looking for in about 3 days of work:
Day 1:
- Plugin initialization
- Syntax highlighting
- JSON Schema integration
- Error inspections
Day 2:
- Code formatter (the code style settings page probably took longer to get right than the formatter)
- Test suite for existing features
Day 3:
- Intentions, QuickFix actions, etc. to help quickly reformat or fix issues detected in the file
- More graceful parsing error recovery and reporting
- Contextual completions (e.g., relevant keys/values from a JSON schema, existing keys from elsewhere in the file, etc.)
- Color picker gutter icon for string values that represent colors (in various formats)
I'm sure there are a few other features that I'm forgetting, but at the end of the day, roughly 80-85% of the code was generated from the command line by conversing with Claude Code (Sonnet 4.5) to plan, implement, test, and revise individual features.
For IntelliJ plugins, the SDK docs tend to cover the bare minimum to get common functionality working, and beyond that, the way to learn is by reading the source of existing OSS plugins. Claude was shockingly good at finding extension points for features I'd never implemented before and figuring out how to wire them up (though not always 100% successfully). It turns out that Claude can be quite an accelerator for building plugins for the JetBrains ecosystem.
Bottom line: if you're sitting on an idea for a plugin because you thought it might take too long to bootstrap and figure out all the IDE integration parts, there's never been a better time to just go for it.
As you’ve said, all I did was fork a JSON.stringify function and swap colon for equals.
Anyone have a better solution they’ve worked with?
Edit: Why the downvotes? Terraform is using HCL? Are we talking a different HCL here?
Terraform doesn't have a built-in deep merge function, but it will merge `.tfvars.json` files in the order given on the CLI, if you specify multiple `-var-file` arguments. For what it's worth, as of Terraform 1.8, you can also use functions from third-party providers like isometry/deepmerge [2] to perform a deep merge.
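For instance, supplying the layers on the command line might look like this (file names invented for illustration), with values from later files taking precedence:

```sh
terraform plan -var-file=base.tfvars.json -var-file=prod.tfvars.json
```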
[1] https://developer.hashicorp.com/terraform/language/parameter... [2] https://registry.terraform.io/providers/isometry/deepmerge/l...