*even if I would prefer more transformation/conversion features that would make it more of a parser rather than only a validator
I agree that it is easy to implement a recent version in your own code; what I meant is that a lot of the tools/software you might want to use JSON Schema with (e.g. MongoDB validation) only support old versions
I'm in the process of writing a toolchain of sorts, with the OpenAPI document as an abstract syntax tree that goes through various passes (parsing, validation, aggregation, analysis, transformation, serialization...). My immediate use-case is generating C++ type/class headers from component schemas, with the intent to eventually auto-generate as much code as I can from a single source of truth specification (like binding these generated C++ data classes with serializers/deserializers, generating a command-line interface...).
JSON schema is so flexible that I have several passes to normalize/canonicalize the component schemas of an OpenAPI document into something that I can then project into the C++ language. It works, but this was significantly trickier to accomplish than I anticipated.
[1]: https://swagger.io/docs/specification/v3_0/data-models/keywo...
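A minimal sketch of what such a canonicalization pass plus C++ projection might look like (in TypeScript; the names normalize/emitStruct and the type mapping are my own illustration, not the parent's actual toolchain):

    type Schema = { [key: string]: any };

    // Pass 1: collapse a few of the equivalent spellings JSON Schema allows.
    function normalize(schema: Schema): Schema {
      const s: Schema = { ...schema };

      // OpenAPI 3.0-style `nullable: true` -> 2020-12-style `type: [T, "null"]`
      if (s.nullable === true && typeof s.type === "string") {
        s.type = [s.type, "null"];
        delete s.nullable;
      }

      // A single-element allOf adds nothing structurally; merge it into the parent.
      if (Array.isArray(s.allOf) && s.allOf.length === 1) {
        const merged: Schema = { ...s.allOf[0], ...s };
        delete merged.allOf;
        return normalize(merged);
      }

      if (s.properties) {
        s.properties = Object.fromEntries(
          Object.entries(s.properties).map(([k, v]) => [k, normalize(v as Schema)])
        );
      }
      return s;
    }

    // Pass 2: project a normalized object schema onto a C++ struct (scalars only).
    const cppType: Record<string, string> = {
      string: "std::string",
      integer: "std::int64_t",
      number: "double",
      boolean: "bool",
    };

    function emitStruct(name: string, schema: Schema): string {
      const fields = Object.entries(schema.properties ?? {}).map(([prop, sub]) => {
        const p = sub as Schema;
        const raw = Array.isArray(p.type) ? p.type.find((t: string) => t !== "null") : p.type;
        const base = cppType[String(raw)] ?? "nlohmann::json"; // crude fallback for anything unmapped
        const nullable = Array.isArray(p.type) && p.type.includes("null");
        return `  ${nullable ? `std::optional<${base}>` : base} ${prop};`;
      });
      return `struct ${name} {\n${fields.join("\n")}\n};`;
    }

    // emitStruct("Pet", normalize({ type: "object", properties: {
    //   name: { type: "string" }, age: { type: "integer", nullable: true } } }))
    // => struct Pet { std::string name; std::optional<std::int64_t> age; };

Real schemas of course also need arrays, nested objects, $refs, enums and so on, which is where most of the trickiness mentioned above comes from.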
Many users are stuck at 3.0 or even Swagger 2.0 because the libraries they use refuse to support recent versions. Also OpenAPI is still not a strict superset because things like `discriminator` are still missing in JSON schema.
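For anyone who hasn't met it: `discriminator` is the OpenAPI-side keyword layered on top of JSON Schema's `oneOf`. A minimal illustrative fragment, written here as a TypeScript object literal with made-up Cat/Dog refs:

    // `oneOf` is plain JSON Schema, but `discriminator` (propertyName + mapping)
    // is defined by OpenAPI, not by JSON Schema itself.
    const petSchema = {
      oneOf: [
        { $ref: "#/components/schemas/Cat" },
        { $ref: "#/components/schemas/Dog" },
      ],
      discriminator: {
        propertyName: "petType",
        mapping: {
          cat: "#/components/schemas/Cat",
          dog: "#/components/schemas/Dog",
        },
      },
    };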
If you're building a brand new, multi-language, multi-platform system that uses advanced OpenAPI features, you will get bitten by the lack of support in 3.1 versions of tooling for features that already exist and work fine right now in 3.0 tool versions. Especially if you're using a schema-first workflow (which you should be). For example, $refs to files across Windows/Linux/macOS, across multiple different language tools: Java, .NET, TypeScript, etc.
If you need (or just want) maximum compatibility across tools, platforms, and languages, OpenAPI 3.1 is still not viable, and it isn't looking like it will be anytime soon.
Now it feels like writing a validator is extremely complicated.
IMO, the built-in vocabularies were enough, and keeping it simple would provide more value.
JSON as a format didn't win because it supported binary number encoding or could be extended with custom data types -- but rather because it couldn't.
I also often make heavy use of schema-based editors to auto-generate UI.
Perhaps I'd feel differently if I ever had to write a validator myself, but they seem to exist in all the popular languages as is.
For example, the following issues pass under the metaschema.
{"foo": {"bar": { ... }}} # wrong
{"foo": {"type": "object", "properties": {"bar": { ... }}}} # correct
{"additional_properties": false} # wrong
{"additionalProperties": false} # correct"Unfortunately, [the terms] leaked into the documentation that everyone reads" - We did this on purpose to align everyone's terms. It makes things so much easier when the people asking and answering questions are using the same language.
"The official JSON Schema website has a validator you can try: https://www.jsonschemavalidator.net/" - Would have been better to point to the actual official JSON Schema website's tools page (https://json-schema.org/tools) that lists many online validators.
There are some interesting misconceptions about OpenAPI in here as well. Specifically, OpenAPI isn't a JSON Schema document. It's its own kind of document that has JSON Schemas embedded in it.
Still, it's a decent high-level summary. If you're interested in diving a bit deeper, definitely come visit us in Slack (https://json-schema.org/slack).
I'm not really sure why you'd say that OpenAPI isn't a JSON Schema document: there are published JSON Schema files on the official OpenAPI website. See for example:
One using draft-04 of JSON Schema: https://spec.openapis.org/oas/3.0/schema/2024-10-18.html
One using the 2020-12 version of JSON Schema: https://spec.openapis.org/oas/3.2/schema/2025-09-17.html
OpenAPI descriptions are not themselves JSON Schema. They _use_ JSON Schema.
There _are_ JSON Schemas that describe OpenAPI documents as well, but that's just because OpenAPI can be described in JSON.
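To make the distinction concrete, here is a minimal illustration of my own (not from the article): in an OpenAPI description, only the value of a `schema` field is JSON Schema; the surrounding keys (`paths`, `responses`, `content`, ...) are OpenAPI's own vocabulary.

    // Minimal OpenAPI fragment as a TypeScript object literal. Everything outside
    // the inner `schema` value is OpenAPI structure, not JSON Schema.
    const openapiDoc = {
      openapi: "3.1.0",
      info: { title: "Example", version: "1.0.0" },
      paths: {
        "/things": {
          get: {
            responses: {
              "200": {
                description: "A list of things",
                content: {
                  "application/json": {
                    // Only this object is a JSON Schema:
                    schema: { type: "array", items: { type: "string" } },
                  },
                },
              },
            },
          },
        },
      },
    };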
JSON Schema could get more traction if its homepage was oriented more towards users instead of implementors.
However, we already had an XMLSpy license, so we decided to stick with designing XSDs in XMLSpy and then translate those to JSON Schema.
If you make some small decisions, like a value with attributes becoming an object, you can get a fairly decent subset of XSD to map 1:1 onto JSON Schema 2020-12.
As a nice side effect of writing the XSD to JSON Schema converter, it's trivial for us to support reading XML and converting it to JSON. Great for the customers who have programs that don't speak JSON.
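For the curious, one possible convention for the "value with attributes becomes an object" decision could look like this (purely illustrative; the "@" prefix and the names are my own, not necessarily the mapping described above):

    // <price currency="USD">9.99</price> could map to
    //   { "price": { "@currency": "USD", "value": 9.99 } }
    // described by a schema along these lines (attribute names prefixed with "@"
    // so they can't clash with child element names):
    const priceSchema = {
      type: "object",
      properties: {
        "@currency": { type: "string" }, // the XML attribute
        value: { type: "number" },       // the element's text content
      },
      required: ["value"],
      additionalProperties: false,
    };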
It's not a problem for a dozen properties, but we have several hundred in our larger schemas, even accounting for them being fairly normalized w.r.t. types. And five or more levels of nesting turn into an effective ten-plus levels in the schema.
Yeah, though while it does make each subschema somewhat more readable and contained, you still don't get a good overview. If you're reading the spec for a given object, you don't easily see where it's being used in the schema.
For now I've just supplied the JSON Schema as a self-contained thing, and referred other parties to the XSD to get an overview. Being self-contained makes it trivial to load into a validator and such.
So while it helps for knowing what to fill into that exact object, it doesn't help for getting a feel for the overall schema. This is where the visual view of tools such as XMLSpy really helps.
> lack of a good standard for schema repositories
Interesting, do you have something public to show? For our large ones I feel they'd be entirely custom anyway, but perhaps I can see standard sub-schemas being useful for other tasks. Would be interesting to have a look.
True, when focusing only on the schemas as code. But good tooling could provide links and similar.
> do you have something public to show
Just a very early PoC [0]. I'm slowly working my way through a very long to-do list of improvements, but I'm lacking time and resources to do it more efficiently.
adamzwasserman•2mo ago
I've been exploring how this generalizes beyond side effects. Every React state library creates a JavaScript copy of state that must sync with the DOM. This is the original sin. Two truths = lies.
The solution isn't better syncing, it's refusing to duplicate. The DOM is already a perfectly good state container. All you have to do is read it.
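A trivial sketch of the "just read it" idea in plain DOM terms (my own illustration, not DATAOS or the stateless library): the inputs hold their own values, and nothing in JavaScript mirrors them until the moment they're needed.

    // Plain TypeScript + DOM: no state object shadows the inputs; the DOM is read
    // only at the moment the values are needed.
    function handleSubmit(event: Event): void {
      event.preventDefault();
      const form = event.currentTarget as HTMLFormElement;
      const values = Object.fromEntries(new FormData(form).entries()); // read straight from the DOM
      console.log(values);
    }

    document.querySelector("form")?.addEventListener("submit", handleSubmit);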
Releasing a paper (DATAOS) and React implementation (stateless, <1KB) soon. It's the architecture behind multicardz (hyper-performing kanban on steroids, rows AND columns, 1M+ cards, sub-second searches, perfect Lighthouse scores, zero state-sync bugs). Because there's no state to sync.
I am doing final testing, packaging, integrating strip, etc. My target was before end of month. If I work hard I just might make it.
I'm releasing a bunch of things at once:
1. multicardz.com
2. a paper on using the DOM as the single source of user state (dataos.software)
3. an OSS repo of a hyper-performant implementation of that for React (stateless.software)
4. an OSS repo that might best be described as Tailwind for frontend behavior (genX.software)
they are all somewhat interconnected