Hopefully version 1.2, which addresses a lot of the shortcomings, will officially become a standard this year.
In the meantime you can take a look at some of the specification docs here https://w3c.github.io/rdf-concepts/spec/
It's overburdened by terminology, an exponential explosion of nested definitions, and abstraction to the point of unintelligibility.
It is clear that the authors have implementation(s) of the spec in mind while writing, but very carefully dance around it and refuse to be nailed down with pedestrian specifics.
I'm reminded of the Wikipedia mathematics articles that define everything in terms of other definitions, and if you navigate to those definitions you eventually end up going in circles back to the article you started out at, no wiser.
I'm completely out of time and energy for any side project at the moment, but if someone wants to steal my idea: please take an LLM and fine-tune it so that it can take any question and turn it into a SPARQL query for Wikidata. Also, make a web crawler that reads a page and turns any new facts it presents into a set of RDF triples or QuickStatements. This would effectively be the "ultimate information organizer" and could potentially turn Wikidata into most people's entry page to the internet.
Here you are :)
    SELECT ?country (COUNT(*) AS ?stationCount) WHERE {
      ?station wdt:P31 wd:Q928830.   # instance of: metro station
      ?station wdt:P17 ?country.     # country
    }
    GROUP BY ?country
    ORDER BY DESC(?stationCount)
    LIMIT 1
https://query.wikidata.org/#SELECT%20%3Fcountry%20%28COUNT%2...

which is not unreasonable as a quick first attempt, but it doesn't account for the fact that many things on Wikidata aren't tagged directly with a country (P17); instead you first need to walk up a chain of "located in the administrative territorial entity" (P131) statements to find it, i.e. I would write
    SELECT ?country (COUNT(DISTINCT ?station) AS ?stationCount) WHERE {
      ?station wdt:P31 wd:Q928830.           # instance of: metro station
      ?station wdt:P131*/wdt:P17 ?country.   # walk up the P131 chain, then take the country
    }
    GROUP BY ?country
    ORDER BY DESC(?stationCount)
    LIMIT 1
https://query.wikidata.org/#SELECT%20%3Fcountry%20%28COUNT%2...

In this case it doesn't change the answer (it only finds 3 more subway stations in China), but sometimes it does.
Graph database vendors are now trying to convince you that AI will be better with a graph database, but what I've seen so far indicates that the LLM just needs the RDF, not an actual database with data stored as triples. Maybe that's because these were small tests; if you need to store a large number of ID mappings it may be different.
There are several formats for representing these triples (e.g. Turtle), database implementations, query languages (SPARQL), and there are ontologies, which are basically schemas defining/describing how to describe resources in some domain.
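For a flavor of what that looks like, here are two real Wikidata facts written out in Turtle (the prefixes are Wikidata's standard ones; sketch only):

    @prefix wd:  <http://www.wikidata.org/entity/> .
    @prefix wdt: <http://www.wikidata.org/prop/direct/> .

    # Douglas Adams (Q42) is an instance of (P31) human (Q5).
    wd:Q42 wdt:P31 wd:Q5 .
    # Douglas Adams (Q42) has country of citizenship (P27) United Kingdom (Q145).
    wd:Q42 wdt:P27 wd:Q145 .

Each line is one subject-predicate-object statement; an ontology is what tells you which predicates exist and what they mean.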
It's highly related to the semantic web vision of the early 2000s.
If you don't know about it, it is worth taking a few minutes to study it. It sometimes surfaces and it's nice to understand what's going on, it can give good design ideas, and it's an important piece of computer history.
It's also the quiet basis for many things: OpenGraph [3] metadata tags in HTML documents, for instance, are basically RDF. (TIL about RDFa [4] btw, I had always seen these meta tags as very RDF-like, for a good reason indeed.)
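For instance, the canonical example from ogp.me:

    <meta property="og:title" content="The Rock" />
    <meta property="og:type" content="video.movie" />
    <meta property="og:url" content="https://www.imdb.com/title/tt0117500/" />

Each tag is effectively a triple: the page is the subject, the og: property the predicate, and the content the object.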
[1] https://en.wikipedia.org/wiki/Resource_Description_Framework
[2] https://en.wikipedia.org/wiki/Semantic_Web
[3] https://ogp.me/
[4] https://en.wikipedia.org/wiki/RDFa
I'd say defining this as linked data was quite idiomatic / elegant. It's possibly mainly because OpenGraph was inspired by Dublin Core [1], which was RDF-based. They didn't reinvent everything with OpenGraph, but kept the spirit, I suppose.
In the end it's probably quite equivalent.
And in the end, why not both? Apparently we defined an RDF ontology for JSON schemas! [2]
RDF is one of those things it's easy to assume everybody has already encountered. RDF feels fundamental. Its predicate triple design is fundamental, almost obvious (in hindsight?). It could not have not existed: had RDF not existed, something else very similar would have appeared, it's a certainty.
But we might have reached a point where this assumption is quite false. RDF and the semantic web were hot in the early 2000s, which was twenty years ago after all.
Yup, it's still that RDF. Inevitably, it had to be converted to new JSON-like syntaxes.
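JSON-LD being the main one. A minimal sketch of a single triple in JSON-LD (the subject is a real Wikidata IRI, the property is schema.org's):

    {
      "@context": { "schema": "https://schema.org/" },
      "@id": "http://www.wikidata.org/entity/Q42",
      "schema:name": "Douglas Adams"
    }

which is just the triple wd:Q42 schema:name "Douglas Adams" in JSON clothing.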
It reminds me of the "is-a" predicate era of AI. That turned out to be not too useful for formal reasoning about the real world. As a representation for SQL database output going into an LLM, though, it might go somewhere. Maybe.
Probably because the output of an SQL query is positional, and LLMs suck at positional representations.
I worked on semantic web tech back in the day, the approach has major weaknesses and limitations that are being glossed over here. The same article touting RDF as the missing ingredient has been written for every tech trend since it was invented. We don’t need to re-litigate it for AI.
This is a non-trivial exercise. How does one transform knowledge into a knowledge graph using RDF?
RDF is extremely flexible and can represent any data, and that's exactly its great weakness. It's such a free-form format that there is no consensus on how to represent knowledge. Many academic panels exist to set standards, but many of these efforts end up on GitHub as unmaintained repositories.
The most important thing about RDF is that everyone needs to agree on the same modeling standards and use the same ontologies. This is very hard to achieve and leaves room for a lot of discussion, which makes it 'academic' :)
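To make that concrete, here is the same trivial fact modeled with two different (real) vocabularies; the ex: names are made up:

    @prefix schema: <https://schema.org/> .
    @prefix org:    <http://www.w3.org/ns/org#> .
    @prefix ex:     <http://example.org/> .

    # One team models employment with schema.org:
    ex:alice schema:worksFor ex:acme .

    # Another team uses the W3C Organization Ontology for the same fact:
    ex:alice org:memberOf ex:acme .

A consumer querying for schema:worksFor silently misses the second graph, and vice versa; nothing in RDF itself forces the two to converge.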
> The Resource Description Framework (RDF) is a method to describe and exchange graph data. It was originally designed as a data model for metadata by the World Wide Web Consortium (W3C).
https://www.wikipedia.org/wiki/Resource_Description_Framewor...
The Resource Description Framework (RDF) is a standard model for data interchange on the web, designed to represent interconnected data using a structure of subject-predicate-object triples. It facilitates the merging of data from different sources and supports the evolution of schemas over time without requiring changes to all data consumers.
What does that even mean? Suppose something 97% accurate became 99.5% accurate? How can we talk of accuracy doubling or tripling in that context? The only way I could see that working is if the accuracy of something went from say 1% to 3% or 33% to 99%. Which are not realistic values in the LLM case. (And I’m writing as a fan of knowledge graphs).
RDF has the same problems as SQL schemas with scattered information: what fields mean requires documentation.
There - they have a name on a person. What name? Given? Legal? Chosen? Preferred for this use case?
You only have one ID for Apple, eh? Companies are complex to model. Do you mean Apple just as someone would talk about it? Or the legal structure of entities that underpins all major companies, and if so, what part of it is being referred to?
I spent a long time building identifiers for universities and companies (which were later taken up by ROR) and it was a nightmare to say what a university even was. What's the name of Cambridge? It's not "Cambridge University" or "The University of Cambridge" legally. But it also is the actual name as people use it. The University of Paris went from something like 13 institutes to maybe one, and then to a bunch more. Are companies located at their headquarters? Which headquarters?
Someone will suggest modelling to solve this but here lies the biggest problem:
The correct modelling depends on the questions you want to answer.
Our modelling had good tradeoffs for mapping academic citation tracking. It was a bad modelling for legal ownership. There isn't one modelling that solves both well.
And this is all for the simplest of questions about an organisation - what is it called and is it one or two things?
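To sketch just the naming part (made-up IRI, real vocabularies), the same organisation ends up with several competing name assertions:

    @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix schema: <https://schema.org/> .
    @prefix ex:     <http://example.org/> .

    ex:cambridge rdfs:label           "University of Cambridge"@en ;
                 schema:alternateName "Cambridge University" ;
                 schema:legalName     "The Chancellor, Masters, and Scholars of the University of Cambridge" .

Citation tracking wants the label; a contracts system wants the legal name. Neither is wrong, and nothing in RDF picks one for you.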
I went looking and as far as I can tell "The Chancellor, Masters, and Scholars of the University of Cambridge" is the official name! https://www.cam.ac.uk/about-the-university/how-the-universit...
Using it to solve specific problems is good. A company I work with tries to do context engineering / adding guard rails to LLMs by modeling the knowledge in organizations, and that seems very promising.
The big question I still have is whether RDF offers any significant benefits for these way more limited scopes. Is it really that much faster, simpler or better to do queries on knowledge graphs rather than something like SQL?
- Using URIs to clarify ambiguous IDs and terms
- EAV or subject/verb/object representation for all knowledge
- "Open world" graph where you can munge together facts from different sources
I guess using RDF specifically, instead of just inventing your own graph database with namespaced properties, means using existing RDF tooling and languages like SPARQL, OWL, SHACL etc.
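One concrete difference: traversals of unknown depth are a one-line property path in SPARQL, where SQL needs a recursive CTE. The pattern from the Wikidata answer earlier in the thread is the canonical example:

    # Follow zero or more "located in" (P131) hops, then take the country (P17).
    ?station wdt:P131*/wdt:P17 ?country.

Whether that convenience outweighs the rougher tooling is, I suspect, the real question.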
Having looked into the RDF ecosystem to see if I can put something together for a side project inspired by https://paradicms.github.io, it really feels like there's a whole shed of tools out there, but the shed is a bit dingy, you can't really tell the purpose of the oddly-shaped tools you can see, nobody's organised and laid things out in a clear arrangement and, well, everything seems to be written in Java, which shouldn't be a huge issue but really isn't to my taste.
For the interested: resource description framework.
> The Big Picture: Knowledge graphs triple LLM accuracy on enterprise data. But here’s what nobody tells you upfront: every knowledge graph converges on the same patterns, the same solutions. This series reveals why RDF isn’t just one option among many — it’s the natural endpoint of knowledge representation. By Post 6, you’ll see real enterprises learning this lesson at great cost — or great savings.
If you really want to continue reading and discussing this kind of drivel, go ahead. RDF the "natural endpoint of knowledge representation", right. As someone who worked on commercial RDF projects at the time, after two decades of RDF being pushed by a self-serving W3C and academia until around 2018 or so, let's just say I welcome people having come to their senses and getting back to working with Datalog and Prolog. Even as a target for neurolinguistics and for generation by coding LLMs, SPARQL sucks because of its design-by-committee, idiosyncratic nature compared to the minimalism and elegance of Prolog.
flanked-evergl•1h ago
The tooling is not in a state where you can use it for any commercial or mission critical application. The tooling is mainly maintained by academics, and their concerns run almost exactly counter to normal engineering concerns.
An engineer would rather have tooling with limited functionality that is well designed and behaves correctly without bugs.
Academics would rather have tooling with lots of niche features, and they can tolerate poor design, incorrect behavior and bugs. They care more for features, even if they are incorrect, as they need to publish something "novel".
The end result is that almost everything you find for RDF is academia-quality, and lots of it is abandoned because it was just part of publication spam pumped and dumped by academics who need to publish or perish.
Anyone who wants to use it commercially really has to start almost from scratch.
jraph•1h ago
Uh. Do you have a source for this? Correctness is a major need in academia.
rglullis•1h ago
My experience working with software developed by academics is that it is focused on getting the job done for a very small user base of people who are okay with getting their hands dirty. This means lots of workarounds, one-off scripts, zero regards for maintainability or future-proofing...
lmm•1h ago
How so? Consider the famous result that most published research findings are false.
jraph•18m ago
Now, I'll assume you are referring to "Why Most Published Research Findings Are False". This paper is 20 years old, addresses only medical research despite its title, and seems to have had a mixed reception [1]:
> Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which calculated that the false positive rate in biomedical studies was estimated to be around 14%, not over 50% as Ioannidis asserted.[12] Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians
14% is a huge concern, and I think nobody will disagree with that. But it is far from "most", if this is true.
[1] https://en.wikipedia.org/wiki/Why_Most_Published_Research_Fi...
philjohn•44m ago
I worked for a company that went hard into "Semantic Web" tech for libraries (as in, the places with books), using an RDF quad store for data storage (OpenLink Virtuoso) and structuring all data as triples, which is a better fit for the hierarchical MARC21 format than a relational database.
There are a few libraries (the software kind) out there that follow the W3C specs correctly, Redland being one of them.