Ask HN: Has AI breathed new life into Semantic (web) Technologies?
4•rottyguy•13h ago
The Knowledge Graph Conference is currently happening in NYC and there's a bit of talk around KG assisting AI with various things like truth grounding (RAG) and Agentic development. Curious if anyone is seeing more discussions of Semantic Technologies in their orbit these days?
Comments
dtagames•12h ago
Text, whether "semantic" or not, just gets tokenized and stored as weighted numbers in a model. It loses all its "semantic-ness."
So I would say the opposite is true. AI tools are removing the need for special declarative wrappers around a lot of text. For example, there's no need to surround a headline with <H1> when you can ask a GPT to "get the headlines from all these articles."
There are a couple of kinds of wrapping that do help when working with LLMs: markdown in prompts, and JSON and XML in system instructions for MCP. But RAG refers to the non-LLM end of the process (getting data from files or a database), so the style of training data doesn't directly affect how that works.
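To illustrate the "ask a GPT instead of tagging with <H1>" point, here is a minimal sketch of how such a request might be shaped, assuming an OpenAI-style chat-completions payload. The function name and prompt wording are hypothetical; no live API is called.

```python
import json

def headline_extraction_request(articles: list[str], model: str = "gpt-4o") -> dict:
    """Build a chat-completions-style payload asking the model to pull
    headlines out of plain article text, with no <H1> markup needed."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Extract the headline of each article. "
                    "Reply with a JSON array of strings, one per article."
                ),
            },
            # The raw, untagged article text is all the model needs.
            {"role": "user", "content": json.dumps(articles)},
        ],
    }

payload = headline_extraction_request(["Fed Holds Rates\nThe central bank said..."])
```

The point of the sketch: the structure lives in the instruction, not in markup wrapped around the source text.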
bjourne•11h ago
Quite the contrary. The idea behind the semantic web was to make content machine-readable by manually annotating it. For instance, this comment would have fields like "author", "date", "language", and maybe "ip" to make it interpretable to the machines. You don't need that because the machines can figure it out without the annotations. A run-of-the-mill computer vision model can tag an image much better and much more accurately than most humans.
evanjrowley•10h ago
Multiple comments here state that AI eliminates the need for Semantic web tech, and I can understand that perspective, but it's also a narrow way of interpreting the question. While LLMs produce great results without relying on semantic relationships, they could also be used to build semantic relationships. There are probably applications there worth exploring. For example, if a general-purpose LLM can build a semantic dataset for solving specialized problems, might that approach be more efficient than training a specialized LLM?
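The "LLM builds the semantic dataset" idea above can be sketched as a tiny pipeline: an extraction step that emits (subject, predicate, object) triples, feeding an in-memory triple store that can then be queried for grounding. The extractor here returns a canned result standing in for an LLM call; the entity names and predicates are illustrative assumptions, not a real dataset.

```python
def extract_triples(text: str) -> list[tuple[str, str, str]]:
    # Stand-in for an LLM call that turns free text into
    # (subject, predicate, object) triples; canned output for illustration.
    return [
        ("KGConference", "locatedIn", "NYC"),
        ("KGConference", "topic", "knowledge graphs"),
    ]

class TripleStore:
    """Minimal in-memory triple store with pattern-match queries."""

    def __init__(self):
        self.triples: set[tuple[str, str, str]] = set()

    def add(self, s: str, p: str, o: str) -> None:
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None) -> list[tuple[str, str, str]]:
        # None acts as a wildcard in any position.
        return [
            t for t in self.triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)
        ]

store = TripleStore()
for triple in extract_triples("The Knowledge Graph Conference is happening in NYC."):
    store.add(*triple)
```

A downstream RAG step could then query `store.query(s="KGConference")` to ground an answer, which is the kind of specialized-dataset reuse the comment is gesturing at.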
dtagames•2h ago
They do build those relationships, precisely by being trained on large, general data sets rather than specialized ones. There's no need for special markup to achieve that.