frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


The Shifting Center of Capitalism: A Historical Map

https://medium.com/@ersinesen/the-shifting-center-of-capitalism-a-historical-map-5cc4a51ae5d0
1•ersinesen•1m ago•1 comment

Capitalization of Initialisms

https://www.teamten.com/lawrence/writings/capitalization_of_initialisms.html
1•todsacerdoti•2m ago•0 comments

Backdoors Could Become a Routine Problem for Open Source Projects Because of AI

https://www.techdirt.com/2025/09/04/why-powerful-but-hard-to-detect-backdoors-could-become-a-rout...
1•latexr•3m ago•0 comments

My friends spent thousands on engagement photos, so I built an AI alternative

https://www.engagement-photos.com
1•michaellzd0303•6m ago•1 comment

I bought the cheapest EV (a used Nissan Leaf)

https://www.jeffgeerling.com/blog/2025/i-bought-cheapest-ev-used-nissan-leaf
1•calcifer•7m ago•0 comments

Cloud-based ERP for small and medium businesses

https://obsidyun.com/
1•ahmeddabdurahim•8m ago•0 comments

Search the Web Before AI

https://pregptsearch.com
2•jiayuanzhang•13m ago•0 comments

Agents to Do the Things Claude Can't (With Marimo and Bespoken)

https://elite-ai-assisted-coding.dev/p/agents-to-do-the-things-claude-cant
1•intellectronica•14m ago•0 comments

An Open Letter to Everyone I've Butted Heads With

https://andrewkelley.me/post/open-letter-everyone-butted-heads.html
1•weinzierl•15m ago•0 comments

Show HN: IssuePay Job Offers – Hire developers through their open-source work

https://issuepay.app
1•Mario10•15m ago•0 comments

Mycareerwise.ai – See your CV's role fit score and learning path

https://mycareerwise.ai
1•Farid_agha87•15m ago•1 comment

Fiber Concurrency

https://honeyryderchuck.gitlab.io/httpx/wiki/Fiber-Concurrency
1•amalinovic•20m ago•0 comments

The state of `fq_codel` and `sch_cake` worldwide (2022)

https://blog.cerowrt.org/post/state_of_fq_codel/
1•Bogdanp•20m ago•0 comments

Microsoft's Rust Bet: From Blue Screens to Safer Code

https://thenewstack.io/microsofts-rust-bet-from-blue-screens-to-safer-code/
2•unripe_syntax•21m ago•0 comments

Carbon Language: An experimental successor to C++

https://github.com/carbon-language/carbon-lang
1•thunderbong•25m ago•0 comments

Alexandria: A Lightweight Library Genesis eBook Browser

https://github.com/Samin100/Alexandria/releases
1•freetonik•30m ago•0 comments

A wild week: Grok Code Fast 1 exploding to 66% usage share

https://blog.kilocode.ai/p/a-wild-week-grok-code-fast-i-exploding
1•nix_95•31m ago•1 comment

$TRAVEL, a coin where holders win free flights with the coin fees. Pump or rug?

https://twitter.com/qt_tahani/status/1963829870288663021
1•bingwu1995•31m ago•0 comments

Cooking the Federal Reserve's Credibility

https://paulkrugman.substack.com/p/cooking-the-federal-reserves-credibility
1•throwawayffffas•32m ago•0 comments

Subverting code integrity checks to locally backdoor Signal, 1Password and more

https://blog.trailofbits.com/2025/09/03/subverting-code-integrity-checks-to-locally-backdoor-sign...
1•elashri•32m ago•0 comments

To what extent is the war in Gaza justified?

https://mathsandsoundingoff.wordpress.com/2025/06/23/to-what-extent-is-the-war-in-gaza-justified/
2•EvgeniyZh•36m ago•0 comments

Robinhood CEO: Investing for a living could replace labor in a post-AI world

https://fortune.com/2025/08/27/robinhood-ceo-vlad-tenev-leadership-next/
1•signa11•38m ago•0 comments

Polish oil company Orlen to build small nuclear power

https://www.reuters.com/sustainability/boards-policy-regulation/polish-oil-company-orlen-build-sm...
3•danielam•39m ago•1 comment

Show HN: Democratic Writing

https://hivedtokens.com
1•levmiseri•40m ago•0 comments

Truco and clones: the beginnings of Argentinian computer gaming

https://zeitgame.net/archives/18373
1•Michelangelo11•46m ago•0 comments

First brain-wide map of decision-making charted in mice

https://www.eurekalert.org/news-releases/1096579
2•signa11•48m ago•0 comments

Beijing Is Failing to Solve Its Involution Problem

https://www.thewirechina.com/2025/08/31/beijing-is-failing-to-solve-its-involution-problem/
1•theconomist•50m ago•0 comments

Kimi K2-0905 Update

https://twitter.com/Kimi_Moonshot/status/1963802687230947698
2•tosh•53m ago•0 comments

Nepal blocks Facebook, X, YouTube and others for failing to register

https://apnews.com/article/nepal-ban-social-media-platform-3b42bbbd07bc9b97acb4df09d42029d5
2•perihelions•56m ago•0 comments

Using LSD to Treat Anxiety

https://www.newscientist.com/article/2495132-a-single-dose-of-lsd-seems-to-reduce-anxiety/
1•didntknowyou•59m ago•1 comment

Why RDF Is the Natural Knowledge Layer for AI Systems

https://bryon.io/why-rdf-is-the-natural-knowledge-layer-for-ai-systems-a5fd0b43d4c5
37•arto•2h ago

Comments

flanked-evergl•1h ago
RDF is great but it's somewhat inadvertently captured by academia.

The tooling is not in a state where you can use it for any commercial or mission critical application. The tooling is mainly maintained by academics, and their concerns run almost exactly counter to normal engineering concerns.

An engineer would rather have tooling with limited functionality that is well designed and behaves correctly without bugs.

Academics would rather have tooling with lots of niche features, and they can tolerate poor design, incorrect behavior and bugs. They care more for features, even if they are incorrect, as they need to publish something "novel".

The end result is that almost everything you find for RDF is academia-quality, and much of it is abandoned because it was just part of the publication spam pumped and dumped by academics who need to publish or perish.

Anyone who wants to use it commercially really has to start almost from scratch.

jraph•1h ago
> even if they are incorrect

Uh. Do you have a source for this? Correctness is a major need in academia.

tsimionescu•1h ago
I think they mean things like: a tool that has feature X, even if it crashes 50% of the time, is preferable to a tool that doesn't have feature X at all.
jraph•17m ago
Ok, makes sense, I hadn't read it like this. For me, "correct" means "provides correct results".
rglullis•1h ago
Correct != Bug-free.

My experience working with software developed by academics is that it is focused on getting the job done for a very small user base of people who are okay with getting their hands dirty. This means lots of workarounds, one-off scripts, zero regard for maintainability or future-proofing...

jraph•32m ago
This I fully agree with.
DougBTX•21m ago
“Incomplete” seems like a better word than “incorrect” for this. The code is likely correct in the narrow scope it was needed for, but is missing features (and error checking!) beyond the happy path, making it easy to use incorrectly.
lmm•1h ago
> Correctness is a major need in academia.

How so? Consider the famous result that most published research findings are false.

jraph•18m ago
How so? Finding correct stuff is the whole point of research, no matter the extent to which it actually succeeds. So yes, regardless of the actual results, it is a major need in academia. We have nothing better anyway (which doesn't mean it can't improve; we critically need it to improve).

Now, I'll assume you are referring to "Why Most Published Research Findings Are False". This paper is 20 years old, only addresses medical research despite its title, and seems to have had a mixed reception [1]:

> Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which calculated that the false positive rate in biomedical studies was estimated to be around 14%, not over 50% as Ioannidis asserted.[12] Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians

14% is a huge concern and I think nobody will disagree with that. But it is far from "most", if true.

[1] https://en.wikipedia.org/wiki/Why_Most_Published_Research_Fi...

ragebol•1h ago
Tooling sounds like it can be fixed? If the knowledge bases are useful, why not use them with better tools?
philjohn•44m ago
Yes and no.

I worked for a company that went hard into "Semantic Web" tech for libraries (as in, the places with books), using an RDF quad store for data storage (OpenLink Virtuoso) and structuring all data as triples - which is a better fit for the hierarchical MARC21 format than a relational database.

There are a few libraries (the software kind) out there that follow the W3 spec correctly, Redland being one of them.

Zardoz84•33m ago
I'm in a similar boat. In my case, it's software for public libraries, and having data accessible as RDF is a must. We even have our own public fork of Marc4j.
simonw•25m ago
How well did that work? Based on your experience at that company would you build a new project on the stack that they chose?
mdhb•1h ago
Maybe worth also pointing out that a meaningful refresh of the RDF specification is getting rather close to completion.

Hopefully version 1.2, which addresses a lot of shortcomings, will officially be a thing this year.

In the meantime you can take a look at some of the specification docs here https://w3c.github.io/rdf-concepts/spec/

jiggawatts•1h ago
The sibling comment by flanked-evergl "RDF is great but it's somewhat inadvertently captured by academia." is made manifestly obvious when reading this spec.

It's overburdened by terminology, an exponential explosion of nested definitions, and abstraction to the point of unintelligibility.

It is clear that the authors have implementation(s) of the spec in mind while writing, but very carefully dance around it and refuse to be nailed down with pedestrian specifics.

I'm reminded of the Wikipedia mathematics articles that define everything in terms of other definitions, and if you navigate to those definitions you eventually end up going in circles back to the article you started out at, no wiser.

rglullis•1h ago
Wrote this about one month ago here at https://news.ycombinator.com/item?id=44839132

I'm completely out of time or energy for any side project at the moment, but if someone wants to steal my idea: please take an llm model and fine tune so that it can take any question and turn it into a SparQL query for Wikidata. Also, make a web crawler that reads the page and turns into a set of RDF triples or QuickStatements for any new facts that are presented. This would effectively be the "ultimate information organizer" and could potentially turn Wikidata into most people's entry page of the internet.

luguenth•1h ago
https://spinach.genie.stanford.edu/

Here you are :)

yorwba•44m ago
I asked "Which country has the most subway stations?" and got the query

  SELECT ?country (COUNT(*) AS ?stationCount) WHERE {
    ?station wdt:P31 wd:Q928830.
    ?station wdt:P17 ?country.
  }
  GROUP BY ?country
  ORDER BY DESC(?stationCount)
  LIMIT 1
https://query.wikidata.org/#SELECT%20%3Fcountry%20%28COUNT%2...

which is not unreasonable as a quick first attempt, but doesn't account for the fact that many things on Wikidata aren't tagged directly with a country (P17) and instead you first need to walk up a chain of "located in the administrative territorial entity" (P131) to find it, i.e. I would write

  SELECT ?country (COUNT(DISTINCT ?station) AS ?stationCount) WHERE {
    ?station wdt:P31 wd:Q928830.
    ?station wdt:P131*/wdt:P17 ?country.
  }
  GROUP BY ?country
  ORDER BY DESC(?stationCount)
  LIMIT 1
https://query.wikidata.org/#SELECT%20%3Fcountry%20%28COUNT%2...

In this case it doesn't change the answer (it only finds 3 more subway stations in China), but sometimes it does.

IanCal•57m ago
Even without tuning Claude is pretty solid at this, just give it the sparql endpoint as a tool call. Claude can generate this integration too.
rglullis•49m ago
But the idea of tuning the model for this task is to make a model that is more efficient, cheaper to operate and not requiring $BILLIONS of infrastructure going to the hands of NVDA and AMZN.
ako•16m ago
I've built an MCP for SPARQL and RDF. Used Claude on iPhone to turn pictures of archeological-site information shields into transcriptions, then into an ontology, RDF, an ER model and SQL statements, and then, with the MCP tool and Claude Desktop, to save the data into Parquet files on blob storage and the ontology graph into a graph database. Then used it to query data from Parquet (using DuckDB), where Sonnet 4 used the RDF graph to write better SQL statements. Works quite well. Now in the process of using Sonnet 4 to find the optimal system prompt for Qwen Coder to also handle RDF and SPARQL: I've given Sonnet 4 access to Qwen Coder through an MCP tool, so it can trial-and-error different system-prompt strategies. Results are promising, but can't compete with the quality of Sonnet 4.

Graph database vendors are now trying to convince you that AI will be better with a graph database, but what I've seen so far indicates that the LLM just needs the RDF, not an actual database with data stored as triples. Maybe because these were small tests; if you need to store a large number of ID mappings it may be different.

barrenko•1h ago
Or a semantic layer, as it's called?
retube•1h ago
What is "RDF" ? Not defined in the article
jraph•1h ago
Resource Description Framework [1] is basically a way to describe resources with (subject, verb, object) predicates, where the subject is the resource being described and the object is another resource related to the subject in a way the verb defines (the verb is not necessarily a grammatical verb/action; it's often a property name).

There are several formats to represent these predicates (Turtle), database implementations, and query languages (SPARQL), and there are ontologies, which are basically schemas defining/describing how to describe resources in some domain.

It's highly related to the semantic web vision [2] of the early 2000s.

If you don't know about it, it is worth taking a few minutes to study it. It sometimes surfaces and it's nice to understand what's going on, it can give good design ideas, and it's an important piece of computer history.

It's also the quiet basis for many things, OpenGraph [3] metadata tags in HTML documents are basically RDF for instance. (TIL about RDFa [4] btw, I had always seen these meta tags as very RDF-like, for a good reason indeed).

[1] https://en.wikipedia.org/wiki/Resource_Description_Framework

[2] https://en.wikipedia.org/wiki/Semantic_Web

[3] https://ogp.me/

[4] https://en.wikipedia.org/wiki/RDFa
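The OpenGraph connection above can be made concrete with a few lines of standard-library Python - a sketch that reads og: meta tags as (subject, predicate, object) triples. The page URL and the HTML snippet here are made up for illustration.

```python
from html.parser import HTMLParser

class OGTripleExtractor(HTMLParser):
    """Collect OpenGraph <meta property="og:..." content="..."> tags as
    (subject, predicate, object) triples, with the page URL as subject."""
    def __init__(self, page_url):
        super().__init__()
        self.page_url = page_url
        self.triples = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop, content = d.get("property"), d.get("content")
        if prop and prop.startswith("og:") and content is not None:
            self.triples.append((self.page_url, prop, content))

html = """
<html><head>
  <meta property="og:title" content="Why RDF Matters" />
  <meta property="og:type" content="article" />
</head><body></body></html>
"""

extractor = OGTripleExtractor("https://example.org/post")
extractor.feed(html)
```

Each og: tag becomes one triple about the page, which is exactly the RDF-like reading jraph describes.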

rapnie•1h ago
Does OpenGraph gain any benefit from its definition as linked data? Or might it just as well have been defined as, say, JSON Schemas, referring to those property names in HTML?
jraph•56m ago
I'm no expert on OpenGraph, and it's been a while since I've actually manipulated RDF other than the automatically generated og meta tags.

I'd say defining this as linked data was quite idiomatic / elegant. It's possibly mainly because OpenGraph was inspired by Dublin Core [1], which was RDF-based. They didn't reinvent everything with OpenGraph, but kept the spirit, I suppose.

In the end it's probably quite equivalent.

And in the end, why not both? Apparently there's an RDF ontology for JSON Schema! [2]

[1] https://en.wikipedia.org/wiki/Dublin_Core

[2] https://www.w3.org/2019/wot/json-schema

vixen99•1h ago
We meet this casual use of acronyms all too often on HN. It only takes a line or two to enable everyone to follow along without recourse to a search expedition.
jraph•1h ago
You wouldn't spell out HyperText Markup Language each time.

RDF is one of those things it's easy to assume everybody has already encountered. RDF feels fundamental. Its predicate-triple design is fundamental, almost obvious (in hindsight?). It could not have not existed: had RDF not existed, something else very similar would have appeared, it's a certainty.

But we might have reached a point where this assumption is quite false. RDF and the semantic web were hot in the early 2000s, which was twenty years ago, after all.

rapnie•54m ago
Tim Berners-Lee, now co-founder of Inrupt [0], launched the Solid Project [1], where he kept working on semantic web concepts and linked data specs. Looks like Inrupt went full AI today. What ailed Solid was, I think, the academic approach mentioned in other comments, the heavyweight specification process (inspired by the W3C), and overlooking the fact that you'd better get your dev community on board and excited as a good road to adoption. Inrupt didn't pay much attention to their Solid community, except for the active followers in their chat channels, and directly targeted commercial customers. I don't know the health of the Solid project today, but there are a couple of interesting projects around social networking and the fediverse.

[0] https://www.inrupt.com/about

[1] https://solidproject.org/

Animats•1h ago
Right. I was thinking "RDF - vaguely remember that as some XML thing from Semantic Web era".

Yup, it's still that RDF. Inevitably, it had to be converted to new JSON-like syntaxes.

It reminds me of the "is-a" predicate era of AI. That turned out to be not too useful for formal reasoning about the real world. As a representation for SQL database output going into an LLM, though, it might go somewhere. Maybe.

Probably because the output of an SQL query is positional, and LLMs suck at positional representations.

jandrewrogers•1h ago
As the article itself points out, this has been around for 25 years. It isn’t an accident that nobody does things this way, it wasn’t an oversight.

I worked on semantic web tech back in the day, the approach has major weaknesses and limitations that are being glossed over here. The same article touting RDF as the missing ingredient has been written for every tech trend since it was invented. We don’t need to re-litigate it for AI.

rglullis•1h ago
I would be very interested in reading why you think it can't work. I am inclined to agree with the post in a sibling thread that says the main problem with RDF is that it has been captured by academia.
4ndrewl•48m ago
IME it's less a "capture", more that most people outside of academia don't have the requisite learning to be able to think in the abstract outside of trivial examples.
FrankyHollywood•43m ago
The article states "When that same data is transformed into a knowledge graph"

This is a non-trivial exercise. How does one transform data into a knowledge graph using RDF?

RDF is extremely flexible and can represent any data, and that's exactly its great weakness. It's such a free format that there is no consensus on how to represent knowledge. Many academic panels exist to set standards, but many of these efforts end up on GitHub as unmaintained repositories.

The most important thing about RDF is that everyone needs to agree on the same modeling standards and use the same ontologies. This is very hard to achieve, and leaves room for a lot of discussion, which makes it 'academic' :)

verisimi•1h ago
RDF - Resource Description Framework

> The Resource Description Framework (RDF) is a method to describe and exchange graph data. It was originally designed as a data model for metadata by the World Wide Web Consortium (W3C).

https://www.wikipedia.org/wiki/Resource_Description_Framewor...

zekrioca•1h ago
The author mentions RDF a couple dozen times but doesn't define it, so:

The Resource Description Framework (RDF) is a standard model for data interchange on the web, designed to represent interconnected data using a structure of subject-predicate-object triples. It facilitates the merging of data from different sources and supports the evolution of schemas over time without requiring changes to all data consumers.
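That merge property can be shown with plain tuples - a sketch where the ex:, dc: and shop: prefixes and all the data are invented for illustration:

```python
# Two sources describing the same resource with different vocabularies.
# Because everything is triples, merging is just set union: no schema
# migration is needed when a new source adds new predicates.
catalog = {
    ("ex:book1", "dc:title", "The Hobbit"),
    ("ex:book1", "dc:creator", "J. R. R. Tolkien"),
}
inventory = {
    ("ex:book1", "shop:inStock", "true"),
}

graph = catalog | inventory  # the merged knowledge graph

def objects(graph, subject, predicate):
    """All objects for a given (subject, predicate) pair."""
    return {o for s, p, o in graph if s == subject and p == predicate}
```

Consumers that only know the dc: vocabulary keep working unchanged after the merge; the shop: triples simply sit alongside.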

jraph•1h ago
I wrote a comment trying to explain it there, with a concrete, current and widespread example: https://news.ycombinator.com/item?id=45135302#45135593
ricksunny•52m ago
Five times in that article he says some version of “Accuracy triples”.

What does that even mean? Suppose something 97% accurate became 99.5% accurate? How can we talk of accuracy doubling or tripling in that context? The only way I could see that working is if the accuracy of something went from say 1% to 3% or 33% to 99%. Which are not realistic values in the LLM case. (And I’m writing as a fan of knowledge graphs).
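To put numbers on the objection (the 97% and 99.5% figures are the comment's own): literal tripling only parses for accuracies up to one third, so the claim presumably means something like a relative reduction in error rate.

```python
def error_reduction(before: float, after: float) -> float:
    """Fraction of the error rate removed when accuracy goes before -> after."""
    return 1 - (1 - after) / (1 - before)

# Literal "accuracy triples" is only possible when accuracy <= 1/3:
# 1% -> 3% or 33% -> 99% work; 97% -> 291% does not.
cut = error_reduction(0.97, 0.995)  # five sixths of the errors removed
```

So 97% to 99.5% is a large error-rate cut, but calling it "accuracy tripling" is incoherent.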

IanCal•43m ago
This seems to miss the other side of why all this failed before.

RDF has the same problem as SQL schemas: the information is scattered, and what fields mean requires documentation.

There - they have a name on a person. What name? Given? Legal? Chosen? Preferred for this use case?

You only have one ID for Apple, eh? Companies are complex to model. Do you mean Apple as someone would casually talk about it? The legal structure of entities that underpins all major companies? What part of it is referred to?

I spent a long time building identifiers for universities and companies (which were later taken for ROR) and it was a nightmare to say what a university even was. What's the name of Cambridge? Legally it's not "Cambridge University" or "The University of Cambridge". But those are the actual names as people use them. The University of Paris went from something like 13 institutes to maybe one and then to a bunch more. Are companies located at their headquarters? Which headquarters?

Someone will suggest modelling to solve this but here lies the biggest problem:

The correct modelling depends on the questions you want to answer.

Our modelling had good tradeoffs for mapping academic citation tracking. It had bad modelling for legal ownership. There isn’t one modelling that solves both well.

And this is all for the simplest of questions about an organisation - what is it called and is it one or two things?

simonw•29m ago
That university example is fantastic.

I went looking and as far as I can tell "The Chancellor, Masters, and Scholars of the University of Cambridge" is the official name! https://www.cam.ac.uk/about-the-university/how-the-universit...

jtwaleson•21m ago
Indeed, I often get the impression that (young) academics want to model the entire world in RDF. This can't work because the world is very ambiguous.

Using it to solve specific problems is good. A company I work with tries to do context engineering / adding guard rails to LLMs by modeling the knowledge in organizations, and that seems very promising.

The big question I still have is whether RDF offers any significant benefits for these way more limited scopes. Is it really that much faster, simpler or better to do queries on knowledge graphs rather than something like SQL?

crabmusket•35m ago
I really like RDF in theory, as a lot of its ideas just make sense to me:

- Using URIs to clarify ambiguous IDs and terms

- EAV or subject/verb/object representation for all knowledge

- "Open world" graph where you can munge together facts from different sources

I guess using RDF specifically, instead of just inventing your own graph database with namespaced properties, means using existing RDF tooling and languages like SPARQL, OWL, SHACL etc.

Having looked into the RDF ecosystem to see if I can put something together for a side project inspired by https://paradicms.github.io, it really feels like there's a whole shed of tools out there, but the shed is a bit dingy, you can't really tell the purpose of the oddly-shaped tools you can see, nobody's organised and laid things out in a clear arrangement and, well, everything seems to be written in Java, which shouldn't be a huge issue but really isn't to my taste.
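The "inventing your own graph database" alternative mentioned above is genuinely small to prototype - a toy sketch of SPARQL-style triple patterns over plain tuples, with all names and namespaces invented for illustration:

```python
# A toy in-memory triple store queried with SPARQL-style triple patterns:
# None acts as a variable, matching anything.
TRIPLES = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:bob", "foaf:name", "Bob"),
]

def match(pattern, store=TRIPLES):
    """Return all triples matching (s, p, o), where None is a wildcard."""
    return [
        t for t in store
        if all(want is None or want == got for want, got in zip(pattern, t))
    ]
```

What standard RDF buys over a homemade store like this is the surrounding ecosystem: shared serializations, URI-based identity, and tools like SPARQL engines and SHACL validators.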

Kwpolska•31m ago
Stop trying to make Semantic Web happen. It’s not going to happen.
epolanski•24m ago
Small rant: I hate how RDF is the central topic of the blog post, yet it isn't defined even *once*.

For the interested: Resource Description Framework.

tannhaeuser•5m ago
Here's the first paragraph of that article:

> The Big Picture: Knowledge graphs triple LLM accuracy on enterprise data. But here’s what nobody tells you upfront: every knowledge graph converges on the same patterns, the same solutions. This series reveals why RDF isn’t just one option among many — it’s the natural endpoint of knowledge representation. By Post 6, you’ll see real enterprises learning this lesson at great cost — or great savings.

If you really want to continue reading and discuss this kind of drivel, go ahead. RDF the "natural endpoint of knowledge representation" right. As someone having worked on commercial RDF projects at the time, after two decades of pushing RDF by a self-serving W3C and academia until around 2018 or so, let's say I welcome people having come to their senses and are back at working with Datalog and Prolog. Even as a target for neurolinguistics and generation by coding LLMs does SOARQL suck because of its design-by-comittee and idiosyncratic nature compared to the minimalism and elegance of Prolog.