I am a maintainer of the Servo web engine. Two years ago, our project banned AI-generated contributions. I was a dissenting voice on that policy, and have continued to experiment with AI on side projects. This is perhaps the most ambitious side project so far: implementing the WebNN (Web Neural Network) API.
Given the policy in place, I'm doing this on a personal branch of Servo. I’m using it as a sandbox to test a specific thesis: AI is not an architect; it is a "fluid syntax engine."
This is also my response to the "autonomous slop" seen in projects like fastrender that claim to build complex software with AI but produce what I call "fine-looking nonsense" (see https://news.ycombinator.com/item?id=46624541#46709191). I wanted to see if a human architect could use an AI agent to handle the syntax and boilerplate of a Web Standard while the human retains total control over the conceptual structure.
Progress & Tech Stack:
The scope: I am using `rustnn`, and this project can be seen as doing for the Web what `pywebnn` does for Python.
The model: I am intentionally using a "cheap" LLM: Raptor mini.
The scale: I’m 7k LOC into the implementation. It currently passes a subset of WPT (Web Platform Tests) conformance tests for a specific operator (and its dependencies): add. Such a conformance test requires implementing the full graph compilation and dispatch workflow. Code is at https://github.com/gterzian/servo/compare/master...gterzian:servo:webnn
The backend: The work involves integrating the `rustnn` library into Servo’s runtime, with compute currently happening only via CoreML (macOS only).
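For context, the "full graph compilation and dispatch workflow" mentioned above can be caricatured in a few lines. This is not Servo's or `rustnn`'s actual code — the types and names below are made up for illustration — but it sketches the shape a WPT `add` conformance test exercises: build a graph of operators, compile it, then dispatch input tensors through it.

```rust
// Toy sketch of the workflow (illustrative names, not Servo's real types).

#[derive(Clone, Copy)]
enum Op {
    Add,
}

// A node references its operands by index into the dispatch-time inputs.
struct Node {
    op: Op,
    inputs: (usize, usize),
}

struct Graph {
    nodes: Vec<Node>,
}

struct CompiledGraph {
    nodes: Vec<Node>,
}

impl Graph {
    // "Compilation" stands in for validation and lowering to a backend
    // (in the real project, CoreML on macOS).
    fn compile(self) -> CompiledGraph {
        CompiledGraph { nodes: self.nodes }
    }
}

impl CompiledGraph {
    // Dispatch: execute each node against the provided input tensors.
    fn dispatch(&self, inputs: &[Vec<f32>]) -> Vec<f32> {
        let mut out = Vec::new();
        for node in &self.nodes {
            match node.op {
                Op::Add => {
                    let (a, b) = node.inputs;
                    out = inputs[a]
                        .iter()
                        .zip(&inputs[b])
                        .map(|(x, y)| x + y)
                        .collect();
                }
            }
        }
        out
    }
}

fn main() {
    let graph = Graph {
        nodes: vec![Node { op: Op::Add, inputs: (0, 1) }],
    };
    let compiled = graph.compile();
    let result = compiled.dispatch(&[vec![1.0, 2.0], vec![3.0, 4.0]]);
    println!("{:?}", result); // [4.0, 6.0]
}
```

Even in this toy form, the point stands: a single-operator conformance pass still forces you through graph construction, a compile step, and a dispatch step, which is why "just `add`" is more work than it sounds.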
The Reality Check:
While the productivity boost is real, the code is not yet up to my standards. Despite ongoing reviews, I’ve had to accept a certain amount of "slop" that I’ll need to clean up later. More importantly, I hit several conceptual bottlenecks: architectural problems that the AI not only could not solve without guidance, but could not even identify in the first place.
You can read the full breakdown of the first week and a half (contains links to code and various illustrative commits) in the link for this post.
And the backstory on why I'm doing this is at: https://medium.com/@polyglot_factotum/the-slop-diaries-imple...
I’d be happy to hear from others on how they balance the "syntax speed" of AI with the "architectural integrity" required for long-term projects.