Or the “meaningful” commit message of “.”?
And the commit editing thousands of lines of Python code mislabeled as a docs change?
Docs / Markdown: AI handled repetitive stuff like READMEs and summaries.
Core logic / Python: fully written by me.
Commit messages: some minimal ones just for quick iterations — the real work is in the code.
AI helped with boilerplate so I could ship faster; all functionality is hand-crafted.
The “meaningful commit messages,” again, amount to a single period used as the message for one commit covering the entire Python portion of the codebase.
My question was rhetorical. Whether the AI did it or a human did, it burns credibility to refer to things that don’t exist (like “meaningful commit messages”).
Well done to the author for shipping code. I look forward to trying it out.
If it were their work, your point would hold.
> I wanted to understand how these things work by building one myself.
Directly to this:
What if training an LLM was as easy as npx create-next-app?
I mean that the second thought seems to be the opposite of the first (what if the entirety of training an LLM were abstracted behind a simple command).
When I started, I wanted to understand LLMs deeply. But I hit a wall: tutorials were either "hello world" toys or "here's 500 lines of setup before you start."
What I needed was: "give me working code quickly, THEN let me modify and learn."
That's what create-llm does. It scaffolds the boilerplate (like create-next-app), so you can spend time learning the interesting parts:
- Why does vocab size matter? (adjust config, see results)
- What causes overfitting? (train on small data, see it happen)
- How do different architectures perform? (swap templates, compare)
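The overfitting experiment described above can be sketched in plain Python without any framework (this is a hypothetical illustration, not create-llm's actual code; the linear "true function", the noise level, and the helper names are all my own assumptions). A model with enough capacity to memorize a tiny training set drives its training error to zero, even though a simpler model captures the underlying pattern:

```python
import random

def lagrange_fit(xs, ys):
    """Polynomial interpolating every point exactly (high-capacity model)."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

def linear_fit(xs, ys):
    """Ordinary least-squares line (low-capacity model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
true_fn = lambda x: 2 * x + 1                      # underlying pattern is a line
train_x = [0.0, 0.25, 0.5, 0.75, 1.0]              # tiny training set
train_y = [true_fn(x) + random.gauss(0, 0.3) for x in train_x]
val_x = [i / 20 for i in range(21)]                # held-out points
val_y = [true_fn(x) + random.gauss(0, 0.3) for x in val_x]

interp = lagrange_fit(train_x, train_y)            # memorizes the noise
line = linear_fit(train_x, train_y)                # smooths over the noise

# Training error: the interpolating model hits zero; the line does not.
print("train:", mse(interp, train_x, train_y), mse(line, train_x, train_y))
# Validation error: memorizing noise typically generalizes worse.
print("val:  ", mse(interp, val_x, val_y), mse(line, val_x, val_y))
```

The point of the exercise: zero training loss is not the goal; the gap between training and validation loss is what tells you the model overfit, and you can see the same dynamic on a small-data LLM run.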
It's "easy to start, deep to master." The abstraction gets you running in 60 seconds, then you dig into the code.