btw, I am working on allowing users to index their local files and store them fully locally! will update you on that
At this time I can't even think about using the tool until I know what you are doing with my information and who owns or has access to it.
edit: it is on the website now. forgot to add it, mb
Are you using some form of canvas fingerprinting, either intentionally or unintentionally (through third-party scripts)?
Especially please don't do this in Show HN threads, which have extra rules to forbid this kind of thing: https://news.ycombinator.com/showhn.html.
What external docs do you have access to that aren't found on the web?
LLMs and coding agents have broad general knowledge, but they often give outdated info, even when asked to search the web.
- it also supports both private and public repos :)
I’ve had generally good results with this approach (I’m on project #3 using this method).
give nia a try and use it on any docs, very curious to hear ur feedback
I haven't done extensive experiments, but I have noticed anecdotal benefits to asking the LLM how they want things structured as well.
For example, for complex multi-stage tasks I asked Claude Code how best to communicate the objective, and it recommended a markdown file with the following sections: "High-level goal", "User stories", "Technical requirements", "Non-goals". I then created such a doc for a pretty complex task and asked Claude to review the doc and ask any clarifications. I answered its questions (usually 5-7) and put them into a "Clarification" section.

I have also added a "Completion checklist" section that I use to ensure that Claude follows all of the rules in my subdirectory "README.md" files (I have one for each major sub-section of code, like my service layer, my router layer, my database, etc.). I usually do 2-3 rounds of Claude asking questions and me adding to the "Clarification" section, and then Claude is satisfied and ready to implement.
The bonus of this approach is I now have a growing list of the task specifications checked into a "tasks" directory showing the history of how the code base came to be.
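This template is easy to scaffold with a script. A minimal sketch: the section names come from the workflow described above, but the helper itself is hypothetical:

```python
# Hypothetical helper that scaffolds the task-spec markdown described above.
# The section names are from the workflow; everything else is an assumption.
SECTIONS = [
    "High-level goal",
    "User stories",
    "Technical requirements",
    "Non-goals",
    "Clarification",         # filled in over 2-3 rounds of Q&A with the agent
    "Completion checklist",  # mirrors the subdirectory README.md rules
]

def new_task_spec(title: str) -> str:
    """Return a markdown skeleton for a new task spec."""
    lines = [f"# {title}", ""]
    for section in SECTIONS:
        lines += [f"## {section}", "", "TODO", ""]
    return "\n".join(lines)

print(new_task_spec("Add rate limiting to the API"))
```

Dropping each generated file into the "tasks" directory keeps the history described above.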
At this point, I typically do an LLM-readme at the branch level to document both planning and progress. At the project level I've started having it dump (and organize) everything in a work-focused Obsidian vault. This way I end up with cross-project resources in one place, it doesn't bloat my repos, and it can be used by other agents from where it is.
In there I have generic advice on project management (use `gh` and GitHub issues for todo lists) and language-specific guidance in separate files, like which libraries to use, etc.
Then I have a common prompt template for different agents that tells them to look there for specific technology choices and create/update their own WHATEVER.md file in the repo.
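A hypothetical example of such a prompt template (the paths and wording are made up; `WHATEVER.md` and the `gh`/GitHub-issues convention are from the setup above):

```
You are working in this repo. Before writing code:
1. Read ~/notes/agents/project-management.md for general workflow rules
   (todo lists live in GitHub issues, managed via `gh`).
2. Read the language-specific guidance file for this project's stack,
   e.g. ~/notes/agents/python.md, and use the libraries it prescribes.
3. Create or update your own WHATEVER.md in the repo root with your plan
   and progress, and keep it current as you work.
```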
Gemini-cli is pretty efficient for creating specs and doesn't run out of context. With Context7 it can pull up API specs into the documentation it creates and with Brave API it can search for other stuff.
After it's done, I can just tell Claude to make a step-by-step plan based on the specs and create GitHub issues for them with the appropriate labels.
Clear context, and get Claude working on the issues one by one.
https://github.com/jerpint/context-llemur
It’s MCP/CLI friendly and wraps git around a context folder, so you can easily load context anywhere using “ctx load” and ask LLMs to update and save context as things move along
I believe right now you're requiring us to do the scraping/adding?
Nia already supports that. Just take a link, e.g. https://mintlify.com/docs, and ask it to index it (it will crawl every subpage reachable from the root link you specify)
- nia can do deep research across any docs / codebase and then find any relevant links or repos to index
- it also supports both private and public repos :)
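The crawler's internals aren't public, but the "every subpage reachable from the root link" rule can be sketched as a pure scope check (the function name and exact behavior are my assumptions):

```python
from urllib.parse import urljoin, urlparse

def in_scope(root: str, link: str) -> bool:
    """Decide whether a discovered link belongs to the docs being indexed.

    A link is in scope if, after resolving it against the root, it lives
    under the root URL's host and path prefix.
    """
    resolved = urljoin(root, link)
    r, u = urlparse(root), urlparse(resolved)
    return (u.scheme in ("http", "https")
            and u.netloc == r.netloc
            and u.path.startswith(r.path))

# Relative links and subpages are kept; external hosts are not.
print(in_scope("https://mintlify.com/docs", "/docs/quickstart"))     # True
print(in_scope("https://mintlify.com/docs", "https://github.com/x")) # False
```

A real crawler would also normalize trailing slashes, respect robots.txt, and dedupe by canonical URL.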
lmk about ur experience with context7 (if u used it) and what docs did u use?
one of my recent customers (yc s25) needed to migrate to stripe ASAP, and cursor etc. gave them deprecated docs. they used my tool to index the entire stripe docs and then used it to migrate in a couple of hours :)
lmk if u have more questions and happy to help
Will keep you in the loop
I’m going to try this today. Best of luck with this!
Are you still building this yourself (with Claude)?
I suggest to watch this quickstart: https://youtu.be/5019k3Bi8Wo?si=3mMcp1Zd5C3Z0Rso
Yes, I am building solo + claude code haha
I used it recently to do a major refactor and upgrade to MLflow version 3.0. Their documentation is a horrid mess right now, but the MCP server made it a breeze because I could just query the assistant to browse their codebase. It would have taken me hours longer on my own.
As for GitMCP: I think its doc URL-fetching tool is not great, but the code-searching tool is quite good. Regardless, I remain open to alternatives; I'm not stuck on this yet.
Also FYI a bunch of search quality improvements dropped this week so you might want to try again. :)
At some point we will need an aggregator of MCPs delivered with the agents; from the consumer's perspective, the cost of shopping for them individually is not worth it.