I am building a terminal-native tool for code generation, and one of the recent updates packages a local model (Qwen 2.5 Coder 7B, downloaded on first use) for users who do not want their code uploaded to third-party servers.
The initial user response to this addition has been favorable - but I have my doubts: the model is fairly basic and does not compare in quality to the online offerings.
So I am planning to improve the RAG pipeline that assembles the prompt from relevant source-file chunks, add a planning call, add a validation loop, maybe add multi-sample generation with re-ranking, etc.: all the techniques that are common and, when implemented properly, can improve output quality (roughly the loop sketched below).
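Concretely, the loop I have in mind looks roughly like this. Just a sketch: the callables (llm, retrieve, validate, score) are placeholders for whatever the tool wires up, not the current Aye Chat code.

```python
from typing import Callable, List, Tuple

def answer(
    task: str,
    llm: Callable[[str], str],                   # prompt -> completion (local 7B)
    retrieve: Callable[[str, int], List[str]],   # query, top_k -> source-file chunks
    validate: Callable[[str], Tuple[bool, str]], # patch -> (ok, feedback)
    score: Callable[[str], float],               # patch -> re-ranking score
    n_samples: int = 3,
    max_retries: int = 2,
) -> str:
    # 1. RAG: pull relevant source-file chunks into the prompt.
    context = "\n\n".join(retrieve(task, 8))

    # 2. Planning call: ask the model for a short, explicit plan first.
    plan = llm(f"Task:\n{task}\n\nRelevant code:\n{context}\n\n"
               "List the concrete edit steps, nothing else.")

    candidates = []
    for _ in range(n_samples):
        # 3. Generation call: execute the plan.
        patch = llm(f"Task:\n{task}\n\nPlan:\n{plan}\n\n"
                    f"Relevant code:\n{context}\n\nProduce the patch.")

        # 4. Validation loop: run checks (tests, linters) and ask for a fix on failure.
        for _ in range(max_retries):
            ok, feedback = validate(patch)
            if ok:
                break
            patch = llm(f"The previous patch failed validation:\n{feedback}\n\n"
                        f"Fix this patch:\n{patch}")
        candidates.append(patch)

    # 5. Multi-sample re-ranking: keep the best-scoring candidate.
    return max(candidates, key=score)
```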
So - the question: I believe (hope?) that with all of those pieces in place, the 7B can be pushed to roughly the quality of a 20B model. Do you agree that's possible, or do you think it would be wasted effort and that kind of improvement just won't happen?
The source is here - give it a star if you like what you see: https://github.com/acrotron/aye-chat
acro-v•1h ago
Post: https://cline.bot/blog/why-cline-doesnt-index-your-codebase-...
That post, however, does not apply to the offline processing use case. Here are the three main problem points they are trying to solve:
1. Code Doesn't Think in Chunks
But then the post describes following semantic links through imports, etc. -> that technique is still hierarchical chunking, and I am planning to implement it as well: it's straightforward (a sketch of what I mean is below, after this list).
2. Indexes Decay While Code Evolves
This is just not true: there are multiple ways to solve it. One, for example, is continuous indexing at low priority in the background. Another is monitoring for file changes and reindexing only the differences, etc. I have already implemented a first iteration of this, so the index stays current (the diff-based approach is sketched below, after this list).
3. Security Becomes a Liability (and then the post goes into embeddings having to be stored somewhere)
We are talking about an offline mode of operation: nothing leaves the machine with Aye Chat, since it keeps the embedding store local, using ChromaDB with the ONNXMiniLM_L6_V2 model (also shown in the sketch below).
So, as you can see, none of those premises apply here.
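To illustrate point 1, here is a minimal sketch of what I mean by following import links as hierarchical chunking. It is deliberately naive (Python files only, regex-based import detection, project-local modules resolved by path) and is not the actual Aye Chat implementation.

```python
import re
from pathlib import Path
from typing import Dict

# Naive matcher for "import pkg.module" / "from pkg.module import x".
IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w\.]+)", re.MULTILINE)

def expand_with_imports(start: Path, root: Path, max_depth: int = 2) -> Dict[Path, str]:
    """Collect a file plus the project-local files it imports, up to max_depth hops.

    This is the 'hierarchical' part: a retrieved chunk pulls in its semantic
    neighbours (the modules it imports) instead of standing alone.
    """
    collected: Dict[Path, str] = {}

    def visit(path: Path, depth: int) -> None:
        if path in collected or not path.exists():
            return
        text = path.read_text(errors="ignore")
        collected[path] = text
        if depth == 0:
            return
        for module in IMPORT_RE.findall(text):
            # Resolve 'pkg.module' to 'pkg/module.py' under the project root;
            # third-party imports simply will not exist there and get skipped.
            visit(root / (module.replace(".", "/") + ".py"), depth - 1)

    visit(start, max_depth)
    return collected
```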
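And for points 2 and 3 together, a sketch of the diff-based reindexing into a purely local store. ChromaDB's default embedding function is the ONNX MiniLM-L6-v2 model, so nothing leaves the machine; the collection layout and the whole-file-as-one-document simplification here are illustrative, not Aye Chat's actual schema.

```python
from pathlib import Path

import chromadb

# Persistent local vector store; with no embedding function specified,
# Chroma falls back to its default ONNX MiniLM-L6-v2 model (fully offline).
client = chromadb.PersistentClient(path=".aye_index")
collection = client.get_or_create_collection("code_chunks")

def reindex_changed(root: Path) -> None:
    """Re-embed only files whose mtime changed since the last pass."""
    # What the index currently knows: doc id -> stored mtime.
    known = collection.get(include=["metadatas"])
    stored = {i: m["mtime"] for i, m in zip(known["ids"], known["metadatas"])}

    seen = set()
    for path in root.rglob("*.py"):
        doc_id = str(path)
        mtime = path.stat().st_mtime
        seen.add(doc_id)
        if stored.get(doc_id) == mtime:
            continue  # unchanged since the last indexing pass, skip re-embedding
        collection.upsert(
            ids=[doc_id],
            documents=[path.read_text(errors="ignore")],
            metadatas=[{"mtime": mtime}],
        )

    # Drop entries for files that were deleted from the working tree.
    gone = [doc_id for doc_id in stored if doc_id not in seen]
    if gone:
        collection.delete(ids=gone)
```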
And then, as part of the solution, the post claims that "context window does not matter because Claude and ChatGPT models are now into 1M context window" - but once again that does not apply to locally hosted models: I am getting a 32K context with Qwen 2.5 Coder 7B on my non-optimized setup with 8 GB of VRAM.
The main reason I think it may work is the following: answering a question involves "planning what to do" and then "doing it". Models are good at "doing it" if they are given all the necessary info, so if we offload that "planning" onto the application itself, I think it may work.
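To make that split concrete, roughly this shape: the first call only produces a plan, the application parses it and gathers the context, and the second call just executes. Prompts and helper names here are made up for illustration; this is not the actual implementation.

```python
from typing import Callable, List

def plan_then_do(
    task: str,
    llm: Callable[[str], str],               # prompt -> completion (local 7B)
    fetch_files: Callable[[List[str]], str], # file paths -> concatenated contents
) -> str:
    """Split 'planning what to do' from 'doing it' across two model calls."""
    # Call 1 (planning): short, structured output that the application can check.
    plan = llm(
        "You are planning a code change. Task:\n"
        f"{task}\n\n"
        "Reply with one step per line, each starting with the file to touch."
    )

    # The application, not the model, decides what context to load from the plan.
    files_needed = [line.split()[0] for line in plan.splitlines() if line.strip()]
    context = fetch_files(files_needed)

    # Call 2 (doing): the model now has everything it needs and just executes.
    return llm(
        f"Task:\n{task}\n\nPlan:\n{plan}\n\nRelevant files:\n{context}\n\n"
        "Apply the plan and output the updated code."
    )
```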