I've had this in mind for a while and finally built it: it shows system information like regular neofetch, but with extra features for people running local LLMs (Ollama, llama.cpp, etc.).
For example:
- How much VRAM does your GPU have, and which vendor is it (NVIDIA, AMD, Intel, Apple M series)?
- How many billions of parameters can your machine comfortably run (is 70B or 13B more realistic)?
- What each GGUF quantization trades off (Q4_K_M vs. Q8_0, and so on) — see the sketch after this list
- A comparison of Ollama / llama.cpp / vLLM / LM Studio
- A disk speed test, plus JSON/Markdown export
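To make the parameter estimate concrete, here's a rough sketch of the standard back-of-the-envelope math involved (not the tool's actual code; the bits-per-weight figures are approximate community numbers, and the overhead constant is an assumption):

    # Sketch: estimate the largest model (in billions of parameters) that fits
    # in a given amount of VRAM for common GGUF quantizations.
    APPROX_BITS_PER_WEIGHT = {
        "Q4_K_M": 4.8,   # ~4.8 bpw including quantization metadata (approximate)
        "Q8_0": 8.5,     # ~8.5 bpw (approximate)
        "F16": 16.0,
    }

    def max_params_billions(vram_gb: float, quant: str, overhead_gb: float = 1.5) -> float:
        """Approximate model size that fits in VRAM.

        overhead_gb reserves room for the KV cache, activations, and the
        runtime; the real requirement grows with context length, so treat
        the result as an optimistic ceiling, not a guarantee.
        """
        usable_bytes = max(vram_gb - overhead_gb, 0) * 1024**3
        bytes_per_param = APPROX_BITS_PER_WEIGHT[quant] / 8
        return usable_bytes / bytes_per_param / 1e9

    for quant in ("Q4_K_M", "Q8_0", "F16"):
        print(f"24 GB VRAM, {quant}: ~{max_params_billions(24, quant):.0f}B params")

On a 24 GB card this works out to roughly a 40B model at Q4_K_M but only about 23B at Q8_0, which is exactly the kind of trade-off the quantization comparison is meant to surface.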
Simple installation: pip install llm-neofetch-plus
llm-neofetch -d 3 ← the detailed mode, which also shows the suggestions, etc.
GitHub: https://github.com/HFerrahoglu/llm-neofetch-plus
If anyone tries it, I'd love to hear whether you liked it and what I should change. Thanks!
akssassin907•1h ago
The quantization comparison is the feature I'd use most. It's one of those things that sounds simple but in practice nobody wants to dig through benchmarks just to figure out whether Q4 or Q8 is worth the extra memory on their specific machine.
Does it factor in what else is running in the background when estimating how much your machine can handle? That number can shift a lot depending on what else has memory tied up.
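For reference, a headroom-aware estimate would need to subtract memory that's already in use rather than assume the card's full capacity. On NVIDIA GPUs both numbers are available from nvidia-smi; a minimal sketch (assuming nvidia-smi is on PATH, and not claiming this is what the tool does):

    # Sketch: read used vs. total VRAM so an estimate reflects what is
    # actually free right now, not the card's nominal capacity.
    import subprocess

    def free_vram_gib() -> float:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        used_mib, total_mib = (int(x) for x in out.splitlines()[0].split(","))
        return (total_mib - used_mib) / 1024  # MiB -> GiB

    print(f"Free VRAM right now: {free_vram_gib():.1f} GiB")

Feeding that free figure (instead of total VRAM) into the estimate above would answer the "what else is running" concern.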