Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
232•isitcontent•14h ago•25 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
332•vecti•17h ago•145 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
290•eljojo•17h ago•177 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•14h ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
91•antves•1d ago•66 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•2h ago•0 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
17•denuoweb•1d ago•2 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•2h ago•1 comment

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
25•dchu17•19h ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
47•nwparker•1d ago•11 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
151•bsgeraci•1d ago•63 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•4h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
17•NathanFlurry•22h ago•8 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
13•keepamovin•5h ago•5 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•19h ago•7 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•7h ago•0 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
172•vkazanov•2d ago•49 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•8h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
2•rs545837•9h ago•1 comment

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
4•rahuljaguste•14h ago•1 comment

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: FastLog: 1.4 GB/s text file analyzer with AVX2 SIMD

https://github.com/AGDNoob/FastLog
5•AGDNoob•10h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
12•KevinChasse•19h ago•16 comments

Show HN: Gohpts tproxy with arp spoofing and sniffing got a new update

https://github.com/shadowy-pycoder/go-http-proxy-to-socks
2•shadowy-pycoder•11h ago•0 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
9•sawyerjhood•20h ago•0 comments

Show HN: I built a directory of $1M+ in free credits for startups

https://startupperks.directory
4•osmansiddique•12h ago•0 comments

Show HN: A Kubernetes Operator to Validate Jupyter Notebooks in MLOps

https://github.com/tosin2013/jupyter-notebook-validator-operator
2•takinosh•12h ago•0 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
568•deofoo•5d ago•166 comments

Show HN: 33rpm – A vinyl screensaver for macOS that syncs to your music

https://33rpm.noonpacific.com/
3•kaniksu•13h ago•0 comments

Show HN: Gerbil – an open source desktop app for running LLMs locally

https://github.com/lone-cloud/gerbil
37•lone-cloud•2mo ago
Gerbil is an open source app that I've been working on for the last couple of months. Development is now largely done and I'm unlikely to add any more major features. Instead I'm focusing on bug fixes, small QoL features and dependency upgrades.

Under the hood it runs llama.cpp (via koboldcpp) backends and allows easy integration with popular modern frontends like Open WebUI, SillyTavern, ComfyUI, StableUI (built-in) and KoboldAI Lite (built-in).

Why did I create this? I wanted an all-in-one solution for simple text and image-gen local LLMs. I got fed up with needing to manage multiple tools for the various LLM backends and frontends. In addition, as a Linux Wayland user I needed something that would work and look great on my system.
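For a concrete picture of the plumbing (this is not code from the Gerbil repo; the port, path, and model name are assumptions), here is a minimal Python sketch of how a frontend, or any script, could talk to a koboldcpp/llama.cpp backend once it exposes an OpenAI-compatible endpoint locally:

    # Hedged sketch: query a locally running koboldcpp/llama.cpp backend through an
    # OpenAI-compatible chat endpoint. The port (5001) and model name are assumptions;
    # adjust them to whatever your backend reports after launching.
    import json
    import urllib.request

    BASE_URL = "http://localhost:5001/v1"  # assumed default local port

    payload = {
        "model": "local-model",            # placeholder; local servers often ignore this
        "messages": [
            {"role": "user", "content": "Summarize what a llama.cpp backend does."}
        ],
        "max_tokens": 200,
    }

    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)

    print(reply["choices"][0]["message"]["content"])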

Comments

throwaway81998•2mo ago
Serious question, not a "what's the point of this" shitpost... My experience with local LLMs is limited.

Just installed LM Studio on a new machine today (2025 Asus ROG Flow Z13, 96GB VRAM, running Linux). Haven't had the time to test it out yet.

Is there a reason for me to choose Gerbil instead? Or something else entirely?

A4ET8a8uTh0_v2•2mo ago
Not OP, but I am running Ollama as a testing ground for various projects (separately from my GPT sub).

<< Is there a reason for me to choose Gerbil instead? Or something else entirely?

My initial reaction is positive, because it seems to integrate everything without sacrificing the ability to customize it further if need be. That said, I have not tested it yet, but now I will.

lone-cloud•2mo ago
Holy, your machine is a beast. 96GB of VRAM is pretty insane. I've been running a single 16GB VRAM AMD GPU. At the bottom of Gerbil's readme I listed out my setup, where I use a 27B text gen model (Gemma 3), but you'll be able to use much larger models and everything will run super fast.

Now as for your question, I started out with LM Studio too, but the problem is that you'll need to juggle multiple apps if you want to do text gen or image gen, or if you want to use a custom front-end. As an example, my favorite text gen front-end is "Open WebUI", which Gerbil can automatically set up for you (as long as you have Python's uv pre-installed). Gerbil will allow you to run text, image and video gen, as well as set up (and keep updated) any of the front-ends that I listed in my original post. I could be wrong, but I'm not sure if LM Studio can legally integrate GPL-licensed software in the same way that Gerbil can, because it's a closed source app.
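For anyone who wants to reproduce the Open WebUI part by hand, a hedged sketch of that kind of orchestration follows; the uvx invocation and environment variable names are assumptions based on Open WebUI's documentation, not Gerbil's actual code:

    # Hedged sketch, not Gerbil's implementation: launch Open WebUI through uv and
    # point it at a local OpenAI-compatible backend (e.g. the one Gerbil starts).
    # Command, flags, and env var names are assumptions; check the Open WebUI docs.
    import os
    import subprocess

    env = os.environ.copy()
    env["OPENAI_API_BASE_URL"] = "http://localhost:5001/v1"  # assumed backend address
    env["OPENAI_API_KEY"] = "none"                           # local backends usually ignore the key

    # uvx runs a tool in an ephemeral environment; uv must already be installed.
    subprocess.run(
        ["uvx", "--python", "3.11", "open-webui@latest", "serve", "--port", "8080"],
        env=env,
        check=True,
    )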

throwaway81998•2mo ago
Thanks for the reply, I'll give Gerbil a try.

WillAdams•2mo ago
The big feature I would like to see is a way to easily interact with the content of the local filesystem --- I have a prompt for renaming scans based on parsing their content which I've been using in Copilot --- recent changes require that I:

- launch Copilot

- enter a prompt to get it into Copilot Pages mode

- click a button to actually get into that mode

- paste in the prompt

- drag in 20 files

- wait for them to upload

- click the button to process the prompt on the uploaded files

- quit Copilot, launch Copilot, delete the conversation, quit, launch Copilot and have it not start, which then allows repeating from the beginning

It would be much easier if I could just paste in the prompt specifying a folder full of files for it to run on, then clear that folder out for the next day's files and repeat.

Would that be something which your front-end could do? If not, is there one which could do that now? (apparently jan.ai has something like this on their roadmap for 0.8)
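For illustration only, a minimal sketch of the folder-based renaming loop described above, assuming a local OpenAI-compatible endpoint and plain-text files; the endpoint, model name, and folder are placeholders, not something Copilot, jan.ai or Gerbil actually ships:

    # Hedged sketch of the "point it at a folder and rename everything" workflow.
    # Assumes a local OpenAI-compatible server and plain-text scans; real PDFs would
    # need an extra text-extraction step, and new_name should be sanitized in practice.
    import json
    import urllib.request
    from pathlib import Path

    BASE_URL = "http://localhost:5001/v1"   # assumption
    INBOX = Path("scans_to_rename")         # hypothetical folder

    def suggest_name(text: str) -> str:
        payload = {
            "model": "local-model",
            "messages": [
                {"role": "system", "content": "Reply with a short, filesystem-safe filename (no extension)."},
                {"role": "user", "content": text[:4000]},
            ],
            "max_tokens": 30,
        }
        req = urllib.request.Request(
            f"{BASE_URL}/chat/completions",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"].strip()

    for path in INBOX.glob("*.txt"):
        new_name = suggest_name(path.read_text(errors="ignore"))
        path.rename(path.with_name(f"{new_name}{path.suffix}"))
        print(f"{path.name} -> {new_name}{path.suffix}")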

lone-cloud•2mo ago
I believe what you're describing is outside the scope of Gerbil. Gerbil is not an LLM front-end, but Gerbil will run your LLM and seamlessly integrate (orchestrate) it with a custom front-end from the list in my original message. I believe this functionality will need to live in a custom front-end. I'm curious how jan.ai is planning on handling this. I'm guessing they're writing their own custom front-end, which is probably tightly integrated with their system.

WillAdams•2mo ago
Very hopeful of the multi-file stuff from jan.ai --- in the meantime, it's easier using Copilot for this than:

- isolating 50 files at a time

- dragging them into Adobe Acrobat

- closing each w/ ctrl w

- tapping enter to confirm saving

- typing the Invoice ID

- repeating until all 50 have been done, then remembering to quit Adobe Acrobat so as to re-launch it and repeat (can't leave it running, because there is a (reported) bug where after doing this several times, it stops saving)

- running a batch file made from a concatenated column in a spreadsheet to rename the files

The next question is when there will be an LLM front-end which can:

- open each file in a folder, parsing the content

- open each file in a PDF viewer

- fill in the entry fields of a Java application

- wait for the user to review both windows, if necessary, correct/update what was entered and save, then repeat for the next file

Ah well, the job is secure, even when that happens (though maybe hours would be cut back?) --- the big question is when LLMs will be reliable enough that human review is no longer viewed as worth the expense of a salary.

tell_me_whai•2mo ago
Hey, funny finding your comment, as I've actually recently been developing a CLI app to improve LLM integration in my filesystem. Not sure what you are doing with your files, but maybe it could be useful for you too! You can check it out on my GitHub (https://github.com/gael-vanderlee/whai) and hopefully that can help you with your use case. Look into roles in particular, as they allow you to save and reuse specific workflows.

radial_symmetry•2mo ago
I like that it has image generation without all the complication of ComfyUI. Can it load LoRA?

lone-cloud•2mo ago
Gerbil's built-in image generation is based on "StableUI" and I also prefer its super simple UI. Yes, you can load your own LoRA from the "Image Generation" tab. Gerbil also includes an optional ComfyUI integration in the settings for very advanced users. Its graph-based UI is a bit too advanced for me personally.

tell_me_whai•2mo ago
Does this allow for mixing LLMs and image gen? I find LLMs really useful for generating image prompts that diffusion models understand (which can be tedious to do manually), although you need very detailed system prompts to teach the LLM what image gen models expect.

lone-cloud•2mo ago
That's how the pros do it. Yes, you can load both a text and an image gen model at the same time. Needless to say, you'll need a very beefy GPU (or GPUs) to do this, so I wouldn't recommend it unless you know exactly what you're doing; generally you'll want to max out your VRAM on one model at a time for the highest quality results. Open WebUI and SillyTavern allow both text and image gen from the same UI, although I wouldn't recommend it for advanced users. Otherwise Gerbil will give you multiple pages to toggle through via the titlebar dropdown.
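As a rough illustration of that two-stage flow (the text model writes the diffusion prompt, the image backend renders it), assuming both are reachable over local HTTP; the URLs and the A1111-style /sdapi path are assumptions, not Gerbil's documented interface:

    # Hedged sketch of chaining text gen and image gen: a local LLM expands a short
    # idea into a detailed diffusion prompt, which is then sent to an image endpoint.
    # Both URLs and the /sdapi path are assumptions; adapt them to your own setup.
    import base64
    import json
    import urllib.request

    TEXT_API = "http://localhost:5001/v1/chat/completions"   # assumed text backend
    IMAGE_API = "http://localhost:5001/sdapi/v1/txt2img"     # assumed A1111-style image endpoint

    def post_json(url: str, payload: dict) -> dict:
        req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # 1. Ask the text model to write a detailed diffusion prompt.
    chat = post_json(TEXT_API, {
        "model": "local-model",
        "messages": [
            {"role": "system", "content": "Rewrite the user's idea as a detailed, comma-separated Stable Diffusion prompt."},
            {"role": "user", "content": "a gerbil coding at a tiny desk"},
        ],
        "max_tokens": 120,
    })
    image_prompt = chat["choices"][0]["message"]["content"]

    # 2. Feed that prompt to the image backend and save the first result.
    result = post_json(IMAGE_API, {"prompt": image_prompt, "steps": 20, "width": 512, "height": 512})
    with open("gerbil.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))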