
Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
255•isitcontent•18h ago•27 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
354•vecti•20h ago•160 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
10•sandGorgon•2d ago•2 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
327•eljojo•21h ago•198 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
79•phreda4•18h ago•14 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
93•antves•2d ago•70 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•2h ago•1 comment

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
3•nmfccodes•36m ago•1 comment

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
6•sakanakana00•3h ago•1 comment

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•3h ago•1 comment

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
26•dchu17•23h ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
50•nwparker•1d ago•11 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
152•bsgeraci•1d ago•64 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
17•denuoweb•2d ago•2 comments

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
19•NathanFlurry•1d ago•9 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
2•melvinzammit•5h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•6h ago•2 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
10•michaelchicory•7h ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
17•keepamovin•8h ago•5 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
6•rahuljaguste•17h ago•1 comment

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•23h ago•7 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
23•JoshPurtell•1d ago•5 comments

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•1d ago•2 comments

Show HN: Fitspire – a simple 5-minute workout app for busy people (iOS)

https://apps.apple.com/us/app/fitspire-5-minute-workout/id6758784938
2•devavinoth12•11h ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
4•ambitious_potat•12h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
2•rs545837•13h ago•1 comment

Show HN: A password system with no database, no sync, and nothing to breach

https://bastion-enclave.vercel.app
12•KevinChasse•23h ago•16 comments

Show HN: GitClaw – An AI assistant that runs in GitHub Actions

https://github.com/SawyerHood/gitclaw
10•sawyerjhood•1d ago•0 comments

Show HN: Craftplan – I built my wife a production management tool for her bakery

https://github.com/puemos/craftplan
568•deofoo•5d ago•166 comments

Show HN: I'm tired of my LLM bullshitting. So I fixed it

5•BobbyLLM•2w ago
As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

They lie. Confidently. ALL THE TIME.

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (e.g., OWUI) and your backend (llama.cpp + llama-swap). Local-first, but it should talk to anything OpenAI-compatible if you point it there (note: experimental, so YMMV).

LC is a glass box that makes the stack behave like a deterministic system instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”
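
To give a feel for the wiring before the examples: anything that speaks the OpenAI API can sit in front of it. Here's a minimal client sketch; the host, port, and model name are placeholders I made up, not the project's defaults.

```python
# Hypothetical wiring sketch -- not code from the repo. Host, port, and model
# name are placeholders; point them at wherever your stack actually listens.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your llama-conductor endpoint (assumed)
    api_key="not-needed",                 # local-first: the key is a formality
)

resp = client.chat.completions.create(
    model="local-model",  # whatever llama-swap routes to on the backend
    messages=[{"role": "user", "content": "yo, what did the Commodore C64 retail for in 1982?"}],
)
print(resp.choices[0].message.content)
```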

Three examples:

1. KB mechanics (markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

>>attach <kb> - attaches a KB folder

>>summ new - generates SUMM_.md files with SHA-256 provenance baked in + moves the original to a sub-folder

Now, when you ask something like:

> “yo, what did the Commodore C64 retail for in 1982?”

...it answers from the attached KBs only.

If the fact isn’t there, it tells you - explicitly - instead of winging it. E.g.:

"The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then."

[Confidence: medium | Source: Mixed]

No vibes. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.

Then, if you're happy with the summary, you can:

>>move to vault
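
The provenance part is deliberately boring. Here's a sketch of the idea behind >>summ new - not the actual llama-conductor code; the header format, file naming, and "originals" sub-folder are my guesses.

```python
# Sketch of the SUMM provenance idea -- not the shipped code. The header
# format, file naming, and "originals" sub-folder are assumptions.
import hashlib
import shutil
from pathlib import Path

def summ_new(doc: Path, summary_text: str) -> Path:
    # bake the checksum of the source doc into the summary header
    digest = hashlib.sha256(doc.read_bytes()).hexdigest()
    summ = doc.parent / f"SUMM_{doc.stem}.md"
    summ.write_text(
        f"<!-- source: {doc.name} | sha256: {digest} -->\n\n{summary_text}\n",
        encoding="utf-8",
    )
    # move the original into a sub-folder so the KB folder holds only summaries
    originals = doc.parent / "originals"
    originals.mkdir(exist_ok=True)
    shutil.move(str(doc), str(originals / doc.name))
    return summ
```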

2. Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources.

* no chat history

* no filesystem KBs

* no Vodka

* Vault-only grounding (Qdrant)

It runs a triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:

The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault

FACTS_USED: NONE

[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log. Go look at it any time you want.

The flow is basically:

Attach KBs → SUMM → Move to Vault → Mentats.

No mystery meat. No “trust me bro, embeddings.”
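
In code, the shape of it is roughly this - a sketch, not the real pipeline; vault_search and llm are stand-ins for the Qdrant lookup and the actual model calls.

```python
# Rough shape of the Mentats triple-pass -- a sketch, not the shipped pipeline.
# `vault_search` and `llm` are stand-ins; the real thing grounds against Qdrant
# and writes every step to mentats_debug.log.
def mentats(question: str, vault_search, llm) -> str:
    facts = vault_search(question)  # Vault-only: no chat history, no filesystem KBs
    if not facts:
        # nothing relevant in the Vault -> refuse instead of improvising
        return ("FINAL_ANSWER:\nThe provided facts do not contain information "
                "about this topic.\nSources: Vault\nFACTS_USED: NONE")
    draft = llm("thinker", question, facts)   # pass 1: answer from facts only
    critique = llm("critic", draft, facts)    # pass 2: flag unsupported claims
    return llm("thinker", critique, facts)    # pass 3: revise against the critique
```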

3. Vodka: deterministic memory on a potato budget

Potato PCs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute.

* !! stores facts verbatim (JSON on disk)

* ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)

* CTC (Cut The Crap) hard-caps context (last N messages + char cap) and creates a concatenated summary (not LLM) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
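
The whole trick fits in a page of code. A sketch of the idea - the field names, TTL, touch limit, and cap defaults below are mine, not the project's:

```python
# Sketch of the Vodka idea -- verbatim facts with TTL/touch limits, plus a hard
# context cap. All names and defaults here are assumptions, not the real config.
import json
import time
from pathlib import Path

STORE = Path("vodka.json")
TTL = 14 * 86400   # seconds; TTL is configurable, this default is mine
MAX_TOUCHES = 5    # assumed "touch limit"

def remember(key: str, value: str) -> None:  # the `!!` path
    db = json.loads(STORE.read_text()) if STORE.exists() else {}
    db[key] = {"value": value, "expires": time.time() + TTL, "touches": 0}
    STORE.write_text(json.dumps(db, indent=2))

def recall(key: str) -> str | None:  # the `??` path
    db = json.loads(STORE.read_text()) if STORE.exists() else {}
    fact = db.get(key)
    if not fact or fact["expires"] < time.time() or fact["touches"] >= MAX_TOUCHES:
        return None  # expired or worn out: say so instead of guessing
    fact["touches"] += 1
    STORE.write_text(json.dumps(db, indent=2))
    return fact["value"]  # verbatim, never paraphrased by a model

def cut_the_crap(messages: list[dict], last_n: int = 4, char_cap: int = 2000) -> list[dict]:
    kept = messages[-last_n:]  # hard cap: last N messages...
    while kept and sum(len(m["content"]) for m in kept) > char_cap:
        kept = kept[1:]        # ...trimmed further to respect the char cap
    return kept
```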

There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

Primary (Codeberg): https://codeberg.org/BobbyLLM/llama-conductor

Mirror (GitHub): https://github.com/BobbyLLM/llama-conductor

PS: Sorry about the AI slop image. I can't draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting or language is weird, now you know why.

Comments

gogopromptless•2w ago
How many facts do you have after using your tool for 2 weeks?
BobbyLLM•2w ago
Well, I've been using it for more than two weeks (though I did just spend a month in Tokyo), so ... not sure how to answer that :)

Do you mean generally though - how many facts does it extract from a typical document of X length? Or do you mean what my own personal corpus currently is?

gogopromptless•2w ago
Size of your personal corpus is a rough signal of usefulness. If you've been using it a while and you have 500 facts, that's very different from 5000 facts.

My crappy personal system is a Telegram channel, "Today I Learned", that grows at roughly 5 messages per day. The search is obviously much worse than your tool.

The no-rewrites design for fact storage is an approach I haven't seen in any of my friends' agent memory setups, so that difference piqued my curiosity about how well it's working in practice.

BobbyLLM•2w ago
Got it. Answer in 3 parts.

If you're talking about the "Fastrecall" part of it, it's clamped to around 3000 entries. Beyond that...I mean...it's just a JSON file. How much do you hate yourself? lol :)

I use mine as intended (well, by me) as a short-term memory store with a TTL of 14 days. Right now it has...350ish stored facts with different expiration dates. You can play around with the settings and max resets (touches), but if ctx > max_items, the janitor rolls in and the oldest get pruned anyway.

If you're talking about the chat bloat control part (cut the crap), I set mine to keep the last 2 user/assistant pairs and a soft cap of 2000 characters. That middle part obviously decays during the chat. I like to keep my chats fast.

If you're asking about the KB (attach) system: generally speaking, the SUMM function seems to generate a summary of up to about 1500 characters. That's what...250 words? Short and sweet...because again, potato PC + smart grep = don't be stupid.

In theory, the KB thing is limited by your SSD and pain tolerance. I have about 1500 items I query against and it's still in the sub-secondish range.

Mentats / vault? Scales via Qdrant.

Does that answer it? None of this shit is enterprise grade; it's purely for personal use.
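
For the curious, the janitor is conceptually just this - a sketch with made-up names and defaults, not the shipped code:

```python
# Conceptual sketch of the janitor -- made-up names/defaults, not the real code.
import json
import time
from pathlib import Path

def janitor(store_path: Path, max_items: int = 3000) -> None:
    db = json.loads(store_path.read_text()) if store_path.exists() else {}
    now = time.time()
    # expired facts go first
    db = {k: v for k, v in db.items() if v["expires"] > now}
    # still over the clamp? keep only the freshest (latest-expiring) entries
    if len(db) > max_items:
        freshest = sorted(db.items(), key=lambda kv: kv[1]["expires"], reverse=True)
        db = dict(freshest[:max_items])
    store_path.write_text(json.dumps(db, indent=2))
```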

reify•2w ago
gooncoding

"bro I spent all weekend in claud code. Its incredible"

"Oh nice, what did you build?"

"Dude my set up is crazy. I've got all the vercel skills. plus custom hooks for every project"

"Sick, what are you building?"

"My set up is so optimized. I'm using, like 5 instances at once"

"Hello bro, stop playing with your dick, answer my questions bro!"

BobbyLLM•2w ago
Bro, what you get up to with your bro is between you and your bro, bro.

Also, Claude said he likes me better than you, and he'd know because he's French and an old soul and shit.

PS: The concrete thing is a local control layer that makes my LLM shut up unless it can cite my docs. Plus I can run it on my p.o.s. P1000 and not line NVIDIA shareholders' pockets by buying more VRAM.

Repo’s over there if you want to kick the tires.

dankwizard•2w ago
I was sick of my AI hallucinating, so I added "Do not hallucinate" to the system prompt. Just a quick glimpse into my prompt engineering mind.
BobbyLLM•2w ago
No. Because that works about as well as telling a fat kid not to eat cake.

Prompts shape style, not epistemics.

What this does is move the problem out of “please behave” and into hard constraints.

Nice drive-by tho.

PS: You understand this is enforced outside the model, right? Or are you here just to try and dunk on someone?

BobbyLLM•2w ago
Here - an example. You can try it yourself. Pick any topic you want that has what you consider a definitive FAQ, and attach the router to something with a brain.

I asked my router: "WTF is AI Horde?" and I’m posting the full outputs because they show (a) how it behaves in different modes, and (b) how and when it’s wrong in a loud, readable way.

1) Default / YOLO mode (no KB attached). Cold question. Model argues from its own internal priors.

AI-horde is a decentralized platform where users rent or sell computational power (GPU time) to run AI models. It connects AI model developers who need processing with individuals or organizations who have idle GPU hardware. The platform uses blockchain-like tokens to manage payments and incentives. It’s not a single AI system but a marketplace for GPU resources.

Missing details: Specific token mechanics, current model library, or user onboarding flow could affect accuracy. For example, if token economics changed recently, the explanation might not reflect current conditions.

Confidence: Medium | Source: Model

My take: Meh. Sort of. Wrong specifics. Fine for a cheap guess, not fine for “I’m betting money on this.” Note: it clearly identifies "meh, dunno" parts. Call it a 4/10 for factual correctness, with a clear shoulder shrug.

2) Web-enabled mode

Same question, but allowed to pull from the web. Now it sez -

An AI-horde is a free, open-source community service that allows anyone to contribute computing power to generate AI-generated images and text. It functions as a crowdsourced distributed cluster of workers focused on image and text generation tasks [1]. This model resembles collaborative projects like Folding@home, where users share computational resources for a common goal [2].

My take: Better. Still missing details, but broadly right. Let's call it ~7/10.

3) KB mode (FAQ scraped → >>summ new → query)

Here the model is constrained to just what’s in the attached KB summaries.

AI-horde is a decentralized platform for distributed AI computing. It allows users to rent out their idle GPU hardware to run AI models, connecting developers needing processing power with volunteers who provide it through open APIs. The system uses a non-cryptocurrency-based incentive called Kudos to prioritize contributions, where workers earn Kudos for completing tasks, and these are not tradable assets but rather internal metrics of service quality and project loyalty.

Access to AI-horde is managed via API keys; anonymous usage is allowed but receives the lowest priority. The project emphasizes volunteerism and discourages commercialization of Kudos, treating its sale or exchange for money or cryptocurrency as contrary to its ethos. Participation as a worker involves installing the AI-horde client, selecting task types (e.g., image or text generation), configuring supported models, and running the worker to begin earning Kudos.

Confidence: High | Source: Docs

My take: That seems pretty solid and well source-grounded. Call it 8/10 based on docs.

Finally -

4) For fun, I then asked it to rank the three prior responses against the KB facts. Here's what it said -

[3] is most correct, scoring 9.5/10. It accurately reflects the core model: decentralized GPU renting via open APIs, Kudos as a non-crypto, non-tradable incentive tied to volunteerism, and the crowdsourced nature of contributions. It also correctly notes access via API keys and lower priority for anonymous use.

[2] is accurate but less complete, scoring 7.5/10. It correctly identifies the crowdsourced, open-source, volunteer-based nature and analogy to Folding@home. However, it omits key specifics like Kudos, API key access, and the explicit non-commercialization stance.

[1] is partially correct but misleading, scoring 6/10. It correctly describes GPU

Confidence: medium | Source: Docs

Point being: ungrounded self-eval is vibes. Grounded evaluation becomes boring and useful. Also, Qwen is an easy marker.

TL;DR

YOLO mode: plausible guesses, sometimes confidently wrong. Same old, same old.

Web mode: better, still imperfect.

KB mode: source-based answers with explicit limits/refusals

KB-grounded ranking: reproducible, source-anchored adjudication

Repeatable demo. Attach a good model and try it yourself.