Use code SHOWHN100 to download the installer for free (free for the next few days) from the link. Drag the app into the Applications folder and run it.
WHAT IS AHAI:
ahai is a 100% local, private Mac app for finding ideas scattered across markdown files (for me: code repo READMEs, Obsidian notes, clipped web articles, and research paper abstracts in Obsidian).
TECH DETAILS:
- GUI - PySide6 (Qt for Python)
- AI in app - mlx_lm
- backend - Python
- AI coding assistant - Claude Opus 4.5, Claude Code, Claude Code on web
- AI assistants for content - Grok 4.1 Thinking, Gemini 3 Pro, Nano Banana 2 Pro
- System requirements - Mac with Apple Silicon, minimum 16GB unified RAM
BACKSTORY:
I have been researching how to balance quality and acceleration when building with AI. Full-on vibe coding is not for me, and going fully manual doesn't make sense either. I finally found a formula that worked, and wanted an idea to work on end to end. I had lots of prototypes in my git repos, with READMEs describing the project ideas. I had jotted down ideas and clipped research papers into Obsidian notes (also markdown files). In total, over 13k markdown files - it was impossible to find the files containing ideas, or the ideas within them, with any heuristic. I needed AI. So I wrote a script to do this using mlx models on my Mac. It worked so well that I decided to make it my first product. That is how ahai was born.
HOW IT WORKS:
- You point ahai at a folder; it finds the markdown files, then uses AI (an mlx_lm model) to decide whether each file contains ideas and, if so, to extract them with a title and a brief description (see the sketch after this list).
- Clicking an idea takes you to the rendered markdown of its source file.
- You can then reorder the ideas, hide some of them, etc., and export the list to Markdown, HTML, or JSON.
- Only one folder can be processed at a time. You can pause and resume folders.
- The first time a model is used, if it is not already on your machine, it will take some time to download. Be patient.
- You can change the model in settings if you know what you are doing - it must be an mlx_lm-compatible model to work.
- All files go to an output folder that you can also configure in settings. Switching between output folders lets you manage different kinds of content in different places - if a folder already contains processed output, switching away and back will pick up where you left off.
- Known issue: the extracted ideas include false positives and false negatives. This is inherent to AI generation and cannot be fully avoided, though it can be improved with prompting. Even with these errors, I find the app quite useful.
- Known issue: processing folders takes time. This is tuned to some degree, but cannot be avoided entirely. As noted above, you can always pause and resume.
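For the curious, the core loop looks roughly like the simplified sketch below, using the public mlx_lm load/generate API. The model id, prompt wording, and "IDEA:" output convention here are placeholders - the real prompts and parsing in the app are more involved.

  # Simplified sketch of the scan -> classify -> extract loop via mlx_lm.
  # Model id, prompt, and the "IDEA:" convention are placeholders only.
  from pathlib import Path
  from mlx_lm import load, generate

  # Any mlx_lm-compatible model works here (see the settings note above).
  model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

  def extract_ideas(folder):
      for md in Path(folder).expanduser().rglob("*.md"):
          text = md.read_text(errors="ignore")[:4000]  # cap very long notes
          question = (
              "If this note contains a project idea, reply exactly as "
              "'IDEA: <title> - <one line description>'. Otherwise reply 'NONE'.\n\n"
              + text
          )
          prompt = tokenizer.apply_chat_template(
              [{"role": "user", "content": question}],
              tokenize=False, add_generation_prompt=True,
          )
          reply = generate(model, tokenizer, prompt=prompt, max_tokens=80)
          if reply.strip().startswith("IDEA:"):
              yield md, reply.strip()

  for path, idea in extract_ideas("~/notes"):
      print(path, idea)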
HOW IT IS DIFFERENT:
- Most AI apps and buzz focus on complex problems that only the best frontier models can solve, if at all. I am interested in what kinds of useful problems small local models can solve reliably. This app solves a niche problem using smaller local models very well. My upcoming apps will have the same focus.
- A lot of work has gone into benchmarking different models on markdown files to see which ones work best for a given size of machine (the app requires a minimum of 16GB RAM, and depending on the machine it decides which model to use as the default - a sketch of this heuristic follows this list). A tech/power user can always change the model in settings - it just has to be an mlx_lm-compatible model that fits in their RAM (within about half of total RAM).
- I have been using AI for coding, research, evals, and so on, but until recently it was hard to get anything working end to end as an indie dev - from concept to dev to marketing. With Claude Code/Claude Code on web/Claude Opus 4.5, plus Gemini 3 Pro/Nano Banana 2 Pro/NotebookLM deep research, I was able to build this app - with diligence in high-risk parts and more trust in low-stakes pieces, verifying everything and questioning anything suspicious - from concept to launch in 10 days.
- I think local, private experiences are going to become increasingly relevant, as proprietary models and AI-based apps suck in our data and can misuse/abuse/expose it in many ways. So I believe this is a good space to focus on - local, private Mac apps using local models. This is my first app in that space.
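Roughly, the default-model selection works like the simplified sketch below - the half-of-RAM budget is real, while the model tiers shown are placeholders rather than the exact models the app ships with.

  # Simplified default-model picker. The half-of-total-RAM budget is
  # real; the specific model tiers are placeholders.
  import subprocess

  def total_ram_gb():
      # macOS reports total physical memory in bytes via sysctl.
      return int(subprocess.check_output(["sysctl", "-n", "hw.memsize"])) / 1024**3

  def default_model():
      budget_gb = total_ram_gb() / 2  # model should fit in ~half of RAM
      if budget_gb >= 16:
          return "mlx-community/Qwen2.5-14B-Instruct-4bit"  # placeholder tier
      if budget_gb >= 8:
          return "mlx-community/Qwen2.5-7B-Instruct-4bit"   # placeholder tier
      return "mlx-community/Llama-3.2-3B-Instruct-4bit"     # placeholder tier

  print(default_model())  # e.g. the 7B tier on a 16GB machine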
PRICING:
It is free with the code SHOWHN100 for this community for now; the code will be revoked at some point. The regular price is $19+, with a suggested $29 - a one-time fee, no subscription, with all future updates included. I asked a bunch of top models to price the app by describing it, and they came up with this ballpark. I personally felt this was too pricey, but they also said a lower price would signal poor quality to users. I am open to changing it if there is evidence this isn't the right price point.
It is still rough around the edges. Please let me know about any issues and I will prioritize and fix them.
Please try it out and let me know any questions. AMA on my tech stack, process, anything.