I’m the creator of GHOSTYPE.
I built this because I wanted a voice input workflow that didn't feel like talking to a chatbot. I wanted something that felt like a "neural extension" of my keyboard.
It’s a macOS-native app that sits between your voice and your active window. Here is what it actually does:
The Core Input Workflow:
Push-to-Talk & Smart Send: Hold a global shortcut to speak. It detects the active app and picks the right send method for it (e.g., Cmd+Enter for Slack vs. plain Enter for Discord); there's a rough sketch of that lookup right after this list.
Inline Editing: You can format while speaking. Need a line break, a specific spelling, or a bulleted list? You just say it, and the formatting is handled inside the sentence before anything is output.
"Call Ghost": Post-processing commands (translate, polish, expand) are available immediately after speaking, before the text is typed out.
The Experimental Stuff (WIP):
1. Ghost Twin (Style Transfer): I call this a "Virtual Personality Engine" (a bit pretentious, I know). It analyzes your local writing history to build a style vector. It learns your tone, professional for emails or casual for Discord, so the output sounds like you, not a generic LLM. (Toy sketch of the idea after this list.)
Side note: I'm currently building the training UI to look like a retro CRT terminal because I miss that aesthetic.
2. Ghost Morph (Custom Skills): Trigger custom macros with a modifier key. For example, turn a raw voice thought directly into a formatted Twitter thread or a Jira ticket structure. (Also sketched after this list.)
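To make "style vector" less hand-wavy: think of it as a bundle of cheap features computed per app from your local writing samples, which then conditions the output. The real feature set is richer than this, but here's a toy Swift sketch of the shape of it:

    import Foundation

    // Toy "style vector": a few cheap features per app, computed locally.
    struct StyleVector {
        var avgSentenceLength: Double  // words per sentence
        var exclamationRate: Double    // "!" per sentence
        var emojiRate: Double          // emoji per word
    }

    func styleVector(for samples: [String]) -> StyleVector {
        let text = samples.joined(separator: " ")
        let sentences = text.split(whereSeparator: { ".!?".contains($0) })
        let words = text.split(separator: " ")
        let emoji = text.unicodeScalars.filter { $0.properties.isEmojiPresentation }

        let sentenceCount = max(Double(sentences.count), 1)
        return StyleVector(
            avgSentenceLength: Double(words.count) / sentenceCount,
            exclamationRate: Double(text.filter { $0 == "!" }.count) / sentenceCount,
            emojiRate: Double(emoji.count) / max(Double(words.count), 1)
        )
    }

A vector like that, keyed by app, is what steers the rewrite so Slack-you and email-you stay different people.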
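And a Ghost Morph skill is, at its core, a prompt template bound to a trigger. A hypothetical sketch (the names and template here are made up):

    import Foundation

    // Hypothetical shape of a Ghost Morph skill: a trigger key bound to a
    // template the raw transcript is run through before it's typed out.
    struct MorphSkill {
        let name: String
        let trigger: String   // e.g. hold Option during push-to-talk
        let template: String  // "{input}" is replaced with the transcript
    }

    let twitterThread = MorphSkill(
        name: "Twitter thread",
        trigger: "option",
        template: "Rewrite this as a numbered Twitter thread, one idea per tweet:\n\n{input}"
    )

    func prompt(for skill: MorphSkill, transcript: String) -> String {
        skill.template.replacingOccurrences(of: "{input}", with: transcript)
    }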
Privacy & Architecture:
Local-First: I’m an indie dev and literally cannot afford the server costs to store user typing data.
E2E Encrypted: Your "Ghost Twin" profile and skills are synced across devices with end-to-end encryption, so the sync layer only ever sees ciphertext; the plaintext stays on your machines. (Minimal sketch below.)
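Roughly the sync model, as a minimal CryptoKit sketch, assuming a symmetric key that only your devices ever hold; the server just stores the sealed blob:

    import CryptoKit
    import Foundation

    // Seal the local profile before it ever touches the sync server.
    func sealProfile(_ profile: Data, with key: SymmetricKey) throws -> Data {
        try ChaChaPoly.seal(profile, using: key).combined
    }

    // The other device, holding the same key, opens it back up.
    func openProfile(_ blob: Data, with key: SymmetricKey) throws -> Data {
        try ChaChaPoly.open(ChaChaPoly.SealedBox(combined: blob), using: key)
    }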
Constraints:
macOS Only: I don't own a Windows PC, so I can't build for it yet. (If any Windows devs want to port this, let me know).
It's currently in pre-launch. I threw up a landing page at https://ghostype.one if you want to follow the progress.
Happy to answer questions about the local-first architecture or the style transfer logic!