Built for developer prompts
Describe a bug, outline a refactor, draft a terminal command, or shape an LLM prompt without dictating every comma perfectly.
Native macOS voice layer for developer AI workflows
Wispkin turns rough spoken thoughts into clear, ready-to-send prompts. Use your own AI provider, choose the models that fit your token budget, and keep control of the path between your microphone and your tools.
Bring your own provider keys. No Wispkin intermediary server required.
uh make the auth thing less flaky and explain the redirect bug
Investigate the flaky authentication flow and explain why redirects fail intermittently.
Why Wispkin
Configure transcription and correction providers independently. Use smaller models for low-token cleanup or larger models when quality matters most.
Trigger Wispkin with a global hotkey and send cleaned text back to your active editor, chat window, terminal, or browser.
Wispkin adds no extra hosted assistant layer between your voice, your code-adjacent prompts, and your selected AI provider.
Security-conscious by design
Many AI tools add another hosted service between your machine and the model. Wispkin is designed around user-controlled providers, including bring-your-own API keys and future local model workflows, so you can tune quality, latency, and token use.
Your chosen provider may still process audio and text under its own terms. Wispkin keeps that boundary clear instead of hiding it behind a bundled account.
Read the full privacy policy
How it works
Start recording from any macOS app without breaking your current focus.
Wispkin transcribes and cleans the command while preserving your intent.
The cleaned prompt appears at your cursor or falls back to the clipboard.
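The three steps above can be sketched as a small pipeline. This is an illustrative sketch only: `transcribe`, `clean_prompt`, and the injected `insert_at_cursor` / `copy_to_clipboard` callbacks are hypothetical stand-ins, not Wispkin's actual API.

```python
def transcribe(audio: bytes) -> str:
    # Placeholder: a real build would call the user's chosen
    # speech-to-text provider with their own API key.
    return "uh make the auth thing less flaky and explain the redirect bug"

def clean_prompt(raw: str) -> str:
    # Placeholder: a correction model would rewrite the transcript
    # while preserving intent; here we just trim fillers and punctuate.
    for filler in ("uh ", "um "):
        raw = raw.removeprefix(filler)
    return raw[0].upper() + raw[1:] + "."

def deliver(audio: bytes, insert_at_cursor, copy_to_clipboard) -> str:
    """Transcribe and clean speech, then insert the result at the
    cursor; if insertion fails, fall back to the clipboard."""
    prompt = clean_prompt(transcribe(audio))
    if not insert_at_cursor(prompt):   # e.g. no editable field focused
        copy_to_clipboard(prompt)      # fallback keeps the text available
    return prompt
```

The key design point is the fallback: delivery never silently drops a prompt, it always lands either at the cursor or on the clipboard.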
For developers building with AI every day