Native macOS voice layer for developer AI workflows

Voice commands, cleaned up for your LLM.

Wispkin turns rough spoken thoughts into clear, ready-to-send prompts. Use your own AI provider, choose the models that fit your token budget, and keep control of the path between your microphone and your tools.

Bring your own provider keys. No Wispkin intermediary server required.

Spoken command

uh make the auth thing less flaky and explain the redirect bug

Wispkin output

Investigate the flaky authentication flow and explain why redirects fail intermittently.

BYO models: Pick smaller models for low-token cleanup.
Menu bar native: Capture thoughts without switching apps.
No proxy layer: Requests go to the provider you configure.

Why Wispkin

Speak naturally. Send clearly. Stay in flow.

01

Built for developer prompts

Describe a bug, outline a refactor, draft a terminal command, or shape an LLM prompt without dictating every comma perfectly.

02

Choose your AI path

Configure transcription and correction providers independently. Use smaller models for low-token cleanup or larger models when quality matters most.
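As a hypothetical sketch of what independent configuration could look like, a settings file might pair a transcription model with a separate, smaller correction model. The keys, file layout, and model names below are assumptions for illustration, not Wispkin's actual format:

```json
{
  "transcription": {
    "provider": "openai",
    "model": "whisper-1",
    "apiKey": "sk-..."
  },
  "correction": {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "apiKey": "sk-..."
  }
}
```

Because the two stages are independent, transcription and cleanup can go to different providers, and either can be tuned for cost or quality on its own.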

03

Works where your cursor is

Trigger Wispkin with a global hotkey and send cleaned text back to your active editor, chat window, terminal, or browser.

04

Designed to reduce risk

Wispkin does not insert an extra hosted assistant layer between your voice, your code-adjacent prompts, and the AI provider you select.

Security-conscious by design

No extra company in the middle.

Many AI tools add another hosted service between your machine and the model. Wispkin is designed around user-controlled providers, including bring-your-own API keys and future local model workflows, so you can tune quality, latency, and token use.

Your chosen provider may still process audio and text under its own terms. Wispkin keeps that boundary clear instead of hiding it behind a bundled account.

Read the full privacy policy

How it works

Three steps from thought to prompt.

  1. Press the hotkey

    Start recording from any macOS app without breaking your current focus.

  2. Speak the rough version

    Wispkin transcribes and cleans the command while preserving your intent.

  3. Send polished text

    The cleaned prompt appears at your cursor or falls back to the clipboard.
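The three steps above can be sketched as a toy pipeline. This is illustrative Python, not Wispkin's implementation: the filler-word filter stands in for the LLM correction step, and the delivery function only models the cursor-or-clipboard decision.

```python
# Toy sketch of the Wispkin flow: transcribe -> clean -> deliver.
# The real correction step would call a configured LLM provider.

def clean(raw: str) -> str:
    """Stand-in for LLM cleanup: strips common spoken filler words."""
    fillers = {"uh", "um", "like"}
    return " ".join(w for w in raw.split() if w.lower() not in fillers)

def deliver(cleaned: str, cursor_available: bool) -> str:
    """Step 3: insert at the cursor when possible, else fall back to the clipboard."""
    return "cursor" if cursor_available else "clipboard"

raw = "uh make the auth thing less flaky"
print(clean(raw))  # -> "make the auth thing less flaky"
```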

For developers building with AI every day

Make voice a practical interface for your LLM workflow.