Documentation

Everything you need to know about VoxyAI

Getting Started

VoxyAI is a voice dictation app for macOS that uses AI to intelligently format your speech. Simply speak naturally and VoxyAI will transform your words into properly formatted text.

Quick Start

  1. Install VoxyAI and grant the required permissions
  2. Configure your preferred AI provider (or use the free on-device options)
  3. Press fn + shift to start recording or fn + ctrl to type
  4. Speak naturally - no need to dictate punctuation
  5. Release the shortcut keys to stop - formatted text is automatically pasted

AI Providers

VoxyAI supports multiple AI providers for text formatting. Choose based on your needs:

Apple Intelligence

Free, on-device processing. Your data never leaves your Mac.

Ollama (Local Models)

Run open-source models locally. Free and private. Supports Llama, Gemma, DeepSeek, and more.

Ollama Setup Guide →

OpenAI

GPT-4o and GPT-4o-mini. Requires an API key from openai.com.

Anthropic Claude

Claude 3.5 Sonnet and Haiku. Requires an API key from anthropic.com.

Google Gemini

Gemini Pro and Flash models. Requires an API key from Google AI Studio.

Groq

Ultra-fast inference. Free tier available. Requires an API key from groq.com.

Mistral

European AI leader. Powerful models with great multilingual support. Requires an API key from mistral.ai.

Perplexity

AI with real-time web search capabilities. Great for current information. Requires an API key from perplexity.ai.

Apple Intelligence Setup

What is Apple Intelligence? Apple Intelligence is Apple's on-device AI system built into macOS. VoxyAI can use Apple Intelligence for speech-to-text transcription, giving you a free, private, and fast option that requires no API keys or downloads.

Requirements

  • macOS Sequoia 15.1 or later
  • Apple Silicon Mac (M1, M2, M3, M4 or later)
  • At least 4GB of available storage for language models
  • Siri and device language must be set to a supported language

Enabling Apple Intelligence

  1. Open System Settings
  2. Click Apple Intelligence & Siri in the sidebar
  3. Toggle Apple Intelligence on
  4. You may be asked to join a waitlist or accept terms — follow the on-screen prompts
  5. Wait for the Apple Intelligence models to download (this happens in the background)

Note: You can check the download progress in System Settings → General → Storage. The Apple Intelligence models may take some time to download depending on your internet connection.

Setting the Language

Apple Intelligence requires Siri to be set to a supported language. To verify or change this:

  1. Open System Settings
  2. Click Apple Intelligence & Siri
  3. Under Language, make sure a supported language is selected (e.g., English, Spanish, French, German, Japanese, Korean, Chinese, Portuguese)

Verifying Apple Intelligence Is Ready

To confirm Apple Intelligence is fully set up:

  1. Go to System Settings → Apple Intelligence & Siri
  2. The toggle should be on and you should not see any pending downloads or waitlist messages
  3. Try using a built-in Apple Intelligence feature (such as summarizing text in Safari or Mail) to confirm it's working

Using with VoxyAI

Once Apple Intelligence is enabled on your Mac:

  1. Open VoxyAI settings
  2. Select Apple Intelligence as your AI provider
  3. No API key or additional configuration is needed

Tip: Apple Intelligence runs entirely on your device, so your voice data is never sent to external servers. This makes it an excellent choice for privacy-sensitive use cases.

Troubleshooting

Apple Intelligence option not available

  • Make sure your Mac has Apple Silicon (M1 or later) — Intel Macs are not supported
  • Verify you are running macOS Sequoia 15.1 or later (Apple menu → About This Mac)
  • Check that your region and language settings are set to a supported configuration

Apple Intelligence is stuck downloading

  • Ensure you have a stable internet connection
  • Check that you have at least 4GB of free storage
  • Try restarting your Mac and checking again

Apple Intelligence not showing in VoxyAI

  • Confirm Apple Intelligence is fully enabled in System Settings
  • Make sure VoxyAI is updated to the latest version
  • Restart VoxyAI after enabling Apple Intelligence

Chat History Migration

If you are moving to a new Mac, you may want to bring your VoxyAI chat history with you. This guide explains your options.

How chat history is stored: VoxyAI stores your chat history in an encrypted database on your Mac. The encryption key is stored in your macOS Keychain. Both the database and the key need to be transferred to your new Mac for your history to be accessible.

Option 1: macOS Migration Assistant (Recommended)

The easiest way to migrate your chat history is to use macOS Migration Assistant when setting up your new Mac. Migration Assistant transfers your applications, files, and Keychain items — which means both the chat history database and its encryption key come along automatically.

  1. On your new Mac, open Migration Assistant (found in Applications → Utilities)
  2. Choose to transfer from your old Mac, a Time Machine backup, or a startup disk
  3. Make sure to include your user account and applications in the transfer
  4. Once the migration completes, open VoxyAI — your chat history should be available

Tip: Migration Assistant is also available during the initial setup of a new Mac. If you are setting up your new Mac for the first time, you will be prompted to transfer data from another Mac.

Option 2: Manual Migration (Advanced)

If you did not use Migration Assistant, you can manually copy the chat history database to your new Mac. Note that the encryption key in the Keychain does not transfer with a manual file copy, so you will need to transfer that separately.

Step 1: Copy the database file

The chat history database is located at:

~/Library/Application Support/VoxyAI/chat-history.db

Copy this file from your old Mac to the same location on your new Mac. You can use AirDrop, a USB drive, or any file transfer method you prefer.

Step 2: Transfer the encryption key

The encryption key is stored in your macOS Keychain. To transfer it:

  1. On your old Mac, open Keychain Access (found in Applications → Utilities)
  2. Search for VoxyAI
  3. Select the VoxyAI keychain item, then go to File → Export Items
  4. Save the exported file and transfer it to your new Mac
  5. On your new Mac, open Keychain Access and go to File → Import Items to import the key

Important: Without the encryption key, VoxyAI cannot decrypt your chat history. If the key is not transferred, your previous conversations will not be accessible on the new Mac.

Troubleshooting

Chat history is empty after migration

  • Verify the database file exists at ~/Library/Application Support/VoxyAI/chat-history.db
  • Make sure VoxyAI is not running while you copy the file
  • Restart VoxyAI after placing the file

Chat history shows but conversations cannot be read

  • This usually means the encryption key was not transferred
  • Follow the Keychain export/import steps above to transfer the key
  • Restart VoxyAI after importing the key

Clipboard History

Clipboard History automatically captures everything you copy across all apps and lets you browse, search, and reuse past clipboard items instantly. Open it anytime with a keyboard shortcut to find exactly what you need.

Quick Access: Press Cmd+Shift+V from any app to open the Clipboard History overlay. Use arrow keys to navigate, Enter to paste, and Esc to close.

Browsing & Search

Clipboard History opens as a Spotlight-style overlay. Start typing to search instantly, or browse your recent items in a scrollable list.

  • Browse recent clipboard items sorted by time (newest first)
  • Instant text search across content and source app name
  • Filter by content type: Text, Code, or URLs
  • Pin important items to keep them at the top of the list

Automatic Content Detection

Each clipboard item is automatically categorized so you can filter and find items quickly.

Type Detection
Text Plain text that does not match other patterns
Code Content containing keywords (func, class, def), braces, semicolons, or indentation patterns
URL Valid URLs with schemes such as http, https, ftp, or ssh
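
The exact heuristics are internal to VoxyAI, but a minimal Swift sketch of this kind of classification, using only the signals listed in the table (URL schemes, code keywords, braces, semicolons, and indentation), might look like this:

import Foundation

enum ClipboardItemType { case text, code, url }

/// Illustrative classifier based on the detection signals described above.
func classify(_ content: String) -> ClipboardItemType {
    let trimmed = content.trimmingCharacters(in: .whitespacesAndNewlines)

    // URL: a single token whose scheme is one of the supported ones.
    if !trimmed.contains(" "),
       let url = URL(string: trimmed),
       let scheme = url.scheme?.lowercased(),
       ["http", "https", "ftp", "ssh"].contains(scheme) {
        return .url
    }

    // Code: keywords, braces, semicolons, or indented lines.
    let hasKeyword = ["func ", "class ", "def "].contains { trimmed.contains($0) }
    let hasBracesOrSemicolons = trimmed.contains("{") || trimmed.contains(";")
    let hasIndentation = content.split(separator: "\n").contains {
        $0.hasPrefix("    ") || $0.hasPrefix("\t")
    }
    if hasKeyword || hasBracesOrSemicolons || hasIndentation {
        return .code
    }

    return .text
}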

AI-Powered Search

Toggle AI Search in the Clipboard History overlay to find items using natural language. Instead of exact keyword matching, AI Search uses your configured AI provider to semantically rank results by relevance to your query.

  • Uses your selected AI provider for semantic ranking
  • Falls back to standard text search when AI is unavailable

Tip: AI Search works best with descriptive queries. Instead of searching for an exact word, try describing what you are looking for, such as "the URL I copied from the documentation site" or "that Python function from earlier."
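
How the ranking request is made depends on your configured provider. The sketch below only illustrates the fallback behavior described above, with a hypothetical rankWithAI function standing in for the provider call:

/// Hypothetical search entry point: try semantic ranking, fall back to plain text search.
func searchHistory(query: String, items: [String],
                   rankWithAI: (String, [String]) async throws -> [String]) async -> [String] {
    do {
        // Semantic ranking via the configured AI provider (assumed helper).
        return try await rankWithAI(query, items)
    } catch {
        // AI unavailable: fall back to a case-insensitive substring match.
        return items.filter { $0.localizedCaseInsensitiveContains(query) }
    }
}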

Source Attribution

Every clipboard item shows which application it was copied from and when, so you can quickly identify items by context. For example, a row might display "Safari — 5 min ago" beneath the content preview.

Keyboard Shortcuts

Shortcut Action
Cmd + Shift + V Open / close Clipboard History
↑ / ↓ Navigate items
Enter Paste selected item
Esc Close Clipboard History

Privacy & Security

All clipboard data is stored locally on your Mac and encrypted at rest.

  • AES-256-GCM encryption for all stored clipboard items
  • Encryption key stored securely in macOS Keychain
  • Exclude specific apps from clipboard monitoring by bundle ID

Note: Clipboard data never leaves your device unless you use AI Search, in which case only a truncated preview of candidate items is sent to your configured AI provider for ranking.

Settings

Configure Clipboard History from VoxyAI Settings.

Setting Default Description
Enabled On Enable or disable clipboard monitoring
Max Items 1,000 Maximum number of items to retain (100, 500, 1,000, 2,000, or 5,000)
Excluded Apps None Comma-separated list of app bundle IDs to exclude from monitoring (e.g., com.example.app)

Custom Commands

Custom commands let you define trigger phrases that run AI prompt templates on your text. Activate them by voice, by typing the trigger phrase, or with one-click quick action buttons. Each command has a prompt template with placeholders that get filled in automatically, and you choose what happens with the result.

Built-in Commands

VoxyAI includes 6 ready-to-use commands. Select text in any app and say one of the trigger phrases to run the command.

Command Trigger Phrases What It Does Output
Summarize "summarize", "summarize this", "give me a summary" Condenses text to key points Shows in chat
Bullet List "bullet list", "make bullets", "convert to bullets" Converts text into a clean bullet-point list Pastes into app
Make Shorter "make shorter", "shorten this", "condense this" Shortens text while preserving the meaning Pastes into app
Professionalize "professionalize", "make professional", "business tone" Rewrites text in a professional business tone Pastes into app
Proofread "proofread", "check grammar", "fix grammar" Fixes grammar, spelling, and punctuation errors Pastes into app
Explain Simply "explain simply", "simplify this", "in simple terms" Explains text in simple, easy-to-understand terms Shows in chat

Creating Your Own Commands

Open the Manage Commands window from the VoxyAI menu bar to create, edit, and organize your commands. Click the + button to add a new command, then fill in the following fields:

  • Name — A display name for your command (e.g. "Translate to Spanish").
  • Trigger Phrases — Comma-separated phrases that activate the command. For example: "translate to spanish, spanish translation". VoxyAI matches these against your voice input or typed text.
  • Prompt Template — The instruction sent to the AI. Use placeholders to inject dynamic content (see below).
  • Action Type — What happens with the AI result: paste into app, show in chat, or copy to clipboard.
  • Requires Selected Text — When enabled, the command captures highlighted text from the active app before running. Disable this for commands that only use dictated text.
  • Target Apps — Optional. Limit the command to specific apps (e.g. "Mail, Messages"). Leave empty to use the command in any app.
  • Enabled — Toggle a command on or off without deleting it.

Placeholders

Use these placeholders in your prompt template. They are replaced with real values when the command runs.

  • {{selected_text}} — The text currently highlighted in the active application.
  • {{dictated_text}} — The text you spoke or dictated via voice input.
  • {{app_name}} — The name of the frontmost application (e.g. "Safari", "Mail").

Example

A command that translates selected text to a spoken language:

Prompt Template:

Translate the following text to {{dictated_text}}: {{selected_text}}

With the trigger phrase "translate to", you would select text and say "translate to French". The placeholder {{dictated_text}} becomes "French" and {{selected_text}} becomes whatever you highlighted.
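
Under the hood, placeholder filling is plain string substitution. A minimal Swift sketch, not VoxyAI's actual implementation (the example values are hypothetical):

import Foundation

/// Fill a prompt template with placeholder values as described above.
func fillTemplate(_ template: String, values: [String: String]) -> String {
    var result = template
    for (name, value) in values {
        result = result.replacingOccurrences(of: "{{\(name)}}", with: value)
    }
    return result
}

// The example above: say "translate to French" with some text highlighted.
let prompt = fillTemplate(
    "Translate the following text to {{dictated_text}}: {{selected_text}}",
    values: ["dictated_text": "French", "selected_text": "Thanks for your help yesterday."]
)
// prompt == "Translate the following text to French: Thanks for your help yesterday."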

Output Actions

Each command has an action type that controls what happens with the AI result:

  • Paste into app — The result is pasted directly into the active application at the cursor position. Best for commands that transform selected text like Proofread or Professionalize.
  • Show in chat — The result appears in the VoxyAI chat window. Best for informational commands like Explain Simply where you want to read the response without changing your document.
  • Copy to clipboard — The result is copied to the clipboard silently. You can paste it wherever you like.
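
Conceptually, the action type is a simple branch over where the result goes. A rough Swift sketch of that dispatch (the OutputAction enum and the helper closures are illustrative, not VoxyAI's API):

import AppKit

enum OutputAction { case pasteIntoApp, showInChat, copyToClipboard }

/// Illustrative dispatch of an AI result based on the command's action type.
/// pasteIntoActiveApp and showInChatWindow are assumed helpers passed in by the caller.
func deliver(_ result: String,
             action: OutputAction,
             pasteIntoActiveApp: (String) -> Void,
             showInChatWindow: (String) -> Void) {
    switch action {
    case .pasteIntoApp:
        pasteIntoActiveApp(result)               // insert at the cursor in the frontmost app
    case .showInChat:
        showInChatWindow(result)                 // display in the VoxyAI chat window
    case .copyToClipboard:
        let pasteboard = NSPasteboard.general    // copy silently, nothing else happens
        pasteboard.clearContents()
        pasteboard.setString(result, forType: .string)
    }
}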

Quick Actions

Your custom commands appear as buttons in the VoxyAI menu bar popover. Select text in any app, click the VoxyAI icon in the menu bar, and click a command button to run it instantly — no voice input needed.

Enterprise Deployment

Deploy VoxyAI across your organization with centralized configuration, volume licensing, and security controls via MDM or configuration files.

What You Can Do

  • Control LLM providers — Restrict which cloud AI providers users can access
  • Deploy API keys — Securely deliver encrypted API keys so users never see them
  • Enforce data loss prevention — Block secrets, keywords, and custom patterns from being sent to cloud LLMs
  • Deploy licenses via MDM — Users never need to enter a license key manually
  • Enable audit logging — Log all outbound requests to cloud LLM services
  • Disable features — Turn off specific capabilities organization-wide

Deployment Methods

MDM Managed Preferences (Recommended)

Deploy a configuration profile to:

/Library/Managed Preferences/com.voxyai.VoxyAI.plist

Compatible with Jamf Pro, Mosyle, Iru (formerly Kandji), Fleet, and any MDM that supports macOS managed preferences.

Jamf Pro Example

  1. Navigate to Computers > Configuration Profiles > New
  2. Select Application & Custom Settings payload
  3. Upload Property List or paste the XML
  4. Set preference domain to com.voxyai.VoxyAI
  5. Scope to target computers/groups
  6. Save and deploy

Local Config File

For config management tools (Chef, Puppet, Ansible, SaltStack, Munki) or manual testing, place the file at:

~/Library/Application Support/VoxyAI/enterprise-config.plist

MDM configuration always takes precedence over local config files.

Minimal Configuration

The simplest enterprise config only requires an organization name and license key. All other settings use their defaults (no restrictions, no DLP enforcement, no logging).

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>OrganizationName</key>
    <string>Acme Corporation</string>
    <key>LicenseKey</key>
    <string>YOUR-LICENSE-KEY</string>
</dict>
</plist>

Config Generation Tool

Use our CLI tool to generate signed and encrypted enterprise config plists. The tool encrypts your data locally and requests a signature from VoxyAI's signing service. Your API keys never leave your machine.

swift generate-enterprise-config.swift \
  --org "Acme Corporation" \
  --license "ABCD-1234-EFGH-5678" \
  --api-key anthropic:sk-ant-xxx \
  --api-key openai:sk-xxx \
  --allowed-providers anthropic,openai \
  --enforce-secret-scanning \
  --output enterprise-config.plist

See Configuration Reference for the full tool documentation and download link.

Backward Compatibility

When no enterprise configuration is present, VoxyAI operates normally as a single-user application with all features available and no restrictions.

Related sections: See Configuration Reference for all available settings, Security & DLP for secrets scanning and provider control, and Licensing for volume pricing and MDM license deployment.

Enterprise Configuration Reference

Complete reference for all enterprise configuration keys, encrypted payload fields, and example configurations.

Plain-Text Plist Keys

These keys are set directly in the plist file and are not encrypted.

Key Type Required Default Description
OrganizationName String Yes Company name displayed in the app
LicenseKey String Yes Volume license key from purchase
AllowedProviders Array No All allowed Cloud LLM providers users can access
EnforcedAIProvider String No None Lock all users to a single cloud provider
AllowUserOverride Boolean No true Whether users can change enterprise-managed settings
EnforceSecretScanning Boolean No false Block requests containing detected secrets (no user override)
EnableCloudLogging Boolean No false Log all outbound cloud LLM requests
DisabledFeatures Array No None Features to disable (codeGeneration, voiceInput, customPrompts, userMemory)
EncryptedPayload Data No None AES-256-GCM encrypted payload containing API keys and sensitive settings
PayloadSignature Data No None Ed25519 signature of the encrypted payload
ConfigVersion Integer No 1 Config format version for forward compatibility

AllowedProviders Values

Cloud providers that can be restricted:

anthropic — Anthropic (Claude)
openai — OpenAI (ChatGPT)
gemini — Google (Gemini)
groq — Groq
perplexity — Perplexity
mistral — Mistral

Local providers (Apple Intelligence and Ollama) are always available regardless of restrictions, since they don't send data off-device.

Encrypted Payload Contents

Sensitive data is stored in an AES-256-GCM encrypted payload to prevent casual inspection.

Key Type Description
apiKeys Object API keys by provider identifier (anthropic, openai, gemini, groq, perplexity, mistral)
allowedModels Array Restrict to specific model names (optional)
blockedKeywords Array Case-insensitive keywords to block from being sent to cloud LLMs
customScanPatterns Array Custom regex or literal patterns to scan for (each: name, pattern, isRegex)
configExpiration String ISO 8601 date when this config expires
customEndpoints Object Custom API endpoint URLs for enterprise proxy setups

Example Configurations

Restrict to Anthropic Only

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>OrganizationName</key>
    <string>Acme Corporation</string>
    <key>LicenseKey</key>
    <string>YOUR-LICENSE-KEY</string>
    <key>AllowedProviders</key>
    <array>
        <string>anthropic</string>
    </array>
    <key>AllowUserOverride</key>
    <false/>
</dict>
</plist>

Multiple Providers with DLP Enforcement

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>OrganizationName</key>
    <string>Acme Corporation</string>
    <key>LicenseKey</key>
    <string>YOUR-LICENSE-KEY</string>
    <key>AllowedProviders</key>
    <array>
        <string>anthropic</string>
        <string>openai</string>
        <string>gemini</string>
    </array>
    <key>EnforceSecretScanning</key>
    <true/>
    <key>EnableCloudLogging</key>
    <true/>
    <key>AllowUserOverride</key>
    <false/>
    <key>EncryptedPayload</key>
    <data><!-- Generated by config tool --></data>
    <key>PayloadSignature</key>
    <data><!-- Generated by config tool --></data>
    <key>ConfigVersion</key>
    <integer>1</integer>
</dict>
</plist>

Full Lockdown

Enforced single provider, secrets scanning blocks requests, audit logging enabled, custom prompts disabled, no user override.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>OrganizationName</key>
    <string>Acme Corporation</string>
    <key>LicenseKey</key>
    <string>YOUR-LICENSE-KEY</string>
    <key>EnforcedAIProvider</key>
    <string>anthropic</string>
    <key>AllowUserOverride</key>
    <false/>
    <key>EnforceSecretScanning</key>
    <true/>
    <key>EnableCloudLogging</key>
    <true/>
    <key>DisabledFeatures</key>
    <array>
        <string>customPrompts</string>
    </array>
    <key>EncryptedPayload</key>
    <data><!-- Generated by config tool --></data>
    <key>PayloadSignature</key>
    <data><!-- Generated by config tool --></data>
    <key>ConfigVersion</key>
    <integer>1</integer>
</dict>
</plist>

Security Architecture

Zero-Knowledge Config Signing

Enterprise configs are signed using hash-based remote signing. Your sensitive data (API keys, DLP patterns, blocked keywords) never leaves your machine.

Your Machine (CLI Tool)                    VoxyAI Signing Service
───────────────────────                    ─────────────────────
1. Enter config values (API keys, etc.)
2. CLI encrypts payload locally (AES-256-GCM)
3. CLI computes SHA-256 hash of encrypted payload
4. CLI sends ONLY:                  ───>   5. Validates license
   - SHA-256 hash (irreversible)           6. Signs hash with Ed25519
   - License key
   - Organization name
                                    <───   7. Returns signature
8. CLI assembles final plist with signature

Data Sent to VoxyAI? Notes
Organization name Yes Needed for license validation
License key Yes Needed for license validation
SHA-256 hash Yes Irreversible, cannot recover original data
API keys No Encrypted locally, never transmitted
DLP patterns No Encrypted locally, never transmitted
Blocked keywords No Encrypted locally, never transmitted

Payload Encryption

API keys and sensitive patterns are encrypted using AES-256-GCM. The encryption key is derived from SHA256(orgName + "|" + licenseKey + "|VoxyAI-Enterprise-Salt-2024"). The encrypted data format is: nonce (12 bytes) + ciphertext + authentication tag (16 bytes).
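
A CryptoKit sketch of that derivation and layout follows, assuming the raw SHA-256 digest is used directly as the key and that the stored blob matches AES-GCM's standard combined representation:

import CryptoKit
import Foundation

/// Derive the payload key as described above: SHA-256 of "org|license|salt".
func payloadKey(orgName: String, licenseKey: String) -> SymmetricKey {
    let material = "\(orgName)|\(licenseKey)|VoxyAI-Enterprise-Salt-2024"
    let digest = SHA256.hash(data: Data(material.utf8))
    return SymmetricKey(data: Data(digest))
}

/// Encrypt a JSON payload. AES.GCM's combined representation is exactly
/// nonce (12 bytes) + ciphertext + authentication tag (16 bytes).
func encryptPayload(_ json: Data, orgName: String, licenseKey: String) throws -> Data {
    let key = payloadKey(orgName: orgName, licenseKey: licenseKey)
    let sealed = try AES.GCM.seal(json, using: key)
    return sealed.combined!   // non-nil with the default 12-byte nonce
}

/// Decrypt on the client side with the same derived key.
func decryptPayload(_ blob: Data, orgName: String, licenseKey: String) throws -> Data {
    let key = payloadKey(orgName: orgName, licenseKey: licenseKey)
    let box = try AES.GCM.SealedBox(combined: blob)
    return try AES.GCM.open(box, using: key)
}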

Signature Verification

VoxyAI verifies each config signature on launch. The signing service signs the SHA-256 hash of the encrypted payload with an Ed25519 private key held only on VoxyAI's server. The app verifies the signature using the embedded public key. If the signature is invalid or the payload has been tampered with, the config is rejected.
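
In CryptoKit terms, the check looks roughly like this (the public key bytes here are a placeholder for the key embedded in the app):

import CryptoKit
import Foundation

/// Verify that the Ed25519 signature covers the SHA-256 hash of the encrypted payload.
/// `publicKeyBytes` stands in for the embedded public key; it is not the real key.
func isConfigSignatureValid(encryptedPayload: Data,
                            signature: Data,
                            publicKeyBytes: Data) -> Bool {
    guard let publicKey = try? Curve25519.Signing.PublicKey(rawRepresentation: publicKeyBytes) else {
        return false
    }
    // The signing service signs the SHA-256 hash of the payload, so verify against that hash.
    let payloadHash = Data(SHA256.hash(data: encryptedPayload))
    return publicKey.isValidSignature(signature, for: payloadHash)
}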

Security Guarantees

  • Zero-knowledge signing — VoxyAI's signing service never sees your API keys or any sensitive data. It only receives an irreversible SHA-256 hash.
  • Tamper protection — Any modification to the config after signing invalidates the signature.
  • License enforcement — Only holders of a valid enterprise/volume license can obtain signatures.
  • No key distribution — The signing private key never leaves VoxyAI's infrastructure.

Security Recommendations

  • Set spending limits on API keys via your LLM provider dashboards
  • Restrict API keys to office IP ranges where supported
  • Monitor API usage for anomalies
  • Use separate API keys per department for audit trails
  • Set AllowUserOverride to false to prevent users from changing managed settings

Config Generation Tool

Use the CLI tool to generate signed and encrypted enterprise config plists. The tool runs on any Mac, encrypts your data locally, and requests a signature from VoxyAI's signing service. Your API keys and sensitive data never leave your machine.

Download: generate-enterprise-config.swift

Requires macOS with Swift installed (included with Xcode or Xcode Command Line Tools).

Usage

swift generate-enterprise-config.swift \
  --org "Acme Corporation" \
  --license "ABCD-1234-EFGH-5678" \
  --api-key anthropic:sk-ant-your-key-here \
  --api-key openai:sk-your-key-here \
  --allowed-providers anthropic,openai \
  --enforce-secret-scanning \
  --enable-cloud-logging \
  --blocked-keyword "acmecorp" \
  --blocked-keyword "project-phoenix" \
  --custom-pattern "Internal Domain:.*\\.acmecorp\\.com:regex" \
  --no-user-override \
  --output enterprise-config.plist

Output:

Encrypting payload... done
Requesting signature from VoxyAI signing service... done
Config signed successfully.
Enterprise config written to: enterprise-config.plist

Arguments

Argument Required Description
--org Yes Organization name
--license Yes Enterprise/volume license key
--api-key No API key as provider:key (repeatable)
--allowed-providers No Comma-separated list of allowed providers
--enforced-provider No Lock to a single provider
--no-user-override No Prevent users from changing managed settings
--enforce-secret-scanning No Block requests containing detected secrets
--enable-cloud-logging No Enable audit logging of cloud LLM requests
--blocked-keyword No Keyword to block (repeatable)
--custom-pattern No Custom pattern as Name:pattern:regex|literal
--disabled-features No Comma-separated features to disable
--output No Output path (default: enterprise-config.plist)

Enterprise Security & DLP

Control which LLM providers users can access, enforce data loss prevention policies, and enable audit logging for compliance.

LLM Provider Control

AllowedProviders

When AllowedProviders is set, only the listed cloud providers appear in the app. Unlisted providers are completely hidden.

When absent, all providers are available.

<key>AllowedProviders</key>
<array>
    <string>anthropic</string>
    <string>openai</string>
</array>

EnforcedAIProvider

Locks all users to a single cloud provider. Overrides AllowedProviders if both are set. The provider picker shows only the enforced provider plus local providers.

Local Providers

Apple Intelligence and Ollama are always available regardless of provider restrictions, since they process data entirely on-device with no data leaving the machine.

Enterprise API Key Deployment

API keys deployed via the encrypted payload are:

  • Stored securely in the macOS Keychain on app launch
  • Shown as "Managed by your organization" in settings when AllowUserOverride is false
  • Not visible or editable by end users

Data Loss Prevention (Secrets Scanning)

Built-in Patterns

VoxyAI automatically detects 20+ types of secrets in selected code before sending to cloud LLMs:

• Hardcoded secrets and passwords
• Hardcoded URLs in code
• JWT tokens
• AWS access keys
• Google API keys
• GitHub tokens
• Slack tokens
• Stripe keys
• SendGrid keys
• Database connection strings
• Private keys (PEM format)
• OpenAI, Anthropic, Groq keys
• Long random strings (potential secrets)

Enforce Mode vs. Warn Mode

EnforceSecretScanning Behavior
false (default) User sees a warning with detection details and can choose "Send Anyway" to proceed
true Request is blocked. The alert says "Your organization's security policy blocks this request" with only an "OK" button. No way to override.

Custom Keyword Blocklist

The blockedKeywords field in the encrypted payload blocks any case-insensitive substring from being sent to cloud LLMs. Keywords are scanned in both selected code and the user's prompt text.

// In the encrypted payload JSON:
{
    "blockedKeywords": [
        "acmecorp",
        "project-phoenix",
        "confidential-2025"
    ]
}

Custom Regex Patterns

The customScanPatterns field allows regex or literal patterns. Each pattern has a display name, the pattern string, and whether it's a regex or literal match.

// In the encrypted payload JSON:
{
    "customScanPatterns": [
        {
            "name": "Internal Domain",
            "pattern": ".*\\.acmecorp\\.(com|internal)",
            "isRegex": true
        },
        {
            "name": "Project Codename",
            "pattern": "Project Phoenix",
            "isRegex": false
        }
    ]
}

Note: Blocked keywords and custom scan patterns are stored in the encrypted payload so they are not visible in the plain-text plist file.
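
A rough Swift sketch of how these two checks could be applied to outgoing text (the struct and function names are illustrative, not VoxyAI's internals):

import Foundation

struct ScanPattern {
    let name: String
    let pattern: String
    let isRegex: Bool
}

/// Returns the names of DLP rules matched by the text, combining the
/// case-insensitive keyword blocklist and custom regex/literal patterns.
func dlpMatches(in text: String,
                blockedKeywords: [String],
                customPatterns: [ScanPattern]) -> [String] {
    var matches: [String] = []

    for keyword in blockedKeywords where text.localizedCaseInsensitiveContains(keyword) {
        matches.append("Blocked keyword: \(keyword)")
    }

    for rule in customPatterns {
        if rule.isRegex {
            if text.range(of: rule.pattern, options: .regularExpression) != nil {
                matches.append(rule.name)
            }
        } else if text.localizedCaseInsensitiveContains(rule.pattern) {
            matches.append(rule.name)
        }
    }
    return matches
}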

Audit Logging

Cloud audit logging is disabled by default. When enabled, VoxyAI logs every outbound request to a cloud LLM service.

Enabling

<key>EnableCloudLogging</key>
<true/>

Log Location

~/Library/Logs/VoxyAI/voxyai-cloud-audit-YYYY-MM-DD.log

One log file is created per day.

What Gets Logged

Field Description
Timestamp ISO 8601 timestamp
User Full name and username
Computer Name Local hostname
Serial Number Mac hardware serial number
IP Addresses All non-loopback network interfaces
Service Cloud LLM service name (e.g., OpenAI, Anthropic)
App Context Active application name and type
Prompt The text/question sent to the LLM
Selected Code Code selected by the user (if any)

What Is NOT Logged

Requests to local providers (Apple Intelligence, Ollama) are never logged. Only outbound requests to cloud LLM services are captured.

Log Format

Logs use a structured plaintext format suitable for SIEM integration:

================================================================
TIMESTAMP: 2025-01-15T10:30:45.123-0500
USER: John Smith (jsmith)
COMPUTER_NAME: Johns-MacBook-Pro
SERIAL_NUMBER: C02XX123XXXX
IP_ADDRESSES: en0: 192.168.1.100, en1: 10.0.0.50
SERVICE: Anthropic
APP_CONTEXT: Xcode
APP_TYPE: IDE
----------------------------------------------------------------
PROMPT:
Fix the authentication bug in the login function
----------------------------------------------------------------
SELECTED_CODE:
func login(email: String, password: String) { ... }
================================================================

Enterprise Licensing

Volume licensing, MDM license deployment, pricing tiers, and troubleshooting.

Volume Licensing

Volume licenses allow a single license key to activate multiple machines:

  • A company purchases N seats and receives a single license key
  • The key allows N concurrent machine activations
  • Each machine is identified by its hardware UUID
  • Machines can be deactivated to free seats for reuse
  • The app shows "X of Y seats activated" for volume licenses

Pricing Tiers

Tier Seats Price per Seat Savings
Standard 1 – 9 $29.99 / year
Team 10 – 49 $24.99 / year Save 17%
Business 50 – 99 $19.99 / year Save 33%
Enterprise 100+ $14.99 / year Save 50%

Purchase volume licenses.

Automatic License Activation via MDM

When a LicenseKey is present in the enterprise config, VoxyAI automatically activates on launch:

  1. VoxyAI reads the license key from the enterprise config on launch
  2. Validates the key against the licensing server
  3. Activates the machine if not already activated (consumes one seat)
  4. Stores the activation securely in the macOS Keychain

Users see "Managed by [Organization Name]" in the license view. The license entry field, deactivate button, and purchase links are hidden for enterprise-managed licenses.

Machine Fingerprinting

Each machine is identified by its hardware UUID, obtained from the IOKit IOPlatformUUID property. This is unique per Mac and persists across OS reinstalls.
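
Reading that identifier is a standard IOKit lookup. A minimal Swift sketch:

import IOKit
import Foundation

/// Read the hardware UUID (IOPlatformUUID) from IOPlatformExpertDevice,
/// the identifier described above for machine fingerprinting.
func hardwareUUID() -> String? {
    let service = IOServiceGetMatchingService(kIOMainPortDefault,
                                              IOServiceMatching("IOPlatformExpertDevice"))
    guard service != 0 else { return nil }
    defer { IOObjectRelease(service) }

    let property = IORegistryEntryCreateCFProperty(service,
                                                   kIOPlatformUUIDKey as CFString,
                                                   kCFAllocatorDefault, 0)
    return property?.takeRetainedValue() as? String
}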

Offline Grace Period

Activated licenses work offline for up to 7 days after the last successful validation. After 7 days without connectivity, the app requires an internet connection to re-validate.

Troubleshooting

Symptom Cause Solution
Config not loading Wrong file path MDM: /Library/Managed Preferences/com.voxyai.VoxyAI.plist
Local: ~/Library/Application Support/VoxyAI/enterprise-config.plist
Config not loading Invalid plist Validate with plutil -lint config.plist
"Invalid signature" Tampered plist or version mismatch Re-generate the config with the CLI tool. Do not modify the plist after generation.
"Decryption failed" OrgName or LicenseKey mismatch Ensure the values match exactly between the config tool and the plist (watch for whitespace)
Provider not showing Not in AllowedProviders Add the provider identifier to the AllowedProviders array
API key not applied Missing from encrypted payload Add the key to the apiKeys object in the payload and regenerate
License activation fails No available seats Deactivate unused machines or purchase additional seats
Logging not working Not enabled Set EnableCloudLogging to true in the config
"MACHINE_LIMIT_EXCEEDED" All seats are taken Deactivate machines that are no longer in use, or purchase more seats

Checking Enterprise Mode Status

In the VoxyAI license window, enterprise mode displays:

  • Organization name
  • "Managed by [Organization Name]" indicator
  • Seat count (X of Y seats activated) for volume licenses
  • Configuration source (MDM or Local)

Need help? Contact VoxyAI Support for assistance with enterprise deployment, licensing, or configuration.

IDE Integration

No plugins or extensions needed! VoxyAI uses the system clipboard and paste functionality, so it works with any application on your Mac without any per-app setup.

VoxyAI detects when you are in a code editor and adjusts its behavior accordingly:

  • Generates properly formatted code blocks
  • Respects your current language context
  • Can analyze selected code for fixes or explanations
  • Generates documentation and comments

Works With Any Application

VoxyAI works with all applications including:

  • Code editors: VS Code, Xcode, IntelliJ IDEA, PyCharm, Sublime Text, Vim, Emacs
  • Email clients: Mail, Outlook, Gmail in browser
  • Communication: Slack, Discord, Messages, Teams
  • Documents: Pages, Word, Google Docs, Notion
  • Terminal: Terminal.app, iTerm2, Warp
  • Any app that accepts text input

Installation

System Requirements

  • macOS 26 or later
  • Microphone access
  • Accessibility permission (for auto-paste)

Required Permissions

VoxyAI requires the following permissions:

  • Microphone - To capture your voice for dictation
  • Speech Recognition - To convert speech to text using macOS built-in recognition
  • Accessibility - To automatically paste formatted text into the active application

Keyboard Shortcuts

Shortcut Action
fn + shift Start voice recording (hold to record voice)
fn + ctrl Type command (no voice)

Ollama Setup Guide

What is Ollama? Ollama is a free, open-source tool that lets you run large language models locally on your Mac. This means your data stays on your device, you have no API costs, and you can use AI even without an internet connection.

Installing Ollama

  1. Download from ollama.com/download
  2. Open the downloaded .zip file
  3. Drag Ollama to your Applications folder
  4. Open Ollama from your Applications folder
  5. When prompted, click "Open" to allow the app to run (it's from an identified developer)

Note: After installation, Ollama runs as a menu bar application. You'll see a small llama icon in your menu bar when it's running.

Verify Installation

Open Terminal and run:

ollama --version

You should see the version number displayed, confirming Ollama is installed correctly.

Browse Available Models

You can browse all available models on the Ollama library:

ollama.com/library

The library includes models from various providers including Meta (Llama), Google (Gemma), Mistral, DeepSeek, and many more.

Installing Models

Open Terminal and use the pull command:

ollama pull <model-name>

Example: Installing DeepSeek

DeepSeek offers excellent coding and reasoning capabilities. To install it:

ollama pull deepseek-r1

DeepSeek R1 comes in several sizes. You can specify a particular size:

# 7 billion parameters (requires ~5GB RAM)
ollama pull deepseek-r1:7b

# 14 billion parameters (requires ~9GB RAM)
ollama pull deepseek-r1:14b

# 32 billion parameters (requires ~20GB RAM)
ollama pull deepseek-r1:32b

Memory Requirements: Larger models require more RAM. As a general rule, you need about 1GB of RAM for every 1 billion parameters. Choose a model size that fits comfortably within your Mac's available memory.

Recommended Models

DeepSeek R1

Excellent for complex reasoning tasks and code generation. Shows step-by-step thinking process.

ollama pull deepseek-r1

Llama 3.3

Meta's latest model. Great balance of speed and capability for everyday tasks.

ollama pull llama3.3

Qwen 2.5 Coder

Specialized for code generation and programming tasks. Very capable for its size.

ollama pull qwen2.5-coder

Mistral

Fast and efficient model suitable for most general-purpose tasks.

ollama pull mistral

Gemma 2

Google's open model. Available in smaller sizes, good for Macs with limited RAM.

ollama pull gemma2

Memory Requirements

  • 8GB RAM: 7b models or smaller
  • 16GB RAM: 7b-14b models work well
  • 32GB+ RAM: Larger 32b models are usable

Managing Models

List Installed Models

To see all models you have installed:

ollama list

Remove a Model

To free up disk space by removing a model you no longer need:

ollama rm <model-name>

Update a Model

To update to the latest version of a model:

ollama pull <model-name>

Running pull again will download any updates if available.

Using with VoxyAI

  1. Make sure Ollama is running (look for the llama icon in your menu bar)
  2. Open VoxyAI settings
  3. Select "Ollama" as your AI provider
  4. Choose your installed model from the dropdown list
  5. VoxyAI will automatically connect to your local Ollama instance

Tip: Ollama runs on port 11434 by default. VoxyAI connects to http://localhost:11434 automatically. No API key is needed for local models.
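
For reference, Ollama serves a local HTTP API on that port. The sketch below sends a one-off prompt to Ollama's standard /api/generate endpoint; it illustrates the local connection, not VoxyAI's internal request format, and the default model name is just an example:

import Foundation

/// Send a single prompt to a local Ollama model via its HTTP API.
/// The model name is an example; use any model shown by `ollama list`.
func askOllama(prompt: String, model: String = "llama3.3") async throws -> String {
    var request = URLRequest(url: URL(string: "http://localhost:11434/api/generate")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": model,
        "prompt": prompt,
        "stream": false          // return one complete response instead of streamed chunks
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    return json?["response"] as? String ?? ""
}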

Troubleshooting

Ollama not responding

  • Check if Ollama is running in your menu bar
  • Try restarting Ollama from the menu bar icon
  • Run "ollama serve" in Terminal to start it manually

Model runs slowly

  • Try a smaller model variant (e.g., 7b instead of 14b)
  • Close other memory-intensive applications
  • Macs with Apple Silicon (M1/M2/M3/M4) run models much faster than Intel Macs

Not enough memory

  • Use a smaller model size appropriate for your RAM
  • For 8GB RAM: stick to 7b models or smaller
  • For 16GB RAM: 7b-14b models work well
  • For 32GB+ RAM: larger models like 32b are usable

Personal Knowledge

Personal Knowledge lets you index your own documents so that relevant snippets are automatically included in your AI prompts. Powered by RAG (Retrieval-Augmented Generation), this feature gives your AI assistant context from your files — project notes, documentation, code, and more — so it can provide more accurate and relevant responses.

What is RAG? Retrieval-Augmented Generation (RAG) is a technique that enhances AI responses by retrieving relevant information from your own documents and injecting it into the prompt. Instead of relying solely on the model's training data, the AI can reference your specific files and notes to give more accurate, contextual answers.

Example Use Cases

Enforce a Code Style Guide

Index a style guide that defines how functions should be commented and variables declared. When you dictate code, the AI automatically follows your conventions without you having to repeat the rules every time.

style-guide.md

# Swift Style Guide

## Variable Declarations
- Use `let` over `var` whenever possible
- Use camelCase for variable and function names
- Use PascalCase for type names and protocols
- Prefer explicit types for public API, inferred types for local variables

## Function Comments
Every public function must have a doc comment with this format:

    /// Brief one-line description of what the function does.
    /// - Parameters:
    ///   - paramName: Description of the parameter.
    /// - Returns: Description of the return value.
    /// - Throws: Description of errors thrown (if applicable).

## Error Handling
- Always use custom error types conforming to `LocalizedError`
- Never use force unwraps (`!`) outside of tests

README Template

Define the structure and rules for your README files. When you ask VoxyAI to generate a README for a project, it follows your template automatically.

readme-rules.md

# README Format Rules

## Required Sections (in this order)
1. Project name as H1, followed by a one-line description
2. Screenshot or demo GIF (if applicable)
3. "Getting Started" — prerequisites, install steps, and how to run
4. "Usage" — basic examples showing the most common use case
5. "Configuration" — environment variables or config files
6. "Contributing" — link to CONTRIBUTING.md
7. "License" — license type with link to LICENSE file

## Style Rules
- Keep the description under 20 words
- Use code blocks for all terminal commands
- Add copy-pasteable commands (no placeholder paths)
- Write install steps as a numbered list, not paragraphs
- Include the minimum supported language/runtime version

## Things to Avoid
- Badges (no build status, coverage, or download badges)
- "Table of Contents" section (keep READMEs short enough to not need one)
- Changelog in the README (use CHANGELOG.md instead)

API Response Format

Index a document that defines how your API responses should be structured. When you ask the AI to write endpoint handlers, it follows your conventions for status codes, error shapes, and pagination.

api-conventions.md

# API Response Conventions

## Success Responses
Always wrap data in a top-level "data" key:

    { "data": { ... } }

For lists, include pagination:

    { "data": [...], "meta": { "page": 1, "perPage": 20, "total": 58 } }

## Error Responses
Use a consistent error shape with a machine-readable code:

    { "error": { "code": "INVALID_EMAIL", "message": "..." } }

## Status Codes
- 200 — Success (GET, PATCH)
- 201 — Created (POST)
- 204 — No Content (DELETE)
- 400 — Validation error
- 401 — Not authenticated
- 403 — Not authorized
- 404 — Resource not found
- 429 — Rate limited

Commit Message Guidelines

Index your team's commit message format so the AI generates properly structured messages when you ask it to write one.

commit-guidelines.md

# Commit Message Format

## Structure
    <type>(<scope>): <subject>

    <body>

## Types
- feat: A new feature
- fix: A bug fix
- docs: Documentation only
- refactor: Code change that neither fixes a bug nor adds a feature
- test: Adding or updating tests
- chore: Build process, CI, or tooling changes

## Rules
- Subject line: imperative mood, lowercase, no period, max 50 chars
- Body: wrap at 72 chars, explain "what" and "why" (not "how")
- Reference issue numbers at the end: "Closes #42"

Email Writing Style

Index a document describing your preferred email tone and structure. When you dictate an email, the AI formats it to match your style automatically.

email-style.md

# My Email Style

## Tone
- Friendly but professional — never stiff or overly formal
- Use the recipient's first name in the greeting
- Keep paragraphs short (2-3 sentences max)

## Structure
- Open with a brief, warm greeting (not "I hope this email finds you well")
- Get to the point in the first sentence after the greeting
- Use bullet points for action items or multiple topics
- End with a clear next step or call to action
- Sign off with "Best," followed by my name

## Things to Avoid
- Corporate jargon ("circle back", "synergize", "leverage")
- Passive voice when making requests
- Exclamation marks (one per email maximum)
- Apologizing unnecessarily ("Sorry to bother you")

Adding Documents

To add documents to your personal knowledge base, open RAG Settings from the VoxyAI menu. You can add individual files or entire folders. When you add a folder, all supported files inside it are indexed recursively.

Tip: Try indexing your project folder to give the AI full context about your codebase. This works great for asking questions about your code, generating documentation, or getting help with debugging.

Supported File Types

Category Extensions
Text .txt, .md, .rtf, .pdf
Code .swift, .py, .js, .ts, .java, .c, .cpp, .h, .rb, .go, .rs
Data .json, .yaml, .yml, .xml, .csv
Web .html, .css, .scss

Privacy Mode

Privacy Mode controls whether your retrieved knowledge snippets are sent to cloud AI providers.

  • ON (Default) — Knowledge snippets are only used with local models (Ollama). Cloud providers never receive your document content.
  • OFF — Knowledge snippets are included in prompts sent to any provider, including cloud services like OpenAI and Anthropic.

Warning: When Privacy Mode is off, snippets from your indexed documents will be sent to cloud AI providers as part of your prompts. Only disable Privacy Mode if you are comfortable sharing your document content with third-party services.

Managing Sources

The RAG Settings panel lets you manage all your indexed sources:

  • View Status — See each source's indexing status and the number of chunks generated from it.
  • Remove Sources — Delete individual sources and their chunks from the index.
  • Reindex All — Re-process all sources to pick up file changes. Useful after editing your documents.
  • Persistence — Your knowledge index is saved to disk and persists between app restarts. You do not need to re-index after quitting VoxyAI.

User Memory

User Memory lets VoxyAI learn about you over time. As you dictate, chat, and rewrite text, VoxyAI automatically picks up your writing patterns, preferences, terminology, and personal details. These facts are then injected into AI prompts so responses feel more natural and personalized to you.

How it works: VoxyAI observes patterns in your dictation, chat conversations, Writing Coach usage, and optionally your clipboard. It extracts facts like your email signature, preferred greetings, technical terminology, and favorite rewrite actions. These facts are stored locally, encrypted at rest, and automatically included in future AI prompts to personalize responses.

What VoxyAI Learns

VoxyAI learns from four sources, each picking up different types of information:

Source What is learned
Dictation & Typing Email signatures, greetings, terminology corrections, and formatting preferences
Chat Conversations Tech stack, personal preferences, contact names, and terminology you mention in conversations with the AI
Writing Coach Your most-used rewrite actions and custom instructions for each app context
Clipboard Writing style patterns from text you copy in email, messaging, and notes apps (opt-in, off by default)

Memory Categories

Each learned fact is assigned a category so you can easily browse and manage your memory:

Writing Style — Tone and formatting preferences
Email Signature — Your sign-off patterns
Greeting — How you open emails and messages
Tech Stack — Languages, frameworks, and tools you use
Preference — General preferences mentioned in conversations
Contact Name — Names of people you interact with
Terminology — Domain-specific terms and abbreviations
Workflow — Frequently used actions and patterns

Settings

Configure User Memory from VoxyAI Settings. All settings take effect immediately.

Setting Default Description
Enable Memory Off Turn User Memory on or off. When off, no facts are learned or injected into prompts. All other memory settings are disabled until this is turned on.
Privacy Mode On Controls whether memory facts are sent to cloud AI providers. When on, memory is only used with local models (Apple Intelligence, Ollama). When off, memory facts are included in prompts sent to all providers, including cloud services.
Learn from Clipboard Off When enabled, VoxyAI analyzes text you copy from writing apps (email, messaging, notes) to learn your writing style. Only plain text longer than 50 characters from supported apps is analyzed. Code editors and terminal apps are excluded.
Maximum Facts 200 The maximum number of memory facts to store. When the limit is reached, the lowest-confidence facts are automatically removed to make room for new ones.

Privacy Mode

Privacy Mode controls whether your memory facts are sent to cloud AI providers.

  • ON (Default) — Memory facts are only used with local models (Apple Intelligence and Ollama). Cloud providers never receive your personal information.
  • OFF — Memory facts are included in prompts sent to any provider, including cloud services like OpenAI and Anthropic.

Warning: When Privacy Mode is off, your personal memory facts (email signatures, names, preferences, terminology) will be sent to cloud AI providers as part of your prompts. Only disable Privacy Mode if you are comfortable sharing this information with third-party services.

Encryption & Security

All memory facts are encrypted at rest using AES-256-GCM. The encryption key is stored securely in your macOS Keychain, so only your user account can access your memory data.

  • AES-256-GCM encryption for all stored memory facts
  • Encryption key stored in macOS Keychain
  • Memory database stored locally in Application Support
  • Facts never leave your device when Privacy Mode is on

Smart Token Budgets

VoxyAI automatically adjusts how many memory facts are included in each prompt based on the AI model being used. Smaller models get fewer facts to avoid overwhelming their context window, while larger models can handle more.

Provider Token Budget
Apple Intelligence 500 tokens
Ollama (small models) 500 tokens
Ollama (large models) 1,500 tokens
Cloud providers 2,000 tokens

Tip: When Personal Knowledge (RAG) is also active, memory token budgets are automatically reduced by 30% to leave room for document snippets. The most relevant facts are always prioritized.
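
Put together, the budget rule is small enough to sketch directly, using the values from the table and the 30% reduction from the tip (the enum cases are illustrative, not VoxyAI's API):

enum MemoryProvider {
    case appleIntelligence
    case ollamaSmall
    case ollamaLarge
    case cloud
}

/// Token budget for memory facts, per the table above.
/// When Personal Knowledge (RAG) is active, the budget drops by 30%.
func memoryTokenBudget(for provider: MemoryProvider, ragActive: Bool) -> Int {
    let base: Int
    switch provider {
    case .appleIntelligence, .ollamaSmall: base = 500
    case .ollamaLarge:                     base = 1_500
    case .cloud:                           base = 2_000
    }
    return ragActive ? Int(Double(base) * 0.7) : base
}

// Example: a cloud provider with Personal Knowledge active gets 1,400 tokens for memory facts.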

Automatic Maintenance

VoxyAI automatically maintains your memory to keep it relevant and up to date:

  • Duplicate detection prevents the same fact from being stored multiple times
  • Confidence decay gradually reduces the priority of facts that have not been observed recently
  • Stale facts (not observed for 90+ days with low confidence) are automatically removed
  • Facts that exceed the maximum limit are pruned starting with the lowest confidence

Each fact has a confidence score that increases when the pattern is observed again and decreases over time if it is not. This ensures your memory stays current as your habits evolve.
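
A simplified Swift sketch of these maintenance rules (the confidence threshold and field names are assumptions; only the 90-day window and the prune-lowest-confidence rule come from the description above):

import Foundation

struct MemoryFact {
    let content: String
    var confidence: Double        // 0.0 ... 1.0
    var lastObserved: Date
}

/// Drop low-confidence facts not observed for 90+ days, then trim to the limit,
/// removing the lowest-confidence facts first.
func prune(_ facts: [MemoryFact], maxFacts: Int, now: Date = Date()) -> [MemoryFact] {
    let ninetyDays: TimeInterval = 90 * 24 * 60 * 60
    let lowConfidence = 0.3       // assumed threshold for "low confidence"

    var kept = facts.filter { fact in
        let stale = now.timeIntervalSince(fact.lastObserved) >= ninetyDays
        return !(stale && fact.confidence < lowConfidence)
    }

    if kept.count > maxFacts {
        kept.sort { $0.confidence > $1.confidence }   // keep the highest-confidence facts
        kept = Array(kept.prefix(maxFacts))
    }
    return kept
}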

Managing Memory

Click View Memory in VoxyAI Settings to open the Memory window, where you can:

  • Browse all stored facts, grouped by category
  • Search for specific facts by content or category
  • Edit the content of any fact
  • Toggle individual facts active or inactive without deleting them
  • Delete individual facts or clear all memory at once

Tip: If VoxyAI learned something incorrect, you can edit the fact directly in the Memory window rather than deleting and re-teaching it.

Enterprise Controls

Enterprise administrators can control User Memory through MDM configuration:

  • Disable User Memory entirely by adding "userMemory" to the DisabledFeatures array
  • When disabled via MDM, the memory settings are hidden and no facts are learned or injected

See the Enterprise Configuration Reference for details.

Terminal Commands

When Terminal is your active application, VoxyAI automatically converts natural language to shell commands.

Examples

You say: "list all files modified in the last week"

find . -type f -mtime -7

You say: "find all Python files containing the word config"

grep -r "config" --include="*.py" .

Tone Settings

Adjust the tone of your formatted text to match the situation:

Formal

Professional language for business communications

Assertive

Confident, direct communication style

Empathetic

Warm, understanding tone for sensitive topics

Concise

Brief, to-the-point messages

Detailed

Thorough explanations with full context

Humorous

Light-hearted, casual communication

Voice Commands

VoxyAI understands natural language. Just speak normally and the AI will format appropriately. Some special commands:

  • "translate to [language]" - Translates your text to the specified language
  • "write a function that..." - Generates code based on your description
  • "fix this code" - Analyzes and fixes code in your clipboard or selection
  • "generate a script to..." - Creates a complete script with documentation

You can also create your own voice-activated commands with custom trigger phrases and AI prompt templates. Learn more about Custom Commands.
