MDM Configuration for AI Apps: An IT Admin's Guide

February 6, 2026 · 12 min read

The Challenge: AI Tools in Enterprise Environments

AI tools are transforming how knowledge workers write, code, analyze data, and communicate. But for IT administrators, this rapid adoption creates a real tension: how do you let employees benefit from AI-powered productivity tools while maintaining security, compliance, and visibility across your macOS fleet?

The biggest risk many organizations face today is shadow AI. Employees download AI applications on their own, paste sensitive company data into unknown cloud services, and store API keys in plaintext files. A 2025 Gartner survey found that over 55% of enterprise employees have used at least one unapproved AI tool for work tasks. That is a data governance nightmare waiting to happen.

The answer is not to block AI entirely. That approach fails because employees will find workarounds, and you lose legitimate productivity gains. Instead, the answer is managed deployment: use your existing MDM infrastructure to deploy approved AI apps with proper configuration, enforce policies centrally, and maintain audit visibility. If you are already managing macOS devices with Jamf, Mosyle, or Kandji, you have all the tools you need.

Managed App Configuration (AppConfig) Explained

macOS supports a powerful mechanism called managed app configuration that lets MDM administrators push settings directly to applications via configuration profiles. When an app is deployed through MDM, it can read managed preferences (plist keys) that the admin has defined. The app receives these settings automatically, with no user interaction required.

Under the hood, this works through the com.apple.ManagedClient.preferences domain. Your MDM server pushes a configuration profile containing a plist payload targeted at the app's bundle identifier. The app reads these values using UserDefaults or CFPreferences, and can distinguish managed values from user-set values. Managed values take precedence and cannot be overridden by the end user.
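The precedence rule can be illustrated with a small conceptual sketch. This is not Apple's actual implementation, and the key names are hypothetical; it only models the behavior the app observes through UserDefaults, where MDM-pushed values shadow anything the user has set.

```python
# Conceptual sketch of managed-preference precedence (NOT Apple's actual
# implementation): managed (MDM-pushed) values win over user-set values.

def effective_prefs(user_prefs: dict, managed_prefs: dict) -> dict:
    """Merge user and managed preference dicts; managed keys take precedence."""
    merged = dict(user_prefs)
    merged.update(managed_prefs)  # managed values override user values
    return merged

user = {"CloudProvidersEnabled": True, "Theme": "dark"}       # user-set
managed = {"CloudProvidersEnabled": False}                    # pushed by MDM

prefs = effective_prefs(user, managed)
# The user's attempt to enable cloud providers is overridden by the profile.
```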

This approach gives IT admins several advantages:

  • Zero-touch configuration - settings are applied automatically at enrollment or profile installation
  • Centralized policy enforcement - change a setting once, it propagates to every device in scope
  • User-proof settings - managed preferences cannot be altered by the end user
  • Granular scoping - different configurations for different departments or security tiers

Deploying AI Apps via MDM

The general workflow for deploying an AI application through MDM follows a familiar pattern, regardless of which MDM solution you use:

  1. Upload the app package - Add the .pkg or .dmg to your MDM's app catalog. For apps distributed outside the Mac App Store, you will typically upload a signed installer package.
  2. Create a configuration profile - Define the managed preferences (plist keys) that control the app's behavior. This is where you set allowed providers, API keys, feature flags, and security policies.
  3. Assign to device groups - Scope the app and its configuration profile to the appropriate smart groups or device groups based on department, security tier, or other criteria.
  4. Deploy - Push the app installation and configuration profile. The app installs silently, reads the managed configuration, and is ready to use with your organization's policies in place.

MDM-Specific Notes

Jamf Pro offers the most mature macOS management experience. Upload packages to Jamf Cloud Distribution Point or your on-premises distribution point, create configuration profiles in the Profiles section, and scope using Smart Groups. Jamf's API also lets you automate profile deployment via scripts.
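As a sketch of that automation path, the snippet below builds a request for the Jamf Pro Classic API's configuration-profile resource. The endpoint path and XML shape follow the Classic API's `osxconfigurationprofiles` resource, but verify both against your Jamf Pro version before use; the server URL and profile name are placeholders.

```python
# Hedged sketch of automating profile creation via the Jamf Pro Classic API.
# Verify the endpoint and XML schema against your Jamf Pro documentation.
import urllib.request
from xml.sax.saxutils import escape

def build_profile_request(base_url: str, name: str, payload_plist: str):
    """Return (url, xml_body) for creating a configuration profile (id 0 = new)."""
    url = f"{base_url}/JSSResource/osxconfigurationprofiles/id/0"
    body = (
        "<os_x_configuration_profile><general>"
        f"<name>{escape(name)}</name>"
        f"<payloads>{escape(payload_plist)}</payloads>"
        "</general></os_x_configuration_profile>"
    )
    return url, body

def upload_profile(url: str, body: str, token: str) -> None:
    """POST the profile XML (not executed here; requires a live Jamf server)."""
    req = urllib.request.Request(
        url,
        data=body.encode(),
        method="POST",
        headers={"Content-Type": "application/xml",
                 "Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

url, body = build_profile_request("https://jamf.example.com", "AI-Standard", "<plist/>")
```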

Mosyle provides a streamlined interface for deploying custom apps and configuration profiles. Use the Management > Profiles section to create custom plist payloads. Mosyle's tag-based scoping system works well for tiered AI deployment.

Kandji uses a Blueprint-based approach. Add the app as a Custom App library item, then create a Custom Profile for the managed configuration. Assign both to the appropriate Blueprint.

Configuration Profiles for AI Apps

When deploying AI applications via managed app configuration, there are several categories of settings that IT admins typically need to control:

Allowed AI Providers

Restrict which AI providers (cloud or local) the app can connect to. This prevents users from routing company data through unapproved services. You define an allow-list, and the app blocks any provider not on that list.
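In pseudocode terms, the enforcement logic inside the app amounts to a membership check against the managed allow-list. The function names below are illustrative, not a real API:

```python
# Minimal sketch of app-level provider allow-list enforcement.
# ProviderBlockedError and the provider names are illustrative.

class ProviderBlockedError(Exception):
    pass

def is_allowed(provider: str, allowed_providers: list[str]) -> bool:
    """True if the provider appears on the managed allow-list."""
    return provider in allowed_providers

def check_provider(provider: str, allowed_providers: list[str]) -> None:
    """Raise before any request is sent to a provider not on the list."""
    if not is_allowed(provider, allowed_providers):
        raise ProviderBlockedError(
            f"Provider '{provider}' is not on the managed allow-list"
        )

allowed = ["ollama", "apple-intelligence"]  # from the managed configuration
check_provider("ollama", allowed)           # permitted; raises for anything else
```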

API Key Distribution

Instead of having employees enter their own API keys (which leads to key sprawl and security issues), distribute organization-owned API keys through the configuration profile or, better yet, reference keys stored in a managed Keychain. This centralizes key rotation and revocation.
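A sketch of the Keychain-reference pattern, using the macOS `security` command-line tool to resolve the managed label to a secret at runtime (the label below is illustrative, and nothing executes at import time since it requires a Keychain item your MDM deployed separately):

```python
# Sketch: resolve an MDM-distributed API key from the Keychain via the
# `security` CLI. The label value is an illustrative example.
import subprocess

KEYCHAIN_LABEL = "com.jefferyabbott.voxyai.managed-api-key"  # from the profile

def keychain_lookup_cmd(label: str) -> list[str]:
    """argv for `security`: -l matches the item label, -w prints the secret only."""
    return ["security", "find-generic-password", "-l", label, "-w"]

def fetch_api_key(label: str = KEYCHAIN_LABEL) -> str:
    """Run the lookup on a managed Mac (raises if the item is absent)."""
    out = subprocess.run(keychain_lookup_cmd(label),
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()
```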

Feature Toggles

Enable or disable specific features based on your organization's risk tolerance. For example, you might enable local AI processing but disable cloud-based processing for devices that handle regulated data.

Data Loss Prevention Rules

Configure rules that prevent sensitive data patterns (credit card numbers, social security numbers, proprietary code markers) from being sent to AI providers. The app can scan outbound content and block or warn before transmission.
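The core of such a scan is regex matching over outbound content before transmission. The sketch below uses example patterns for SSNs, AWS access key IDs, and private key headers; a real engine would also handle encodings, chunking, and overlapping matches:

```python
# Sketch of pattern-based DLP scanning on outbound prompt content.
# Patterns and function names are illustrative.
import re

DLP_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), # private key header
]

def scan_outbound(text: str) -> list[str]:
    """Return every sensitive-pattern match found in outbound content."""
    hits = []
    for pattern in DLP_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def allowed_to_send(text: str) -> bool:
    """True only when no sensitive pattern matched."""
    return not scan_outbound(text)
```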

Here is a minimal example of what a configuration profile plist payload looks like for an AI app:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>AllowedProviders</key>
    <array>
        <string>ollama</string>
        <string>apple-intelligence</string>
    </array>
    <key>CloudProvidersEnabled</key>
    <false/>
</dict>
</plist>

Restricting AI Providers and Enforcing Policies

Not all AI providers are equal from a data governance perspective. Some route data through servers in jurisdictions your legal team has not approved. Others have training-on-input clauses that could expose your intellectual property. Restricting providers is one of the most impactful things you can do.

For high-security environments (government, defense, healthcare), consider restricting to local-only processing. Providers like Ollama run models entirely on-device. Data never leaves the Mac. Apple Intelligence also processes on-device for many tasks. This configuration eliminates cloud data exposure entirely:

<key>AllowedProviders</key>
<array>
    <string>ollama</string>
    <string>apple-intelligence</string>
</array>
<key>CloudProvidersEnabled</key>
<false/>

For standard enterprise environments, you might allow specific cloud providers where your organization has a Data Processing Agreement (DPA) in place. For example, if you have an enterprise agreement with OpenAI or Anthropic that includes data retention and training opt-out guarantees:

<key>AllowedProviders</key>
<array>
    <string>ollama</string>
    <string>apple-intelligence</string>
    <string>openai</string>
    <string>anthropic</string>
</array>
<key>CloudProvidersEnabled</key>
<true/>

The key principle is that the allow-list is enforced at the app level. Even if a user manually adds a provider not on the list, the app refuses to send data to it. Combined with MDM-managed preferences that the user cannot override, this creates a robust policy enforcement layer.

Data Loss Prevention Considerations

AI tools create a new class of data loss vector. When an employee pastes code, a document, or a spreadsheet into an AI prompt, that data is transmitted to the provider's API. If the provider is a cloud service, your data is now outside your security perimeter. Even with a trusted provider, you may not want certain data types sent externally.

Effective DLP for AI apps should address several layers:

  • Pattern-based blocking - Detect and block content matching sensitive patterns before it is sent to any provider. Common patterns include credit card numbers, SSNs, AWS access keys, database connection strings, and private key material.
  • Secrets scanning - Specifically detect API keys, tokens, and credentials in content being sent to AI providers. This catches the common scenario where a developer pastes a config file containing secrets into an AI prompt for debugging help.
  • Content classification - Some organizations label documents with classification levels. DLP rules can block content from classified documents from being sent to cloud AI providers while still allowing local processing.
  • Clipboard monitoring - Advanced DLP can monitor clipboard contents before they are pasted into AI apps, providing a warning or block when sensitive data is detected. This is more invasive and should be weighed against employee privacy expectations and local regulations.

From a managed configuration standpoint, IT admins can enable or disable the secrets scanner, define custom regex patterns for sensitive data, and set the enforcement mode (block, warn, or log-only) depending on the organization's maturity and risk tolerance.
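The three enforcement modes reduce to a small decision function. This sketch assumes a hypothetical `DLPMode` preference with values block, warn, and log:

```python
# Sketch of DLP enforcement-mode dispatch (block / warn / log).
# The mode values mirror a hypothetical DLPMode managed preference.
import logging

def dlp_decision(mode: str, matched: bool) -> str:
    """Return the action to take: 'allow', 'warn', or 'block'."""
    if not matched:
        return "allow"                    # nothing sensitive detected
    if mode == "block":
        return "block"                    # prevent transmission outright
    if mode == "warn":
        logging.warning("Sensitive content detected; prompting user")
        return "warn"                     # show a dialog, user may override
    logging.info("Sensitive content detected (log-only mode)")
    return "allow"                        # log mode: record but allow
```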

Audit Logging and Compliance

Deploying AI tools is only half the equation. For compliance frameworks like HIPAA, SOC 2, and GDPR, you also need visibility into how those tools are being used.

Key data points to capture in your audit logs:

  • Provider usage - Which AI providers are being used, and how frequently. This helps you verify that only approved providers are in use and identify any policy violations.
  • Request volume - The number and size of requests sent to each provider. Unusual spikes may indicate bulk data exfiltration or misuse.
  • Blocked content events - Every time the DLP engine blocks or warns on content, that event should be logged with enough context for investigation without logging the sensitive content itself.
  • Configuration changes - Any changes to the managed configuration profile, including who made the change and when.
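A blocked-content event, for example, can carry enough context for investigation without ever recording the sensitive content itself. The field names in this sketch are illustrative:

```python
# Sketch of an audit-log event for a DLP block: log which rule fired and
# where the content was headed, never the matched content. Field names are
# illustrative.
import json
from datetime import datetime, timezone

def make_dlp_event(provider: str, pattern_name: str, action: str) -> str:
    """Serialize one audit event as a JSON log line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "dlp_content_blocked",
        "provider": provider,       # which provider the content was bound for
        "pattern": pattern_name,    # which rule matched -- never the content
        "action": action,           # block / warn / log
    }
    return json.dumps(event)

line = make_dlp_event("anthropic", "aws-access-key", "block")
```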

For HIPAA, you need to demonstrate that protected health information (PHI) cannot be transmitted to unauthorized AI providers. Restricting to local-only providers and enabling DLP with pattern matching for PHI identifiers (MRNs, patient names in conjunction with health data) addresses this requirement.

For SOC 2, auditors will want to see that you have controls around data processing by third parties, which includes AI providers. Your managed app configuration and DLP logs serve as evidence of these controls.

For GDPR, any data sent to an AI provider may constitute processing of personal data. Your configuration should ensure that personal data is only sent to providers with whom you have an appropriate DPA, and that data subjects' rights can be exercised (including deletion requests to providers).

Example: Deploying VoxyAI Across a Fleet

Let us walk through a concrete example: deploying VoxyAI across a macOS fleet using Jamf Pro with managed app configuration.

Step 1: Upload the Installer

Download the VoxyAI .pkg installer and upload it to your Jamf Pro distribution point. Create a new Policy with the package attached, set the trigger to Enrollment Complete and Recurring Check-in, and scope it to your target Smart Group.

Step 2: Create the Configuration Profile

In Jamf Pro, go to Computers > Configuration Profiles > New. Select Application & Custom Settings and upload the following plist payload targeting VoxyAI's bundle identifier (com.jefferyabbott.voxyai):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Restrict to approved AI providers -->
    <key>AllowedProviders</key>
    <array>
        <string>ollama</string>
        <string>apple-intelligence</string>
        <string>anthropic</string>
    </array>

    <!-- Allow cloud providers (Anthropic) -->
    <key>CloudProvidersEnabled</key>
    <true/>

    <!-- Deploy API key via managed Keychain reference -->
    <key>APIKeyKeychainLabel</key>
    <string>com.jefferyabbott.voxyai.managed-api-key</string>

    <!-- Enable the built-in secrets scanner -->
    <key>SecretsScanner</key>
    <true/>

    <!-- DLP enforcement mode: block, warn, or log -->
    <key>DLPMode</key>
    <string>block</string>

    <!-- Custom DLP patterns (regex) -->
    <key>DLPPatterns</key>
    <array>
        <string>\b\d{3}-\d{2}-\d{4}\b</string>
        <string>AKIA[0-9A-Z]{16}</string>
        <string>-----BEGIN (RSA |EC )?PRIVATE KEY-----</string>
    </array>

    <!-- Disable features not approved for this org -->
    <key>EnableCloudSync</key>
    <false/>
</dict>
</plist>

Step 3: Understanding Each Key

  • AllowedProviders (Array of Strings) - The list of permitted AI providers. Any provider not in this array is blocked. Valid values include ollama, apple-intelligence, openai, anthropic, and google.
  • CloudProvidersEnabled (Boolean) - Master toggle for cloud-based AI providers. When false, only local providers (Ollama, Apple Intelligence) are allowed regardless of the AllowedProviders list.
  • APIKeyKeychainLabel (String) - The Keychain item label where the organization's API key is stored. Deploy the key separately via a Keychain configuration profile or script to avoid embedding secrets in the plist.
  • SecretsScanner (Boolean) - Enables the built-in secrets scanner that detects API keys, tokens, and credentials in content before it is sent to an AI provider.
  • DLPMode (String) - Controls DLP enforcement behavior. block prevents transmission, warn shows a dialog but allows override, and log silently records the event.
  • DLPPatterns (Array of Strings) - Custom regex patterns for detecting sensitive content. Matched content triggers the DLP action defined by DLPMode.
  • EnableCloudSync (Boolean) - Controls whether the app can sync settings or history to cloud storage. Disable for environments where data residency is a concern.
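Before uploading a payload, it can be worth validating it against these key/type expectations. A minimal sketch using Python's standard-library plistlib (the expected-types map mirrors the keys documented above):

```python
# Sketch: validate a configuration payload's key types before upload.
import plistlib

EXPECTED_TYPES = {
    "AllowedProviders": list,
    "CloudProvidersEnabled": bool,
    "APIKeyKeychainLabel": str,
    "SecretsScanner": bool,
    "DLPMode": str,
    "DLPPatterns": list,
    "EnableCloudSync": bool,
}

def validate_payload(plist_bytes: bytes) -> list[str]:
    """Return a list of errors; an empty list means the payload passes."""
    data = plistlib.loads(plist_bytes)
    errors = []
    for key, expected in EXPECTED_TYPES.items():
        if key in data and not isinstance(data[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    if data.get("DLPMode") not in (None, "block", "warn", "log"):
        errors.append("DLPMode: must be block, warn, or log")
    return errors

payload = plistlib.dumps({"DLPMode": "block", "CloudProvidersEnabled": False})
```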

Step 4: Scope to Device Groups

In Jamf Pro, scope the configuration profile using Smart Groups. For example, create tiered deployment groups:

  • AI-LocalOnly - Devices in high-security departments (Legal, HR, Finance). Configuration restricts to ollama and apple-intelligence only with CloudProvidersEnabled set to false.
  • AI-Standard - General employee devices. Configuration allows approved cloud providers with DLP in block mode and secrets scanning enabled.
  • AI-Developer - Engineering devices with broader provider access but strict secrets scanning to prevent accidental credential leaks during code-related prompts.

Create separate configuration profiles for each tier and scope them to the corresponding Smart Group. This gives you granular control without managing individual devices.
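One way to keep the three profiles consistent is to generate the payloads from a single tier definition. A sketch with plistlib, where the tier settings mirror the groups described above:

```python
# Sketch: generate per-tier plist payloads from one source of truth so the
# three configuration profiles never drift apart. Tier contents are examples.
import plistlib

TIERS = {
    "AI-LocalOnly": {"AllowedProviders": ["ollama", "apple-intelligence"],
                     "CloudProvidersEnabled": False},
    "AI-Standard":  {"AllowedProviders": ["ollama", "apple-intelligence",
                                          "anthropic"],
                     "CloudProvidersEnabled": True,
                     "DLPMode": "block", "SecretsScanner": True},
    "AI-Developer": {"AllowedProviders": ["ollama", "apple-intelligence",
                                          "anthropic", "openai"],
                     "CloudProvidersEnabled": True,
                     "DLPMode": "warn", "SecretsScanner": True},
}

def tier_payload(tier: str) -> bytes:
    """Serialize one tier's settings as a plist payload ready for upload."""
    return plistlib.dumps(TIERS[tier])
```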

Step 5: Verify Deployment

After deploying, verify that the managed configuration is applied correctly. On a target device, open Terminal and run:

defaults read com.jefferyabbott.voxyai AllowedProviders

You should see the array of allowed providers. You can also confirm that the managed profile is installed (device-level profiles require root to list) by running:

sudo profiles list -verbose | grep -A 5 "voxyai"
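For fleet-wide verification, managed preferences are also materialized on disk under /Library/Managed Preferences/, so a script can read the applied configuration directly. A minimal sketch (run with sufficient privileges; the summary function's field choices are illustrative):

```python
# Sketch: read the materialized managed-preferences plist and summarize the
# compliance-relevant settings. Requires privileges to read the file.
import plistlib
from pathlib import Path

MANAGED_PLIST = Path("/Library/Managed Preferences/com.jefferyabbott.voxyai.plist")

def summarize(config: dict) -> str:
    """One-line summary of the settings that matter for a compliance check."""
    providers = ",".join(config.get("AllowedProviders", []))
    cloud = config.get("CloudProvidersEnabled", False)
    return f"providers=[{providers}] cloud={cloud}"

def verify(path: Path = MANAGED_PLIST) -> str:
    """Load the device's applied configuration and summarize it."""
    with path.open("rb") as f:
        return summarize(plistlib.load(f))
```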

With the configuration profile in place, VoxyAI respects the managed settings on every launch. Users get a productive AI-powered dictation tool, and IT maintains full control over which providers handle company data, what security scanning is active, and how policy violations are handled. That is the power of MDM-managed AI app deployment: productivity and security working together, not against each other.

Try VoxyAI Free

Voice dictation with AI-powered formatting for macOS. Works with free local models or bring your own API keys.

Download VoxyAI