AI Platform Assistant

AI-powered personal agent for platform engineers — policy development, testing, and Kubernetes operations from your terminal.

The Nirmata Personal Agent (nctl ai) runs on your workstation and integrates directly into your development workflow, offering specialized guidance and automation without requiring cluster access or cloud services.

nctl ai is built with a security-first design – it only accesses directories you explicitly allow, loads only the skills you provide, and asks for your confirmation before performing any operation. See Security for details.

nctl ai is embedded in nctl, Nirmata’s powerful CLI tool for managing the security posture of your clusters and applications.

Quick Start

Install nctl using Homebrew:

brew tap nirmata/tap
brew install nctl

For more installation options, see nctl installation.

Run the personal agent in interactive mode:

nctl ai

You will be prompted to enter your business email to:

  • sign up for a free trial
  • or sign in to your account

Using nctl AI requires authentication with Nirmata Control Hub to access
AI-enabled services. Please enter your business email to sign up for a 
free trial, or sign in to your account

Enter email: ****@******.com

A verification code has been sent to your email.
Enter verification code: ******

Email verified successfully!
Your credentials have been fetched and successfully saved.

👋 Hi, I am your Nirmata AI Platform Engineering Assistant!

I can help you automate security, compliance, and operational best practices 
across your clusters and pipelines.

💡 Here are some tasks I can do for you, or ask anything:
  ▶ scan clusters
  ▶ generate policies and tests
  ▶ optimize costs

💡 type 'help' to see commands for working in nctl ai

───────────────────────────────────────────────────────────────────────────────────────
>
───────────────────────────────────────────────────────────────────────────────────────

Try some sample prompts like:

  • scan my cluster
  • generate a policy to require pod labels
  • summarize violations across my clusters
  • perform a Kyverno health check

Non-Interactive Mode:

You can also provide a prompt directly for single-shot requests:

nctl ai --prompt "create a policy that requires all pods to have resource limits"

See Command Reference for full details.

Key Capabilities

nctl ai is a personal agent specializing in Kubernetes, Policy as Code, and Platform Engineering. It provides comprehensive support across these domains:

Policy as Code

  • Generate Kyverno policies from natural language descriptions
  • Create and execute comprehensive Kyverno CLI and Chainsaw tests
  • Generate policy exceptions for failing workloads
  • Upgrade Kyverno policies from older versions to CEL
  • Convert policies from OPA/Sentinel to Kyverno

Platform Engineering

  • Troubleshoot Kyverno engine, webhook, and controller issues
  • Get policy recommendations for your environments
  • Manage compliance across clusters
  • Manage Nirmata agents across your clusters
  • Install and configure Kyverno and other controllers

Security

nctl ai is built with a security-first approach. The agent operates within strict boundaries and always asks for permission before performing operations.

Allowed Directories

By default, nctl ai can only access the current working directory. To grant access to additional directories, use the --allowed-dirs flag:

nctl ai --allowed-dirs "/path/to/policies,/tmp"

You can also set the NIRMATA_AI_ALLOWED_DIRS environment variable:

export NIRMATA_AI_ALLOWED_DIRS="/path/to/policies,/tmp"
nctl ai

The agent will refuse to read, write, or execute files outside of the allowed directories, ensuring your filesystem remains protected.

Permission Checks

Before performing any operation that modifies your system (writing files, executing commands, applying Kubernetes resources), nctl ai prompts for explicit confirmation. This ensures you remain in control of all changes.

For automated workflows where manual confirmation is not practical, you can disable permission checks:

nctl ai --skip-permission-checks --prompt "scan my cluster"

To allow destructive operations (e.g., deleting resources) in non-interactive mode, both --prompt and --skip-permission-checks must be combined with the --force flag:

nctl ai --force --skip-permission-checks --prompt "delete unused configmaps"

Warning: Use --skip-permission-checks and --force with caution. These flags bypass safety prompts and should only be used in trusted automation pipelines.
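
For example, a CI pipeline step that scans manifests without interactive prompts might look like this (the prompt text is illustrative; adjust paths to your repository):

nctl ai --skip-permission-checks --prompt "scan the Kubernetes manifests in ./deploy and report any policy violations"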

Security Summary

| Feature | Default Behavior | Override |
| --- | --- | --- |
| File system access | Current working directory only | --allowed-dirs |
| Tool execution | Requires user confirmation | --skip-permission-checks |
| Destructive operations | Blocked in non-interactive mode | --force (requires --skip-permission-checks and --prompt) |
| Skill loading | Built-in skills only | --skills |

Session & Task Management

nctl ai provides built-in session and task management so you can pause, resume, and track work across multiple interactions.

Session Management

Sessions automatically capture your conversation history, tool calls, and results. You can resume any previous session to continue where you left off.

Interactive commands:

| Command | Description |
| --- | --- |
| sessions | List all available sessions |
| save | Save current session |
| new | Create a new session |
| resume <id> | Resume a specific session (or latest) |
| exit / quit / q | Save session and exit |
| exit-nosave | Exit without saving |

CLI flags:

# Resume the most recent session
nctl ai --resume-session latest

# Resume a specific session by ID
nctl ai --resume-session 20260210-0206

# List all available sessions
nctl ai --list-sessions

# Delete a session by ID
nctl ai --delete-session 20260210-0206

Sessions work with any provider (Nirmata, Anthropic, Bedrock, etc.) and are saved periodically during conversation. Use Ctrl+D to explicitly save and exit, or Ctrl+C to exit without saving (the session ID is displayed for later resuming).
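
For example, one pattern is to run a one-shot task and reopen the conversation later (assuming, as with interactive runs, the one-shot run is saved as a session):

# Run a scan as a single-shot request
nctl ai --prompt "scan my cluster"

# Later, reopen the most recent session to ask follow-up questions
nctl ai --resume-session latest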

Task Management

nctl ai tracks tasks automatically during complex, multi-step operations. The agent creates and updates a task list as it works, giving you visibility into progress.

Interactive commands:

| Command | Description |
| --- | --- |
| tasks | Show current todo list and task progress |
| task <N> | Show detailed information for task N (including worker output) |

The task list updates in real time as the agent works through multi-step workflows like cluster scanning, policy generation, or compliance assessments.

AI Provider Configuration

By default, nctl ai uses Nirmata Control Hub as its AI provider. However, you can configure it to work with other AI providers using the --provider flag.

Nirmata (Default)

The default provider uses Nirmata Control Hub for AI services. This requires authentication as described in the Quick Start section.

nctl ai --prompt "generate a policy to require pod labels"

Anthropic Claude

Configuration:

Set the environment variable with your Anthropic API key:

export ANTHROPIC_API_KEY=<your-api-key>

Usage:

nctl ai --provider anthropic --prompt "What is Kubernetes? Answer in one sentence."

Google Gemini

Configuration:

Set the environment variable with your Google AI API key:

export GEMINI_API_KEY=<your-api-key>

Usage:

nctl ai --provider gemini --prompt "what is 5+5? answer in one word"

Notes:

  • Environment variable is GEMINI_API_KEY (not GOOGLE_API_KEY)
  • Default model: gemini-2.5-pro
  • Free tier rate limit: approximately 2 requests per minute
  • Get your API key from Google AI Studio

Azure OpenAI

Configuration:

Set the environment variables with your Azure OpenAI endpoint and API key:

export AZURE_OPENAI_ENDPOINT="https://<your-resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-api-key>"

Usage:

nctl ai --provider azopenai --model gpt-4o --prompt "what is 5+5? answer in one word"

Notes:

  • Requires both endpoint URL and API key to be configured
  • You must specify the model with the --model flag (e.g., gpt-4o, gpt-4, gpt-35-turbo)
  • Get your credentials from Azure Portal

Amazon Bedrock

Amazon Bedrock uses AWS credentials for authentication. Ensure you have a valid AWS profile configured with appropriate Bedrock access permissions.

Configuration:

Step 1: Login to AWS SSO (if using SSO):

aws sso login --profile your-profile-name

Step 2: Set your AWS profile environment variable:

export AWS_PROFILE=your-profile-name

Step 3: Verify your credentials are working:

aws sts get-caller-identity

You should see output similar to:

{
    "UserId": "AROA4JFRUINQC7VCOQ7UD:user@example.com",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/YourRole/user@example.com"
}

Usage:

nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-5-20250929-v1:0 --prompt "Your prompt here"

Notes:

  • Requires valid AWS credentials with Bedrock access permissions
  • Supports Claude models from Anthropic available through Bedrock
  • Ensure your AWS account has Bedrock model access enabled in the target region
  • Specify the model with the --model flag (defaults to Claude Sonnet 4 if not specified)
  • Model IDs follow the format region.provider.model-name-version:variant and must include the us. region prefix (e.g., us.anthropic.claude-sonnet-4-5-20250929-v1:0); without the prefix, you’ll get an “on-demand throughput isn’t supported” error
  • For more information, see Amazon Bedrock Documentation

Provider Comparison

| Provider | Environment Variables | Model Selection | Notes |
| --- | --- | --- | --- |
| Nirmata (default) | Authentication via nctl login | Automatic | Includes access to Nirmata platform features |
| Anthropic | ANTHROPIC_API_KEY | Automatic | Best for Claude-specific features |
| Google Gemini | GEMINI_API_KEY | Default: gemini-2.5-pro | Free tier available with rate limits |
| Azure OpenAI | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY | Required via --model | Enterprise-ready with Azure integration |
| Amazon Bedrock | AWS_PROFILE (or AWS credentials) | Required via --model | AWS-native with IAM authentication |

Using AI/LLM Proxies

You can configure nctl ai to route requests through AI/LLM proxy services. This is useful for:

  • Centralizing API key management
  • Implementing rate limiting and cost controls
  • Adding observability and monitoring
  • Load balancing across multiple providers
  • Using self-hosted AI gateways

Each provider supports proxy configuration through a base URL environment variable:

Anthropic with Proxy:

export ANTHROPIC_API_KEY=<your-api-key>
export ANTHROPIC_BASE_URL=http://your-proxy:8000

nctl ai --provider anthropic --prompt "Your prompt here"

Google Gemini with Proxy:

export GEMINI_API_KEY=<your-api-key>
export GEMINI_BASE_URL=http://your-proxy:8000

nctl ai --provider gemini --prompt "Your prompt here"

Azure OpenAI with Proxy:

export AZURE_OPENAI_API_KEY=<your-api-key>
export AZURE_OPENAI_ENDPOINT=http://your-proxy:8000

nctl ai --provider azopenai --model gpt-4o --prompt "Your prompt here"

Notes:

  • The proxy must be compatible with the provider’s API format
  • Popular proxy solutions include LiteLLM, OpenLLM, and enterprise gateways
  • Ensure your proxy is properly configured to forward requests to the actual AI provider
  • The base URL should include the protocol (http:// or https://) and port if needed
  • When using a proxy, set AZURE_OPENAI_ENDPOINT to your proxy URL instead of your Azure endpoint.
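
As a concrete illustration, a minimal LiteLLM proxy configuration for the Anthropic flow above might look like the sketch below. The model alias, upstream model ID, and port are assumptions; per the notes above, the proxy must still speak the provider's API format, so consult the LiteLLM documentation for the exact schema:

# litellm-config.yaml (hypothetical)
model_list:
  - model_name: claude-sonnet                      # alias the proxy exposes
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514    # upstream model (assumed ID)
      api_key: os.environ/ANTHROPIC_API_KEY        # key read from the proxy's environment

# Start the proxy, then point nctl ai at it:
#   litellm --config litellm-config.yaml --port 8000
#   export ANTHROPIC_BASE_URL=http://localhost:8000
#   nctl ai --provider anthropic --prompt "Your prompt here"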

Available Tools

Command Execution

  • bash - Execute shell commands
  • kubectl - Run Kubernetes commands against your cluster
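
For example, you can ask the agent to use these tools directly in a prompt (illustrative; in interactive mode the agent asks for confirmation before executing commands):

nctl ai --prompt "use kubectl to list pods that are not in a Running state, across all namespaces"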

Kyverno Policy Tools

  • generate_policy - Generate a Kyverno policy
  • generate_kyverno_tests - Generate Kyverno CLI tests for a policy, including kyverno-test.yaml, resources.yaml, and optionally variables.yaml files
  • generate_chainsaw_tests - Generate or update Chainsaw tests for Kyverno policies
  • run_kyverno_tests - Test Kyverno policies using Kyverno CLI test command
  • remediate - Fix policy violations for a resource
  • scan_kubernetes_resources - Scan Kubernetes resource files against policies and return the results
  • scan_kubernetes_cluster - Scan Kubernetes resources in a cluster against policies and return the results
  • scan_terraform - Scan Terraform resources against policies and return the results
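
These tools can be chained in a single request. For example (an illustrative prompt):

nctl ai --prompt "generate a policy that disallows privileged containers, create Kyverno CLI tests for it, and run the tests"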

File System Operations

  • read_file, read_multiple_files - Read file contents
  • write_file - Create a new file or overwrite an existing file with new content
  • modify_file - Update file by finding and replacing text
  • copy_file, move_file - Copy, move, or rename files and directories
  • delete_file - Delete a file or directory from the file system
  • create_directory - Create a new directory or ensure a directory exists
  • list_directory, tree - Browse directory structure and get hierarchical directory representations
  • get_file_info - Retrieve detailed metadata about a file or directory
  • search_files - Recursively search for files and directories matching a pattern
  • search_within_files - Search for text within file contents, reporting file paths and line numbers where matches are found
  • add_allowed_directory - Add directories for filesystem access
  • list_allowed_directories - View allowed directories
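
For example, an illustrative prompt that exercises the search tools:

nctl ai --prompt "search within ./policies for rules that reference 'privileged' and summarize where they appear"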

Utility Tools

  • todo - Manage a task list to track tasks (add, remove, update, list items). Automatically prevents duplicate items
  • worker - Manage background workers for concurrent task processing
  • email - Send emails via NCH

Slack Integration

Integration with Slack allows you to list channels and send messages directly from nctl ai.

Configuration:

Slack integration must be configured in Nirmata Control Hub. No additional environment variables are required once configured in NCH.

Available Tools:

  • list_slack_channels - List all Slack channels the user has access to
  • send_slack_message - Send a message to a Slack channel via NCH

Examples:

List available channels:

nctl ai --prompt "list my slack channels"

Send a message:

nctl ai --prompt "scan my cluster and send the report to dev-general channel"

Available Skills

nctl ai loads specialized knowledge dynamically based on your needs:

Policy & Security

  • kyverno-policies - Generate Kyverno policies from natural language
  • converting-policies - Convert policies between formats (ClusterPolicy to ValidatingPolicy, OPA Rego, etc.)
  • kyverno-tests - Generate Kyverno CLI unit tests
  • chainsaw-tests - Generate Chainsaw E2E integration tests
  • converting-chainsaw-tests - Convert Chainsaw tests to ValidatingPolicy format

Cluster Management & Assessment

  • quickstart - First-run cluster assessment and security scanning
  • recommend-policies - Analyze clusters and recommend appropriate Kyverno policies
  • kyverno-compliance-management - Install Kyverno/N4K with compliance dashboards

Troubleshooting & Operations

  • troubleshooting-kyverno - Diagnose Kyverno webhook, performance, and policy issues
  • troubleshooting-workloads - Debug Kubernetes pods and application failures

Enterprise Features

  • cost-management - Install OpenCost, Grafana dashboards, and cost guardrails
  • installing-remediator-agent - Set up AI-powered policy violation remediation

Development & Branding

  • cluster-setup - Set up local development environment (Docker, Kind, Kyverno)
  • brand-guidelines - Apply Nirmata branding to generated content

You can also add your own Skills to customize the agent.

Adding Tools

The Model Context Protocol (MCP) allows you to extend nctl ai with additional capabilities by connecting external MCP servers. These servers can provide specialized tools, resources, and functionality beyond the built-in features.

Configuration

To configure MCP servers, create a configuration file at ~/.nirmata/nctl/mcp.yaml:

servers:
  - name: resend-email
    command: node
    args:
      - /path/to/directory/mcp-send-email/build/index.js
    env:
      RESEND_API_KEY: your_api_key_here
      SENDER_EMAIL_ADDRESS: example@email.com
      REPLY_TO_EMAIL_ADDRESS: another_example@email.com
    capabilities:
      tools: true
      prompts: false
      resources: false
      attachments: true

Configuration Options

  • name: Unique identifier for the MCP server
  • command: Executable command to start the server (e.g., node, python, binary path)
  • args: Array of command-line arguments passed to the server
  • env: Environment variables required by the server (API keys, configuration values, etc.)
  • capabilities: Defines what features the server provides:
    • tools: Server provides callable tools/functions
    • prompts: Server provides prompt templates
    • resources: Server provides data resources
    • attachments: Server can handle file attachments

Common Use Cases

MCP servers can extend nctl ai with capabilities like:

  • Sending emails and notifications
  • Interacting with external APIs and services
  • Accessing databases and data sources
  • Integration with cloud platforms
  • Custom business logic and workflows
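
For instance, a hypothetical Python-based MCP server that wraps an internal API would be configured with the same schema shown above (server name, path, and environment variables are illustrative):

servers:
  - name: internal-api
    command: python
    args:
      - /path/to/internal-api-mcp/server.py
    env:
      INTERNAL_API_TOKEN: your_token_here
    capabilities:
      tools: true        # the server exposes callable tools
      prompts: false
      resources: false
      attachments: false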

Note: Make sure the MCP server executable is installed and accessible at the specified path before adding it to the configuration.

Adding Skills

You can extend nctl ai with custom domain knowledge and best practices by creating skill files. Skills provide specialized guidance that the personal agent dynamically loads based on the task context.

Loading Custom Skills

Use the --skills flag to load skills from any local directory:

nctl ai --skills "/path/to/custom-skills"

You can load multiple skill directories:

nctl ai --skills "/path/to/team-skills,/path/to/project-skills"

You can also set the NIRMATA_AI_SKILLS environment variable to always load your custom skills:

export NIRMATA_AI_SKILLS="/path/to/custom-skills"
nctl ai

Default Skills Directory

Skills placed in the ~/.nirmata/nctl/skills directory are loaded automatically without requiring the --skills flag:

~/.nirmata/nctl/skills/
  ├── kyverno-cli-tests/
  │   └── SKILL.md
  └── my-custom-skill/
      └── SKILL.md

Creating a Skill File

Each skill is a Markdown file (named SKILL.md) containing domain knowledge, instructions, and best practices. Here’s an example:

Example: ~/.nirmata/nctl/skills/kyverno-cli-tests/SKILL.md

# Kyverno Tests (Unit Tests)

Kyverno CLI tests are used to validate policy behaviors against sample "good" and "bad" resources. Carefully follow the instructions and best practices below when running Kyverno CLI tests:

- Always use the supplied tools to generate and execute Kyverno tests.
- **Testing:** When creating test files for Kyverno policies, always name the test file as "kyverno-test.yaml".
- **Test Execution:** After generating a Kyverno policy, test file (kyverno-test.yaml), and Kubernetes resource files, always run the "kyverno test" command to validate that the policy works correctly with the test scenarios.
- **Test Results:** All Kyverno tests must `Pass` for a successful outcome. Stop when all tests pass.
- Only test for `Audit` mode. Do not try to update policies and test for `Enforce` mode.

## Test File Organization

Organize Kyverno CLI test files in a `.kyverno-test` sub-directory where the policy YAML is contained.

```
pod-security/
  ├── disallow-privileged-containers/
  │   ├── disallow-privileged-containers.yaml
  │   └── .kyverno-test/
  │       ├── kyverno-test.yaml
  │       ├── resources.yaml
  │       └── variables.yaml
  └── other-policies/
```

Skills can also include executable scripts (bash, Python, etc.) that the agent can run locally on your workstation for custom automation and validation workflows.
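
For example, a skill directory might bundle a small helper script next to its SKILL.md (a sketch; the SKILL.md instructions would tell the agent when to run it):

#!/usr/bin/env bash
# run-tests.sh - run Kyverno CLI tests for every policy that has a
# .kyverno-test sub-directory (matching the layout shown above)
set -euo pipefail

find . -type d -name ".kyverno-test" | while read -r dir; do
  echo "Running tests in ${dir}"
  kyverno test "${dir}"
done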

Skill Best Practices

  • Clear Structure: Use headings and lists to organize information
  • Actionable Guidance: Provide specific, actionable instructions
  • Examples: Include code examples and sample outputs
  • Context: Explain when and why to use specific approaches
  • Avoid Ambiguity: Be explicit about requirements and expectations
  • Executable Scripts: Include scripts that can be run locally to automate workflows

How Skills Work

When you interact with nctl ai, the personal agent automatically:

  1. Analyzes your request to determine the relevant domain
  2. Loads applicable skills from the default directory and any --skills paths
  3. Applies the guidance and best practices from those skills
  4. Provides responses aligned with your custom knowledge base

Note: Skills are loaded dynamically based on context. You don’t need to restart nctl ai after adding or modifying skill files.

Accessing Nirmata Control Hub

After successful authentication, you can also access the Nirmata Control Hub web interface:

  1. Navigate to https://nirmata.io
  2. Use the same email address you provided during nctl setup
  3. Use the password you created in the authentication process

Alternatively, you can sign up for a 15-day free trial and log in manually using the CLI:

nctl login --userid YOUR_USER_ID --token YOUR_API_TOKEN

Integrating with MCP clients like Cursor, Claude Code, etc.

Run the agent as an MCP server using stdio transport (default):

nctl ai --mcp-server

For Cursor and Claude Desktop, edit ~/.cursor/mcp.json or ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "nctl": {
      "command": "nctl",
      "args": ["ai", "--mcp-server", "--token", "YOUR_NIRMATA_TOKEN"]
    }
  }
}

You can also run the MCP server over HTTP for remote or networked setups:

nctl ai --mcp-server --mcp-server-transport http --mcp-server-port 8080
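
Clients that support HTTP transport can then connect to the server URL instead of spawning a local process. For example, in Cursor's ~/.cursor/mcp.json (the /mcp endpoint path is an assumption; check your client's MCP documentation for the expected URL format):

{
  "mcpServers": {
    "nctl": {
      "url": "http://localhost:8080/mcp"
    }
  }
}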

Command Reference

Run help for a full list of commands and capabilities or view the Command Reference documentation:

nctl ai --help
Agentic AI powered workflows

Usage:
  nctl ai [flags]

Examples:

  # Run an interactive AI workflow.
  nctl ai

  # Run an interactive AI workflow with a specific prompt.
  nctl ai --prompt "generate a Kyverno policy that enforces all pods have a 'team' label"

  # Use a different LLM provider (e.g., Gemini, Anthropic, or Bedrock).
  nctl ai --provider gemini --model gemini-2.5-pro
  nctl ai --provider anthropic --model claude-sonnet-4-20250514
  nctl ai --provider bedrock --model us.anthropic.claude-sonnet-4-20250514-v1:0

  # Allow AI to access additional directories.
  nctl ai --allowed-dirs "/path/to/policies,/tmp" --prompt "create pod security policies in /path/to/policies"

  # Load custom skills from local path
  nctl ai --skills "/path/to/custom-skill" --prompt "use custom skill"

  # Resume a previous session.
  nctl ai --resume-session latest
  nctl ai --resume-session 20251125-0120

  # List all available sessions.
  nctl ai --list-sessions

  # Use a custom MCP configuration file.
  nctl ai --mcp-config "/path/to/custom/mcp.yaml"

  # Start nctl as an MCP server for external AI clients.
  nctl ai --mcp-server

  # Start MCP server with verbose logging.
  nctl ai --mcp-server -v 1


Flags:
      --allowed-dirs strings          additional directories the AI can access (comma-separated, env: NIRMATA_AI_ALLOWED_DIRS)
      --delete-session string         delete a session by ID
      --force                         allow destructive operations in non-interactive mode (requires both --prompt and --skip-permission-checks)
  -h, --help                          help for ai
      --insecure                      allow connection to a Nirmata server with an insecure certificate (not recommended)
      --list-sessions                 list all available sessions
      --max-background-workers int    maximum number of background workers that can be spawned in a single tool call (default 3)
      --max-tool-calls int            maximum number of tool calls to make (default 200)
      --mcp-config string             path to MCP configuration file (default: ~/.nirmata/nctl/mcp.yaml)
      --mcp-server                    run an MCP (Model Context Protocol) server for Nirmata AI tools
      --mcp-server-port int           port to run the MCP server on when using http transport (default 8080)
      --mcp-server-transport string   transport to use for the MCP server (stdio or http) (default "stdio")
      --new-session                   create a new session
      --prompt string                 prompt for the AI workflow
      --resume-session string         ID of session to resume (use 'latest' for the most recent session)
      --skills strings                load custom skills from local paths (comma-separated, env: NIRMATA_AI_SKILLS)
      --skip-permission-checks        skip permission checks for tools (not recommended)
      --token string                  the Nirmata API Login Key (env NIRMATA_TOKEN)
      --url string                    the Nirmata server base URL (env NIRMATA_URL)
      --usage-details                 show AI usage details and exit

Global Flags:
  -v, --v Level   number for the log level verbosity