# Cluster Onboarding
Applies to: Nirmata Control Hub 4.0 and later
## Prerequisites
Before onboarding your Kubernetes cluster to Nirmata Control Hub, ensure that your cluster is CNCF-compliant. You can onboard both cloud-provided and local Kubernetes clusters, such as kind and minikube clusters.
## Onboarding Workflow - UI Wizard

### Step 1: Add Cluster
- Navigate to the Clusters page in Nirmata Control Hub.
- Click on the Add Cluster button to open the onboarding wizard.
- Enter cluster information:
  - Provide a name for your cluster.
  - Optionally, add labels to your cluster for better identification.
### Step 2: Choose Onboarding Method
You have two options for onboarding:
- NCTL (Nirmata CLI): Recommended for users who want a streamlined process.
- Helm: For users who prefer to use Helm charts. You can switch to the Helm tab for detailed instructions.
>NOTE: We recommend using NCTL if you are just trying out Nirmata. NCTL version 4.7.0 or higher is required for a smooth onboarding experience.

Follow the steps in the wizard. Once the command runs successfully, click the **I have run the commands - Verify Kyverno** button.
### Step 3: Verify Kyverno Health
In this stage, we check the health of Kyverno running in the cluster to ensure it is optimally configured:
- No Greenfield Cluster Required: If your cluster is running an older version of Nirmata Enterprise for Kyverno or even open-source Kyverno, it can still be onboarded without issues.
- The wizard also recommends a newer Nirmata Enterprise for Kyverno version if an update is needed for optimal performance.
### Step 4: Select PolicySets
Nirmata provides several built-in policy sets that you can deploy to your cluster:
- Pod Security Standards (17 controls in total) are available by default during onboarding.
- You can choose to deploy these policies immediately or select them later if you prefer to manage policies on your own.
>NOTE: Deploying policy sets during onboarding is optional. You can skip this step if you already have your own set of policies.
### Step 5: Final Verification
Once the above steps are completed, the final stage ensures that all related components are properly installed and running:
- Kyverno (open source or enterprise).
- Kyverno Operator, for health monitoring and policy management.
- PolicySets (optional; only if you installed policy sets in the previous step).
- Nirmata kube-controller, the agent that communicates with Nirmata SaaS and monitors your cluster.
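If you prefer to confirm these components from the command line, a quick spot-check might look like the following. This is a sketch: the namespace names assume the default Helm installs shown later in this guide; adjust them if you installed into different namespaces.

```shell
# Namespaces below match the Helm commands in this guide (assumption):
kubectl get pods -n nirmata          # nirmata-kube-controller agent
kubectl get pods -n nirmata-system   # Kyverno Operator
kubectl get pods -n kyverno          # Kyverno controllers
kubectl get clusterpolicies          # PolicySets, if you deployed them
```

All pods should report a `Running` status before you rely on policy enforcement.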
## Onboarding with the Helm chart

### Add and update Helm repo
Add the Nirmata Helm chart repository and update it:

```bash
helm repo add nirmata https://nirmata.github.io/kyverno-charts/
helm repo update nirmata
```
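To confirm the repository was added and the charts are visible locally, you can search it. The chart names here match the install commands used later in this guide:

```shell
# List the Nirmata charts now available from the added repo
helm search repo nirmata/nirmata-kube-controller
helm search repo nirmata/nirmata-kyverno-operator
helm search repo nirmata/kyverno
```

Each command should return at least one chart version; if not, re-run `helm repo update nirmata`.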
### Install Nirmata Kube Controller
#### Using a User API Token
```bash
helm install nirmata-kube-controller nirmata/nirmata-kube-controller -n nirmata --create-namespace \
--set cluster.name=test \
--set namespace=nirmata \
--set apiToken=<nirmata-api-token> \
--set features.policyExceptions.enabled=true \
--set features.policySets.enabled=true
```
#### Using a Service Account Token (Recommended for Automation)
For GitOps pipelines and automated cluster registration workflows, you can authenticate using a Nirmata Control Hub Service Account token instead of a user API token. The `serviceAccountToken` field replaces `apiToken` and accepts the Service Account secret generated in Nirmata Control Hub.
```bash
helm install nirmata-kube-controller nirmata/nirmata-kube-controller -n nirmata --create-namespace \
--set cluster.name=<cluster-name> \
--set serviceAccountToken=<nch-service-account-secret> \
--set features.policyExceptions.enabled=true \
--set features.policySets.enabled=true \
--set clusterOnboardingToken=<onboarding-token> \
--set nirmataURL=wss://nirmata.io/tunnels
```
To create a Service Account and generate a token:
1. Log in to [Nirmata Control Hub](https://nirmata.io)
2. Navigate to **Identity & Access** from the left sidebar
3. Go to the **Service Accounts** section and create a new Service Account with the appropriate cluster registration permissions
4. Copy the generated secret and use it as the `serviceAccountToken` value
>NOTE: You will have a `clusterOnboardingToken` only if you are installing from the UI wizard. If you are automating this installation, you can skip this field.
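In an automated pipeline you typically want to keep the Service Account secret out of shell history and version control. One possible sketch, reusing the same chart values shown above and assuming the token is stored in a file path of your choosing (the path and cluster name here are hypothetical):

```shell
# Read the Service Account secret from a file kept outside version control
# (./secrets/nch-sa-token is a hypothetical path - substitute your own).
NCH_SA_TOKEN="$(cat ./secrets/nch-sa-token)"

# upgrade --install makes the step idempotent for repeated pipeline runs
helm upgrade --install nirmata-kube-controller nirmata/nirmata-kube-controller \
  -n nirmata --create-namespace \
  --set cluster.name="prod-cluster-01" \
  --set serviceAccountToken="${NCH_SA_TOKEN}" \
  --set features.policyExceptions.enabled=true \
  --set features.policySets.enabled=true \
  --set nirmataURL="wss://nirmata.io/tunnels"
```

Using `helm upgrade --install` instead of `helm install` lets the same pipeline step handle both first-time registration and subsequent updates.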
### Install Nirmata Enterprise for Kyverno Operator
The Enterprise Kyverno Operator monitors Kyverno and its policies, and prevents tampering with the Kyverno configuration and policies in the cluster.
To install the Enterprise Kyverno Operator, run the following command.
```bash
helm install kyverno-operator nirmata/nirmata-kyverno-operator -n nirmata-system \
--create-namespace \
--set enablePolicyset=true
```
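Before moving on, you can verify the release and the operator pod. The release name and namespace below are taken from the install command above:

```shell
# Show the Helm release status for the operator
helm status kyverno-operator -n nirmata-system

# Confirm the operator pod is Running
kubectl get pods -n nirmata-system
```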
>NOTE: To install the reports server along with Enterprise Kyverno, follow the documentation [here](../../controllers/n4k/reports-server/#installation). The command below installs **only** Enterprise Kyverno (without reports-server).
### Install Nirmata Enterprise for Kyverno
```bash
helm install kyverno nirmata/kyverno -n kyverno --create-namespace \
--set features.policyExceptions.namespace="kyverno" \
--set crds.reportsServer.enabled=false \
--set features.policyExceptions.enabled=true
```
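After the install completes, a quick sanity check confirms the Kyverno controllers are up and the admission webhooks are registered. This is a generic Kyverno health check, using the `kyverno` namespace from the command above:

```shell
# Kyverno controller pods should all reach Running status
kubectl get pods -n kyverno

# Kyverno registers validating webhooks once its controllers are ready
kubectl get validatingwebhookconfigurations | grep -i kyverno
```

If the webhook configurations are missing, check the admission controller pod logs before deploying policies.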
## Secure Installation Tips

### Configure Nirmata Permissions
See Cluster Deployment Options to choose between Read-Only mode (you manage resources with your own tools) and Read-Write mode (Nirmata deploys Policies and Policy Exceptions directly).