# Getting Started

This section provides quick start guides for using the Cloud Controller, including its core features: the cloud admission controller and the cloud scanner. These guides are designed for proof-of-concept and testing purposes only and are not suitable for production environments. For production-ready installation instructions, please refer to the installation page.
## Prerequisites

Before you begin, ensure you have the following prerequisites:

- A Kubernetes cluster. You can quickly create a cluster using the following command:

  ```sh
  kind create cluster
  ```

- Install the `ClusterPolicyReport` and `ClusterEphemeralReport` CRDs:

  ```sh
  kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno/refs/heads/main/config/crds/policyreport/wgpolicyk8s.io_clusterpolicyreports.yaml
  kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno/refs/heads/main/config/crds/reports/reports.kyverno.io_clusterephemeralreports.yaml
  ```

- Set up AWS credentials to scan the account. You can update the `scanner.awsConfig` section in the `values.yaml` file as shown below:

  ```yaml
  scanner:
    awsConfig:
      accessKeyId: <AWS_ACCESS_KEY_ID>
      secretAccessKey: <AWS_SECRET_ACCESS_KEY>
      sessionToken: <AWS_SESSION_TOKEN>
  ```

  Replace `<AWS_ACCESS_KEY_ID>`, `<AWS_SECRET_ACCESS_KEY>`, and `<AWS_SESSION_TOKEN>` with your AWS credentials.

- Deploy the Cloud Controller Helm chart into your Kubernetes cluster:

  ```sh
  helm install cloud-control ./charts/cloud-controller --create-namespace --namespace nirmata
  ```

- Verify the installation:

  ```sh
  kubectl get pods -n nirmata
  ```

  The output should show the cloud controller pods deployed in the `nirmata` namespace:

  ```
  NAME                                                  READY   STATUS    RESTARTS   AGE
  cloud-control-admission-controller-57cf7b745b-8bhkj   1/1     Running   0          103s
  cloud-control-reports-controller-864bcbc488-j5xkr     1/1     Running   0          103s
  cloud-control-scanner-7b7c8fd977-hmlqp                1/1     Running   0          103s
  ```
## Cloud Admission Controller

## Cloud Scanner

This section provides a step-by-step guide on how to configure the scanner to scan your AWS account. In this example, we are going to scan ECS services in the `us-east-1` region.
### ValidatingPolicies

We will create two ValidatingPolicies: one matches ECS Clusters and the other matches ECS Task Definitions. Both policies check for the presence of the `group` tag.

```yaml
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-task-definition-tags
spec:
  scan: true
  rules:
    - name: check-task-definition-tags
      identifier: payload.family
      match:
        all:
          - (metadata.provider): AWS
          - (metadata.region): us-east-1
          - (metadata.service): ecs
          - (metadata.resource): TaskDefinition
      assert:
        all:
          - message: >-
              ECS task definitions must have a 'group' tag
            check:
              payload:
                (tags[?key=='group'] || `[]`):
                  (length(@) > `0`): true
---
apiVersion: nirmata.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-ecs-cluster-tags
spec:
  scan: true
  rules:
    - name: check-tags
      identifier: payload.clusterName
      match:
        all:
          - (metadata.provider): "AWS"
          - (metadata.region): us-east-1
          - (metadata.service): "ecs"
          - (metadata.resource): "Cluster"
      assert:
        all:
          - message: A 'group' tag is required
            check:
              payload:
                (tags[?key=='group'] || `[]`):
                  (length(@) > `0`): true
```
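The assertion is a JMESPath expression: `tags[?key=='group']` selects the tags whose key is `group`, the `||` fallback substitutes an empty list when `tags` is missing or null, and the length check requires at least one match. For illustration only, here is a rough plain-Python equivalent of what the scanner evaluates against each resource payload; the function and sample payloads are hypothetical, not part of the controller:

```python
def has_group_tag(payload: dict) -> bool:
    """Rough Python equivalent of (tags[?key=='group'] || `[]`) with (length(@) > `0`)."""
    # `or []` mirrors the JMESPath fallback when the tags field is missing or null
    tags = payload.get("tags") or []
    # the [?key=='group'] filter keeps only tags whose key is exactly 'group'
    return len([t for t in tags if t.get("key") == "group"]) > 0

# Sample payloads shaped like the ECS resources used in this guide (illustrative)
print(has_group_tag({"clusterName": "good-cluster",
                     "tags": [{"key": "group", "value": "development"}]}))  # True
print(has_group_tag({"clusterName": "bad-cluster", "tags": []}))            # False
```

Note that a tag with key `group` and any value satisfies the check; the policies do not constrain the tag's value.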
### ECS Clusters and Task Definitions

To test the scanner, we will create ECS clusters and task definitions, some with and some without the required `group` tag, so that both compliant and non-compliant resources exist for the ValidatingPolicies to evaluate.

- Create an ECS cluster named `bad-cluster` without the `group` tag:

  ```sh
  aws ecs create-cluster --cluster-name bad-cluster
  ```

- Register a task definition named `bad-task` without the `group` tag:

  ```sh
  aws ecs register-task-definition \
    --family bad-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc
  ```

- Create an ECS cluster named `good-cluster` with the `group` tag:

  ```sh
  aws ecs create-cluster --cluster-name good-cluster --tags key=group,value=development
  ```

- Register a task definition named `good-task` with the `group` tag:

  ```sh
  aws ecs register-task-definition \
    --family good-task \
    --container-definitions '[{"name": "my-app", "image": "nginx:latest", "essential": true, "portMappings": [{"containerPort": 80, "hostPort": 80}]}]' \
    --requires-compatibilities FARGATE \
    --cpu 256 \
    --memory 512 \
    --network-mode awsvpc \
    --tags '[{"key": "group", "value": "production"}]'
  ```
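Each policy rule's `match` block selects resources by their scan metadata: provider, region, service, and resource type. As a hypothetical sketch (not the controller's actual implementation), the pairing of the two policies with the four resources just created can be thought of as a key-by-key comparison:

```python
# Hypothetical sketch: the match blocks of the two ValidatingPolicies above,
# reduced to plain dictionaries for illustration.
POLICY_MATCHES = {
    "check-ecs-cluster-tags": {"provider": "AWS", "region": "us-east-1",
                               "service": "ecs", "resource": "Cluster"},
    "check-task-definition-tags": {"provider": "AWS", "region": "us-east-1",
                                   "service": "ecs", "resource": "TaskDefinition"},
}

def matching_policies(metadata: dict) -> list[str]:
    # A policy applies when every key in its match block equals the
    # corresponding value in the resource's scan metadata.
    return [name for name, match in POLICY_MATCHES.items()
            if all(metadata.get(k) == v for k, v in match.items())]

cluster_meta = {"provider": "AWS", "region": "us-east-1",
                "service": "ecs", "resource": "Cluster"}
print(matching_policies(cluster_meta))  # ['check-ecs-cluster-tags']
```

So `bad-cluster` and `good-cluster` are evaluated only by `check-ecs-cluster-tags`, while `bad-task` and `good-task` are evaluated only by `check-task-definition-tags`.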
## AWSAccountConfiguration

To scan AWS resources, you need to define the scope of the scan by creating an `AWSAccountConfig` custom resource. This configuration specifies the target AWS account ID, regions, and services. It's important to create the necessary policies before applying the `AWSAccountConfig` to ensure they are ready when the scanner starts.

```yaml
apiVersion: nirmata.io/v1alpha1
kind: AWSAccountConfig
metadata:
  name: aws-scan
spec:
  scanInterval: 1h
  accountID: "123456789012"
  accountName: "mariamfahmy"
  regions:
    - us-east-1
  services:
    - ECS
```

Upon the creation of the `AWSAccountConfig` resource, the scanner will be triggered and will scan the specified AWS account for ECS services in the `us-east-1` region. As a result, policy reports will be generated for the scanned resources.
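Before applying the resource, it can be worth sanity-checking the spec fields client-side. The checks below are an illustrative assumption for this guide, not the CRD's actual schema validation; they only encode what the example shows (a quoted 12-digit `accountID`, a Go-style `scanInterval` duration, and non-empty `regions` and `services` lists):

```python
import re

def validate_aws_account_config(spec: dict) -> list[str]:
    """Illustrative pre-apply sanity checks for an AWSAccountConfig spec.
    These rules are assumptions for this sketch, not the CRD's schema."""
    errors = []
    # AWS account IDs are always 12 digits; quote the value so YAML keeps it a string
    if not re.fullmatch(r"\d{12}", spec.get("accountID", "")):
        errors.append("accountID must be a 12-digit AWS account ID, quoted as a string")
    if not spec.get("regions"):
        errors.append("at least one region is required")
    if not spec.get("services"):
        errors.append("at least one service is required")
    # Accept Go-style durations such as 30m, 1h, or 1h10m0s
    if not re.fullmatch(r"(\d+[smh])+", spec.get("scanInterval", "")):
        errors.append("scanInterval should be a duration such as 30m or 1h")
    return errors

spec = {"scanInterval": "1h", "accountID": "123456789012",
        "accountName": "mariamfahmy", "regions": ["us-east-1"], "services": ["ECS"]}
print(validate_aws_account_config(spec))  # []
```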
## View Reports

In this example, the scanner will generate four ClusterPolicyReports: two for the `bad-cluster` and `bad-task` resources, and two for the `good-cluster` and `good-task` resources. The reports will show the compliance status of the resources based on the ValidatingPolicies.

To view the generated reports, run the following command:

```sh
kubectl get clusterpolicyreports
```

The output should show the generated reports:

```
NAME                                                              KIND                NAME           PASS   FAIL   WARN   ERROR   SKIP   AGE
1a468eba2818db9333ede8428bf6c910d467db5d5fc1b36adc535ce32cea2c5   ECSCluster          good-cluster   1      0      0      0       0      4s
1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9   ECSCluster          bad-cluster    0      1      0      0       0      4s
91696bc8dbb327de99c4d34c579de8bd71e2ef45ad325d10d39d690ad14776c   ECSTaskDefinition   bad-task__2    0      1      0      0       0      4s
cf987d912032e51712ad73a2067a1c5ffee16d8872575166c0739ffedfc0766   ECSTaskDefinition   good-task__2   1      0      0      0       0      4s
```

To view the details of a specific report, run the following command:

```sh
kubectl get clusterpolicyreports 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9 -o yaml
```
```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  labels:
    app.kubernetes.io/managed-by: cloud-control-point
    cloud.policies.nirmata.io/account-id: "123456789012"
    cloud.policies.nirmata.io/account-name: mariamfahmy
    cloud.policies.nirmata.io/last-modified: "1731585775"
    cloud.policies.nirmata.io/provider: AWS
    cloud.policies.nirmata.io/region: us-east-1
    cloud.policies.nirmata.io/resource-id: 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9
    cloud.policies.nirmata.io/resource-name: bad-cluster
    cloud.policies.nirmata.io/resource-type: Cluster
    cloud.policies.nirmata.io/service: ecs
    cloud.policies.nirmata.io/ttl: 1h10m0s
  name: 1c4a23fad560fe66ee7d139b456451d8aaf7c75a9aaa12007a37f1d6f3056b9
results:
  - message: |-
      -> A 'group' tag is required
      -> all[0].check.payload.(tags[?key=='group'] || `[]`).(length(@) > `0`): Invalid value: false: Expected value: true
    policy: check-ecs-cluster-tags
    result: fail
    rule: check-tags
    scored: true
    source: cloud-control
    timestamp:
      nanos: 0
      seconds: 1731585775
scope:
  apiVersion: nirmata.io/v1alpha1
  kind: ECSCluster
  name: bad-cluster
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0
```
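Once reports accumulate, their `summary` fields can be aggregated into an overall compliance figure, for example after fetching them with `kubectl get clusterpolicyreports -o json`. A minimal sketch, assuming only the `summary` shape shown in the report above (the function name and sample data are illustrative):

```python
def compliance_rate(reports: list[dict]) -> float:
    """Fraction of scored results that passed, across all reports' summaries."""
    passed = sum(r["summary"]["pass"] for r in reports)
    failed = sum(r["summary"]["fail"] for r in reports)
    total = passed + failed
    return passed / total if total else 1.0

# Summaries matching the four reports generated in this example
reports = [
    {"scope": {"name": "good-cluster"}, "summary": {"pass": 1, "fail": 0}},
    {"scope": {"name": "bad-cluster"},  "summary": {"pass": 0, "fail": 1}},
    {"scope": {"name": "good-task__2"}, "summary": {"pass": 1, "fail": 0}},
    {"scope": {"name": "bad-task__2"},  "summary": {"pass": 0, "fail": 1}},
]
print(f"{compliance_rate(reports):.0%}")  # 50%
```

In this walkthrough, half of the scanned resources carry the `group` tag, so the aggregate rate is 50%.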