LLM Providers
The following configurations can be used when creating the LLMConfig CR, which is then referenced from the Remediator CR.
Default Provider: Nirmata AI
Nirmata AI is the default and recommended LLM provider. If you’re installing via Helm, you only need to create a secret with your Nirmata API token:
kubectl create secret generic nirmata-api-token \
  --from-literal=api-token=<YOUR_NIRMATA_API_TOKEN> \
  --namespace nirmata
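To confirm the token was stored correctly, you can read it back (a quick sanity check; this prints the decoded token to your terminal):
# Verify the secret exists and the api-token key is populated
kubectl get secret nirmata-api-token -n nirmata \
  -o jsonpath='{.data.api-token}' | base64 -d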
The Helm chart will automatically create the LLMConfig with the following configuration:
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: nirmataAI
  nirmataAI:
    endpoint: https://nirmata.io
    model: "" # Optional: specify a model, otherwise uses default
    apiKeySecretRef:
      name: nirmata-api-token
      key: api-token
      namespace: nirmata
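Once the chart is installed, you can inspect the generated resource to confirm it matches the above. The fully qualified llmconfigs resource name below assumes the conventional plural for the CRD:
# Inspect the LLMConfig created by the Helm chart
kubectl get llmconfigs.serviceagents.nirmata.io remediator-agent-llm \
  -n nirmata -o yaml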
Helm Configuration
If you’re using Helm, you can configure the LLM provider in your values.yaml:
llm:
  provider: "nirmataAI" # Options: nirmataAI, bedrock, azure-openai
  bedrock:
    model: ""
    region: ""
    secretRef:
      name: ""
      key: "aws_access_key_id" # Optional for some auth methods
  azureOpenAI:
    endpoint: ""
    deploymentName: ""
    secretRef:
      name: ""
      key: "api-key" # Defaults to standard header key
  nirmataAI:
    model: "" # Optional: specify a model, otherwise uses default
Alternative Providers
If you prefer to use other LLM providers, you can configure them as follows:
AWS Bedrock
Using Pod Identity Agent (recommended if running in an EKS cluster):
Create an IAM role with a trust policy for the Pod Identity Agent.
aws iam create-role \
  --role-name remediator-agent-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "pods.eks.amazonaws.com" },
        "Action": [ "sts:AssumeRole", "sts:TagSession" ]
      }
    ]
  }'
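You can confirm the trust policy was attached as expected:
# Show the role's trust policy
aws iam get-role --role-name remediator-agent-role \
  --query 'Role.AssumeRolePolicyDocument'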
Give the role permission to invoke Bedrock models.
aws iam put-role-policy \
  --role-name remediator-agent-role \
  --policy-name BedrockInvokePolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "BedrockInvoke",
        "Effect": "Allow",
        "Action": [
          "bedrock:InvokeModel",
          "bedrock:InvokeModelWithResponseStream"
        ],
        "Resource": "arn:aws:bedrock:<AWS_REGION>:<AWS_ACCOUNT_ID>:application-inference-profile/<BEDROCK_INFERENCE_PROFILE>"
      }
    ]
  }'
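To double-check the policy before proceeding:
# Show the inline policy attached to the role
aws iam get-role-policy \
  --role-name remediator-agent-role \
  --policy-name BedrockInvokePolicy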
Bind the IAM role to your Kubernetes ServiceAccount using Pod Identity. Replace <CLUSTER_NAME> and <ACCOUNT_ID> with your actual cluster name and account ID.
aws eks create-pod-identity-association \
  --cluster-name <CLUSTER_NAME> \
  --namespace nirmata \
  --service-account remediator-agent \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/remediator-agent-role
Verify the association.
aws eks list-pod-identity-associations \
  --cluster-name <CLUSTER_NAME>
Create the LLMConfig CR.
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: bedrock
  bedrock:
    model: MODEL_ARN_OR_INFERENCE_ARN
    region: AWS_REGION
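After the association exists and the agent pod has restarted, Pod Identity injects AWS credentials through container environment variables. As a sanity check (deploy/remediator-agent is an assumed workload name; substitute your actual deployment):
# EKS Pod Identity exposes credentials via AWS_CONTAINER_* env vars
kubectl exec -n nirmata deploy/remediator-agent -- \
  env | grep AWS_CONTAINER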
Using credentials:
Create a Kubernetes secret in the nirmata namespace with your AWS credentials.
kubectl create secret generic aws-bedrock-credentials \
  --from-literal=aws_access_key_id=AWS_ACCESS_KEY_ID \
  --from-literal=aws_secret_access_key=AWS_SECRET_ACCESS_KEY \
  --from-literal=aws_session_token=AWS_SESSION_TOKEN \
  -n nirmata
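Before wiring the credentials into the cluster, you can optionally verify they reach Bedrock at all (assumes the AWS CLI is configured with the same credentials and they carry bedrock:ListFoundationModels permission):
# List model IDs visible to these credentials
aws bedrock list-foundation-models --region <AWS_REGION> \
  --query 'modelSummaries[].modelId'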
Create the LLMConfig CR.
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: bedrock
  bedrock:
    model: ARN_OR_MODEL_NAME
    region: us-west-2
    credentialsSecretRef:
      name: aws-bedrock-credentials
      key: aws_access_key_id
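Save the manifest (here assumed as llmconfig.yaml) and apply it, then confirm the resource was accepted:
# Create the LLMConfig and list it to confirm
kubectl apply -f llmconfig.yaml
kubectl get llmconfigs.serviceagents.nirmata.io -n nirmata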
Azure OpenAI
Create a Kubernetes secret in the nirmata namespace with your Azure credentials.
kubectl create secret generic azure-openai-credentials \
  --from-literal=api-key=AZURE_API_KEY \
  -n nirmata
Create the LLMConfig CR.
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: azure-openai
  azureOpenAI:
    endpoint: https://YOUR_RESOURCE_NAME.openai.azure.com/
    deploymentName: DEPLOYMENT_NAME
    apiKeySecretRef:
      name: azure-openai-credentials
      key: api-key
      namespace: nirmata
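To verify the endpoint, deployment name, and key outside the cluster, you can call the deployment directly. The api-version shown is one example GA version; use whichever version your resource supports:
# Smoke-test the Azure OpenAI deployment with the same API key
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/DEPLOYMENT_NAME/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_API_KEY" \
  -d '{"messages":[{"role":"user","content":"ping"}]}'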
Quick Start with Helm
For the easiest setup using the default Nirmata AI provider:
Create the API token secret:
kubectl create secret generic nirmata-api-token \
  --from-literal=api-token=<YOUR_NIRMATA_API_TOKEN> \
  --namespace nirmata
Install with Helm: The LLMConfig will be automatically created with the default Nirmata AI configuration.
Optional: Customize the provider in your values.yaml if you want to use a different LLM provider.
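Put together, the quick start is a short script. <CHART_REFERENCE> is a placeholder for the chart reference from your installation instructions:
# 1. Ensure the namespace exists
kubectl create namespace nirmata --dry-run=client -o yaml | kubectl apply -f -

# 2. Create the API token secret
kubectl create secret generic nirmata-api-token \
  --from-literal=api-token=<YOUR_NIRMATA_API_TOKEN> \
  --namespace nirmata

# 3. Install the agent; the default Nirmata AI LLMConfig is created automatically
helm upgrade --install remediator-agent <CHART_REFERENCE> \
  --namespace nirmata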
LLMConfig CRD Reference
This section provides a comprehensive reference for all fields in the LLMConfig Custom Resource Definition (CRD).
LLMConfigSpec
The LLMConfigSpec defines the desired state of an LLMConfig resource.
type (required)
Defines the type of provider.
Type: string
Valid Values:
- bedrock - Amazon Bedrock provider
- azure-openai - Azure OpenAI provider
- nirmata - Nirmata AI provider
Example:
spec:
  type: bedrock
bedrock (optional)
Configuration for the Amazon Bedrock provider. Required when type is "bedrock".
Type: BedrockConfig
bedrock.model (required)
The Bedrock model to use (can be a model ID or an inference profile ARN).
Type: string
bedrock.region (required)
The AWS region for the Bedrock service.
Type: string
bedrock.credentialsSecretRef (optional)
Reference to a secret containing AWS credentials.
Type: SecretRef
Fields:
- name (required) - Name of the secret
- namespace (optional) - Namespace of the secret
- key (required) - Key within the secret
bedrock.roleArn (optional)
The ARN of the IAM role to assume for accessing Bedrock.
Type: string
bedrock.externalId (optional)
The external ID for role assumption.
Type: string
Example:
bedrock:
  model: arn:aws:bedrock:us-west-2:844333597536:application-inference-profile/cpl55iltpz6n
  region: us-west-2
  credentialsSecretRef:
    name: aws-bedrock-credentials
    key: aws_access_key_id
azureOpenAI (optional)
Configuration for the Azure OpenAI provider. Required when type is "azure-openai".
Type: AzureOpenAIConfig
azureOpenAI.endpoint (required)
The Azure OpenAI endpoint URL.
Type: string
azureOpenAI.deploymentName (required)
The name of the deployment.
Type: string
azureOpenAI.apiKeySecretRef (required)
Reference to a secret containing the Azure OpenAI API key.
Type: SecretRef
Example:
azureOpenAI:
  endpoint: https://YOUR_RESOURCE_NAME.openai.azure.com/
  deploymentName: DEPLOYMENT_NAME
  apiKeySecretRef:
    name: azure-openai-api-key
    key: api-key
    namespace: nirmata
nirmataAI (optional)
Configuration for the Nirmata AI provider. Required when type is "nirmata".
Type: NirmataAIConfig
nirmataAI.endpoint (required)
The Nirmata endpoint URL.
Type: string
nirmataAI.model (required)
The Nirmata model to use.
Type: string
nirmataAI.apiKeySecretRef (required)
Reference to a secret containing the Nirmata API key.
Type: SecretRef
Example:
nirmataAI:
  endpoint: https://api.nirmata.com
  model: nirmata-model-v1
  apiKeySecretRef:
    name: nirmata-api-key
    key: api-key
    namespace: nirmata
LLMConfigStatus
The LLMConfigStatus defines the observed state of an LLMConfig resource.
Complete Examples
AWS Bedrock with Pod Identity
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: bedrock
  bedrock:
    model: arn:aws:bedrock:us-west-2:844333597536:application-inference-profile/cpl55iltpz6n
    region: us-west-2
AWS Bedrock with Credentials
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: bedrock
  bedrock:
    model: arn:aws:bedrock:us-west-2:844333597536:application-inference-profile/cpl55iltpz6n
    region: us-west-2
    credentialsSecretRef:
      name: aws-bedrock-credentials
      key: aws_access_key_id
Azure OpenAI
apiVersion: serviceagents.nirmata.io/v1alpha1
kind: LLMConfig
metadata:
  name: remediator-agent-llm
  namespace: nirmata
spec:
  type: azure-openai
  azureOpenAI:
    endpoint: https://YOUR_RESOURCE_NAME.openai.azure.com/
    deploymentName: DEPLOYMENT_NAME
    apiKeySecretRef:
      name: azure-openai-api-key
      key: api-key
      namespace: nirmata
Provider-Specific Configuration
The LLMConfig CRD supports multiple providers, and exactly one provider configuration must be specified based on the type field:
- When type: "bedrock", the bedrock field must be configured
- When type: "azure-openai", the azureOpenAI field must be configured
- When type: "nirmata", the nirmataAI field must be configured
This ensures that the LLMConfig configuration is consistent and valid for the specified provider.
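A practical way to catch a mismatch between type and the provider block before the agent reads the resource is a server-side dry run, which exercises the CRD's schema validation (assuming the manifest is saved as llmconfig.yaml):
# Validate the manifest against the cluster's CRD schema without creating it
kubectl apply --dry-run=server -f llmconfig.yaml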