🐳 Kafka Deployment in Kubernetes Using Bitnami Helm Chart
Namespace: ai-assistant
Mode: PLAINTEXT (No TLS, No SASL)
Compatibility: Works with and without Istio Sidecar
📦 Step 1: Add Bitnami Helm Repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
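Optionally, confirm which chart version the repository serves (the sample installation logs at the end of this page were produced with chart version 32.2.4). If you need reproducible installs, the same version can be pinned by adding --version 32.2.4 to the helm install commands below:
helm search repo bitnami/kafka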
🔄 Step 2: Clean Up Existing Kafka Installation (if applicable)
helm uninstall kafka -n ai-assistant
kubectl delete pvc --all -n ai-assistant
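Before reinstalling, it is worth confirming that no Kafka PVCs are left behind, since a leftover volume would be reattached with its old data:
kubectl get pvc -n ai-assistant
The command should return no resources once cleanup has completed.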
🚀 Step 3: Install Kafka (PLAINTEXT only)
helm install kafka bitnami/kafka \
--namespace ai-assistant \
--create-namespace \
--set auth.enabled=false \
--set tls.enabled=false \
--set provisioning.enabled=false \
--set listeners.client.protocol=PLAINTEXT \
--set listeners.controller.protocol=PLAINTEXT \
--set listeners.interBroker.protocol=PLAINTEXT \
--set controller.listeners=PLAINTEXT \
--set interBrokerProtocol=PLAINTEXT \
--set externalAccess.enabled=false
# --set controller.replicaCount=1
# --set config.kafka.num.partitions=10
# --set broker.replicaCount=1
For EKS, set gp2 as the default storage class and install with explicit persistence settings:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
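To verify the patch before installing, list the storage classes; gp2 should now show (default) next to its name:
kubectl get storageclass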
helm install kafka bitnami/kafka \
--namespace ai-assistant \
--create-namespace \
--set auth.enabled=false \
--set tls.enabled=false \
--set provisioning.enabled=false \
--set listeners.client.protocol=PLAINTEXT \
--set listeners.controller.protocol=PLAINTEXT \
--set listeners.interBroker.protocol=PLAINTEXT \
--set controller.listeners=PLAINTEXT \
--set interBrokerProtocol=PLAINTEXT \
--set externalAccess.enabled=false \
--set persistence.size=10Gi \
--set persistence.storageClass=gp2
💡 Note: With only controller.replicaCount defined and broker.replicaCount unset, the Bitnami chart runs in combined mode (controller + broker in the same pod).
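The same settings can be kept in a values file instead of repeating --set flags. A minimal sketch is shown below; kafka-values.yaml is a hypothetical file name and the keys simply mirror a subset of the flags used above:
cat > kafka-values.yaml <<'EOF'
auth:
  enabled: false
tls:
  enabled: false
provisioning:
  enabled: false
listeners:
  client:
    protocol: PLAINTEXT
  controller:
    protocol: PLAINTEXT
  interBroker:
    protocol: PLAINTEXT
externalAccess:
  enabled: false
persistence:
  size: 10Gi
  storageClass: gp2
EOF
helm upgrade --install kafka bitnami/kafka --namespace ai-assistant -f kafka-values.yaml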
🧪 Step 4: Topic Management
# List Topics
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-topics.sh --bootstrap-server localhost:9092 --list
# Describe Topic
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-topics.sh --bootstrap-server localhost:9092 --topic OrderTopic --describe
# Create Topic
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-topics.sh --bootstrap-server localhost:9092 \
--create --if-not-exists \
--topic OrderTopic \
--partitions 10 \
--replication-factor 3
# Alter Topic
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-topics.sh --alter --topic OrderTopic \
--partitions 10 \
--bootstrap-server kafka.ai-assistant.svc.cluster.local:9092
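A topic that is no longer needed can be removed with the same tool, for example (assuming the data in OrderTopic can be discarded):
# Delete Topic
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-topics.sh --bootstrap-server localhost:9092 \
--delete --topic OrderTopic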
# Scale Kafka (in combined mode, scale the controller replicas; --reuse-values keeps the PLAINTEXT settings from the install)
helm upgrade kafka bitnami/kafka \
--reuse-values \
--set controller.replicaCount=5 \
--namespace ai-assistant
✅ Step 5: Verify Kafka Pods
kubectl get pods -n ai-assistant -l app.kubernetes.io/name=kafka
All pods should report 2/2 in the READY column if the Istio sidecar is injected (1/1 without injection).
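When this check is scripted, kubectl can block until the pods become ready instead of polling the output manually:
kubectl wait --for=condition=ready pod \
-l app.kubernetes.io/name=kafka \
-n ai-assistant --timeout=300s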
📡 Step 6: Kafka Broker Address
KAFKA_BROKER=kafka.ai-assistant.svc.cluster.local:9092
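A quick way to confirm the broker answers on this address is to query its supported API versions from one of the controller pods (kafka-broker-api-versions.sh is part of the standard Kafka CLI shipped in the image):
kubectl exec -n ai-assistant kafka-controller-0 -- \
kafka-broker-api-versions.sh --bootstrap-server $KAFKA_BROKER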
🔐 Step 7: Validate SASL (Should Be Disabled)
kubectl exec -n ai-assistant kafka-controller-0 -- \
cat /opt/bitnami/kafka/config/server.properties | grep -i sasl
With SASL disabled, this command is expected to print no sasl.* entries.
💻 Step 8: Launch Kafka Client Pod
kubectl run kafka-client --restart='Never' \
--image docker.io/bitnami/kafka:4.0.0-debian-12-r3 \
--namespace ai-assistant \
--command -- sleep 600
The client pod exits after 600 seconds; use sleep infinity (as in the chart NOTES below) if you need a longer-lived client.
📝 Step 9: Produce Messages to Topic test
kubectl exec -n ai-assistant -it kafka-client -- \
kafka-console-producer.sh --bootstrap-server $KAFKA_BROKER --topic test
Type messages and press Enter to send.
📬 Step 10: Consume Messages from Topic test
kubectl exec -n ai-assistant -it kafka-client -- \
kafka-console-consumer.sh \
--bootstrap-server $KAFKA_BROKER \
--topic test \
--from-beginning
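For a non-interactive smoke test, a message can be piped into the producer and read back with a bounded consumer (this assumes the test topic and the KAFKA_BROKER variable from the earlier steps):
echo "hello-kafka" | kubectl exec -n ai-assistant -i kafka-client -- \
kafka-console-producer.sh --bootstrap-server $KAFKA_BROKER --topic test
kubectl exec -n ai-assistant kafka-client -- \
kafka-console-consumer.sh --bootstrap-server $KAFKA_BROKER \
--topic test --from-beginning --max-messages 1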
✅ Kafka is now set up in PLAINTEXT mode and working inside a namespace with Istio injection.
Default Installation Logs
# helm install kafka bitnami/kafka --namespace ai-assistant
NAME: kafka
LAST DEPLOYED: Thu May 15 10:00:32 2025
NAMESPACE: ai-assistant
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 32.2.4
APP VERSION: 4.0.0
Did you know there are enterprise versions of the Bitnami catalog? For enhanced secure software supply chain features, unlimited pulls from Docker, LTS support, or application customization, see Bitnami Premium or Tanzu Application Catalog. See https://www.arrow.com/globalecs/na/vendors/bitnami for more information.
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
kafka.ai-assistant.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
kafka-controller-0.kafka-controller-headless.ai-assistant.svc.cluster.local:9092
kafka-controller-1.kafka-controller-headless.ai-assistant.svc.cluster.local:9092
kafka-controller-2.kafka-controller-headless.ai-assistant.svc.cluster.local:9092
The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
- SASL authentication
To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user1" \
password="$(kubectl get secret kafka-user-passwords --namespace ai-assistant -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
To create a pod that you can use as a Kafka client run the following commands:
kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:4.0.0-debian-12-r3 --namespace ai-assistant --command -- sleep infinity
kubectl cp --namespace ai-assistant /path/to/client.properties kafka-client:/tmp/client.properties
kubectl exec --tty -i kafka-client --namespace ai-assistant -- bash
PRODUCER:
kafka-console-producer.sh \
--producer.config /tmp/client.properties \
--bootstrap-server kafka.ai-assistant.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--consumer.config /tmp/client.properties \
--bootstrap-server kafka.ai-assistant.svc.cluster.local:9092 \
--topic test \
--from-beginning
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- controller.resources
- defaultInitContainers.prepareConfig.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/