# In Progress: Setting up EKS with Fluent Bit for Log Aggregation
```bash
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
```

Default chart values for reference: `https://github.com/fluent/helm-charts/blob/main/charts/fluent-bit/values.yaml`

```bash
# Dump the chart defaults to a local file so they can be customized
helm show values fluent/fluent-bit > tracemypods-values.yaml

# Install/upgrade Fluent Bit into kube-system with the customized values file
helm upgrade --install fluent-bit fluent/fluent-bit --namespace kube-system -f values.yaml
```
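For log aggregation, the customized values file needs an Elasticsearch output. A minimal sketch of what that section might look like, assuming the in-cluster Elasticsearch deployed below (the host, credentials, TLS flags, and `kube.*` match tag are illustrative assumptions, not verified config):

```yaml
# Sketch only: route container logs to the in-cluster Elasticsearch
config:
  outputs: |
    [OUTPUT]
        Name            es
        Match           kube.*
        Host            elasticsearch-master.logging.svc
        Port            9200
        HTTP_User       elastic
        HTTP_Passwd     changeme
        tls             On
        tls.verify      Off
        Logstash_Format On
        Suppress_Type_Name On
```

The `Match kube.*` pattern assumes the chart's default tail input tag; adjust it if the inputs section is customized.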
```bash
# Remove leftover Kibana chart hook resources from a previous failed install
kubectl delete configmap kibana-kibana-helm-scripts -n logging
kubectl delete serviceaccount pre-install-kibana-kibana -n logging
kubectl delete role pre-install-kibana-kibana -n logging
kubectl delete rolebinding pre-install-kibana-kibana -n logging
kubectl delete job pre-install-kibana-kibana -n logging
kubectl delete secret kibana-kibana-es-token -n logging
kubectl delete sa post-delete-kibana-kibana -n logging
```

```bash
# Manually create the certificate and credential secrets the Kibana chart expects
kubectl create secret generic elasticsearch-master-certs \
  --from-literal=ca.crt="" \
  -n logging

kubectl create secret generic elasticsearch-master-credentials \
  --from-literal=username=elastic \
  --from-literal=password=changeme \
  -n logging
```

```bash
# Install/upgrade Kibana while skipping chart hooks (the pre-install token job was failing)
helm install kibana elastic/kibana -n logging -f kibana-values.yaml --no-hooks
helm upgrade kibana elastic/kibana -n logging -f kibana-values.yaml --no-hooks
```
## ✅ Corrected and Final Working Flow
### 🔹 Step 1: Create Namespace
```bash
kubectl create namespace logging
```

### 🔹 Step 2: Add Helm Repo

```bash
helm repo add elastic https://helm.elastic.co
helm repo update
```

### 🔹 Step 3: Elasticsearch Deployment
Update your elasticsearch-values.yaml:
```yaml
clusterName: "elasticsearch"
nodeGroup: "master"

secret:
  enabled: true
  password: "changeme"

createCert: true
```

Now deploy:

```bash
helm upgrade --install elasticsearch elastic/elasticsearch \
  -n logging -f elasticsearch-values.yaml
```

Wait until pods are ready:

```bash
kubectl get pods -n logging
```
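Alternatively, `kubectl wait` can block until Elasticsearch reports Ready (the pod name below assumes the default `elasticsearch-master` StatefulSet produced by the values above):

```bash
# Wait up to 5 minutes for the first Elasticsearch pod to become Ready
kubectl wait --for=condition=Ready pod/elasticsearch-master-0 -n logging --timeout=300s
```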
### 🔹 Step 4: Verify Secrets

Find the correct names:
```bash
kubectl get secrets -n logging
```

Example output may show:

- `elasticsearch-master-certs` ✅ (not `minikube-es-master-certs`)
- `elasticsearch-master-credentials` ✅
Then extract the CA certificate and recreate it as a dedicated secret:
```bash
kubectl get secret elasticsearch-master-certs -n logging \
  -o jsonpath="{.data.ca\.crt}" | base64 --decode > ca.crt

kubectl create secret generic elasticsearch-certificates \
  --from-file=ca.crt=ca.crt \
  -n logging
```
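Optionally, verify the decoded CA before wrapping it in the new secret (assumes `openssl` is available locally):

```bash
# Confirm the file is a valid certificate and check its validity window
openssl x509 -in ca.crt -noout -subject -dates
```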
### 🔹 Step 5: kibana-values.yaml (Corrected)

```yaml
elasticsearchHosts: "https://elasticsearch-master.logging.svc:9200"
protocol: https

elasticsearchCertificateSecret: elasticsearch-certificates
elasticsearchCertificateAuthoritiesFile: ca.crt

elasticsearchCredentialSecret: elasticsearch-master-credentials

createElasticsearchToken: false
```
This disables the failing token creation job and uses your existing Elasticsearch credentials.
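Before deploying, a quick in-cluster check can confirm those credentials and the CA work against Elasticsearch (a throwaway curl pod; the image and flags are illustrative, adjust as needed):

```bash
# Sanity check: query cluster health with the same credentials Kibana will use
kubectl run es-check --rm -it --restart=Never -n logging --image=curlimages/curl -- \
  curl -sk -u "elastic:changeme" https://elasticsearch-master.logging.svc:9200/_cluster/health
```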
### 🔹 Step 6: Deploy Kibana
```bash
helm install kibana elastic/kibana \
  -n logging -f kibana-values.yaml
```

Then confirm:

```bash
kubectl get pods -n logging
kubectl get svc -n logging
```

### ✅ Done!
Kibana should now come up without:

- The token error
- The DNS error (`ENOTFOUND elasticsearch-master...`)
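To reach the UI and confirm the connection, port-forwarding is one option (the service name `kibana-kibana` assumes the chart's default naming for a release called `kibana`; adjust if overridden):

```bash
# Forward the Kibana service locally, then open http://localhost:5601
kubectl port-forward -n logging svc/kibana-kibana 5601:5601
```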