Create a DaemonSet
daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # These tolerations allow the DaemonSet to run on control-plane nodes.
      # Remove them if your control-plane nodes should not run these Pods.
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # A higher priority class may be needed so that DaemonSet Pods can preempt
      # running Pods (see the PriorityClass sketch below the manifest).
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
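The commented-out priorityClassName refers to a PriorityClass object that must already exist in the cluster; Kubernetes does not ship one named important. A minimal sketch is shown below, where the value and description are only illustrative assumptions:

priorityclass.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: important            # must match the priorityClassName used in the DaemonSet spec
value: 1000000               # assumed value; higher numbers mean higher scheduling priority
globalDefault: false
description: "Priority class for node-level agents such as fluentd-elasticsearch."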
Create the DaemonSet
Create the DaemonSet with the kubectl create command:
$ kubectl create -f daemonset.yaml
daemonset.apps/fluentd-elasticsearch created
Check the status
$ kubectl get ds -n kube-system -l k8s-app=fluentd-logging
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
fluentd-elasticsearch 5 5 5 5 5 <none> 12m
$ kubectl get pod -n kube-system -l name=fluentd-elasticsearch -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fluentd-elasticsearch-j8hq8 1/1 Running 0 17m 172.18.195.1 k8s-master03 <none> <none>
fluentd-elasticsearch-psfkr 1/1 Running 0 17m 172.25.92.71 k8s-master02 <none> <none>
fluentd-elasticsearch-r6v22 1/1 Running 0 17m 172.17.125.46 k8s-node01 <none> <none>
fluentd-elasticsearch-swp4z 1/1 Running 0 17m 172.25.244.193 k8s-master01 <none> <none>
fluentd-elasticsearch-tftmd 1/1 Running 0 17m 172.27.14.240 k8s-node02 <none> <none>
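Because of the tolerations in the manifest, a Pod is scheduled onto all five nodes (the three control-plane nodes and the two workers), which is why DESIRED, CURRENT, and READY are all 5. To watch the rollout or inspect per-node scheduling details, the standard kubectl subcommands can be used; this is only a brief sketch using the same names and namespace as above:

$ kubectl rollout status daemonset/fluentd-elasticsearch -n kube-system   # waits until every node runs the current Pod template
$ kubectl describe daemonset fluentd-elasticsearch -n kube-system         # shows events, tolerations, and Pod status per node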