k8s Learning (31): EFK Logging Stack

  • Elasticsearch: a search engine that stores the logs and exposes a query interface;
  • Fluentd: collects logs from Kubernetes; a fluentd instance on every node watches and gathers that node's system and container logs, processes them, and ships them to Elasticsearch;
  • Kibana: a web GUI for browsing and searching the logs stored in Elasticsearch.

Deploying EFK

EFK is deployed here via Helm.

First, create a working directory:

mkdir efk
cd efk

Add the Google incubator repository

[root@k8s-master efk]# helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
"incubator" has been added to your repositories
[root@k8s-master efk]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "incubator" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@k8s-master efk]# helm repo list
NAME URL
stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local http://127.0.0.1:8879/charts
incubator http://storage.googleapis.com/kubernetes-charts-incubator

Deploy Elasticsearch

## Create the namespace
[root@k8s-master efk]# kubectl create namespace efk
namespace/efk created

## Fetch the chart
[root@k8s-master efk]# helm fetch incubator/elasticsearch
## Unpack
[root@k8s-master efk]# ls
elasticsearch-1.10.2.tgz
[root@k8s-master efk]# tar -zxvf elasticsearch-1.10.2.tgz
[root@k8s-master efk]# cd elasticsearch


## Edit values.yaml
MINIMUM_MASTER_NODES: the minimum number of master-eligible nodes needed for quorum; with a single master node, set it to 1
Change every replicas to 1
Set every persistence.enabled to false ## disable persistent volumes (fine for a test environment)
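The edits above can be sketched as a values.yaml fragment. This is a sketch, not a verbatim copy: the key layout follows the incubator/elasticsearch 1.10.2 chart as I remember it, so verify the exact key names against your own copy of the file.

```yaml
# Sketch of the values.yaml changes described above (assumed key layout;
# check against the incubator/elasticsearch 1.10.2 chart you unpacked).
cluster:
  env:
    MINIMUM_MASTER_NODES: "1"   # quorum size; 1 because only one master runs
master:
  replicas: 1
  persistence:
    enabled: false              # no persistent volume in this test setup
data:
  replicas: 1
  persistence:
    enabled: false
client:
  replicas: 1
```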



## Install
[root@k8s-master elasticsearch]# helm install --name els1 --namespace=efk -f values.yaml .
NAME: els1
LAST DEPLOYED: Fri Feb 7 04:20:28 2020
NAMESPACE: efk
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
els1-elasticsearch 4 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-59bcdcbfb7-hwbc2 0/1 Init:0/1 0 1s
els1-elasticsearch-data-0 0/1 Init:0/2 0 1s
els1-elasticsearch-master-0 0/1 Init:0/2 0 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
els1-elasticsearch-client ClusterIP 10.108.189.128 <none> 9200/TCP 1s
els1-elasticsearch-discovery ClusterIP None <none> 9300/TCP 1s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
els1-elasticsearch-client 0/1 1 0 1s

==> v1beta1/StatefulSet
NAME READY AGE
els1-elasticsearch-data 0/1 1s
els1-elasticsearch-master 0/1 1s


NOTES:
The elasticsearch cluster has been installed.

***
Please note that this chart has been deprecated and moved to stable.
Going forward please use the stable version of this chart.
***

Elasticsearch can be accessed:

* Within your cluster, at the following DNS name at port 9200:

els1-elasticsearch-client.efk.svc

* From outside the cluster, run these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace efk -l "app=elasticsearch,component=client,release=els1" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:9200 to use Elasticsearch"
kubectl port-forward --namespace efk $POD_NAME 9200:9200



## Check the pods and Services
[root@k8s-master elasticsearch]# kubectl get pods -n efk
NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-59bcdcbfb7-hwbc2 0/1 Running 0 43s
els1-elasticsearch-data-0 0/1 Running 0 43s
els1-elasticsearch-master-0 1/1 Running 0 42s
[root@k8s-master elasticsearch]# kubectl get svc -n efk
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
els1-elasticsearch-client ClusterIP 10.108.189.128 <none> 9200/TCP 48s
els1-elasticsearch-discovery ClusterIP None <none> 9300/TCP 48s


## Verify
[root@k8s-master k8s-install]# kubectl run cirros-$RANDOM --rm -it --image=cirros -- /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # curl 10.108.189.128:9200/_cat/nodes
10.244.3.22 16 97 14 1.01 0.97 0.67 i - els1-elasticsearch-client-59bcdcbfb7-hwbc2
10.244.3.23 5 97 10 1.01 0.97 0.67 di - els1-elasticsearch-data-0
10.244.3.24 16 97 8 1.01 0.97 0.67 mi * els1-elasticsearch-master-0
/ #
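In the `_cat/nodes` output, the `-`/`*` column marks the elected master. As a quick sanity check, the sample rows above can be filtered with awk (column positions assumed from the output format shown; adjust if your Elasticsearch version prints different columns):

```shell
# Sample rows copied from the curl output above; columns (assumed):
# ip heap% ram% cpu load_1m load_5m load_15m node.role master name
nodes='10.244.3.22 16 97 14 1.01 0.97 0.67 i - els1-elasticsearch-client-59bcdcbfb7-hwbc2
10.244.3.23 5 97 10 1.01 0.97 0.67 di - els1-elasticsearch-data-0
10.244.3.24 16 97 8 1.01 0.97 0.67 mi * els1-elasticsearch-master-0'

# Column 9 holds the elected-master marker; print the name of that node
echo "$nodes" | awk '$9 == "*" { print $10 }'
```

This should print els1-elasticsearch-master-0, confirming the master pod won the election.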

Deploy Fluentd

[root@k8s-master efk]# helm fetch incubator/fluentd-elasticsearch
[root@k8s-master efk]# tar -zxvf fluentd-elasticsearch-2.0.7.tgz
[root@k8s-master efk]# cd fluentd-elasticsearch

vim values.yaml
# Change the Elasticsearch address `elasticsearch.host` to the ClusterIP of the els1-elasticsearch-client Service
[root@k8s-master fluentd-elasticsearch]# helm install --name flu2 --namespace=efk -f values.yaml .
NAME: flu2
LAST DEPLOYED: Fri Feb 7 04:24:24 2020
NAMESPACE: efk
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME AGE
flu2-fluentd-elasticsearch 1s

==> v1/ClusterRoleBinding
NAME AGE
flu2-fluentd-elasticsearch 1s

==> v1/ConfigMap
NAME DATA AGE
flu2-fluentd-elasticsearch 6 1s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
flu2-fluentd-elasticsearch 1 1 0 1 0 <none> 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
flu2-fluentd-elasticsearch-pnsdm 0/1 ContainerCreating 0 1s

==> v1/ServiceAccount
NAME SECRETS AGE
flu2-fluentd-elasticsearch 1 1s


NOTES:
1. To verify that Fluentd has started, run:

kubectl --namespace=efk get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=flu2"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.
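The values.yaml change mentioned before the install can be sketched as follows. The key names are assumed from the fluentd-elasticsearch 2.0.7 chart; the IP is the ClusterIP of els1-elasticsearch-client shown by `kubectl get svc -n efk` above.

```yaml
# Sketch of the fluentd-elasticsearch values.yaml change described above
# (assumed key layout; verify against your unpacked chart).
elasticsearch:
  host: '10.108.189.128'   # ClusterIP of els1-elasticsearch-client
  port: 9200
```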

Deploy Kibana

The Elasticsearch and Kibana versions must match.

## helm fetch stable/kibana --version 0.14.8  # the video pins the chart version to match Elasticsearch, but that fetch failed for me, so first try without pinning and see whether it runs
[root@k8s-master efk]# helm fetch stable/kibana
[root@k8s-master efk]# tar -zxvf kibana-0.2.2.tgz

# Edit values.yaml: set files.kibana.yml.elasticsearch.url
[root@k8s-master kibana]# helm install --name kib1 --namespace=efk -f values.yaml .
NAME: kib1
LAST DEPLOYED: Fri Feb 7 04:33:38 2020
NAMESPACE: efk
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
kib1-kibana-5786c8c9fd-kd94n 0/1 ContainerCreating 0 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kib1-kibana ClusterIP 10.107.183.205 <none> 443/TCP 0s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
kib1-kibana 0/1 1 0 0s


NOTES:
To verify that kib1-kibana has started, run:

kubectl --namespace=efk get pods -l "app=kib1-kibana"
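The values.yaml edit mentioned before the install can be sketched like this. The nesting is assumed from the stable/kibana 0.2.2 chart, and the IP is the els1-elasticsearch-client ClusterIP; verify both against your own files.

```yaml
# Sketch of the stable/kibana values.yaml change described above
# (assumed key layout; point Kibana at the Elasticsearch client Service).
files:
  kibana.yml:
    elasticsearch.url: http://10.108.189.128:9200
```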

[root@k8s-master kibana]# kubectl get pods -n efk
NAME READY STATUS RESTARTS AGE
els1-elasticsearch-client-59bcdcbfb7-hwbc2 1/1 Running 0 15m
els1-elasticsearch-data-0 1/1 Running 0 15m
els1-elasticsearch-master-0 1/1 Running 0 15m
flu2-fluentd-elasticsearch-r4j2w 1/1 Running 0 4m28s
kib1-kibana-5786c8c9fd-kd94n 1/1 Running 0 118s
[root@k8s-master kibana]# kubectl get svc -n efk
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
els1-elasticsearch-client ClusterIP 10.108.189.128 <none> 9200/TCP 15m
els1-elasticsearch-discovery ClusterIP None <none> 9300/TCP 15m
kib1-kibana ClusterIP 10.107.183.205 <none> 443/TCP 2m6s

## Change the type of the kib1-kibana Service to NodePort
[root@k8s-master kibana]# kubectl get svc -n efk
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
els1-elasticsearch-client ClusterIP 10.108.189.128 <none> 9200/TCP 15m
els1-elasticsearch-discovery ClusterIP None <none> 9300/TCP 15m
kib1-kibana NodePort 10.107.183.205 <none> 443:31709/TCP 2m49s
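The change was made by editing the Service in place (`kubectl edit svc kib1-kibana -n efk`); the fragment below shows the only field that changes. This is a sketch of the edit, not the full Service manifest.

```yaml
# The one field changed in the kib1-kibana Service:
spec:
  type: NodePort   # was ClusterIP; Kubernetes then allocates a node port (31709 here)
```

Equivalently, `kubectl patch svc kib1-kibana -n efk -p '{"spec":{"type":"NodePort"}}'` applies the same change non-interactively.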

Open NodeIP:NodePort in a browser.

Here that is 192.168.128.140:31709.

Because the versions don't match, Kibana's status shows Red.

[screenshot: Kibana status page, status Red]

Troubleshooting

1. First, change the Kibana image version.

2. The Elasticsearch image version turned out to be 6.4.2, so I changed the Kibana image to 6.4.2 as well, but the status was still Red. The Red indicator pointed at a failed connection to the Elasticsearch Service, so I suspected the Service address in Kibana's values file was wrong.


3. Elasticsearch or Fluentd itself may be at fault; check further:

/ # curl 10.108.189.128:9200/_cat/health
1581024538 21:28:58 elasticsearch red 3 1 0 0 0 0 0 0 - NaN%

Elasticsearch itself appears to be unhealthy; delete the release and reinstall it to see whether that fixes it.
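The `_cat/health` line can be pulled apart the same way as `_cat/nodes`; the fourth field is the cluster status (field positions assumed from the output shown above):

```shell
# Sample line copied from the curl output above; fields (assumed) include:
# epoch timestamp cluster status node.total node.data shards pri relo init unassign ...
health='1581024538 21:28:58 elasticsearch red 3 1 0 0 0 0 0 0 - NaN%'

# Field 4 is the cluster status; the trailing "NaN%" active-shards value
# means no shards are allocated at all, which is why the status is red
echo "$health" | awk '{ print $4 }'
```

A green or yellow status here would mean Elasticsearch itself is fine and the problem is on the Kibana side.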