k8s Learning (14): Service Examples

ClusterIP

A ClusterIP Service mainly uses iptables on each Node (IPVS in our current environment) to redirect traffic sent to the ClusterIP's port to kube-proxy. kube-proxy implements load balancing internally: it can look up the addresses and ports of the Pods behind the Service and forward the traffic on to one of those Pods.

[Figure: ClusterIP traffic flow — apiserver, etcd, kube-proxy, iptables, Pods]

Implementing the flow in the figure requires the following components working together:

  • apiserver — the user sends a create-Service command to the apiserver via kubectl; the apiserver receives the request and stores the object in etcd
  • kube-proxy — every Kubernetes node runs a process called kube-proxy, which watches for Service and Pod changes and writes those changes into the local iptables rules
  • iptables — uses NAT and related techniques to redirect traffic for the virtual IP to the Endpoints
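To make the dispatch step concrete, here is a toy model in Python of the round-robin ("rr") strategy that IPVS uses by default. All names here are hypothetical; a real kube-proxy in iptables/IPVS mode programs kernel rules rather than proxying traffic in user space.

```python
from itertools import cycle

class RoundRobinService:
    """Toy model of a Service: a virtual IP fronting a set of Pod endpoints."""

    def __init__(self, cluster_ip, endpoints):
        self.cluster_ip = cluster_ip        # the virtual Service IP
        self._endpoints = cycle(endpoints)  # Pod "IP:port" strings

    def pick_endpoint(self):
        # Each new "connection" to the ClusterIP goes to the next Pod in turn.
        return next(self._endpoints)

svc = RoundRobinService("10.96.0.100",
                        ["10.244.1.10:80", "10.244.2.11:80", "10.244.2.12:80"])
print([svc.pick_endpoint() for _ in range(4)])
# → ['10.244.1.10:80', '10.244.2.11:80', '10.244.2.12:80', '10.244.1.10:80']
```

IPVS supports other schedulers as well (least-connection, source hashing, and so on); round-robin is simply the default shown here.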

An example

First, create the Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stable
  template:
    metadata:
      labels:
        app: myapp
        release: stable
        env: test
    spec:
      containers:
      - name: myapp-container
        image: hub.test.com/library/myapp:v1
        ports:
        - name: http
          containerPort: 80
```

Next, create the Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: stable
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
```
```shell
# list Pods
kubectl get pod
# list Deployments
kubectl get deployment
# list Services
kubectl get svc
# inspect the IPVS rules
ipvsadm -Ln
# access the Service IP
curl x.x.x./hostname.html
```

Note that a Deployment's selector requires matchLabels, whereas a Service puts its labels directly under selector.

Headless Service

Sometimes you neither need nor want load balancing, or a separate Service IP. In that case you can create a headless Service by setting spec.clusterIP to None. Such a Service is not assigned a ClusterIP, kube-proxy does not handle it, and the platform performs no load balancing or routing for it.

In plain terms: this creates a ClusterIP-type Service without an IP, which can still be reached by name through CoreDNS.

An example

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
    env: test
  clusterIP: None
  ports:
  - port: 80
    targetPort: 80
```

```shell
# install dig
yum install -y bind-utils
# find the ClusterIP of kube-dns
kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   4d
# test CoreDNS name resolution
dig -t A myapp-headless.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>> -t A myapp-headless.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38148
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.156
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.157
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.101

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sat Feb 01 16:53:08 +08 2020
;; MSG SIZE rcvd: 237

# The query above went through the kube-dns Service IP.
# You can also query a coredns Pod IP directly:
kubectl get pods -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE         NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gh2qc             1/1     Running   3          4d    10.244.0.7        k8s-master   <none>           <none>
coredns-5c98db65d4-pczm8             1/1     Running   3          4d    10.244.0.6        k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   3          4d    192.168.128.140   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   3          4d    192.168.128.140   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   6          4d    192.168.128.140   k8s-master   <none>           <none>
kube-flannel-ds-amd64-2lrvk          1/1     Running   3          4d    192.168.128.142   k8s-node02   <none>           <none>
kube-flannel-ds-amd64-bst48          1/1     Running   3          4d    192.168.128.140   k8s-master   <none>           <none>
kube-flannel-ds-amd64-jdqq8          1/1     Running   3          4d    192.168.128.141   k8s-node01   <none>           <none>
kube-proxy-dm5p6                     1/1     Running   2          4d    192.168.128.140   k8s-master   <none>           <none>
kube-proxy-dwdvl                     1/1     Running   3          4d    192.168.128.141   k8s-node01   <none>           <none>
kube-proxy-qhst5                     1/1     Running   3          4d    192.168.128.142   k8s-node02   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   5          4d    192.168.128.140   k8s-master   <none>           <none>
# test CoreDNS name resolution again, this time against a coredns Pod IP
# (e.g. @10.244.0.6); the answer section is identical to the output above
```

Anatomy of the name myapp-headless.default.svc.cluster.local.:

  • myapp-headless — the Service name
  • default — the namespace
  • svc.cluster.local — the cluster domain
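The naming rule above (SVC_NAME.NAMESPACE.svc.cluster.local) can be captured in a small helper. The function is hypothetical, written only to make the pattern explicit; cluster.local is the default cluster domain and may differ in your cluster.

```python
def service_fqdn(name, namespace="default", cluster_domain="cluster.local"):
    """Build the in-cluster DNS name of a Service:
    <service>.<namespace>.svc.<cluster-domain>"""
    return f"{name}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("myapp-headless"))
# → myapp-headless.default.svc.cluster.local
```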

NodePort

A NodePort Service works by opening a port on each node and steering traffic arriving at that port to kube-proxy, which then forwards it on to the matching Pods.
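On the allocation rule: by default the apiserver assigns NodePorts from the 30000-32767 range (configurable with the kube-apiserver flag --service-node-port-range). A hypothetical checker sketches this:

```python
# Default NodePort allocation range of kube-apiserver
# (change with --service-node-port-range); range() excludes the stop value,
# so this covers 30000..32767 inclusive.
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

def is_valid_node_port(port, allowed=DEFAULT_NODE_PORT_RANGE):
    """Check whether a port falls inside the allocatable NodePort range."""
    return port in allowed

print(is_valid_node_port(31234))  # → True
print(is_valid_node_port(8080))   # → False
```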

An example

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stable
  ports:
  - name: http
    port: 80
    targetPort: 80
```
```shell
# find the randomly assigned NodePort
kubectl get svc
# verify with curl
curl NodeIP:NodePort
# inspect the iptables rules
iptables -t nat -nvL KUBE-NODEPORTS
# or, when using IPVS
ipvsadm -Ln
```

LoadBalancer

LoadBalancer and NodePort are essentially the same mechanism. The difference is that LoadBalancer goes one step further: it can call the cloud provider to create a load balancer that directs traffic to the nodes.

[Figure: LoadBalancer — cloud load balancer in front of the nodes' NodePorts]

This costs money, since it relies on a cloud provider's load balancer.

ExternalName

This type of Service maps the Service to the contents of its externalName field (e.g. hub.test.com) by returning a CNAME record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or Endpoints. Instead, for a service running outside the cluster, it provides access by returning an alias for that external service.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.test.com # may also be an IP address
```

When the name my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is looked up, the cluster's DNS service returns a CNAME record with the value hub.test.com. Accessing this Service works the same way as for any other Service; the only difference is that the redirection happens at the DNS level, with no proxying or forwarding involved.

```shell
dig -t A my-service-1.default.svc.cluster.local @x.x.x.x
```
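The DNS-level indirection can be modeled with a toy record table. The records below are hypothetical (192.0.2.10 is a documentation address from RFC 5737); the point is only that an ExternalName Service is just a CNAME in cluster DNS, which the client's resolver follows.

```python
# Toy DNS table: an ExternalName Service is a CNAME record in cluster DNS
# pointing at the external hostname.
RECORDS = {
    "my-service-1.default.svc.cluster.local": ("CNAME", "hub.test.com"),
    "hub.test.com": ("A", "192.0.2.10"),  # documentation address (RFC 5737)
}

def resolve(name, records=RECORDS):
    """Follow CNAME records until an A record is reached."""
    rtype, value = records[name]
    while rtype == "CNAME":
        rtype, value = records[value]
    return value

print(resolve("my-service-1.default.svc.cluster.local"))  # → 192.0.2.10
```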

Miscellaneous

1. You can print the current node's iptables rules with the iptables-save command.