
Kubernetes Series: Ingress

1. The relationship between Pods and Ingress

  • Pods are associated with an Ingress through a Service
  • Load balancing for Pods is implemented by an Ingress Controller
    • Supports layer-4 TCP/UDP and layer-7 HTTP

2. Ingress Controller

Users access the Ingress controller, which routes each request to a Service according to the Ingress rules, thereby reaching the Pods inside the k8s cluster.

2.1 Deploying the Controller

Deployment docs: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
There are many Ingress controllers; the officially maintained one is ingress-nginx, so it is the one used in most cases.

[root@k8s-master-128 ~]# mkdir ingress-nginx && cd ingress-nginx
[root@k8s-master-128 ingress-nginx]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
[root@k8s-master-128 ingress-nginx]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml


[root@k8s-master-128 ingress-nginx]# vim mandatory.yaml
    serviceAccountName: nginx-ingress-serviceaccount
    hostNetwork: true  # make this Pod use the host's network
    containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1  # switch to a domestic mirror if the image cannot be pulled
        args:
          - /nginx-ingress-controller
          - --configmap=$(POD_NAMESPACE)/nginx-configuration
          - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
          - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          - --publish-service=$(POD_NAMESPACE)/ingress-nginx
          - --annotations-prefix=nginx.ingress.kubernetes.io

Note:
- If the image cannot be pulled, switch to a domestic mirror.
- Use the host network: hostNetwork: true # https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/baremetal.md

[root@k8s-master-128 ingress-nginx]# kubectl create -f mandatory.yaml
[root@k8s-master-128 ingress-nginx]# kubectl create -f cloud-generic.yaml

Deployment error: Error creating: pods "nginx-ingress-controller-565dfd6dff-g977n" is forbidden: SecurityContext.RunAsUser is forbidden
Fix:
Modify the admission controllers configured on the apiserver, then restart the apiserver.

$ vim /opt/kubernetes/cfg/kube-apiserver
Change:
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction
to:
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction
That is, simply stop enabling SecurityContextDeny.
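The flag edit above can also be scripted. A minimal sketch, demonstrated here on a scratch copy of the flag line (on the real host the file is /opt/kubernetes/cfg/kube-apiserver, and the apiserver must be restarted afterwards, e.g. via systemctl, depending on your install):

```shell
# Demo on a scratch file; on the real host edit /opt/kubernetes/cfg/kube-apiserver instead
echo '--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction' > /tmp/kube-apiserver
# Drop SecurityContextDeny from the comma-separated admission-control list
sed -i 's/SecurityContextDeny,//' /tmp/kube-apiserver
cat /tmp/kube-apiserver
```

After editing the real file, restart the apiserver so the change takes effect.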

2.1.1 Understanding admission controllers

To see which admission plugins are enabled:
kube-apiserver -h | grep enable-admission-plugins
Controllers enabled by default in version 1.14:
NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, Priority, DefaultTolerationSeconds, DefaultStorageClass, PersistentVolumeClaimResize, MutatingAdmissionWebhook, ValidatingAdmissionWebhook, ResourceQuota

  • NamespaceLifecycle: enforces that no new objects can be created in a namespace that is terminating, and rejects requests that reference a nonexistent namespace. It also prevents deletion of the three system-reserved namespaces default, kube-system, and kube-public.
  • LimitRanger: ensures that resource requests do not exceed the namespace's LimitRange.
  • SecurityContextDeny: rejects any pod that attempts to set certain escalating SecurityContext fields.
  • ServiceAccount: implements automation around serviceAccounts.
  • ResourceQuota: observes incoming requests and ensures they do not violate any of the constraints enumerated in the namespace's ResourceQuota objects.
  • NodeRestriction: limits the Node and Pod objects a kubelet is allowed to modify.
  • NamespaceExists: checks all requests on namespaced resources other than Namespace itself, and rejects a request if the namespace it references does not exist.

2.1.2 Verifying the deployment

[root@k8s-master-128 ingress-nginx]# kubectl get deploy -n ingress-nginx  # I configured two replicas
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   2/2     2            2           46m
[root@k8s-master-128 ingress-nginx]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-76f9fddcf8-7hn2k   1/1     Running   0          13h
nginx-ingress-controller-76f9fddcf8-9lg7d   1/1     Running   0          13h

[root@k8s-master-128 ingress-nginx]# POD_NAMESPACE=ingress-nginx
[root@k8s-master-128 ingress-nginx]# POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
[root@k8s-master-128 ingress-nginx]# kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

The NGINX Ingress controller is installed and working normally.

Because hostNetwork: true was set during deployment (sharing the host's network), each Node now listens on ports 80 and 443:

[root@k8s-node-129 ~]# netstat -lntup | egrep '80|443'
tcp    0   0 172.16.194.129:2380   0.0.0.0:*   LISTEN   6346/etcd
tcp    0   0 0.0.0.0:80            0.0.0.0:*   LISTEN   28589/nginx: master
tcp    0   0 0.0.0.0:443           0.0.0.0:*   LISTEN   28589/nginx: master
tcp6   0   0 :::80                 :::*        LISTEN   28589/nginx: master
tcp6   0   0 :::443                :::*        LISTEN   28589/nginx: master

Ports 80 and 443 on the Nodes are the access entry points for the Ingress rules. The official architecture diagram:

2.2 Creating an Ingress rule

# The Service list shows an nginx Service exposing port 80 inside the cluster
[root@k8s-master-128 ingress-nginx]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        16d
nginx        NodePort    10.0.0.86    <none>        80:43364/TCP   15d

# Write an Ingress rule that proxies the nginx Service above
[root@k8s-master-128 ingress-nginx]# vim ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress        # identifier of this rule inside the cluster
spec:
  rules:
  - host: example.foo.com      # externally served domain (similar to nginx's server_name)
    http:
      paths:
      - backend:
          serviceName: nginx   # name of the Service
          servicePort: 80      # port exposed inside the cluster

[root@k8s-master-128 ingress-nginx]# kubectl create -f ingress.yaml
[root@k8s-master-128 ingress-nginx]# kubectl get ingress
NAME              HOSTS             ADDRESS   PORTS   AGE
example-ingress   example.foo.com             80      32s

2.3 Test access

Bind the domain in the local hosts file to test access to the service.

$ sudo vim /etc/hosts
172.16.194.129 example.foo.com

$ ping example.foo.com
PING example.foo.com (172.16.194.129): 56 data bytes
64 bytes from 172.16.194.129: icmp_seq=0 ttl=64 time=0.445 ms
64 bytes from 172.16.194.129: icmp_seq=1 ttl=64 time=0.722 ms
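Instead of editing /etc/hosts, curl can pin the hostname to the node IP directly; a sketch using this post's node IP and domain (the Ingress controller matches on the Host header, so plain http://172.16.194.129/ without it would hit the default backend):

```shell
# Resolve example.foo.com to the node running the ingress controller for this
# request only, and fetch the response headers
curl --resolve example.foo.com:80:172.16.194.129 -I http://example.foo.com/
```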

Access from the local browser:

Check the response logs of the Pod in the k8s cluster:

[root@k8s-master-128 ingress-nginx]# kubectl logs -f nginx-7db9fccd9b-9bcpv
172.17.4.0 - - [23/May/2019:03:44:32 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36" "172.16.194.1"

3. Configuring HTTPS

The kubernetes Dashboard was previously exposed via NodePort on port 443; now let's proxy that service through an Ingress instead.
First, generate a certificate for the dashboard:

[root@k8s-master-128 ingress-nginx]# mkdir https && cd https
[root@k8s-master-128 https]# cat dashboard-cret.sh
#!/bin/bash

# Create the CA for the kubernetes dashboard certificate
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "dashboard": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -


#-----------------------
# Issue the domain certificate
cat > dashboard.kubernetes.com-csr.json <<EOF
{
  "CN": "dashboard.kubernetes.com",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=dashboard dashboard.kubernetes.com-csr.json | cfssljson -bare dashboard.kubernetes.com
[root@k8s-master-128 https]#
[root@k8s-master-128 https]# ./dashboard-cret.sh
[root@k8s-master-128 https]# ls
ca-config.json ca-csr.json ca.pem dashboard.kubernetes.com.csr dashboard.kubernetes.com-key.pem
ca.csr ca-key.pem dashboard-cret.sh dashboard.kubernetes.com-csr.json dashboard.kubernetes.com.pem
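To sanity-check an issued certificate, openssl can print its subject. A sketch using a throwaway self-signed certificate generated locally as a stand-in (on the real host you would point openssl at dashboard.kubernetes.com.pem):

```shell
# Generate a throwaway key+cert with the same CN, standing in for the cfssl output
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=dashboard.kubernetes.com" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
# Inspect the subject, as you would on dashboard.kubernetes.com.pem
openssl x509 -in /tmp/demo-cert.pem -noout -subject
```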

Next, create a secret resource from the certificate generated above to store the key and certificate. If your certificate files end in .pem, the procedure is the same; just change your_cert.crt below to your_cert.pem.

kubectl create secret tls tls-secret-name --key your_key.key --cert your_cert.crt
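Note that Kubernetes object names such as Secret names must follow DNS-1123 naming (lowercase alphanumerics and dashes; subdomain names may also contain dots), so underscores are not allowed in the name above. A quick shell check against the simpler label pattern:

```shell
# Simplified DNS-1123 label pattern that Kubernetes object names must match
# (subdomain names additionally allow dot-separated labels)
is_valid_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_valid_name "tls-secret-name" && echo "tls-secret-name: ok"
is_valid_name "tls_secret_name" || echo "tls_secret_name: invalid"
```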

Create the secret resource:

[root@k8s-master-128 https]# kubectl -n kube-system create secret tls dashboard-secret --key dashboard.kubernetes.com-key.pem --cert dashboard.kubernetes.com.pem
secret/dashboard-secret created
[root@k8s-master-128 https]#
[root@k8s-master-128 https]# kubectl -n kube-system get secret|grep dashboard-secret
dashboard-secret kubernetes.io/tls 2 18s

Create the Ingress resource:

[root@k8s-master-128 ingress-nginx]# vim dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - dashboard.kubernetes.com
    secretName: dashboard-secret
  rules:
  - host: dashboard.kubernetes.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

[root@k8s-master-128 ingress-nginx]# kubectl create -f dashboard-ingress.yaml
ingress.extensions/dashboard-ingress created
[root@k8s-master-128 ingress-nginx]# kubectl get ingress -n kube-system
NAME                HOSTS                      ADDRESS   PORTS     AGE
dashboard-ingress   dashboard.kubernetes.com             80, 443   12s
[root@k8s-master-128 ingress-nginx]#
[root@k8s-master-128 ingress-nginx]# kubectl describe ingress dashboard-ingress -n kube-system
Name:             dashboard-ingress
Namespace:        kube-system
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  dashboard-secret terminates dashboard.kubernetes.com
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  dashboard.kubernetes.com
                            /     kubernetes-dashboard:443 (172.17.63.8:8443)
Annotations:
  nginx.ingress.kubernetes.io/backend-protocol:  HTTPS
  nginx.ingress.kubernetes.io/ssl-passthrough:   true
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  10m  nginx-ingress-controller  Ingress kube-system/dashboard-ingress
  Normal  CREATE  10m  nginx-ingress-controller  Ingress kube-system/dashboard-ingress

Test access (fetch the dashboard-admin token to log in):

[root@k8s-master-128 ingress-nginx]# kubectl describe secrets -n kube-system $(kubectl get secrets -n kube-system |awk '/dashboard-admin/{print $1}')

The result:

I ran into a few pitfalls along the way; a summary:
1. Following the article "Deploying ingress-nginx on kubernetes 1.13.1 and configuring HTTPS forwarding for the dashboard", I copied its yaml directly and found that the proxying failed. The Pod logs showed the following errors:

2019/05/24 08:13:24 http: TLS handshake error from 172.17.63.1:53934: tls: first record does not look like a TLS handshake
2019/05/24 08:13:28 http: TLS handshake error from 172.17.63.1:53960: tls: first record does not look like a TLS handshake
2019/05/24 08:13:28 http: TLS handshake error from 172.17.63.1:53964: tls: first record does not look like a TLS handshake
2019/05/24 08:13:30 http: TLS handshake error from 172.17.63.1:53980: tls: first record does not look like a TLS handshake
2019/05/24 08:13:32 http: TLS handshake error from 172.17.63.1:53996: tls: first record does not look like a TLS handshake

2. Troubleshooting: suspecting a mistake in the yaml, I verified it step by step and finally narrowed the problem down to the annotations section.
The annotation parameters are documented here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
Comparing against the docs, the culprit was the parameter nginx.ingress.kubernetes.io/secure-backends: "true" from the copied yaml, which the docs mark as deprecated.
The official explanation:

3. Understanding the parameter
Backend Protocol means exactly what it says: the protocol of the backend.
In other words, it specifies the protocol used to talk to the backend. If the Pod itself must be accessed over HTTPS, the Ingress must set Backend Protocol for the proxying to work (no configuration is needed for HTTP); otherwise the proxy fails. Remember this!

