In the previous section we learned how to use an Istio Gateway to expose an HTTP service inside the cluster to the outside world. When admitting traffic from outside the cluster, an Istio Gateway is similar to a Kubernetes Ingress, and the istio-ingressgateway component plays roughly the same role as an ingress-controller in Kubernetes. The difference is that a Gateway resource only declares the exposed ports, protocols, and hostnames; the routing itself is configured with the routing rules of an Istio VirtualService, which makes it possible to apply all of Istio's traffic-management features to the traffic entering the cluster.
In this section we will first learn how to use an Istio Gateway to expose a TCP service inside the cluster to the outside, and along the way try out Istio's TCP traffic-shifting feature, i.e. how to gradually migrate TCP traffic from one version of a microservice to another.
Preparing the test environment: deploying two versions of the tcp-echo service
Create a namespace named istio-io-tcp-traffic-shifting and label it for automatic Istio sidecar injection:
kubectl create namespace istio-io-tcp-traffic-shifting
kubectl label namespace istio-io-tcp-traffic-shifting istio-injection=enabled
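As an optional quick check before deploying anything, you can list the namespace together with the value of its istio-injection label:
kubectl get namespace istio-io-tcp-traffic-shifting -L istio-injection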
Deploy the v1 and v2 versions of the tcp-echo microservice:
kubectl apply -f samples/tcp-echo/tcp-echo-services.yaml -n istio-io-tcp-traffic-shifting
The command above deploys the following resources to Kubernetes:
kubectl get svc,deploy,pod -n istio-io-tcp-traffic-shifting
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/tcp-echo   ClusterIP   10.109.148.168   <none>        9000/TCP,9001/TCP   7m40s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tcp-echo-v1   1/1     1            1           7m40s
deployment.apps/tcp-echo-v2   1/1     1            1           7m40s

NAME                               READY   STATUS    RESTARTS   AGE
pod/tcp-echo-v1-7dd5c5dcfb-l886c   2/2     Running   0          7m40s
pod/tcp-echo-v2-56cd9b5c4f-slsb7   2/2     Running   0          7m40s
The contents of tcp-echo-services.yaml are as follows:
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
  labels:
    app: tcp-echo
    service: tcp-echo
spec:
  ports:
  - name: tcp
    port: 9000
  - name: tcp-other
    port: 9001
  # Port 9002 is omitted intentionally for testing the pass through filter chain.
  selector:
    app: tcp-echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v1
  labels:
    app: tcp-echo
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000,9001,9002", "one" ]
        ports:
        - containerPort: 9000
        - containerPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo-v2
  labels:
    app: tcp-echo
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v2
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v2
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.2
        imagePullPolicy: IfNotPresent
        args: [ "9000,9001,9002", "two" ]
        ports:
        - containerPort: 9000
        - containerPort: 9001
Exposing the TCP service tcp-echo outside the cluster with an Istio Gateway
First, take a look at the Service of the istio-ingressgateway component:
kubectl get svc istio-ingressgateway -n istio-system -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
  name: istio-ingressgateway
  namespace: istio-system
spec:
  clusterIP: 10.106.130.216
  clusterIPs:
  - 10.106.130.216
  externalIPs:
  - 192.168.96.50
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 31723
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 30709
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31482
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tcp
    nodePort: 32432
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    nodePort: 31499
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
Note the port named tcp (port 31400) in the ports of the istio-ingressgateway Service: this is the port used to admit TCP traffic into the cluster. Exposing TCP services is different from exposing HTTP services. Multiple HTTP services can all be served on ports 80 and 443 and routed by their specific hostnames, whereas every TCP service to be exposed needs a dedicated port of its own. The default istio-ingressgateway Service already defines one such port, 31400, which can be used directly; to expose another TCP service outside the cluster you have to modify the istio-ingressgateway Service and add a new TCP port.
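For illustration only, such an additional entry point would be an extra item in the ports list of the istio-ingressgateway Service; the name tcp-extra and port 31401 below are hypothetical choices, not part of the default installation:
  - name: tcp-extra        # hypothetical name for a second TCP entry point
    port: 31401            # hypothetical port; any unused port works
    protocol: TCP
    targetPort: 31401
After adding the port, a matching server entry in a Gateway and a VirtualService route for that port are still needed, exactly as shown for 31400 below.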
Also note that the istio-ingressgateway Service has an externalIP, here 192.168.96.50, so we can reach the exposed service directly at 192.168.96.50:31400. If there were no externalIP, we would have to use the NodePort 32432 that corresponds to port 31400 to access it.
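If you do not want to read through the full Service YAML each time, the address and the TCP port can also be extracted with jsonpath; the following is a sketch assuming the externalIPs field and the port name tcp shown above:
# external IP of the ingress gateway (192.168.96.50 in this environment)
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.externalIPs[0]}')
# port and NodePort of the entry named "tcp" (31400 / 32432 here)
export TCP_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}')
export TCP_INGRESS_NODEPORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}')
echo "$INGRESS_HOST:$TCP_INGRESS_PORT (NodePort $TCP_INGRESS_NODEPORT)"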
Now that the role of istio-ingressgateway is clear, let's look at how to create the Istio Gateway. The contents of samples/tcp-echo/tcp-echo-all-v1.yaml are as follows:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-echo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-echo
spec:
  hosts:
  - "*"
  gateways:
  - tcp-echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: tcp-echo
        port:
          number: 9000
        subset: v1
Apply the tcp-echo-all-v1.yaml above to Kubernetes:
kubectl apply -f samples/tcp-echo/tcp-echo-all-v1.yaml -n istio-io-tcp-traffic-shifting
The command above creates the destination rule tcp-echo-destination, which targets the Kubernetes Service tcp-echo and defines two service subsets, v1 and v2, matching the service instances labeled version=v1 and version=v2 respectively. It also creates the Gateway tcp-echo-gateway, which matches port 31400 for any hostname. A Gateway alone is not enough, however: a VirtualService is needed to configure the routing rule, and here the rule routes traffic arriving on port 31400 of tcp-echo-gateway to port 9000 of the v1 subset of the Kubernetes Service tcp-echo.
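As a quick sanity check that all three resources landed in the right namespace, they can be listed together (the resource kinds are available once the Istio CRDs are installed):
kubectl get gateway,virtualservice,destinationrule -n istio-io-tcp-traffic-shifting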
Test from outside the cluster with the nc command:
for i in {1..10}; do \
sh -c "(date; sleep 1) | nc 192.168.96.50 31400"; \
done
Since the VirtualService currently routes all traffic on port 31400 to the v1 subset, every response should carry the "one" prefix.