Creating a Flink cluster on Kubernetes

Installing kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl
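Once installed, it is worth confirming that the binary is on the PATH and executable (the server version will only appear once a cluster is running):

```shell
# Print the client version to confirm kubectl was installed correctly
kubectl version --client
```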

1. Install minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube

2. Start minikube

minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.15.0
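If the start succeeds, a quick sanity check confirms the VM is up and kubectl can reach the API server:

```shell
# Confirm the minikube components are running
minikube status

# Confirm kubectl can reach the API server and the node is Ready
kubectl cluster-info
kubectl get nodes
```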

Common kubectl commands

# List pods in all namespaces (the two forms are equivalent)
kubectl get pods --all-namespaces
kubectl get pods -A

# Show detailed information about a pod
kubectl describe pod ${podName}
# Open a shell inside a pod's container
kubectl exec -ti <your-pod-name> -n <your-namespace> -- /bin/sh

# List the pods in a given namespace
kubectl get pod -n flink
# List the services that have been created
kubectl get service -n flink
# Edit the configuration of a created service
kubectl edit svc -n ding-flink-test flink-jobmanager
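When a pod misbehaves, its log stream and the namespace events are usually the fastest diagnostics (the pod name below is a placeholder):

```shell
# Stream the logs of a pod in the flink namespace
kubectl logs -f <your-pod-name> -n flink

# Show recent events, useful for image-pull or scheduling failures
kubectl get events -n flink
```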

3. Deploy the Flink cluster

Enable promiscuous mode on minikube's docker0 interface (a known workaround so that pods can reach services that resolve back to themselves):

minikube ssh 'sudo ip link set docker0 promisc on'

Create the namespace

kubectl create -f namespace.yaml
namespace/flink created

where the namespace.yaml file is:

kind: Namespace
apiVersion: v1
metadata:
  name: flink
  labels:
    name: flink
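Equivalently, the namespace can be created imperatively without a manifest file; the label from the YAML is then applied as a second step:

```shell
# One-line equivalent of applying namespace.yaml
kubectl create namespace flink
kubectl label namespace flink name=flink
```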

List the namespaces of the minikube cluster:

# kubectl get namespaces
NAME          STATUS   AGE
flink         Active   1m
kube-public   Active   254d
kube-system   Active   254d

Create the flink-conf ConfigMap and the jobmanager/taskmanager resources
(full YAML details are in the appendix)

kubectl create -f flink-configuration-configmap.yaml -n flink
kubectl create -f jobmanager-service.yaml -n flink
kubectl create -f jobmanager-deployment.yaml -n flink
kubectl create -f taskmanager-deployment.yaml -n flink
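After creating the resources, it is worth waiting until all pods are actually Running before moving on (add `-n flink` if you created the resources in that namespace):

```shell
# Watch the Flink pods come up (Ctrl-C to stop watching)
kubectl get pods -l app=flink -w

# Or block until each deployment has finished rolling out
kubectl rollout status deployment/flink-jobmanager
kubectl rollout status deployment/flink-taskmanager
```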

4. Forward the pod's port to the local machine

kubectl port-forward service/flink-jobmanager 8081:8081 -n flink
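With the forward in place, the JobManager's REST API should answer on localhost, which is a quick way to confirm the UI will load:

```shell
# Query the cluster overview endpoint of the Flink REST API
curl http://localhost:8081/overview
```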

Check the started services

kubectl get svc -n flink

You can access the Flink UI in several ways:

  • Through the port-forward above; you can then also submit jobs to the cluster with:

./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar

  • Through a NodePort service on the JobManager's REST port:
    1. Run kubectl create -f jobmanager-rest-service.yaml to create a NodePort service for the JobManager. An example jobmanager-rest-service.yaml can be found in the appendix.
    2. Run kubectl get svc flink-jobmanager-rest to find the node-port of this service, then navigate to http://<public-node-ip>:<node-port> in your browser.
    3. Similar to the port-forward solution, you can also use the following command to submit jobs to the cluster:

./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar

5. Teardown commands:

kubectl delete -f jobmanager-deployment.yaml -n flink
kubectl delete -f taskmanager-deployment.yaml -n flink
kubectl delete -f jobmanager-service.yaml -n flink
kubectl delete -f flink-configuration-configmap.yaml -n flink
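Since everything here lives in a single namespace, an alternative teardown (assuming the resources were created in the flink namespace) is to delete the namespace itself, which removes every object inside it:

```shell
# Delete the flink namespace and all resources it contains
kubectl delete namespace flink
```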

Appendix: full YAML for creating and starting Flink

flink-configuration-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 1
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    jobmanager.heap.size: 1024m
    taskmanager.heap.size: 1024m
  log4j.properties: |+
    log4j.rootLogger=INFO, file
    log4j.logger.akka=INFO
    log4j.logger.org.apache.kafka=INFO
    log4j.logger.org.apache.hadoop=INFO
    log4j.logger.org.apache.zookeeper=INFO
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=${log.file}
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file

jobmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/jobmanager.sh start;\
          while :;
          do
            if [[ -f $(find log -name '*jobmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*jobmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 8081
          name: ui
        livenessProbe:
          tcpSocket:
            port: 6123
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

taskmanager-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:1.8.2
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "$FLINK_HOME/bin/taskmanager.sh start; \
          while :;
          do
            if [[ -f $(find log -name '*taskmanager*.log' -print -quit) ]];
              then tail -f -n +1 log/*taskmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6122
          name: rpc
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf/
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
      hostAliases:
      - ip: "192.168.66.192"
        hostnames:
        - "cdh-master"
      - ip: "192.168.66.193"
        hostnames:
        - "cdh-slave1"
      - ip: "192.168.66.194"
        hostnames:
        - "cdh-slave2"
      - ip: "192.168.66.195"
        hostnames:
        - "cdh-slave3"

jobmanager-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: ui
    port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

jobmanager-rest-service.yaml

(Optional service that exposes the JobManager's REST port as a port on the public Kubernetes nodes. Note that nodePort is only valid on a Service of type NodePort or LoadBalancer, which is why it is not set on the ClusterIP service above.)

apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager-rest
spec:
  type: NodePort
  ports:
  - name: rest
    port: 8081
    targetPort: 8081
  selector:
    app: flink
    component: jobmanager

host-edit.yaml

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod  # pod name
spec:
  hostAliases:
  - ip: "192.168.66.192"
    hostnames:
    - "cdh-master"
  - ip: "192.168.66.193"
    hostnames:
    - "cdh-slave1"
  - ip: "192.168.66.194"
    hostnames:
    - "cdh-slave2"
  - ip: "192.168.66.195"
    hostnames:
    - "cdh-slave3"
  containers:
  - name: cat-hosts
    image: flink:1.8.2
    command:
    - cat
    args:
    - "/etc/hosts"