Kubernetes Deployment Guide

Table of Contents

1: Installing Docker
1.1: RPM package installation
1.1.1: Downloading the official RPM package
1.1.2: Running the installation
1.1.3: Verifying the Docker version
1.1.4: Starting the Docker service
1.2: Installing Docker from binaries
1.2.1: Unpacking the binary package
1.2.2: Copying the executables
1.2.3: Preparing the Docker environment file
1.2.4: Preparing the Docker service unit
1.2.5: Verifying the Docker version
1.2.6: Starting the Docker service
2: Deploying the etcd cluster
2.1: Installing etcd on each node
2.1.1: Deploying etcd on node 1
2.1.2: Node 2
2.1.3: Node 3
2.2: Starting etcd and verifying the cluster
2.2.1: Starting the etcd service on each node
2.2.2: Listing all etcd members
2.2.3: Verifying leader failover
2.2.4: Checking the cluster health
2.2.5: Testing a key/value write
2.2.6: Reading the key/value from another node
2.2.7: Checking the etcd version
3: Deploying the Kubernetes master cluster
3.1: Installing keepalived + haproxy
3.1.1: Deploying keepalived
3.1.2: Deploying haproxy
3.1.3: Verifying the load balancer status
3.2: Deploying kube-apiserver
3.2.1: Node 1
3.2.2: Node 2
3.2.3: Node 3
3.3: Deploying the controller-manager service
3.4: Deploying the scheduler service
4: Deploying the node servers
4.1: Deploying the kubelet service
4.2: Deploying kube-proxy
4.3: Deploying flannel
4.4: Docker configuration on the node servers
4.4.1: node1 configuration
4.4.2: node2 configuration
4.4.4: Checking the node servers from the master
4.5: Testing container network communication
4.5.1: Starting a container on node1
4.5.2: Starting a container on node2
4.5.3: Verifying network communication
5: Deploying SkyDNS
5.1: Preparing the images
5.1.1: Downloading the images
5.1.2: Importing the images
5.2: Uploading the images to the Harbor server
5.2.1: Tagging the images
5.2.2: Pushing the images to Harbor
5.2.3: Verifying the images in Harbor
5.3: Preparing the YAML files
5.3.1: Preparing skydns-svc.yaml
5.3.2: Preparing skydns-rc.yaml
5.3.3: Creating the services
5.3.4: Deploying the busybox test service
5.3.5: Testing service name resolution
6: Deploying the Kubernetes dashboard
6.1: Downloading the official images
6.1.1: Downloading from GitHub
6.1.2: Google image registry
6.1.3: Uploading the dashboard image to Harbor
6.2: Creating the YAML file
6.3: Creating and verifying the dashboard
6.3.1: Creating the dashboard service
6.3.2: Verifying the service status
6.3.3: Describing the service
6.3.4: Verifying the pod status
6.3.5: Describing the pod
6.3.6: Verifying the deployment status
6.3.7: Describing the deployment
6.3.8: Viewing a pod's logs
6.3.9: Verifying the container on the node
6.3.10: Accessing the dashboard UI
6.3.11: Reverse proxying through haproxy
6.3.12: Testing web access
6.4: Deploying heapster + influxdb + grafana
6.4.1: Downloading the source package
6.4.2: Checking the required images
6.4.3: Preparing the images
6.4.4: Preparing the YAML files
6.4.5: Creating and verifying the services

1: Preparing the base environment:

The environment uses six virtual or physical machines as servers: two form a highly available Kubernetes master pair running the Kubernetes cluster services, another two provide a highly available Harbor image registry, and the last two are node servers, i.e. the hosts that run the Kubernetes client services and actually run the containers (they can be physical or virtual machines). The IP plan is as follows:

K8S master: 192.168.10.205, 192.168.10.206

K8S node: 192.168.10.207, 192.168.10.208

Harbor: 192.168.10.209, 192.168.10.210

The planned topology is shown below:

Installing Docker:

Kubernetes is built on container technology, and the most commonly used container runtime today is Docker (official site: https://www.docker.com/). See the previous article for detailed Docker usage.

There are two Docker servers, with IP addresses 192.168.10.209 and 192.168.10.210; 192.168.10.209 is installed from the RPM package and 192.168.10.210 from the binary package.

1.1: RPM package installation:

Official download URL:

https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

1.1.1: Downloading the official RPM package:

1.1.2: Running the installation:

[root@docker-server6 ~]# yum install docker-ce-17.09.1.ce-1.el7.centos.x86_64.rpm

1.1.3: Verifying the Docker version:

[root@docker-server6 ~]# docker -v

Docker version 17.09.1-ce, build 19e2cf6

1.1.4: Starting the Docker service:

[root@docker-server6 ~]# systemctl start docker

1.2: Installing Docker from binaries:

Binary tarball download URL:

https://download.docker.com/linux/static/stable/x86_64/

1.2.1: Unpacking the binary package:

[root@docker-server5 ~]# cd /usr/local/src/

[root@docker-server5 src]# tar xf docker-17.09.1-ce.tgz

1.2.2: Copying the executables:

[root@docker-server5 docker]# cp -p ./* /usr/bin/

1.2.3: Preparing the Docker environment file:

This file is used later for custom settings such as the Harbor registry configuration.

[root@docker-server5 docker]# grep -v "#" /etc/sysconfig/docker | grep -v "^$"

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

if [ -z "${DOCKER_CERT_PATH}" ]; then

DOCKER_CERT_PATH=/etc/docker

fi

1.2.4: Preparing the Docker service unit:

This file has to be created manually.

[root@docker-server5 docker]# vim /usr/lib/systemd/system/docker.service

[root@docker-server5 docker]# cat /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service #depends on the firewalld service

Wants=network-online.target

[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd

ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.

# Only systemd 226 and above support this version.

#TasksMax=infinity

TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes

# kill only the docker process, not all processes in the cgroup

KillMode=process

# restart the docker process if it exits prematurely

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

[Install]

WantedBy=multi-user.target

1.2.5: Verifying the Docker version:

[root@docker-server5 docker]# docker -v

Docker version 17.09.1-ce, build 19e2cf6

1.2.6: Starting the Docker service:

[root@docker-server5 docker]# systemctl start docker && systemctl enable docker

2: Deploying the etcd cluster:

etcd, developed by CoreOS, is a distributed key/value store based on the Raft algorithm. It is commonly used to hold the core data of a distributed system with consistency and high availability; Kubernetes uses etcd to store all of its runtime data, such as cluster IP allocations, master and node status, pod status, and service discovery data.

Three nodes are used, with IPs 192.168.10.205, 192.168.10.206 and 192.168.10.207. In testing, a three-node cluster can tolerate at most one node failure; if two nodes go down, the whole etcd cluster becomes unavailable.

2.1: Installing the etcd service on each node:

Download URL: https://github.com/coreos/etcd/releases/

2.1.1: Deploying etcd on node 1:

Upload the package to /usr/local/src on the server.

2.1.1.1: Unpacking the binary package:

[root@docker-server1 ~]# cd /usr/local/src/

[root@docker-server1 src]# tar xf etcd-v3.2.12-linux-amd64.tar.gz

2.1.1.2: Copying the executables:

etcd is the command that starts the server, and etcdctl is the client command-line tool.

[root@docker-server1 src]# cd etcd-v3.2.12-linux-amd64

[root@docker-server1 etcd-v3.2.12-linux-amd64]# cp etcd etcdctl /usr/bin/

2.1.1.3: Creating the etcd service unit:

[root@docker-server1 ~]# vim /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

[Service]

Type=simple

WorkingDirectory=/var/lib/etcd/ #data directory

EnvironmentFile=-/etc/etcd/etcd.conf #path to the configuration file

User=etcd

ExecStart=/usr/bin/etcd #path to the etcd server binary

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2.1.1.4: Creating the etcd user and granting directory permissions:

[root@docker-server1 ~]# mkdir /var/lib/etcd

[root@docker-server1 ~]# useradd etcd -s /sbin/nologin

[root@docker-server1 ~]# chown etcd.etcd /var/lib/etcd/

[root@docker-server1 ~]# mkdir /etc/etcd

2.1.1.5: Editing the main configuration file etcd.conf:

[root@docker-server1 ~]# grep "^[a-Z]" /etc/etcd/etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.10.205:2380"

ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.10.205:2379"

ETCD_NAME="etcd-cluster1"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.205:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.205:2379"

ETCD_INITIAL_CLUSTER="etcd-cluster1=http://192.168.10.205:2380,etcd-cluster2=http://192.168.10.206:2380,etcd-cluster3=http://192.168.10.207:2380"

ETCD_INITIAL_CLUSTER_TOKEN="myk8s-etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

#Notes

ETCD_DATA_DIR #local data directory on this node

ETCD_LISTEN_PEER_URLS #peer (cluster) communication URL, use this node's own IP

ETCD_LISTEN_CLIENT_URLS #client access URLs, use this node's own IP

ETCD_NAME #name of this node, must be unique within the cluster

ETCD_INITIAL_ADVERTISE_PEER_URLS #peer URL advertised to the cluster (static or dynamic discovery), this node's IP

ETCD_ADVERTISE_CLIENT_URLS #client URL advertised to clients, this node's IP

ETCD_INITIAL_CLUSTER #"node1-name=http://IP:port,node2-name=http://IP:port,node3-name=http://IP:port", list every node in the cluster

ETCD_INITIAL_CLUSTER_TOKEN #token used when creating the cluster, must be the same on every node in the cluster

ETCD_INITIAL_CLUSTER_STATE #new when creating a new cluster, existing for a cluster that already exists.

#If the data directory is not set in the conf file, a *.etcd directory is created under /var/lib by default.

#Once the etcd cluster has been created successfully, the initial state can be changed to existing.

2.1.2: Node 2:

2.1.2.1: Unpacking the binary package:

[root@docker-server2 ~]# cd /usr/local/src/

[root@docker-server2 src]# tar xf etcd-v3.2.12-linux-amd64.tar.gz

2.1.2.2: Copying the executables:

[root@docker-server2 src]# cd etcd-v3.2.12-linux-amd64

[root@docker-server2 etcd-v3.2.12-linux-amd64]# cp etcd etcdctl /usr/bin/

2.1.2.3: Creating the etcd service unit:

[root@docker-server2 ~]# vim /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

[Service]

Type=simple

WorkingDirectory=/var/lib/etcd/ #data directory

EnvironmentFile=-/etc/etcd/etcd.conf #path to the configuration file

User=etcd

ExecStart=/usr/bin/etcd #path to the etcd server binary

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2.1.2.4: Creating the etcd user and granting data directory permissions:

[root@docker-server2 ~]# mkdir /var/lib/etcd

[root@docker-server2 ~]# useradd etcd -s /sbin/nologin

[root@docker-server2 ~]# chown etcd.etcd /var/lib/etcd/

2.1.2.5: Editing the main configuration file etcd.conf:

[root@docker-server2 ~]# mkdir /etc/etcd

[root@docker-server2 ~]# grep -v "#" /etc/etcd/etcd.conf

[Member]

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.10.206:2380"

ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.10.206:2379"

ETCD_NAME="etcd-cluster2"

[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.206:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.206:2379"

ETCD_INITIAL_CLUSTER="etcd-cluster1=http://192.168.10.205:2380,etcd-cluster2=http://192.168.10.206:2380,etcd-cluster3=http://192.168.10.207:2380"

ETCD_INITIAL_CLUSTER_TOKEN="myk8s-etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

[Proxy]

2.1.3: Node 3:

2.1.3.1: Unpacking the binary package:

[root@docker-server3 ~]# cd /usr/local/src/

[root@docker-server3 src]# tar xf etcd-v3.2.12-linux-amd64.tar.gz

2.1.3.2: Copying the executables:

[root@docker-server3 src]# cd etcd-v3.2.12-linux-amd64

[root@docker-server3 etcd-v3.2.12-linux-amd64]# cp etcd etcdctl /usr/bin/

2.1.3.3: Creating the etcd service unit:

[root@docker-server3 ~]# vim /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

[Service]

Type=simple

WorkingDirectory=/var/lib/etcd/ #data directory

EnvironmentFile=-/etc/etcd/etcd.conf #path to the configuration file

User=etcd

ExecStart=/usr/bin/etcd #path to the etcd server binary

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

2.1.3.4: Creating the etcd user and granting the data directory:

[root@docker-server3 ~]# mkdir /var/lib/etcd

[root@docker-server3 ~]# useradd etcd -s /sbin/nologin

[root@docker-server3 ~]# chown etcd.etcd /var/lib/etcd/

2.1.3.5: Editing the main configuration file etcd.conf:

[root@docker-server3 ~]# mkdir /etc/etcd

[root@docker-server3 ~]# grep -v "#" /etc/etcd/etcd.conf

[Member]

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://192.168.10.207:2380"

ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://192.168.10.207:2379"

ETCD_NAME="etcd-cluster3"

[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.207:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.10.207:2379"

ETCD_INITIAL_CLUSTER="etcd-cluster1=http://192.168.10.205:2380,etcd-cluster2=http://192.168.10.206:2380,etcd-cluster3=http://192.168.10.207:2380"

ETCD_INITIAL_CLUSTER_TOKEN="myk8s-etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

[Proxy]

2.2: Starting etcd and verifying the cluster:

2.2.1: Starting the etcd service on each node:

[root@docker-server1 ~]# systemctl start etcd && systemctl enable etcd

[root@docker-server2 ~]# systemctl start etcd && systemctl enable etcd

[root@docker-server3 ~]# systemctl start etcd && systemctl enable etcd

2.2.2: Listing all etcd members:

[root@docker-server3 ~]# etcdctl member list

3e841cf8f23a49b2: name=etcd-cluster2 peerURLs=http://192.168.10.206:2380 clientURLs=http://192.168.10.206:2379 isLeader=false

6af68bb1258f9eae: name=etcd-cluster1 peerURLs=http://192.168.10.205:2380 clientURLs=http://192.168.10.205:2379 isLeader=false

d6e858e746a0a921: name=etcd-cluster3 peerURLs=http://192.168.10.207:2380 clientURLs=http://192.168.10.207:2379 isLeader=true

2.2.3: Verifying leader failover:

Stop the etcd service on the current leader, then check whether the leader role moves to another node.
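A minimal sketch of this check, assuming the current leader is etcd-cluster3 on 192.168.10.207 as in the member list above:

[root@docker-server3 ~]# systemctl stop etcd #stop etcd on the current leader

[root@docker-server1 ~]# etcdctl member list #run on a surviving node; isLeader=true should move to another member

[root@docker-server3 ~]# systemctl start etcd #start the stopped node again afterwards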

2.2.4: Checking the cluster health:

[root@docker-server3 ~]# etcdctl cluster-health

2.2.5: Testing a key/value write:

[root@docker-server1 ~]# etcdctl set /testkey "mytest in 192.168.10.205"

2.2.6: Reading the key/value from another node:
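The command itself is not shown in the original; a minimal sketch, reusing the /testkey written in the previous step, would be:

[root@docker-server2 ~]# etcdctl get /testkey #should return the value set on 192.168.10.205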

2.2.7: Checking the etcd version:

[root@docker-server3 ~]# etcdctl --version

etcdctl version: 3.2.12

API version: 2

3: Deploying the Kubernetes master cluster:

Project URL: https://github.com/kubernetes/kubernetes/

Download URL for the server, client, and node binary packages:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md

3.1: Installing keepalived + haproxy:

The Kubernetes cluster is deployed across three nodes, with haproxy providing a highly available reverse proxy in front of them. The plan is as follows:

3.1.1: Deploying keepalived:

3.1.1.1: Load balancer 1:

[root@docker-server4 src]# tar xvf keepalived-1.3.6.tar.gz

[root@docker-server4 src]# cd keepalived-1.3.6

[root@docker-server4 keepalived-1.3.6]# yum install libnfnetlink-devel libnfnetlink ipvsadm libnl libnl-devel libnl3 libnl3-devel lm_sensors-libs net-snmp-agent-libs net-snmp-libs openssh-server openssh-clients openssl openssl-devel vim wget net-tools mysql-dev mysql supervisor git redis tree sudo psmisc lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute

[root@docker-server4 keepalived-1.3.6]# ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

[root@docker-server4 keepalived-1.3.6]# cp /usr/local/src/keepalived-1.3.6/keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig

[root@docker-server4 keepalived-1.3.6]# cp /usr/local/src/keepalived-1.3.6/keepalived/keepalived.service /usr/lib/systemd/system/

[root@docker-server4 keepalived-1.3.6]# cp /usr/local/src/keepalived-1.3.6/bin/keepalived /usr/sbin/

[root@docker-server4 keepalived-1.3.6]# mkdir /etc/keepalived
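The original does not show the file being opened; the configuration below is assumed to go into /etc/keepalived/keepalived.conf, the same path used on load balancer 2:

[root@docker-server4 keepalived-1.3.6]# vim /etc/keepalived/keepalived.conf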

vrrp_instance VI_1 {

state MASTER

interface eth0

virtual_router_id 1

priority 100

advert_int 1

unicast_src_ip 192.168.10.208

unicast_peer {

192.168.10.209

}

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.10.100 dev eth0 label eth0:0

}

}

3.1.1.2: Load balancer 2:

[root@docker-server5 src]# tar xvf keepalived-1.3.6.tar.gz

[root@docker-server5 src]# cd keepalived-1.3.6

[root@docker-server5 src]# ./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install

[root@docker-server5 src]# cp /usr/local/src/keepalived-1.3.6/keepalived/etc/init.d/keepalived.rh.init /etc/sysconfig/keepalived.sysconfig

[root@docker-server5 src]# cp /usr/local/src/keepalived-1.3.6/keepalived/keepalived.service /usr/lib/systemd/system/

[root@docker-server5 src]# cp /usr/local/src/keepalived-1.3.6/bin/keepalived /usr/sbin/

[root@docker-server5 src]# mkdir /etc/keepalived

[root@docker-server5 src]# vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {

state BACKUP

interface eth0

virtual_router_id 1

priority 50

advert_int 1

unicast_src_ip 192.168.10.209

unicast_peer {

192.168.10.208

}

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

192.168.10.100 dev eth0 label eth0:0

}

}

3.1.2: Deploying haproxy:

3.1.2.1: Load balancer 1:

[root@docker-server4 src]# tar xvf haproxy-1.8.2.tar.gz

[root@docker-server4 src]# cd haproxy-1.8.2

[root@docker-server4 haproxy-1.8.2]# yum install gcc pcre pcre-devel openssl openssl-devel -y

[root@docker-server4 haproxy-1.8.2]# make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

[root@docker-server4 haproxy-1.8.2]# cp haproxy /usr/sbin/

[root@docker-server4 haproxy-1.8.2]# cp haproxy-systemd-wrapper /usr/sbin/haproxy-systemd-wrapper #used by earlier haproxy versions

[root@docker-server4 src]# cat /etc/sysconfig/haproxy

# Add extra options to the haproxy daemon here. This can be useful for

# specifying multiple configuration files with multiple -f options.

# See haproxy(1) for a complete list of options.

OPTIONS=""

[root@docker-server4 haproxy-1.8.2]# mkdir /etc/haproxy

[root@docker-server4 haproxy-1.8.2]# cat /etc/haproxy/haproxy.cfg

global

maxconn 100000

chroot /usr/local/haproxy

uid 99

gid 99

daemon

nbproc 1

pidfile /usr/local/haproxy/run/haproxy.pid

log 127.0.0.1 local3 info

defaults

option http-keep-alive

option forwardfor

maxconn 100000

mode http

timeout connect 300000ms

timeout client 300000ms

timeout server 300000ms

listen stats

mode http

bind 0.0.0.0:9999

stats enable

log global

stats uri /haproxy-status

stats auth haadmin:q1w2e3r4ys

listen apiserver_port

bind 192.168.10.100:2379

mode http

log global

server apiserver-web1 192.168.10.205:2379 check inter 3000 fall 2 rise 5

server apiserver-web2 192.168.10.206:2379 check inter 3000 fall 2 rise 5

server apiserver-web3 192.168.10.207:2379 check inter 3000 fall 2 rise 5

[root@docker-server4 haproxy-1.8.2]# systemctl restart haproxy

[root@docker-server4 haproxy-1.8.2]# systemctl enable haproxy

3.1.2.2: Load balancer 2:

[root@docker-server5 src]# tar xvf haproxy-1.8.2.tar.gz

[root@docker-server5 src]# cd haproxy-1.8.2

[root@docker-server5 haproxy-1.8.2]# yum install gcc pcre pcre-devel openssl openssl-devel -y

[root@docker-server5 haproxy-1.8.2]# make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy

[root@docker-server5 haproxy-1.8.2]# cp haproxy /usr/sbin/

[root@docker-server5 haproxy-1.8.2]# cp haproxy-systemd-wrapper /usr/sbin/haproxy-systemd-wrapper

[root@docker-server5 haproxy-1.8.2]# cat /etc/sysconfig/haproxy

# Add extra options to the haproxy daemon here. This can be useful for

# specifying multiple configuration files with multiple -f options.

# See haproxy(1) for a complete list of options.

OPTIONS=""

[root@docker-server5 haproxy-1.8.2]# mkdir /etc/haproxy

global

maxconn 100000

chroot /usr/local/haproxy

uid 99

gid 99

daemon

nbproc 1

pidfile /usr/local/haproxy/run/haproxy.pid

log 127.0.0.1 local3 info

defaults

option http-keep-alive

option forwardfor

maxconn 100000

mode http

timeout connect 300000ms

timeout client 300000ms

timeout server 300000ms

listen stats

mode http

bind 0.0.0.0:9999

stats enable

log global

stats uri /haproxy-status

stats auth haadmin:q1w2e3r4ys

listen apiserver_port

bind 192.168.10.100:2379

mode http

log global

server apiserver-web1 192.168.10.205:2379 check inter 3000 fall 2 rise 5

server apiserver-web2 192.168.10.206:2379 check inter 3000 fall 2 rise 5

server apiserver-web3 192.168.10.207:2379 check inter 3000 fall 2 rise 5

[root@docker-server5 haproxy-1.8.2]# systemctl restart haproxy

[root@docker-server5 haproxy-1.8.2]# systemctl enable haproxy

3.1.3: Verifying the load balancer status:

3.1.3.1: Load balancer 1:

3.1.3.2: Load balancer 2:
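The original shows screenshots here. A command-line spot check (a sketch, using the stats credentials configured above) could be:

[root@docker-server4 ~]# ip addr show eth0 #the VIP 192.168.10.100 should be bound as eth0:0 on the active keepalived node

[root@docker-server4 ~]# curl -u haadmin:q1w2e3r4ys http://192.168.10.100:9999/haproxy-status #haproxy statistics page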

3.2: Deploying kube-apiserver:

kube-apiserver exposes an HTTP REST interface and is the single entry point for all create/read/update/delete operations on Kubernetes resources. kube-apiserver is stateless, so the three instances need no leader election: the data produced by the apiserver is stored directly in etcd, and the API can therefore be called through the load balancer.

3.2.1: Node 1:

3.2.1.1: Unpacking the binary packages:

[root@docker-server1 src]# tar xf kubernetes-1.9.0.tar.gz #contains the service install scripts

[root@docker-server1 src]# tar xf kubernetes-server-linux-amd64.tar.gz #contains the binaries

[root@docker-server1 src]# pwd

/usr/local/src

[root@docker-server1 src]# cd kubernetes/cluster/centos/master/scripts/

[root@docker-server1 scripts]# ll #the install scripts for each component are in this directory

total 28

-rwxr-x--- 1 root root 5054 Dec 16 05:27 apiserver.sh

-rwxr-x--- 1 root root 2237 Dec 16 05:27 controller-manager.sh

-rwxr-x--- 1 root root 2585 Dec 16 05:27 etcd.sh

-rw-r----- 1 root root 2264 Dec 16 05:27 flannel.sh

-rw-r----- 1 root root 853 Dec 16 05:27 post-etcd.sh

-rwxr-x--- 1 root root 1706 Dec 16 05:27 scheduler.sh

3.2.1.2: Modifying the kube-apiserver install script:

kube-apiserver is installed via a script that automatically generates the configuration file and the service unit. The default install path is under /opt; to standardize the layout, everything is installed under /etc instead, so the script needs the following changes:

Comment out six lines. The modified script, with comments stripped, is:

[root@docker-server1 scripts]# grep -v "#" apiserver.sh | grep -v "^$"

#variables

MASTER_ADDRESS=${1:-"192.168.10.205"}

ETCD_SERVERS=${2:-"http://192.168.10.205:2379"} #this node's address

SERVICE_CLUSTER_IP_RANGE=${3:-"10.1.0.0/16"} #custom service network; must be identical on all master nodes

ADMISSION_CONTROL=${4:-""}

#generate the main configuration file

cat <<EOF >/etc/kubernetes/cfg/kube-apiserver.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--insecure-port=8080"

NODE_PORT="--kubelet-port=10250"

KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"

EOF

KUBE_APISERVER_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${KUBE_ETCD_SERVERS} \\

\${KUBE_ETCD_CAFILE} \\

\${KUBE_ETCD_CERTFILE} \\

\${KUBE_ETCD_KEYFILE} \\

\${KUBE_API_ADDRESS} \\

\${KUBE_API_PORT} \\

\${NODE_PORT} \\

\${KUBE_ADVERTISE_ADDR} \\

\${KUBE_ALLOW_PRIV} \\

\${KUBE_SERVICE_ADDRESSES} \\

\${KUBE_ADMISSION_CONTROL} \\

\${KUBE_API_CLIENT_CA_FILE} \\

\${KUBE_API_TLS_CERT_FILE} \\

\${KUBE_API_TLS_PRIVATE_KEY_FILE}"

#generate the service unit

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-apiserver.cfg

ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl restart kube-apiserver

3.2.1.3: Preparing the installation environment:

[root@docker-server1 scripts]# mkdir /etc/kubernetes/cfg -p

[root@docker-server1 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-apiserver /usr/bin/

3.2.1.4: Running the installation:

[root@docker-server1 scripts]# bash apiserver.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

3.2.1.5: Starting and verifying the kube-apiserver service:

[root@docker-server1 scripts]# systemctl start kube-apiserver

[root@docker-server1 scripts]# ps -ef | grep kube-apiserver

3.2.2: Node 2:

3.2.2.1: Unpacking the binary packages:

[root@docker-server2 ~]# cd /usr/local/src/

[root@docker-server2 src]# tar xf kubernetes-1.9.0.tar.gz

[root@docker-server2 src]# tar xf kubernetes-server-linux-amd64.tar.gz

3.2.2.2: Modifying the kube-apiserver install script:

The apiserver is installed via a script that automatically generates the configuration file and the service unit. The default install path is under /opt; to standardize the layout, everything is installed under /etc instead, so the script needs the following changes:

[root@docker-server2 scripts]# grep -v "#" apiserver.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.206"}

ETCD_SERVERS=${2:-"http://192.168.10.206:2379"}

SERVICE_CLUSTER_IP_RANGE=${3:-"10.1.0.0/16"}

ADMISSION_CONTROL=${4:-""}

cat <<EOF >/etc/kubernetes/cfg/kube-apiserver.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--insecure-port=8080"

NODE_PORT="--kubelet-port=10250"

KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"

EOF

KUBE_APISERVER_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${KUBE_ETCD_SERVERS} \\

\${KUBE_ETCD_CAFILE} \\

\${KUBE_ETCD_CERTFILE} \\

\${KUBE_ETCD_KEYFILE} \\

\${KUBE_API_ADDRESS} \\

\${KUBE_API_PORT} \\

\${NODE_PORT} \\

\${KUBE_ADVERTISE_ADDR} \\

\${KUBE_ALLOW_PRIV} \\

\${KUBE_SERVICE_ADDRESSES} \\

\${KUBE_ADMISSION_CONTROL} \\

\${KUBE_API_CLIENT_CA_FILE} \\

\${KUBE_API_TLS_CERT_FILE} \\

\${KUBE_API_TLS_PRIVATE_KEY_FILE}"

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-apiserver.cfg

ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl restart kube-apiserver

3.2.2.3: Preparing the installation environment:

[root@docker-server2 scripts]# mkdir /etc/kubernetes/cfg -p

[root@docker-server2 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-apiserver /usr/bin/

3.2.2.4: Running the installation:

[root@docker-server2 scripts]# bash apiserver.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

3.2.2.5: Starting and verifying the kube-apiserver service:
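The commands are not shown for node 2; the same steps as on node 1 are assumed:

[root@docker-server2 scripts]# systemctl start kube-apiserver

[root@docker-server2 scripts]# ps -ef | grep kube-apiserver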

3.2.3: Node 3:

3.2.3.1: Unpacking the binary packages:

[root@docker-server3 ~]# cd /usr/local/src/

[root@docker-server3 src]# tar xf kubernetes-1.9.0.tar.gz

[root@docker-server3 src]# tar xf kubernetes-server-linux-amd64.tar.gz

3.2.3.2: Modifying the kube-apiserver install script:

[root@docker-server3 scripts]# grep -v "#" apiserver.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.207"}

ETCD_SERVERS=${2:-"http://192.168.10.207:2379"}

SERVICE_CLUSTER_IP_RANGE=${3:-"10.1.0.0/16"}

ADMISSION_CONTROL=${4:-""}

cat <<EOF >/etc/kubernetes/cfg/kube-apiserver.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--insecure-port=8080"

NODE_PORT="--kubelet-port=10250"

KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"

EOF

KUBE_APISERVER_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${KUBE_ETCD_SERVERS} \\

\${KUBE_ETCD_CAFILE} \\

\${KUBE_ETCD_CERTFILE} \\

\${KUBE_ETCD_KEYFILE} \\

\${KUBE_API_ADDRESS} \\

\${KUBE_API_PORT} \\

\${NODE_PORT} \\

\${KUBE_ADVERTISE_ADDR} \\

\${KUBE_ALLOW_PRIV} \\

\${KUBE_SERVICE_ADDRESSES} \\

\${KUBE_ADMISSION_CONTROL} \\

\${KUBE_API_CLIENT_CA_FILE} \\

\${KUBE_API_TLS_CERT_FILE} \\

\${KUBE_API_TLS_PRIVATE_KEY_FILE}"

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-apiserver.cfg

ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl restart kube-apiserver

3.2.3.3: Preparing the installation environment:

[root@docker-server3 scripts]# mkdir /etc/kubernetes/cfg -p

[root@docker-server3 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-apiserver /usr/bin/

3.2.3.4: Running the installation:

[root@docker-server3 scripts]# bash apiserver.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

3.2.3.5: Starting and verifying kube-apiserver:

[root@docker-server3 scripts]# systemctl start kube-apiserver

3.2.4: Testing the etcd data:

3.2.4.1: Writing data on any etcd node:

[root@bj-zw-k8s-group1-master1-v-9-63 ~]# etcdctl set /testkey "test value"

test value

[root@bj-zw-k8s-group1-master1-v-9-63 ~]# etcdctl get /testkey

test value

3.2.4.2: Verifying the data on the other etcd nodes:

3.3: Deploying the controller-manager service:

Comment out the two certificate lines. The modified script, with comments stripped, is:

[root@docker-server1 scripts]# grep -v "#" controller-manager.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.205"} #the apiserver VIP address

cat <<EOF >/etc/kubernetes/cfg/kube-controller-manager.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"

KUBE_LEADER_ELECT="--leader-elect"

EOF

KUBE_CONTROLLER_MANAGER_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${KUBE_MASTER} \\

\${KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE} \\

\${KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE}\\

\${KUBE_LEADER_ELECT}"

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-controller-manager.cfg

ExecStart=/usr/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl restart kube-controller-manager

Copy the kube-controller-manager binary and run the install script:
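The script only generates the configuration and unit file, so the kube-controller-manager binary itself must already be in /usr/bin; a copy step like the following is assumed here, mirroring the kube-apiserver and kube-scheduler sections:

[root@docker-server1 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-controller-manager /usr/bin/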

[root@docker-server1 scripts]# bash controller-manager.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

Start the service:

[root@docker-server1 scripts]# systemctl start kube-controller-manager

Verify that the service is running:
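No output is shown; a simple check (a sketch) is:

[root@docker-server1 scripts]# ps -ef | grep kube-controller-manager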

3.4: Deploying the scheduler service:

Edit the deployment script:

[root@docker-server1 scripts]# grep -v "#" scheduler.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.205"}

cat <<EOF >/etc/kubernetes/cfg/kube-scheduler.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"

KUBE_LEADER_ELECT="--leader-elect"

KUBE_SCHEDULER_ARGS=""

EOF

KUBE_SCHEDULER_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${KUBE_MASTER} \\

\${KUBE_LEADER_ELECT} \\

\$KUBE_SCHEDULER_ARGS"

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-scheduler.cfg

ExecStart=/usr/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-scheduler

systemctl restart kube-scheduler

Copy the kube-scheduler binary:

[root@docker-server1 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-scheduler /usr/bin/

Copy the kubectl command, which is used to verify the K8S cluster:

[root@docker-server1 scripts]# cp /usr/local/src/kubernetes/server/bin/kubectl /usr/bin/

Run the installation:

[root@docker-server1 scripts]# bash scheduler.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

[root@docker-server1 ~]# systemctl restart kube-scheduler

Verify the service:
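No output is shown here either. Since kubectl was copied above, one way to check all master components at once (a sketch, assuming the local apiserver listens on port 8080) is:

[root@docker-server1 ~]# kubectl -s http://192.168.10.205:8080 get componentstatuses #scheduler, controller-manager and the etcd members should report Healthy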

4: Deploying the node servers:

Both the kubelet and kube-proxy services must be deployed on every node server:

4.1: Deploying the kubelet service:

[root@docker-server4 src]# tar xf kubernetes-1.9.0.tar.gz

[root@docker-server4 src]# tar xf kubernetes-server-linux-amd64.tar.gz

[root@docker-server4 src]# cd kubernetes/cluster/centos/node/scripts

[root@docker-server4 scripts]# yum install yum-utils -y

[root@docker-server4 scripts]# cp /usr/local/src/kubernetes/server/bin/kubelet /usr/bin/

[root@docker-server4 scripts]# mkdir /etc/kubernetes/cfg -p

Edit the install script:

[root@docker-server4 scripts]# grep -v "#" kubelet.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.100"} #the API server VIP address

NODE_ADDRESS=${2:-"192.168.10.208"} #this node's IP address

DNS_SERVER_IP=${3:-"192.168.10.205"} #custom DNS address, used later

DNS_DOMAIN=${4:-"cluster.local"}

KUBECONFIG_DIR=${KUBECONFIG_DIR:-/etc/kubernetes/cfg}

cat <<EOF > "${KUBECONFIG_DIR}/kubelet.kubeconfig"

apiVersion: v1

kind: Config

clusters:

- cluster:

server: http://${MASTER_ADDRESS}:8080/

name: local

contexts:

- context:

cluster: local

name: local

current-context: local

EOF

cat <<EOF >/etc/kubernetes/cfg/kubelet.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

NODE_ADDRESS="--address=${NODE_ADDRESS}"

NODE_PORT="--port=10250"

NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"

KUBELET_KUBECONFIG="--kubeconfig=${KUBECONFIG_DIR}/kubelet.kubeconfig"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBELET__DNS_IP="--cluster-dns=${DNS_SERVER_IP}"

KUBELET_DNS_DOMAIN="--cluster-domain=${DNS_DOMAIN}"

KUBELET_ARGS=""

EOF

KUBELET_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${NODE_ADDRESS} \\

\${NODE_PORT} \\

\${NODE_HOSTNAME} \\

\${KUBELET_KUBECONFIG} \\

\${KUBE_ALLOW_PRIV} \\

\${KUBELET__DNS_IP} \\

\${KUBELET_DNS_DOMAIN} \\

\$KUBELET_ARGS"

cat <<EOF >/usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

After=docker.service

Requires=docker.service

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kubelet.cfg

ExecStart=/usr/bin/kubelet ${KUBELET_OPTS}

Restart=on-failure

KillMode=process

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kubelet

systemctl restart kubelet

[root@docker-server4 scripts]# bash kubelet.sh

4.2: Deploying kube-proxy:

Edit the install script:

[root@docker-server4 scripts]# grep -v "#" proxy.sh | grep -v "^$"

MASTER_ADDRESS=${1:-"192.168.10.100"}

NODE_ADDRESS=${2:-"192.168.10.208"}

cat <<EOF >/etc/kubernetes/cfg/kube-proxy.cfg

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=4"

NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"

KUBE_MASTER="--master=http://${MASTER_ADDRESS}:8080"

EOF

KUBE_PROXY_OPTS=" \${KUBE_LOGTOSTDERR} \\

\${KUBE_LOG_LEVEL} \\

\${NODE_HOSTNAME} \\

\${KUBE_MASTER}"

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Proxy

After=network.target

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/kube-proxy.cfg

ExecStart=/usr/bin/kube-proxy ${KUBE_PROXY_OPTS}

Restart=on-failure

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable kube-proxy

systemctl restart kube-proxy

[root@docker-server4 scripts]# cp /usr/local/src/kubernetes/server/bin/kube-proxy /usr/bin/

[root@docker-server5 scripts]# swapoff -a #swap must be disabled

[root@docker-server4 scripts]# bash proxy.sh

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

4.3: Deploying flannel:

Download URL:

https://github.com/coreos/flannel/releases

[root@docker-server4 src]# tar xf flannel-v0.9.1-linux-amd64.tar.gz

[root@docker-server4 src]# mv flanneld /usr/bin/

[root@docker-server4 src]# cd kubernetes/cluster/centos/node/scripts/

[root@docker-server4 scripts]# pwd

/usr/local/src/kubernetes/cluster/centos/node/scripts

Edit the install script:

[root@docker-server4 scripts]# grep -v "#" flannel.sh | grep -v "^$"

ETCD_SERVERS=${1:-"http://192.168.10.205:2379"}

FLANNEL_NET=${2:-"10.0.0.0/16"} #network range for the containers started on the nodes

cat <<EOF >/etc/kubernetes/cfg/flannel.cfg

FLANNEL_ETCD="-etcd-endpoints=${ETCD_SERVERS}"

FLANNEL_ETCD_KEY="-etcd-prefix=/coreos.com/network"

#FLANNEL_ETCD_CAFILE="--etcd-cafile=${CA_FILE}"

#FLANNEL_ETCD_CERTFILE="--etcd-certfile=${CERT_FILE}"

#FLANNEL_ETCD_KEYFILE="--etcd-keyfile=${KEY_FILE}"

EOF

cat <<EOF >/usr/lib/systemd/system/flannel.service

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

Before=docker.service

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/flannel.cfg

ExecStartPre=/usr/bin/remove-docker0.sh

ExecStart=/usr/bin/flanneld --ip-masq \${FLANNEL_ETCD} \${FLANNEL_ETCD_KEY} \${FLANNEL_ETCD_CAFILE} \${FLANNEL_ETCD_CERTFILE} \${FLANNEL_ETCD_KEYFILE}

ExecStartPost=/usr/bin/mk-docker-opts.sh -d /run/flannel/docker

Type=notify

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

EOF

attempt=0

while true; do

/usr/bin/etcdctl \

--no-sync -C ${ETCD_SERVERS} \

get /coreos.com/network/config >/dev/null 2>&1

if [[ "$?" == 0 ]]; then

break

else

if (( attempt > 600 )); then

echo "timeout for waiting network config" > ~/kube/err.log

exit 2

fi

/usr/bin/etcdctl \

--no-sync -C ${ETCD_SERVERS} \

mk /coreos.com/network/config "{\"Network\":\"${FLANNEL_NET}\"}" >/dev/null 2>&1

attempt=$((attempt+1))

sleep 3

fi

done

wait

systemctl daemon-reload

Copy the helper scripts:

[root@docker-server4 node]# cp /usr/local/src/kubernetes/cluster/centos/node/bin/mk-docker-opts.sh /usr/bin/

[root@docker-server4 node]# cp /usr/local/src/kubernetes/cluster/centos/node/bin/remove-docker0.sh /usr/bin/

[root@docker-server3 ~]# scp /usr/bin/etcdctl 192.168.10.208:/usr/bin/

Run the deployment:

# bash flannel.sh

Start the service:

[root@docker-server4 scripts]# systemctl start flannel

Node 2:

[root@docker-server3 ~]# scp /usr/bin/etcdctl 192.168.10.209:/usr/bin/

[root@docker-server5 scripts]# bash -x flannel.sh

[root@docker-server5 scripts]# cp /usr/local/src/kubernetes/cluster/centos/node/bin/mk-docker-opts.sh /usr/bin/

[root@docker-server5 scripts]# cp /usr/local/src/kubernetes/cluster/centos/node/bin/remove-docker0.sh /usr/bin/

Edit the install script:

[root@docker-server5 scripts]# grep -v "#" flannel.sh | grep -v "^$"

ETCD_SERVERS=${1:-"http://192.168.10.100:2379"}

FLANNEL_NET=${2:-"10.0.0.0/16"}

cat <<EOF >/etc/kubernetes/cfg/flannel.cfg

FLANNEL_ETCD="-etcd-endpoints=${ETCD_SERVERS}"

FLANNEL_ETCD_KEY="-etcd-prefix=/coreos.com/network"

EOF

cat <<EOF >/usr/lib/systemd/system/flannel.service

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

Before=docker.service

[Service]

EnvironmentFile=-/etc/kubernetes/cfg/flannel.cfg

ExecStartPre=/usr/bin/remove-docker0.sh

ExecStart=/usr/bin/flanneld --ip-masq \${FLANNEL_ETCD} \${FLANNEL_ETCD_KEY} \${FLANNEL_ETCD_CAFILE} \${FLANNEL_ETCD_CERTFILE} \${FLANNEL_ETCD_KEYFILE}

ExecStartPost=/usr/bin/mk-docker-opts.sh -d /run/flannel/docker

Type=notify

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

EOF

attempt=0

while true; do

/usr/bin/etcdctl \

--no-sync -C ${ETCD_SERVERS} \

get /coreos.com/network/config >/dev/null 2>&1

if [[ "$?" == 0 ]]; then

break

else

if (( attempt > 600 )); then

echo "timeout for waiting network config" > ~/kube/err.log

exit 2

fi

/usr/bin/etcdctl \

--no-sync -C ${ETCD_SERVERS} \

mk /coreos.com/network/config "{\"Network\":\"${FLANNEL_NET}\"}" >/dev/null 2>&1

attempt=$((attempt+1))

sleep 3

fi

done

wait

systemctl daemon-reload

Start the services:

[root@docker-server5 scripts]# systemctl restart docker && systemctl enable docker

[root@docker-server5 scripts]# systemctl restart flannel #this removes the network configuration of an already running Docker daemon

[root@docker-server5 scripts]# cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.0.0.0/16

FLANNEL_SUBNET=10.0.57.1/24

FLANNEL_MTU=1472

FLANNEL_IPMASQ=true

4.4: Docker configuration on the node servers:

4.4.1: node1 configuration:

Edit the startup script:

[root@docker-server5 scripts]# grep -v "#" docker.sh | grep -v "^$"

DOCKER_OPTS=${1:-""}

DOCKER_CONFIG=/etc/kubernetes/cfg/docker.cfg

cat <<EOF >$DOCKER_CONFIG

DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -s overlay --selinux-enabled=false ${DOCKER_OPTS}"

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=http://docs.docker.com

After=network.target flannel.service

Requires=flannel.service

[Service]

Type=notify

EnvironmentFile=-/run/flannel/docker #network parameters

EnvironmentFile=-/etc/kubernetes/cfg/docker.cfg

WorkingDirectory=/usr/bin #must be changed

ExecStart=/usr/bin/dockerd \$DOCKER_OPT_BIP \$DOCKER_OPT_MTU \$DOCKER_OPTS

LimitNOFILE=1048576

LimitNPROC=1048576

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable docker

systemctl restart docker

Confirm that Docker runs correctly:

[root@docker-server5 scripts]# whereis dockerd

dockerd: /usr/bin/dockerd

[root@docker-server5 scripts]# dockerd -v

Docker version 17.09.1-ce, build 19e2cf6

Run the script:

[root@docker-server5 scripts]# bash docker.sh

#Check the network configuration:

# etcdctl -C http://10.20.3.74:2379 get /k8s-boss.com/network/config

{"Network":"192.168.1.0/24"}

Check the current network:
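The original shows a screenshot; the same information can be read from the shell (a sketch):

[root@docker-server5 scripts]# ip addr show #the flannel interface and docker0 should both have addresses inside 10.0.0.0/16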

4.4.2: node2 configuration:

[root@docker-server4 scripts]# grep -v "#" docker.sh | grep -v "^$"

DOCKER_OPTS=${1:-""}

DOCKER_CONFIG=/etc/kubernetes/cfg/docker.cfg

cat <<EOF >$DOCKER_CONFIG

DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -s overlay --selinux-enabled=false ${DOCKER_OPTS}"

EOF

cat <<EOF >/usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=http://docs.docker.com

After=network.target flannel.service

Requires=flannel.service

[Service]

Type=notify

EnvironmentFile=-/run/flannel/docker

EnvironmentFile=-/etc/kubernetes/cfg/docker.cfg

WorkingDirectory=/usr/bin

ExecStart=/usr/bin/dockerd \$DOCKER_OPT_BIP \$DOCKER_OPT_MTU \$DOCKER_OPTS

LimitNOFILE=1048576

LimitNPROC=1048576

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable docker

systemctl restart docker

Confirm that Docker runs correctly:

[root@docker-server4 scripts]# whereis dockerd

dockerd: /usr/bin/dockerd /usr/share/man/man8/dockerd.8.gz

[root@docker-server4 scripts]# dockerd -v

Docker version 1.12.6, build ec8512b/1.12.6

Run the script:

[root@docker-server4 scripts]# bash docker.sh

Check the current network:

How Docker is started:

4.4.4: Checking the node servers from the master:

[root@docker-server1 ~]# kubectl -s http://192.168.10.100:8080 get node

4.4.5: Other verification commands:

http://blog.csdn.net/felix_yujing/article/details/51622132

[root@bj-zw-k8s-group1-master1-v-9-63 ~]# kubectl cluster-info dump

4.5: Testing container network communication:

4.5.1: Starting a container on node1:

[root@docker-server5 ~]# docker run --name node1 --rm -it docker.io/centos /bin/bash

[root@900402570f90 /]#

4.5.2: Starting a container on node2:

[root@docker-server4 opt]# docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

zhangshijie/centos-nginx latest 678e2f074b0d 2 months ago 354MB

centos latest 196e0ce0c9fb 3 months ago 197MB

[root@docker-server4 opt]# docker run --name node2 --rm -it zhangshijie/centos-nginx /bin/bash

[root@afc1023fbe87 /]# yum install net-tools -y

4.5.3: Verifying network communication:

If the network is unreachable, check the firewall configuration and add a rule: iptables -A FORWARD -s 10.0.0.0/16 -j ACCEPT
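A minimal connectivity test (a sketch; the 10.0.x.x address is whatever the node2 container was actually assigned):

[root@afc1023fbe87 /]# ifconfig eth0 #inside the node2 container (net-tools was installed above); note the 10.0.x.x address

[root@900402570f90 /]# ping -c 3 10.0.x.x #inside the node1 container, replace 10.0.x.x with the address shown on node2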

5: Deploying SkyDNS:

SkyDNS is what allows services in a Kubernetes cluster to be accessed by service name. It should be started by default once the Kubernetes cluster is installed; normally the images are pushed to the private registry and SkyDNS is created with YAML files on a machine where the master is already installed.

SkyDNS consists of three parts: etcd, kube2sky and skydns. kube2sky writes information to etcd whenever a service is created, deleted or modified, while skydns serves DNS from that data. When a pod is created, the kubelet settings KUBELET__DNS_IP="--cluster-dns=10.254.254.254" and KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local" point it at the corresponding DNS server, and that DNS resolution service is provided by skydns.

Some of the images must be downloaded from the Google image registry; four images are needed in total. The registry is at: https://console.cloud.google.com/gcr/images/google-containers/GLOBAL

5.1: Preparing the images:

5.1.1: Downloading the images:

# docker pull gcr.io/google-containers/kube2sky-amd64:1.15

# docker pull gcr.io/google-containers/skydns:2015-10-13-8c72f8

# docker pull gcr.io/google-containers/exechealthz-amd64:1.2

# docker pull gcr.io/google-containers/etcd:3.1.11

5.1.2: Importing the images:

[root@docker-server1 opt]# docker load < /opt/skydns_2015-10-13-8c72f8c.tar.gz

5.2: Uploading the images to the Harbor server:

5.2.1: Tagging the images:

[root@docker-server1 opt]# docker tag gcr.io/google-containers/kube2sky-amd64:1.15 192.168.10.210/web/kube2sky-amd64:1.15

[root@docker-server1 opt]# docker tag gcr.io/google-containers/exechealthz-amd64:1.2 192.168.10.210/web/exechealthz-amd64:1.2

[root@docker-server1 opt]# docker tag gcr.io/google-containers/etcd-amd64:3.1.11 192.168.10.210/web/etcd-amd64:3.1.11

[root@docker-server1 opt]# docker tag gcr.io/google-containers/skydns:2015-10-13-8c72f8c 192.168.10.210/web/skydns:2015-10-13-8c72f8c

5.2.2: Pushing the images to Harbor:

[root@docker-server1 opt]# docker push 192.168.10.210/web/etcd-amd64:3.1.11

[root@docker-server1 opt]# docker push 192.168.10.210/web/kube2sky-amd64:1.15

[root@docker-server1 opt]# docker push 192.168.10.210/web/exechealthz-amd64:1.2

[root@docker-server1 opt]# docker push 192.168.10.210/web/skydns:2015-10-13-8c72f8c

5.2.3: Verifying the images in Harbor:

5.3: Preparing the YAML files:

5.3.1: Preparing skydns-svc.yaml:

[root@docker-server1 ~]# cat skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.1.254.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

5.3.2: Preparing skydns-rc.yaml:

[root@docker-server1 ~]# cat skydns-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: 192.168.10.210/web/etcd-amd64:3.1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
      - name: kube2sky
        image: 192.168.10.210/web/kube2sky-amd64:1.15
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        - --kube_master_url=http://192.168.10.205:8080
        - --domain=cluster.local
      - name: skydns
        image: 192.168.10.210/web/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
      - name: healthz
        image: 192.168.10.210/web/exechealthz-amd64:1.2
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local localhost >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}

5.3.3: Creating the services:

Run the following command from the directory that contains the YAML files:

[root@docker-server1 ~]# kubectl create -f skydns-rc.yaml -f skydns-svc.yaml

The containers failed to start and stayed in the ContainerCreating state; the troubleshooting process follows:

5.3.3.1: Checking the pod status:

[root@docker-server1 ~]# kubectl get pod --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system kube-dns-v9-ccsr2 0/4 ContainerCreating 0 5s

5.3.3.2: Checking the pod details:

[root@docker-server1 ~]# kubectl describe pod kube-dns-v9-ccsr2 --namespace="kube-system"

#the log shows that the sandbox cannot be started on node 192.168.10.208

5.3.3.3: Checking the logs on the node:

[root@docker-server4 ~]# tail -fn200 /var/log/messages

The log reports that the image gcr.io/google_containers/pause-amd64:3.0 cannot be downloaded.

5.3.3.4: Downloading and importing the pause image:

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google_containers/pause-amd64:3.0

[root@docker-server4 ~]# docker load -i /opt/pause-amd64_v3.0.tar.gz

5.3.3.5: Deleting and re-creating the services:

[root@docker-server1 ~]# kubectl delete -f skydns-rc.yaml -f skydns-svc.yaml

[root@docker-server1 ~]# kubectl create -f skydns-rc.yaml -f skydns-svc.yaml

[root@docker-server1 ~]# kubectl get pod --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system kube-dns-v9-x2c2t 0/4 ContainerCreating 0 4s

[root@docker-server1 ~]# kubectl describe pod kube-dns-v9-x2c2t --namespace="kube-system"

5.3.3.6: Verifying the containers on the node:

On node 192.168.10.208, verify that the containers have been created and started automatically.
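A quick way to confirm this on the node (a sketch):

[root@docker-server4 ~]# docker ps #the etcd, kube2sky, skydns, exechealthz and pause containers should be listed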

5.3.3.7: Verifying the final pod status:

The output below shows the pod kube-dns-v9-x2c2t running in the kube-system namespace.

[root@docker-server1 ~]# kubectl get pod --all-namespaces

5.3.3.8: Verifying the final service status:

The output below shows two services (kubernetes and kube-dns) running in the two namespaces default and kube-system; next, DNS is tested to confirm that service names resolve correctly.

[root@docker-server1 ~]# kubectl get service --all-namespaces

5.3.4: Deploying the busybox test service:

BusyBox is a single binary that bundles more than a hundred of the most common Linux commands and tools. It includes simple utilities such as ls, cat and echo, as well as larger, more complex tools such as grep, find, mount and telnet. BusyBox is often called the Swiss Army knife of Linux tools: it is essentially a toolbox that packs many Linux commands together, and it also provides the shell shipped with Android.

5.3.4.1: Downloading and importing the image:

# docker pull gcr.io/google-containers/busybox:latest

# docker save gcr.io/google-containers/busybox:latest > /opt/busybox_latest.tar.gz

# sz /opt/busybox_latest.tar.gz

[root@docker-server1 ~]# docker load -i /opt/busybox_latest.tar.gz

5.3.4.2: Pushing the image to Harbor:

[root@docker-server1 ~]# docker tag gcr.io/google-containers/busybox:latest 192.168.10.210/web/busybox:latest

[root@docker-server1 ~]# docker push 192.168.10.210/web/busybox:latest

5.3.4.3: Verifying the image in Harbor:

5.3.4.4: Writing the YAML file:

[root@docker-server1 ~]# cat busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default #DNS test in the default namespace
spec:
  containers:
  - image: 192.168.10.210/web/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

5.3.4.5: Creating the pod:

[root@docker-server1 ~]# kubectl create -f busybox.yaml

5.3.4.6: Verifying the pod:

[root@docker-server1 ~]# kubectl get pod --all-namespaces

5.3.5: Testing service name resolution:

List all current services for the DNS test. DNS resolves a service name to its IP address only within the same namespace; it cannot resolve names across namespaces:

5.3.5.1: Testing resolution of kubernetes:

[root@docker-server1 ~]# kubectl exec busybox nslookup kubernetes

Note: by default this resolves service IPs in the default namespace; service IPs in other namespaces cannot be resolved this way:

5.3.5.2: Creating a DNS test pod in another namespace:

[root@docker-server1 ~]# cat busybox-kube-dns.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: kube-system #the namespace to test
spec:
  containers:
  - image: 192.168.10.210/web/busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

5.3.5.3: Creating the pod:

[root@docker-server1 ~]# kubectl create -f busybox-kube-dns.yaml

pod "busybox" created

5.3.5.4: Verifying that the pod was created:

[root@docker-server1 ~]# kubectl get pods --all-namespaces

5.3.5.5: Testing resolution:

[root@docker-server1 ~]# kubectl --namespace="kube-system" exec busybox nslookup kube-dns

6: Deploying the Kubernetes dashboard:

The Kubernetes dashboard has supported a Simplified Chinese interface since version 1.6.1; as of this writing the latest stable dashboard release is 1.8.1.

6.1: Downloading the official images:

In Linux everything is a file, in Python everything is an object, and in Kubernetes everything is an image. Unlike a traditional service deployment, the dashboard is started from the official Docker images together with a YAML file.

6.1.1: Downloading from GitHub:

Download URL: https://github.com/kubernetes/dashboard/releases

6.1.2: Google image registry:

Download URL:

https://console.cloud.google.com/gcr/images/google-containers/GLOBAL

The other images are downloaded following the same steps as below.

6.1.2.1: Searching for the image:

6.1.2.2: Viewing the image:

6.1.2.3: Viewing the details:

6.1.2.4: Viewing the image pull command:

Click "Show pull command":

6.1.2.5: Downloading the dashboard image:

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google-containers/kubernetes-dashboard-amd64:v1.8.1

6.1.2.6: Downloading the heapster image:

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google-containers/heapster:v1.5.0

Import the downloaded heapster image on every master and node; it does not need to be re-tagged or pushed to Harbor.

Note: without the heapster image the dashboard runs normally after creation but cannot be accessed, and the pod log reports the following error:

6.1.3: Uploading the dashboard image to Harbor:

6.1.3.1: Tagging the image:

[root@docker-server1 ~]# docker load < kubernetes-dashboard-amd64-v1.8.1.tar.gz

[root@docker-server1 ~]# docker tag gcr.io/google-containers/kubernetes-dashboard-amd64:v1.8.1 192.168.10.210/web/kubernetes-dashboard-amd64:v1.8.1

6.1.3.2: Logging in to Harbor and pushing the image:

[root@docker-server1 ~]# docker login 192.168.10.210

Username (admin): admin

Password:

Login Succeeded

[root@docker-server1 ~]# docker push 192.168.10.210/web/kubernetes-dashboard-amd64:v1.8.1

6.1.3.3: Verifying the image in Harbor:

6.2: Creating the YAML file:

[root@docker-server1 k8s-1.8.1-dashborard]# pwd

/opt/k8s-1.8.1-dashborard

[root@docker-server1 k8s-1.8.1-dashborard]# cat kubernetes-dashboard-1.8.1-http.yaml

# Copyright 2017 The Kubernetes Authors.

# Author: ZhangShijie

# Create Date: 2018-01-4

# Licensed under the Apache License, Version 2.0 (the "License");

# you may not use this file except in compliance with the License.

# You may obtain a copy of the License at

#

# http://www.apache.org/licenses/LICENSE-2.0

#

# Unless required by applicable law or agreed to in writing, software

# distributed under the License is distributed on an "AS IS" BASIS,

# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

# See the License for the specific language governing permissions and

# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with

# Kubernetes 1.8.

#

# Example usage: kubectl create -f <this_file>

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

labels:

app: kubernetes-dashboard

name: kubernetes-dashboard

namespace: kube-system

spec:

replicas: 1

selector:

matchLabels:

app: kubernetes-dashboard

template:

metadata:

labels:

app: kubernetes-dashboard

# Comment the following annotation if Dashboard must not be deployed on master

annotations:

scheduler.alpha.kubernetes.io/tolerations: |

[

{

"key": "dedicated",

"operator": "Equal",

"value": "master",

"effect": "NoSchedule"

}

]

spec:

containers:

- name: kubernetes-dashboard

image: 192.168.10.210/web/kubernetes-dashboard-amd64:v1.8.1 #Harbor registry address

imagePullPolicy: IfNotPresent

ports:

- containerPort: 9090

protocol: TCP

args:

# Uncomment the following line to manually specify Kubernetes API server Host

# If not specified, Dashboard will attempt to auto discover the API server and connect

# to it. Uncomment only if the default does not work.

# - --apiserver-host=http://my-address:port

- --apiserver-host=http://192.168.10.205:8080 #master API address

- --heapster-host=http://heapster #without this line the dashboard reports an error

livenessProbe:

httpGet:

path: /

port: 9090

initialDelaySeconds: 30

timeoutSeconds: 30

---

kind: Service

apiVersion: v1

metadata:

labels:

app: kubernetes-dashboard

name: kubernetes-dashboard

namespace: kube-system

spec:

type: NodePort

ports:

- port: 80

targetPort: 9090

selector:

app: kubernetes-dashboard

6.3: Creating and verifying the dashboard:

6.3.1: Creating the dashboard service:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl create -f kubernetes-dashboard-1.8.1-http.yaml

deployment "kubernetes-dashboard" created

service "kubernetes-dashboard" created

6.3.2: Verifying the service status:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl get service --all-namespaces

6.3.3: Describing the service:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl describe service kubernetes-dashboard --namespace="kube-system"

Name: kubernetes-dashboard

Namespace: kube-system

Labels: app=kubernetes-dashboard

Annotations: <none>

Selector: app=kubernetes-dashboard

Type: NodePort

IP: 10.1.83.81

Port: <unset> 80/TCP

TargetPort: 9090/TCP

NodePort: <unset> 30759/TCP

Endpoints: 10.0.93.3:9090

Session Affinity: None

External Traffic Policy: Cluster

Events: <none>

6.3.4: Verifying the pod status:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl get pods --all-namespaces

6.3.5: Describing the pod:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl describe pod kubernetes-dashboard-78df7bfb94-7qxql --namespace="kube-system"

6.3.6: Verifying the deployment status:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl get deployment --namespace="kube-system"

6.3.7: Describing the deployment:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl describe deployment kubernetes-dashboard --namespace="kube-system"

6.3.8: Viewing the logs of a pod:

[root@docker-server1 k8s-1.8.1-dashborard]# kubectl logs -f kubernetes-dashboard-78df7bfb94-29wlf --namespace="kube-system"

6.3.9: Verifying the container on the node:

Step 6.3.5 shows which node the pod is running on.

6.3.10: Accessing the dashboard UI:

Step 6.3.2 shows the exposed port; the dashboard is accessed via nodeIP:nodePort:
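For example, using the NodePort 30759 shown in section 6.3.3 and the IP of either node, e.g. 192.168.10.208, the URL would be:

http://192.168.10.208:30759/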

6.3.10.1: Verifying the dashboard version information:

6.3.10.2: Verifying operations in the UI:

6.3.11: Reverse proxying the dashboard through haproxy:

[root@docker-server4 ~]# vim /etc/haproxy/haproxy.cfg

listen web_port_80

bind 192.168.10.100:80

mode http

log global

balance source

server server1 10.1.83.81:80 check inter 3000 fall 2 rise 5 #IP:PORT on the service network

[root@docker-server4 ~]# systemctl restart haproxy

6.3.12: Testing web access:

6.4: Deploying heapster + influxdb + grafana:

GitHub URL: https://github.com/kubernetes/heapster

6.4.1: Downloading the source package:

Download and unpack the source package.

[root@docker-server1 opt]# wget https://github.com/kubernetes/heapster/archive/v1.5.0.zip

[root@docker-server1 opt]# unzip v1.5.0.zip

6.4.2: Checking the required images:

[root@docker-server1 influxdb]# pwd

/opt/heapster-1.5.0/deploy/kube-config/influxdb

[root@docker-server1 influxdb]# grep image ./*

According to the official YAML files, three images need to be prepared.

6.4.3: Preparing the images:

The latest release is available at https://github.com/kubernetes/heapster/releases. From the previous step, the three required images are heapster-amd64_v1.5.0, heapster-grafana-amd64_v4.4.3 and heapster-influxdb-amd64_v1.3.3.

Heapster has been updated to version 1.5.0 here.

6.4.3.1: Downloading the images:

Downloading from within mainland China requires a proxy to reach gcr.io.

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google-containers/heapster-grafana-amd64:v4.4.3

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google-containers/heapster-influxdb-amd64:v1.3.3

[root@iZ62ovkwaktZ ~]# docker pull gcr.io/google-containers/heapster-amd64:v1.5.0

6.4.3.2: Importing the images:

Download the images on a host outside the firewall, copy them to the local server, and import them into Docker.

[root@docker-server1 k8s-1.8.1-dashborard]# docker load -i heapster-amd64_v1.5.0.tar.gz

[root@docker-server1 k8s-1.8.1-dashborard]# docker load -i heapster-grafana-amd64_v4.4.3.tar.gz

[root@docker-server1 k8s-1.8.1-dashborard]# docker load -i heapster-influxdb-amd64_v1.3.3.tar.gz

[root@docker-server1 k8s-1.8.1-dashborard]# docker tag gcr.io/google-containers/heapster-amd64:v1.5.0 192.168.10.210/web/heapster-amd64:v1.5.0

[root@docker-server1 k8s-1.8.1-dashborard]# docker tag gcr.io/google-containers/heapster-grafana-amd64:v4.4.3 192.168.10.210/web/heapster-grafana-amd64:v4.4.3

[root@docker-server1 k8s-1.8.1-dashborard]# docker tag gcr.io/google-containers/heapster-influxdb-amd64:v1.3.3 192.168.10.210/web/heapster-influxdb-amd64:v1.3.3

6.4.3.3: Pushing to Harbor:

Push each of the tagged images to the Harbor server.

[root@docker-server1 k8s-1.8.1-dashborard]# docker login 192.168.10.210

Username (admin): admin

Password:

Login Succeeded

[root@docker-server1 k8s-1.8.1-dashborard]# docker push 192.168.10.210/web/heapster-amd64:v1.5.0

[root@docker-server1 k8s-1.8.1-dashborard]# docker push 192.168.10.210/web/heapster-grafana-amd64:v4.4.3

[root@docker-server1 k8s-1.8.1-dashborard]# docker push 192.168.10.210/web/heapster-influxdb-amd64:v1.3.3

6.4.3.4: Verifying the images in Harbor:

6.4.4: Preparing the YAML files:

There are three YAML files in total.

6.4.4.1: Preparing grafana.yaml:

[root@docker-server1 influxdb]# pwd

/opt/heapster-1.5.0/deploy/kube-config/influxdb

[root@docker-server1 influxdb]# cat grafana.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 192.168.10.210/web/heapster-grafana-amd64:v4.4.3
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
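As the comments in the file note, with GF_SERVER_ROOT_URL left at / the grafana UI is normally exposed via a LoadBalancer/NodePort or reached through the kube-apiserver proxy. For reference, the proxy URL has the following form; the apiserver address 192.168.10.205 and insecure port 8080 are assumptions for this environment:

# Grafana through the apiserver proxy (illustrative; adjust apiserver host/port)
curl -I http://192.168.10.205:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/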

6.4.4.2: Prepare the heapster.yaml file:

[root@docker-server1 influxdb]# cat heapster.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.10.210/web/heapster-amd64:v1.5.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://kubernetes.default?inClusterConfig=false&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
        #- --sink=influxdb:http://monitoring-influxdb:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

6.4.4.3: Prepare the influxdb.yaml file:

[root@docker-server1 influxdb]# cat influxdb.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 192.168.10.210/web/heapster-influxdb-amd64:v1.3.3
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

6.4.5: Create and verify the services:

6.4.5.1: Create the services:

[root@docker-server1 influxdb]# kubectl create -f grafana.yaml -f heapster.yaml -f influxdb.yaml

deployment "monitoring-grafana" created

service "monitoring-grafana" created

serviceaccount "heapster" created

deployment "heapster" created

service "heapster" created

deployment "monitoring-influxdb" created

service "monitoring-influxdb" created

6.4.5.2: Verify the service status:

[root@docker-server1 influxdb]# kubectl get service --all-namespaces

6.4.5.3: Verify the pod status:

[root@docker-server1 influxdb]# kubectl get pod --all-namespaces
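Once the heapster pod is Running, the metrics pipeline can also be checked from the command line; in this kubernetes version heapster is what serves the data behind kubectl top, so the following is a reasonable sanity check (it may take a minute or two after startup before data appears):

kubectl top node                     # per-node CPU/memory reported via heapster
kubectl top pod -n kube-system       # per-pod usage in the kube-system namespace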

6.4.5.4: Verify via the web interface:

Once heapster is collecting data, the kubernetes dashboard will additionally display CPU and memory usage graphs for nodes and pods, and grafana can be opened to view the cluster and pod dashboards.

7: kubernetes deployment example:

Image planning:

Images should be built in layers following the pattern system base image -> business platform base image -> business image: first build a custom CentOS base image from the official CentOS image, then build custom service platform images on top of it, such as images for the various JDK versions or an Nginx base image, and finally, based on those platform base images, build the custom business images used in production.

Advantage: with a layered design, each image layer can be reused as much as possible, which saves time when building the business images.

Disadvantage: when a lower-layer image is updated, the images built on top of it must be rebuilt for the change to take effect. On balance, the advantages outweigh the disadvantages.
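The layer reuse described above can be observed once the three images in this chapter are built, for example (image name taken from the build steps below):

# The lower layers of the business image are shared with the centos base and JDK images,
# so only the tomcat/app layers are rebuilt when the business image changes.
docker history 192.168.10.210/images/centos7.2.1511-tomcat7-79-online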

7.1: Build the tomcat business image:

7.1.1: Build a custom centos base image:

The image is built on top of the official CentOS 7.2.1511 image, so the CentOS base image must be downloaded first. The default tag is latest, i.e. the current newest version; to download a specific version, the tag must be given explicitly.

[root@docker-server1 ~]# docker pull centos:7.2.1511

7.2.1511: Pulling from library/centos

f2d1d709a1da: Pull complete

Digest: sha256:7c47810fd05ba380bd607a1ece3b4ad7e67f5906b1b981291987918cb22f6d4d

Status: Downloaded newer image for centos:7.2.1511

7.1.1.1: Create the Dockerfile directories:

Create the directories according to the business structure, using a layered directory layout.

[root@docker-server1 opt]# mkdir -pv /opt/dockerfile/system/{centos,redhat,ubuntu}

[root@docker-server1 opt]# mkdir -pv /opt/dockerfile/web/{nginx/boss/{nginx-pre,nginx-online},jdk/{jdk7,jdk6},tomcat/boss/{tomcat-pre,tomcat-online}}

7.1.1.2: Create the Dockerfile:

The Dockerfile is the key input for building a docker image; it defines the packages to install, the files to add, and the other build steps. Install whatever additional packages the business requires.

[root@docker-server1 ~]# cd /opt/dockerfile/system/centos

[root@docker-server1 centos]# vim Dockerfile

#Centos Base Image

FROM docker.io/centos:7.2.1511

MAINTAINER zhangshijie "zhangshijie@300.cn"

RUN useradd -u 2000 www

RUN rm -rf /etc/yum.repos.d/*

RUN yum clean all

ADD *.repo /etc/yum.repos.d/

RUN yum makecache

RUN yum install -y vim wget tree pcre pcre-devel gcc gcc-c++ zlib zlib-devel openssl openssl-devel iproute net-tools iotop unzip zip iproute ntpdate nfs-utils tcpdump
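Each RUN instruction above creates a separate image layer, which keeps the file easy to read. If fewer layers are preferred, the two yum instructions after the ADD line could be merged into one with &&; this is only a sketch of the idea, not the Dockerfile used in this document:

RUN yum makecache && yum install -y vim wget tree pcre pcre-devel gcc gcc-c++ zlib zlib-devel openssl openssl-devel iproute net-tools iotop unzip zip ntpdate nfs-utils tcpdump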

7.1.1.3: Prepare the attached files:

Any file added to the image with the ADD or COPY instruction must be prepared in advance.

[root@docker-server1 centos]# wget http://mirrors.aliyun.com/repo/Centos-7.repo

[root@docker-server1 centos]# wget http://mirrors.aliyun.com/repo/epel-7.repo

7.1.1.4: Run the build command:

[root@docker-server1 centos]# docker build -t 192.168.10.210/images/centos7.2.1511-base .

#It is recommended to save each image's build command as a script in the current directory for later reuse, for example:

[root@docker-server1 centos]# cat build-command.sh

#!/bin/bash

docker build -t 192.168.10.210/images/centos7.2.1511-base .

#Build started:

#Build in progress:

#Build finished:

The build only succeeded if it ends with a "Successfully built ..." message; sometimes an error will cause the build to fail and exit partway through.

7.1.1.5: Verify the base image:
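The verification screenshot is not reproduced here; listing the local images is the equivalent check (the tag is latest because no tag was specified at build time):

docker images | grep centos7.2.1511-base
# Expect 192.168.10.210/images/centos7.2.1511-base with tag latest in the output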

7.1.1.6: Test the base image:

[root@docker-server1 centos]# docker run -it --rm 192.168.10.210/images/centos7.2.1511-base /bin/bash

7.1.1.7: Upload to the harbor server:

Make sure you have permission to push to the target project on the harbor server; pushing images to harbor makes image distribution much easier.

[root@docker-server1 centos]# docker push 192.168.10.210/images/centos7.2.1511-base

7.1.1.8: Test pulling the image from another server:

[root@docker-server4 ~]# docker pull 192.168.10.210/images/centos7.2.1511-base

7.1.2: Build the base JDK image:

The JDK (Java Development Kit) is the software development kit for the Java language, and Java applications still account for a large share of workloads. The JDK image, like the Nginx and other platform images, is built on top of the CentOS 7.2.1511 base image from the previous step, so that base image must already provide the packages and other prerequisites this step needs.

7.1.2.1: Change to the target directory:

Image builds must follow a standardized directory layout and hierarchy; otherwise, as time goes on, the builds become hard to keep organized.

[root@docker-server1 ~]# cd /opt/dockerfile/web/jdk/jdk7/

7.1.2.2: Edit the Dockerfile:

#JDK Image, For java version "1.7.0_79"

FROM 192.168.10.210/images/centos7.2.1511-base:latest

MAINTAINER zhangshijie "zhangshijie@300.cn"

ADD jdk-7u79-linux-x64.tar.gz /usr/local/src

RUN ln -sv /usr/local/src/jdk1.7.0_79 /usr/local/jdk

RUN rm -rf /usr/local/src/jdk-7u79-linux-x64.tar.gz

ENV JAVA_HOME /usr/local/jdk

ENV JRE_HOME $JAVA_HOME/jre

ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/

ENV PATH $PATH:$JAVA_HOME/bin

RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone

7.1.2.3: Prepare the attached files:

[root@docker-server1 jdk7]# ll jdk-7u79-linux-x64.tar.gz

-rw-r--r-- 1 root root 153512879 Oct 26 2016 jdk-7u79-linux-x64.tar.gz

7.1.2.4: Run the build command:

[root@docker-server1 jdk7]# cat build-command.sh

#!/bin/bash

docker build -t 192.168.10.210/images/centos7.2.1511-jdk-7_79 .

7.1.2.5: Verify the image:

7.1.2.6: Test the image:

[root@docker-server1 jdk7]# docker run -it --rm 192.168.10.210/images/centos7.2.1511-jdk-7_79 bash
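Inside the test container, the JDK and the environment variables defined in the Dockerfile can be verified, for example:

# Run inside the container started above
java -version        # should report java version "1.7.0_79"
echo $JAVA_HOME      # should print /usr/local/jdk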

7.1.2.7: Upload to the harbor server:

[root@docker-server1 jdk7]# docker push 192.168.10.210/images/centos7.2.1511-jdk-7_79

7.1.2.8: Test pulling the image from another server:

[root@docker-server4 ~]# docker pull 192.168.10.210/images/centos7.2.1511-jdk-7_79

7.1.3: Build the Tomcat business image:

Based on the JDK image, build images for the different business applications, for example different tomcat versions, different APP directories, and different application environments.

7.1.3.1: Change to the target directory:

[root@docker-server1 tomcat-online]# pwd

/opt/dockerfile/web/tomcat/boss/tomcat-online

7.1.3.2: Edit the Dockerfile:

[root@docker-server1 tomcat-online]# cat Dockerfile

#Tomcat Image

FROM 192.168.10.210/images/centos7.2.1511-jdk-7_79

MAINTAINER zhangshijie "zhangshijie@300.cn"

#env setting

ADD profile /etc/profile

ENV TZ "Asia/Shanghai"

ENV LANG en_US.UTF-8

ENV TERM xterm

ENV TOMCAT_MAJOR_VERSION 7

ENV TOMCAT_MINOR_VERSION 7.0.79

ENV CATALINA_HOME /apps/tomcat

ENV APP_DIR ${CATALINA_HOME}/webapps

#tomcat setting

RUN mkdir /apps

RUN mkdir /usr/local/boss

RUN mkdir /usr/ftp_home

ADD apache-tomcat-7.0.68.tar.gz /apps

RUN ln -sv /apps/apache-tomcat-7.0.68 /apps/tomcat

ADD run_tomcat.sh ${CATALINA_HOME}/bin/

ADD catalina.sh ${CATALINA_HOME}/bin/

ADD test.tar.gz /apps/tomcat/webapps/

#ADD invoice.war /tmp

#RUN unzip /tmp/invoice.war -d /apps/tomcat/webapps/invoice

#RUN rm -rf /tmp/invoice.war

#ADD config_runmode.properties /usr/local/boss/

RUN chmod a+x ${CATALINA_HOME}/bin/*.sh

RUN chown www.www /apps /usr/local/boss /usr/ftp_home -R

CMD ["/apps/tomcat/bin/run_tomcat.sh"]

EXPOSE 8080 8009

7.1.3.3: Prepare the attached files:

7.1.3.3.1: Tomcat foreground startup script:

[root@docker-server1 tomcat-online]# vim run_tomcat.sh

#!/bin/bash

#Add the host entry and DNS server the application depends on (adjust to your environment)
echo "10.20.3.8 www.test.com" >> /etc/hosts

echo "nameserver 202.106.0.20" > /etc/resolv.conf

#Start rpcbind (required for NFS mounts)
rpcbind

#Run tomcat in the foreground as the www user so the container keeps running
su - www -c "/apps/tomcat/bin/catalina.sh run"

7.1.3.3.2: Prepare tomcat and the app:

[root@docker-server1 tomcat-online]# mkdir test

[root@docker-server1 tomcat-online]# echo "Docker test page" > test/index.html

[root@docker-server1 tomcat-online]# tar czvf test.tar.gz test/

7.1.3.4: Run the build command:

[root@docker-server1 tomcat-online]# cat build-command.sh

#!/bin/bash

docker build -t 192.168.10.210/images/centos7.2.1511-tomcat7-79-online .

#Build finished:

7.1.3.5: Verify the image:

7.1.3.6: Test starting a container from the image:

7.1.3.6.1: Container starting:

7.1.3.6.2: Container started:

7.1.3.6.3: Test access via the mapped host port:

7.1.3.6.4: Test access to the app:
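The screenshots for starting and testing the container are not reproduced here; the same test can be done from the command line. The host port mapping 8080:8080 and the container name below are assumptions for illustration:

# Start a test container from the business image, mapping container port 8080 to the host
docker run -d --name tomcat-online-test -p 8080:8080 192.168.10.210/images/centos7.2.1511-tomcat7-79-online
docker ps | grep tomcat-online-test
# Test the mapped host port and the test app added from test.tar.gz in the Dockerfile
curl -I http://127.0.0.1:8080/
curl http://127.0.0.1:8080/test/index.html    # should return "Docker test page"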

7.1.3.7: Push the image to harbor:

[root@docker-server1 tomcat-online]# docker push 192.168.10.210/images/centos7.2.1511-tomcat7-79-online

7.2: Build the Nginx business image:

7.2.1: Build a custom Nginx base image:

7.2.1.1: Change to the target directory:

[root@docker-server1 nginx-online]# pwd

/opt/dockerfile/web/nginx/boss/nginx-online
