Chapter 22: Kubernetes in the Enterprise

1.1 Introduction to Kubernetes and Its Concepts

Kubernetes (k8s) is an open-source platform for automating container operations, including deployment, scheduling, and scaling across clusters of nodes. If you have deployed containers with Docker, you can think of Docker as a lower-level component that Kubernetes drives internally. Kubernetes supports not only Docker but also Rocket (rkt), another container runtime. With Kubernetes you can:

  • automate container deployment and replication;
  • scale the number of containers up or down at any time;
  • organize containers into groups and load-balance traffic between them;
  • easily roll out new versions of application containers;
  • get container resilience: if a container fails, it is replaced automatically.

1.2 Kubernetes Platform Components

A Kubernetes cluster contains two main kinds of nodes: master nodes and minion nodes. Minion nodes are where the Docker containers run; they interact with the Docker daemon on the node and provide proxy functionality.

  • Master: exposes the set of APIs for managing the cluster and carries out cluster operations by interacting with the minion nodes.
  • Apiserver: the entry point for users interacting with the Kubernetes cluster; it wraps create/read/update/delete operations on the core objects behind a RESTful API, and uses etcd for persistence and object consistency.
  • Scheduler: responsible for scheduling and managing cluster resources; for example, when a pod exits abnormally and must be re-placed, the scheduler applies its scheduling algorithm to find the most suitable node.
  • Controller-manager: keeps the number of running pods equal to the replica count declared by a replication controller, and keeps the mapping from services to pods up to date.
  • Kubelet: runs on each minion node and interacts with Docker on that node, starting and stopping containers and monitoring their running state.
  • Proxy: runs on each minion node and provides the proxy for pods; it periodically fetches service information from etcd and rewrites iptables rules accordingly to forward traffic to the node hosting the target pod. (Early versions forwarded traffic through the proxy process itself, which was less efficient.)
  • Etcd: a distributed, consistent key-value store used for service registration/discovery and shared configuration; Kubernetes stores its cluster state in it. As a highly available, strongly consistent discovery and storage service, etcd addresses the cloud-era problems of letting services join a compute cluster quickly and transparently, letting shared configuration be discovered by every machine in the cluster, and doing so with a store that is highly available, secure, easy to deploy, and fast to respond.
  • Flannel: an overlay-network tool designed by the CoreOS team for Kubernetes. Flannel re-plans IP address allocation for all nodes in the cluster so that containers on different nodes receive non-overlapping addresses in one flat internal network, and containers on different nodes can talk to each other directly over those internal IPs.

1.3 Kubernetes Single-Node Deployment in Practice

Deploying the Kubernetes platform requires at least two servers. This section uses two:

Kubernetes Master node: 192.168.0.141

Kubernetes Node1 node: 192.168.0.142

Run the following commands on every server (disable the firewall and synchronize the clocks):

systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd

1.4 Installing and Configuring the Kubernetes Master

On the master node, install etcd, Kubernetes, and the Flannel network:

yum install kubernetes-master etcd flannel -y

Write the master's /etc/etcd/etcd.conf configuration file:

cat>/etc/etcd/etcd.conf<<EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.141:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.141:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.141:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.141:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.141:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

mkdir -p /data/etcd/;chmod 757 -R /data/etcd/

Start the etcd service:

systemctl restart etcd.service
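With etcd running, it is worth a quick sanity check before layering Kubernetes on top. The commands below are a sketch using the etcd v2 etcdctl shipped with this package generation, assuming etcd is reachable on the client URL configured above:

```shell
# Check member health and do a simple write/read round trip against etcd.
etcdctl --endpoints http://192.168.0.141:2379 cluster-health
etcdctl --endpoints http://192.168.0.141:2379 set /sanity/check ok
etcdctl --endpoints http://192.168.0.141:2379 get /sanity/check
etcdctl --endpoints http://192.168.0.141:2379 rm /sanity/check
```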

Write the master's /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.141:8080"
EOF

These settings tell the Kubernetes controller-manager, scheduler, and proxy processes where to find the apiserver.
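Why editing this file takes effect: the RPM-packaged systemd units load it via EnvironmentFile and splice the KUBE_* variables into each daemon's command line. A sketch of the pattern (the unit path and ExecStart line below are illustrative, not copied from the package):

```ini
; e.g. /usr/lib/systemd/system/kube-scheduler.service (illustrative excerpt)
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_MASTER $KUBE_SCHEDULER_ARGS
```

This is why the same /etc/kubernetes/config file is shared by several services while each service keeps its own per-daemon file.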

Write the master's /etc/kubernetes/apiserver configuration file:

cat>/etc/kubernetes/apiserver<<EOF
# kubernetes system config
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.141:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
EOF

Start and enable the etcd, apiserver, controller-manager, and scheduler services on the master node, and check their status:

for I in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

1.5 Installing and Configuring the Kubernetes Node

On the Node1 node, install flannel, docker, and Kubernetes:

yum install kubernetes-node docker flannel *rhsm* -y

Write the /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.141:8080"
EOF

Write the /etc/kubernetes/kubelet configuration file:

cat>/etc/kubernetes/kubelet<<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.142"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.141:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF

for I in kube-proxy kubelet docker
do
systemctl restart $I
systemctl enable $I
done
iptables -P FORWARD ACCEPT

Start the kube-proxy, kubelet, docker, and flanneld services on the Kubernetes Node and check their status.

On the master node, run kubectl get nodes to list the node that has joined the Kubernetes cluster:

kubectl get nodes

At this point, the Kubernetes cluster environment is up.

1.6 Kubernetes Multi-Node Deployment in Practice

Deploying the Kubernetes platform requires at least two servers; this section uses four, including one Docker registry:

Kubernetes Master node: 192.168.0.141

Kubernetes Node1 node: 192.168.0.142

Kubernetes Node2 node: 192.168.0.143

Docker private registry node: 192.168.0.123

Run the following commands on every server:

systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd

1.7 Installing and Configuring the Kubernetes Master

On the master node, install etcd, Kubernetes, and the Flannel network:

yum install kubernetes-master etcd flannel -y

Write the master's /etc/etcd/etcd.conf configuration file:

cat>/etc/etcd/etcd.conf<<EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.141:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.141:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.141:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.141:2380,etcd2=http://192.168.0.142:2380,etcd3=http://192.168.0.143:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.141:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

mkdir -p /data/etcd/;chmod 757 -R /data/etcd/

Start the etcd service:

systemctl restart etcd.service

Note that with a three-member ETCD_INITIAL_CLUSTER, etcd on the master may not report healthy until the members on Node1 and Node2 (sections 1.9 and 1.12) are started as well.
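ETCD_INITIAL_CLUSTER must list every member as name=peer-URL pairs, and the value has to be identical on all three machines. As a sanity check, the value can be derived from the node list; a small POSIX-shell sketch (the `members` variable is illustrative):

```shell
# Build an ETCD_INITIAL_CLUSTER value from "name=IP" pairs.
members="etcd1=192.168.0.141 etcd2=192.168.0.142 etcd3=192.168.0.143"
cluster=""
for m in $members; do
  name=${m%%=*}   # part before the first '='
  ip=${m#*=}      # part after the first '='
  cluster="${cluster:+$cluster,}${name}=http://${ip}:2380"
done
echo "ETCD_INITIAL_CLUSTER=\"$cluster\""
```

The printed value should match the line in the configuration above exactly; any mismatch between members is a common cause of a cluster that never becomes healthy.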

Write the master's /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.141:8080"
EOF

These settings tell the Kubernetes controller-manager, scheduler, and proxy processes where to find the apiserver.

Write the master's /etc/kubernetes/apiserver configuration file:

cat>/etc/kubernetes/apiserver<<EOF
# kubernetes system config
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.141:2379,http://192.168.0.142:2379,http://192.168.0.143:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
EOF

Start and enable the etcd, apiserver, controller-manager, and scheduler services on the master node, and check their status:

for I in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $I
systemctl enable $I
systemctl status $I
done

1.8 Installing and Configuring Kubernetes Node1

On the Node1 node, install flannel, etcd, docker, and Kubernetes:

yum install kubernetes-node etcd docker flannel *rhsm* -y

1.9 Node1 etcd Configuration

Write the Node1 /etc/etcd/etcd.conf configuration file (this etcd member is also what flannel on the node will read from):

cat>/etc/etcd/etcd.conf<<EOF
##########
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.142:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.142:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.142:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.141:2380,etcd2=http://192.168.0.142:2380,etcd3=http://192.168.0.143:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.142:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

mkdir -p /data/etcd/;chmod 757 -R /data/etcd/;service etcd restart

This configuration tells the flannel process where the etcd service lives and under which etcd key the network configuration is stored.
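The flannel-to-etcd wiring itself is worth showing concretely: flanneld reads its etcd endpoint and key from /etc/sysconfig/flanneld, and expects a network range to have been published under that key. A sketch assuming the CentOS flannel package defaults (the variable names and the /atomic.io/network key vary across flannel package versions, and the 172.17.0.0/16 range is only an example):

```shell
# On each node: point flanneld at the etcd cluster.
cat>/etc/sysconfig/flanneld<<EOF
FLANNEL_ETCD="http://192.168.0.141:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"
EOF

# Once, on the master: publish the overlay network range flanneld will carve up.
etcdctl --endpoints http://192.168.0.141:2379 set /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

# Restart flanneld, then docker so it picks up the flannel subnet.
systemctl restart flanneld docker
```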

1.10 Node1 Kubernetes Configuration

Write the /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.141:8080"
EOF

Write the /etc/kubernetes/kubelet configuration file:

cat>/etc/kubernetes/kubelet<<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.142"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.141:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF

for I in kube-proxy kubelet docker
do
systemctl restart $I
systemctl enable $I
done
iptables -P FORWARD ACCEPT

Start the kube-proxy, kubelet, docker, and flanneld services on the Kubernetes Node and check their status.

1.11 Installing and Configuring Kubernetes Node2

On the Node2 node, install flannel, etcd, docker, and Kubernetes:

yum install kubernetes-node etcd docker flannel *rhsm* -y

1.12 Node2 etcd Configuration

Write the Node2 /etc/etcd/etcd.conf configuration file:

cat>/etc/etcd/etcd.conf<<EOF
##########
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.143:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.143:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.143:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.141:2380,etcd2=http://192.168.0.142:2380,etcd3=http://192.168.0.143:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.143:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

mkdir -p /data/etcd/;chmod 757 -R /data/etcd/;service etcd restart

This configuration tells the flannel process where the etcd service lives and under which etcd key the network configuration is stored.

1.13 Node2 Kubernetes Configuration

Write the /etc/kubernetes/config configuration file:

cat>/etc/kubernetes/config<<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.141:8080"
EOF

Write the /etc/kubernetes/kubelet configuration file:

cat>/etc/kubernetes/kubelet<<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.143"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.141:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF

for I in kube-proxy kubelet docker
do
systemctl restart $I
systemctl enable $I
done
iptables -P FORWARD ACCEPT

Start the kube-proxy, kubelet, docker, and flanneld services on each Kubernetes Node and check their status.

On the master node, run kubectl get nodes to see the two Node nodes that have joined the Kubernetes cluster:

kubectl get nodes

At this point, the Kubernetes cluster environment is up.

1.14 Kubernetes Dashboard UI in Practice

The core job of Kubernetes is the unified management and scheduling of a cluster of Docker containers. The cluster and its nodes are usually driven from the command line, which is inconvenient; a visual UI makes management and maintenance much easier.

The complete dashboard setup follows. On the Node nodes, first import two images (available from the book's cloud drive):

  • pod-infrastructure
  • kubernetes-dashboard-amd64

Import the Docker images with the following commands:

  • Run docker load <pod-infrastructure.tgz, then retag the imported pod image:

docker tag $(docker images|grep none|awk '{print $3}') registry.access.redhat.com/rhel7/pod-infrastructure

  • Run docker load <kubernetes-dashboard-amd64.tgz, then retag the imported dashboard image:

docker tag $(docker images|grep none|awk '{print $3}') bestwu/kubernetes-dashboard-amd64:v1.6.3
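The grep/awk pipeline above simply extracts the IMAGE ID from the `<none>` row that docker load leaves behind. A self-contained illustration of the same extraction against hypothetical `docker images` output (the image IDs below are made up):

```shell
# Hypothetical `docker images` output: a freshly loaded image shows up with
# <none> as both repository and tag until it is retagged.
sample='REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> abc123def456 2 days ago 205MB
centos 7 0f3e07c0138f 3 weeks ago 220MB'

# Same extraction as `docker images | grep none | awk '{print $3}'`:
id=$(printf '%s\n' "$sample" | grep none | awk '{print $3}')
echo "$id"
```

Note that this works only while exactly one `<none>` image is present; if several images loaded without tags, the pipeline would print multiple IDs and the docker tag command would fail.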

Then, on the master, create dashboard-controller.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: bestwu/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.0.141:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

Create dashboard-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Create the dashboard Deployment and Service:

kubectl create -f dashboard-controller.yaml

kubectl create -f dashboard-service.yaml
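Before opening the browser, confirm that the dashboard pod actually started; a failed pull of the pod-infrastructure image is the most common failure at this point. A verification sketch (run on the master; substitute the real pod name where indicated):

```shell
# List the kube-system pods and the dashboard service.
kubectl get pods --namespace=kube-system
kubectl get svc --namespace=kube-system

# If the pod is not Running, inspect its events (replace <pod-name> with the
# name printed by the previous command).
kubectl describe pod <pod-name> --namespace=kube-system
```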

Finally, open http://192.168.0.141:8080/ in a browser, as shown in the figure.