Introduction
- Rook website: https://rook.io
- Rook is an incubating-level project of the Cloud Native Computing Foundation (CNCF).
- Rook is an open-source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments.
- As for Ceph itself, its website is here: https://ceph.com/
- I have never managed to get Ceph's official Helm deployment to work, so I switched to the approach provided by Rook.
Original note on Youdao: http://note.youdao.com/noteshare?id=281719f1f0374f787effc90067e0d5ad&sub=0B59EA339D4A4769B55F008D72C1A4C0
Environment
centos 7.5
kernel 4.18.7-1.el7.elrepo.x86_64
docker 18.06
kubernetes v1.12.2
Deployed with kubeadm:
Network: canal
DNS: coredns
Cluster members:
192.168.1.1 kube-master
192.168.1.2 kube-node1
192.168.1.3 kube-node2
192.168.1.4 kube-node3
192.168.1.5 kube-node4
Every node has a 200 GB disk prepared for storage: /dev/sdb
Preparation
cat <<EOF > /etc/sysctl.d/ceph.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Deploy the Operator
# Give it a moment after running apply.
# The operator creates two pods on every host in the cluster: rook-discover and rook-ceph-agent
kubectl -n rook-ceph-system get pod -o wide
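The apply step referenced above is not shown in the post. Assuming the Rook v0.8 release that this tutorial's docs link to, it is presumably the operator.yaml from the project's example manifests, fetched roughly like this (the tag and path are from the v0.8 branch layout and may differ for other versions):

```
# Assumption: Rook v0.8.x example manifests; verify the tag and path for your version
git clone --branch v0.8.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
# Creates the rook-ceph-system namespace, the CRDs, and the operator deployment
kubectl apply -f operator.yaml
```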
Label the nodes
- Label the nodes that will run ceph-mon: ceph-mon=enabled
kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-mon=enabled
- Label the nodes that will run ceph-osd (i.e. the storage nodes): ceph-osd=enabled
kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-osd=enabled
- Label the nodes that will run ceph-mgr: ceph-mgr=enabled
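The mgr labeling command is missing from the original. Following the same pattern as the mon/osd commands, it would look like the following; the choice of kube-node1 is my assumption, since a single mgr node is enough:

```
# Hypothetical node choice: any one node suffices for a single ceph-mgr
kubectl label nodes kube-node1 ceph-mgr=enabled
```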
Configure the cluster.yaml file
apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
---
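The snippet above only covers the namespace and RBAC objects; the Cluster object itself is cut off. A sketch of the missing part, assuming the Rook v0.8 `ceph.rook.io/v1beta1` schema and the node labels applied earlier — the field values here are illustrative, not the author's exact file. Using `useAllNodes: true` plus label-based placement is what makes the later "add/remove a node by relabeling and re-applying" workflow function:

```
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  dashboard:
    enabled: true
  mon:
    count: 3
    allowMultiplePerNode: false
  # Pin daemons to the nodes we labeled earlier
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values: ["enabled"]
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values: ["enabled"]
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values: ["enabled"]
  storage:
    useAllNodes: true
    useAllDevices: false
    # Only consume the 200 GB disk prepared on each node
    deviceFilter: sdb
```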
Deploy Ceph
kubectl apply -f cluster.yaml
# The cluster resources are created in the rook-ceph namespace
# Watch the pods in that namespace and you will see them being created in order
kubectl -n rook-ceph get pod -o wide -w
# Wait until every pod is Running
# Note which hosts the pods land on: they match the nodes we labeled
kubectl -n rook-ceph get pod -o wide
Configure the Ceph dashboard
kubectl -n rook-ceph get service
# You can see the dashboard listens on port 8443
- Create a NodePort service so the dashboard is reachable from outside the cluster
kubectl apply -f dashboard-external-https.yaml
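The dashboard-external-https.yaml being applied here is not listed in the post. In the Rook v0.8 examples it is roughly the following NodePort service; treat this as a sketch reproduced from the v0.8 examples rather than the author's exact file:

```
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  type: NodePort
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  # Route to the mgr pod, which serves the dashboard
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
```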
MGR_POD=`kubectl get pod -n rook-ceph | grep mgr | awk '{print $1}'`
kubectl -n rook-ceph logs $MGR_POD | grep password
- Point a browser at any node's IP plus the NodePort
- In my case that is: https://192.168.1.2:30290
Configure Ceph as a StorageClass
- The project ships a sample file: storageclass.yaml
- That file uses RBD block storage
- Pool CRD details: https://rook.io/docs/rook/v0.8/ceph-pool-crd.html
apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
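The sample above is cut off after `metadata:`. For reference, a complete v0.8-style storageclass.yaml looks roughly like this; the pool name and the `blockPool`/`clusterNamespace` parameters are assumptions based on the v0.8 docs, and the StorageClass is named `ceph` to match the `storageClassName` used by the PVC later in this post:

```
apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  # Keep three replicas of every object
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph
# Rook v0.8 flex-volume provisioner for RBD block storage
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs
```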
kubectl apply -f storageclass.yaml
kubectl get storageclasses.storage.k8s.io -n rook-ceph
kubectl describe storageclasses.storage.k8s.io -n rook-ceph
cat << EOF > nginx.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    name: nginx-port
    targetPort: 80
    protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /html
          name: http-file
      volumes:
      - name: http-file
        persistentVolumeClaim:
          claimName: nginx-pvc
EOF
kubectl apply -f nginx.yaml
kubectl get pv,pvc
# check that the nginx pod is running as well
kubectl get pod
kubectl delete -f nginx.yaml
kubectl get pv,pvc
Add a new OSD to the cluster
kubectl label nodes kube-node4 ceph-osd=enabled
kubectl apply -f cluster.yaml
# Watch the rook-ceph namespace: the cluster automatically brings kube-node4 in
kubectl -n rook-ceph get pod -o wide -w
kubectl -n rook-ceph get pod -o wide
# run lsblk on kube-node4 to confirm that /dev/sdb is now in use
lsblk
Remove a node
kubectl label nodes kube-node3 ceph-osd-
kubectl apply -f cluster.yaml
# Watch the rook-ceph namespace
kubectl -n rook-ceph get pod -o wide -w
kubectl -n rook-ceph get pod -o wide
# Finally, remember to delete the /var/lib/rook directory on the removed host
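A sketch of that host-side cleanup, run on the removed node. Wiping the disk is my addition and is only needed if you intend to reuse /dev/sdb for a fresh OSD later; double-check the device name before running it:

```
# On the removed node: delete Rook's on-host state
rm -rf /var/lib/rook

# Optional: wipe the old OSD disk so it can be reused (DESTROYS ALL DATA on /dev/sdb)
sgdisk --zap-all /dev/sdb
dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct
```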
Source: https://www.cnblogs.com/lvcisco/p/12751947.html