Running MySQL in a Kubernetes environment is not hard in itself: the simplest approach is to find a MySQL Docker image and run it. Making it usable in production, however, requires solving several problems, so this article does not walk through the whole process in detail and instead focuses on a few of the difficult points.
Kubernetes cluster storage (PVs) supports both Static and Dynamic provisioning. Dynamic provisioning creates storage volumes on demand. With the earlier static approach, a cluster administrator had to manually call the cloud/storage provider's API to provision a new fixed-size image volume, then create a PV object so it could be claimed and used in Kubernetes. Dynamic provisioning automates both of these steps: instead of the administrator pre-provisioning storage, the provisioner specified by a StorageClass object provisions storage dynamically.
cat rbd-storage-class.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.222.78.12:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret-admin
```
cat rbd-dyn-pv-claim.yaml

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-dyn-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rbd
  resources:
    requests:
      storage: 1Gi
```
rbd-dyn-pvc-pod1.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: rbd-dyn-pvc-pod
  name: ceph-rbd-dyn-pv-pod1
spec:
  containers:
    - name: ceph-rbd-dyn-pv-busybox1
      image: busybox
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-dyn-rbd-vol1
          mountPath: /mnt/ceph-dyn-rbd-pvc/busybox
          readOnly: false
  volumes:
    - name: ceph-dyn-rbd-vol1
      persistentVolumeClaim:
        claimName: ceph-rbd-dyn-pv-claim
```
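Given the three files above, the objects can be created and the dynamically provisioned volume checked roughly as follows (a sketch; it assumes a cluster with the rbd provisioner reachable and the Ceph admin secret already created):

```shell
kubectl create -f rbd-storage-class.yaml
kubectl create -f rbd-dyn-pv-claim.yaml
kubectl create -f rbd-dyn-pvc-pod1.yaml

# The PVC should move to Bound once the rbd provisioner has created the image,
# and a matching PV should appear automatically
kubectl get pvc ceph-rbd-dyn-pv-claim
kubectl get pv
```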
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Run multiple MySQL Pods with StatefulSets; the first Pod is the Master and the others are Slaves:
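As a minimal sketch of that layout (the name, image, replica count, and storage size below are illustrative assumptions, not taken from the original deployment), a StatefulSet that gives each MySQL Pod a stable identity and its own dynamically provisioned RBD volume might look like:

```yaml
# Illustrative sketch only: name, image, replicas, and storage size are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # headless Service giving each Pod a stable DNS name
  replicas: 3                 # mysql-0 acts as Master, mysql-1..n as Slaves
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:       # one PVC per Pod, provisioned via the rbd StorageClass
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: rbd
        resources:
          requests:
            storage: 1Gi
```

Because the StatefulSet orders Pod creation, mysql-0 is always started first, which is what makes the "first Pod is the Master" convention workable.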
Currently, Kubernetes offers three main ways to expose a service outside the cluster:
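As one common example (a hedged sketch; the Service name and port values are assumptions), a NodePort Service exposes MySQL on a fixed port of every node:

```yaml
# Illustrative sketch: name, selector, and nodePort are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: mysql-external
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306   # MySQL becomes reachable as <anyNodeIP>:30306
```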
What is the scope for kubeadm?
We want kubeadm to be a common set of building blocks for all Kubernetes deployments; the piece that provides secure and recommended ways to bootstrap Kubernetes. Since there is no one true way to set up Kubernetes, kubeadm will support more than one method for each phase. We want to identify the phases every deployment of Kubernetes has in common and make configurable and easy-to-use kubeadm commands for those phases. If your organization, for example, requires that you distribute the certificates in the cluster manually or in a custom way, skip using kubeadm just for that phase. We aim to keep kubeadm usable for all other phases in that case. We want you to be able to pick which things you want kubeadm to do and let you do the rest yourself.
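Concretely, kubeadm exposes its phases as subcommands (a sketch; the exact set of subcommands varies by kubeadm version), so an individual phase such as certificate generation can be run on its own, or done manually and effectively skipped:

```shell
# Run only the certificate phase; alternatively, place your own certificates
# in /etc/kubernetes/pki and kubeadm will reuse them instead of generating new ones
kubeadm init phase certs all

# Then run the full bootstrap; existing artifacts from completed phases are reused
kubeadm init
```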
Reprinted from: http://geuyo.baihongyu.com/