Complete Guide: Installing Kubernetes 1.34 on Ubuntu 22.04

Overview

Ubuntu 22.04 LTS is a popular choice for Kubernetes deployments; its stable kernel and solid package support make it well suited to production. This article walks through installing and configuring Kubernetes 1.34 on Ubuntu 22.04, covering system preparation, component installation, cluster initialization, and day-to-day operations.

Prerequisites

  • At least 3 Ubuntu 22.04 servers (1 master + 2 workers)
  • At least 2 CPU cores and 4GB of RAM per server
  • A stable network connection with outbound internet access to pull images
  • A user with sudo privileges

Server Plan

Hostname       IP Address      Role           Notes
--------------------------------------------------------------
k8s-master     192.168.1.10    Master/Worker  4 cores, 8GB
k8s-node1      192.168.1.11    Worker         2 cores, 4GB
k8s-node2      192.168.1.12    Worker         2 cores, 4GB

System Preparation

1. Set the hostnames

# Master node
sudo hostnamectl set-hostname k8s-master

# Node1
sudo hostnamectl set-hostname k8s-node1

# Node2
sudo hostnamectl set-hostname k8s-node2

2. Configure the hosts file

sudo tee -a /etc/hosts << EOF
192.168.1.10 k8s-master
192.168.1.11 k8s-node1
192.168.1.12 k8s-node2
EOF

3. Configure the firewall

# Disable the firewall (test environments only)
sudo ufw disable

# Or open the required ports instead (10255, the read-only kubelet port,
# is deprecated and no longer needed)
sudo ufw allow 6443/tcp          # kube-apiserver
sudo ufw allow 2379/tcp          # etcd client
sudo ufw allow 2380/tcp          # etcd peer
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
sudo ufw reload

4. Disable swap

# Disable swap temporarily
sudo swapoff -a

# Disable permanently (comment out the swap line)
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Verify
free -h

5. Load kernel modules

# Load the required kernel modules
sudo tee /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl parameters
sudo tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the settings
sudo sysctl --system

6. Install a container runtime (containerd)

# Install containerd
sudo apt-get update
sudo apt-get install -y containerd

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Use systemd as the cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

7. Install Docker (optional)

# Install Docker (note: since Kubernetes 1.24 removed dockershim, Docker is
# not required as the runtime; the containerd setup above is sufficient)
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# Configure Docker
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl restart docker
sudo systemctl enable docker

Installing the Kubernetes Components

1. Add the Kubernetes repository

# Create the keyrings directory and add the GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Refresh the package index
sudo apt-get update

2. Install the Kubernetes components

# Install kubeadm, kubelet, and kubectl
sudo apt-get install -y kubeadm=1.34.0-1.1 kubelet=1.34.0-1.1 kubectl=1.34.0-1.1

# Pin the versions (prevent accidental upgrades)
sudo apt-mark hold kubeadm kubelet kubectl

# Verify the installation
kubeadm version
kubelet --version
kubectl version --client

3. Configure kubectl command completion

# Install bash-completion
sudo apt-get install -y bash-completion

# Enable kubectl completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc

# Apply the changes
source ~/.bashrc

Initializing the Master Node

1. Pre-pull images (optional)

# List the images kubeadm will pull
kubeadm config images list --kubernetes-version v1.34.0

# Pull the images (using the Aliyun mirror to speed up downloads)
kubeadm config images pull \
  --kubernetes-version v1.34.0 \
  --image-repository registry.aliyuncs.com/google_containers

2. Initialize the cluster

# Run the initialization on the master node
sudo kubeadm init \
  --apiserver-advertise-address=192.168.1.10 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.34.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///run/containerd/containerd.sock

# On success, the output looks similar to the following:
# [init] Using Kubernetes version: v1.34.0
# [preflight] Running pre-flight checks
# [preflight] Pulling images required for setting up a Kubernetes cluster
# [preflight] This might take a minute or two, depending on your internet speed
# [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a temporary certificate authority.
# [certificates] Generated front-proxy-ca certificate and key.
# [certificates] Generated front-proxy-client certificate and key.
# [certificates] Generated etcd/ca certificate and key.
# [certificates] Generated etcd/server certificate and key.
# [certificates] Generated etcd/peer certificate and key.
# [certificates] Generated etcd/healthcheck-client certificate and key.
# [certificates] Generated apiserver-etcd-client certificate and key.
# [certificates] Generated apiserver certificate and key.
# [certificates] Generated apiserver-kubelet-client certificate and key.
# [certificates] Generated root CA certificate and key.
# [upload-certs] Uploaded certificates to Secret.
# [mark-control-plane] Marking the node k8s-master as control-plane and adding taints.
# [bootstrap-token] Using token: abcdef.1234567890abcdef
# [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
# [addons] Applied essential addon: CoreDNS
# [addons] Applied essential addon: kube-proxy

# Your Kubernetes control-plane has initialized successfully!

# To start using your cluster, you need to run the following as a regular user:

#   mkdir -p $HOME/.kube
#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Alternatively, if you are the root user, you can run:

#   export KUBECONFIG=/etc/kubernetes/admin.conf

# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
#   https://kubernetes.io/docs/concepts/cluster-administration/addons/

# You can now join any number of the control-plane nodes running the following command on each as root:

#   kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
#       --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
#       --control-plane

# Then you can join any number of worker nodes by running the following on each as root:

#   kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
#       --discovery-token-ca-cert-hash sha256:xxxxxxxxxx

3. Configure kubectl

# Configure kubectl (as a regular user)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Set up an alias
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc

# Verify the configuration
kubectl get nodes
kubectl get pods -n kube-system

4. Install a network plugin

This guide uses Calico as the network plugin; it is stable, reliable, and feature-rich.

# Install Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Verify the installation
kubectl get pods -n kube-system | grep calico

# Wait for all pods to become ready
kubectl wait --for=condition=Ready pods -l k8s-app=calico-node -n kube-system --timeout=300s

Or use Flannel instead:

# Install Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Verify the installation
kubectl get pods -n kube-system | grep flannel

Adding Worker Nodes

1. Run on each worker node

# Run on every worker node (use the command printed during initialization)
sudo kubeadm join 192.168.1.10:6443 \
  --token abcdef.1234567890abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
  --cri-socket=unix:///run/containerd/containerd.sock

2. If you lose the token

# Regenerate the join command on the master node
kubeadm token create --print-join-command

# List all tokens
kubeadm token list

3. Verify node status

# On the master node
kubectl get nodes -o wide

# Show detailed node information
kubectl describe nodes k8s-node1

# View node resource usage (requires metrics-server)
kubectl top nodes

Deploying the Dashboard

1. Install the Dashboard

# Deploy the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Create an admin user
kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Generate an access token
kubectl -n kubernetes-dashboard create token admin-user

# Expose the Dashboard service (use an Ingress in production)
kubectl expose service kubernetes-dashboard -n kubernetes-dashboard --type=NodePort --target-port=8443 --name=dashboard-nodeport

# Check the exposed port
kubectl get svc -n kubernetes-dashboard
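
If you prefer a stable port over a randomly assigned NodePort, an equivalent Service can be created from a manifest instead of kubectl expose. This is a sketch: the Service name and the 32000 port are illustrative choices, while the selector label matches the one used by the v2.7.0 Dashboard manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dashboard-nodeport
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard    # label used by the v2.7.0 Dashboard pods
  ports:
  - port: 443
    targetPort: 8443                 # Dashboard serves HTTPS on 8443
    nodePort: 32000                  # fixed port in the 30000-32767 range
```

Apply it with kubectl apply -f and the Dashboard is then reachable on the same port on every node.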

2. Access the Dashboard

# Option 1: via NodePort
# URL: https://<NodeIP>:<NodePort> (use the port shown by kubectl get svc, e.g. 32000 if pinned)

# Option 2: via kubectl proxy
kubectl proxy

# URL: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# Log in with the Bearer token generated above

Common Commands

1. Pod operations

# Create a pod
kubectl run nginx --image=nginx

# Create a pod with resource limits: the --requests/--limits flags were
# removed from kubectl run, so define resources in a manifest instead
# (see the pod spec in the best-practices section)
kubectl run web --image=nginx

# List pods
kubectl get pods
kubectl get pods -o wide
kubectl get pods -A

# Show pod details
kubectl describe pod nginx

# View pod logs
kubectl logs nginx
kubectl logs nginx -f
kubectl logs nginx --previous

# Delete a pod
kubectl delete pod nginx

# Open a shell inside a pod
kubectl exec -it nginx -- /bin/bash

# Scale the replica count (for pods managed by a Deployment)
kubectl scale deployment nginx --replicas=3

2. Deployment operations

# Create a deployment
kubectl create deployment nginx --image=nginx

# Expose it as a service
kubectl expose deployment nginx --port=80 --type=NodePort

# Update the image
kubectl set image deployment/nginx nginx=nginx:1.25

# Watch the rollout status
kubectl rollout status deployment/nginx

# Roll back the deployment
kubectl rollout undo deployment/nginx

# View rollout history
kubectl rollout history deployment/nginx

# Scale up or down
kubectl scale deployment/nginx --replicas=5

3. Service operations

# List services
kubectl get svc
kubectl get svc -o wide

# Create a service
kubectl expose pod my-pod --port=80 --target-port=8080

# Show service details
kubectl describe service nginx

# Delete a service
kubectl delete service nginx

4. Namespace operations

# List namespaces
kubectl get ns

# Create a namespace
kubectl create namespace dev

# Switch the current namespace
kubectl config set-context --current --namespace=dev

# Delete a namespace
kubectl delete namespace dev

5. ConfigMaps and Secrets

# Create a ConfigMap
kubectl create configmap app-config --from-literal=key1=value1 --from-literal=key2=value2

# Create a Secret
kubectl create secret generic db-secret --from-literal=username=admin --from-literal=password=123456

# Consuming them in a pod
# (see the Kubernetes documentation for full examples)
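
As a minimal sketch of that consumption step (the pod name is illustrative; the ConfigMap and Secret names match the objects created above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                     # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: KEY1                     # single key pulled from the ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: key1
    envFrom:
    - secretRef:
        name: db-secret              # injects username/password as env vars
```

ConfigMaps and Secrets can also be mounted as volumes when files rather than environment variables are needed.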

High Availability Deployment (optional)

1. Architecture

# A highly available control plane needs at least 3 master nodes
Hostname       IP Address      Role
-----------------------------------------
k8s-master1    192.168.1.10    Master (etcd leader)
k8s-master2    192.168.1.11    Master (etcd follower)
k8s-master3    192.168.1.12    Master (etcd follower)
k8s-node1      192.168.1.13    Worker
k8s-node2      192.168.1.14    Worker

# A load balancer (such as HAProxy or Nginx) in front of the apiservers is required
# Load balancer VIP: 192.168.1.100:6443
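
For reference, a minimal HAProxy fragment implementing the VIP above might look like this. It is a sketch: the frontend/backend names are illustrative, and binding to the VIP assumes this host holds the address (for example via keepalived):

```
# /etc/haproxy/haproxy.cfg (fragment): TCP passthrough to the three apiservers
frontend kube-apiserver
    bind 192.168.1.100:6443
    mode tcp
    option tcplog
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master1 192.168.1.10:6443 check
    server k8s-master2 192.168.1.11:6443 check
    server k8s-master3 192.168.1.12:6443 check
```

TCP passthrough (rather than TLS termination) is used so the apiserver's own certificates remain end to end.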

2. Initialize the first master node

# Initialize the first master; --control-plane-endpoint must point at the
# load balancer VIP (note: --control-plane is a kubeadm join flag, not an
# init flag)
sudo kubeadm init \
  --control-plane-endpoint "192.168.1.100:6443" \
  --apiserver-advertise-address=192.168.1.10 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.34.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs

# Save the join commands printed at the end (needed for the other masters
# and the workers):
# - the control-plane join command
# - the worker join command

3. Add the other master nodes

# Run on each additional master; the --certificate-key value is printed by
# kubeadm init when --upload-certs is used
sudo kubeadm join 192.168.1.100:6443 \
  --token abcdef.1234567890abcdef \
  --discovery-token-ca-cert-hash sha256:xxxxxxxxxx \
  --control-plane \
  --certificate-key <key-from-init-output> \
  --cri-socket=unix:///run/containerd/containerd.sock

Troubleshooting

1. Diagnosing common problems

# Check pod status
kubectl get pods -A

# Show detailed pod status
kubectl describe pod <pod-name> -n <namespace>

# View pod logs
kubectl logs <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous

# View cluster events
kubectl get events --sort-by='.metadata.creationTimestamp'

# Check node status
kubectl get nodes
kubectl describe nodes <node-name>

# Check the kubelet
sudo systemctl status kubelet
sudo journalctl -u kubelet -f

# Check containerd
sudo systemctl status containerd
sudo journalctl -u containerd -f

2. Resetting the cluster

# Run on the node to be reset
sudo kubeadm reset
sudo rm -rf $HOME/.kube/config
sudo rm -rf /etc/kubernetes/pki
sudo rm -rf /var/lib/etcd
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/cni
sudo rm -rf /etc/cni/net.d

# Flush iptables rules
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X

Production Best Practices

1. Security

# Verify RBAC permissions
kubectl auth can-i create pods --as=system:serviceaccount:default:default

# Isolate environments with namespaces
kubectl create namespace production
kubectl create namespace staging
kubectl create namespace development

# Enforce resource limits
# Configure a ResourceQuota and a LimitRange in every namespace
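
As a sketch of that last step, a ResourceQuota plus LimitRange for the production namespace might look like this (all quota values are illustrative and should be sized to your workloads):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "8"        # total CPU requests allowed in the namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: production
spec:
  limits:
  - type: Container
    default:                 # applied when a container sets no limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:          # applied when a container sets no requests
      cpu: 100m
      memory: 128Mi
```

The LimitRange defaults also ensure that pods without explicit resources still count sensibly against the quota.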

2. Resource management

# Set resource requests and limits for every pod
apiVersion: v1
kind: Pod
metadata:
  name: production-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "1000m"

3. Monitoring

# Install the Prometheus Operator
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

# kube-prometheus-stack already bundles Grafana; for a standalone install,
# add the chart repository first
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana -n monitoring

# Enable metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# View resource usage
kubectl top nodes
kubectl top pods -A

4. Log management

# Install the ELK stack
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -n logging --create-namespace
helm install kibana elastic/kibana -n logging
helm install filebeat elastic/filebeat -n logging

# Or use Loki instead
helm repo add grafana https://grafana.github.io/helm-charts
helm install loki grafana/loki-stack -n logging --create-namespace

Summary

With the walkthrough above you should now be able to install and configure Kubernetes 1.34 on Ubuntu 22.04 end to end. Kubernetes is a powerful container orchestration platform; keep the following points in mind when running it in practice:

  • Production: use a highly available architecture with at least 3 master nodes
  • Capacity planning: size node resources according to your workloads
  • Network security: restrict pod-to-pod traffic with NetworkPolicy
  • Monitoring and alerting: deploy a complete monitoring and alerting stack
  • Backup and recovery: back up etcd data and important configuration regularly
  • Upgrades: plan version upgrades deliberately
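
The etcd backup point above can be sketched with etcdctl on a control-plane node; the endpoint and certificate paths below are the kubeadm defaults, and the snapshot path is illustrative:

```
# Snapshot the kubeadm-managed etcd (run on a control-plane node as root)
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Inspect the snapshot metadata
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```

Schedule this (for example via cron) and copy the snapshot off the node so it survives a host failure.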

Hopefully this guide helps you get your Kubernetes cluster up and running smoothly. Questions and feedback are welcome in the comments.
