k8s Comprehensive Project

1. Project plan

(Project topology diagram.)

2. Project description

Project description / features: simulate an enterprise k8s production environment; deploy web, NFS, harbor, Prometheus, Grafana and other applications to build a highly available, high-performance web system, while monitoring resource usage across the whole k8s cluster.

3. Project environment

CentOS 7.9, ansible 2.9.27, Docker 26.0.0, Docker Compose 2.18.1, Kubernetes 1.20.6, Harbor 2.1.0, NFS v4, metrics-server 0.6.0, ingress-nginx-controller v1.1.0, kube-webhook-certgen v1.1.0, Dashboard v2.5.0, Prometheus 2.44.0, Grafana 9.5.1.

4. Preparation

4.1 Environment

9 fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, set hostnames, add hosts entries.

My machine only has 16 GB of RAM and cannot run 9 VMs, so Prometheus, ansible and the jump host share one server, and the NFS server and the harbor registry share another.

4.2 IP plan

| Hostname | IP |
| --- | --- |
| firewall | 192.168.40.87 |
| jump host (Prometheus, ansible) | 192.168.182.141 |
| NFS server / harbor registry | 192.168.182.140 |
| master | 192.168.182.142 |
| node-1 | 192.168.182.143 |
| node-2 | 192.168.182.144 |

4.3 Configure static IP addresses

Taking master as the example:

[root@master ~]# cd /etc/sysconfig/network-scripts/
[root@master network-scripts]# ls
ifcfg-ens33  ifcfg-lo  ifdown  ifdown-bnep  ifdown-eth  ifdown-ippp  ifdown-ipv6  ...
[root@master network-scripts]# vim ifcfg-ens33
[root@master network-scripts]# cat ifcfg-ens33
BOOTPROTO=none
DEFROUTE=yes
NAME=ens33
UUID=9c5e3120-2fcf-4124-b924-f2976d52512f
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.182.142
PREFIX=24
GATEWAY=192.168.182.2
DNS1=114.114.114.114
[root@master network-scripts]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@master network-scripts]# ping www.baidu.com
PING www.a.shifen.com (183.2.172.42) 56(84) bytes of data.
64 bytes from 183.2.172.42 (183.2.172.42): icmp_seq=1 ttl=128 time=18.1 ms
64 bytes from 183.2.172.42 (183.2.172.42): icmp_seq=2 ttl=128 time=17.7 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 17.724/17.956/18.188/0.232 ms

4.4 Set the hostnames

hostnamectl set-hostname master
bash
hostnamectl set-hostname node-1
bash
hostnamectl set-hostname node-2
bash
hostnamectl set-hostname nfs
bash
hostnamectl set-hostname firewalld
bash
hostnamectl set-hostname jump
bash

4.5 Deploy the k8s cluster

4.5.1 Disable the firewall and SELinux

[root@localhost ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

4.5.2 Upgrade the system

yum update -y

4.5.3 Configure the hosts file on every machine so they can reach each other by hostname

Add these three lines:

192.168.182.142 master
192.168.182.143 node-1
192.168.182.144 node-2

[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.142 master
192.168.182.143 node-1
192.168.182.144 node-2

4.5.4 Set up passwordless SSH between master and the nodes

ssh-keygen
cd /root/.ssh/
ssh-copy-id -i id_rsa.pub root@node-1
ssh-copy-id -i id_rsa.pub root@node-2

4.5.5 Disable the swap partition to improve performance (run on all three machines)

[root@master .ssh]# swapoff -a

To disable it permanently, comment out the swap mount line in /etc/fstab:

[root@master .ssh]# vim /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

4.5.6 Why disable swap?

Swap is the exchange partition: when the machine runs out of memory it spills to swap, but swap is slow. For performance reasons k8s by default refuses to run with swap enabled. kubeadm checks whether swap is off during init and fails if it is not. If you really want to keep swap, you can pass --ignore-preflight-errors=Swap when installing k8s.

4.5.7 Tune kernel parameters (run on all three machines)

[root@master .ssh]# modprobe br_netfilter
[root@master .ssh]# echo "modprobe br_netfilter" >> /etc/profile
[root@master .ssh]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master .ssh]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

4.5.8 Configure the Aliyun repos (all three machines)

yum install -y yum-utils

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm

Configure the Aliyun repo needed for the k8s components:

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

4.5.9 Configure time synchronization (all three machines)

[root@master ~]# yum install ntpdate -y
[root@master ~]# ntpdate cn.pool.ntp.org
 3 Mar 10:15:12 ntpdate[73056]: adjust time server 84.16.67.12 offset 0.007718 sec

Add a cron job:

[root@master ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
[root@master ~]# crontab -l
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@master ~]# service crond restart
Redirecting to /bin/systemctl restart crond.service

4.5.10 Install the docker service (all three machines)

4.5.11 Install the latest docker version

[root@master ~]# sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

[root@master ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master ~]# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
4.5.12 Configure the registry mirrors

[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

4.5.13 Continue configuring Kubernetes

4.5.14 Install the packages needed to initialize k8s (all three machines)

(Note: from k8s 1.24 on, docker is no longer used as the underlying container runtime; containerd replaces it. We are on 1.20, so docker is fine.)

[root@master ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

kubeadm: the tool that initializes the k8s cluster
kubelet: installed on every node of the cluster; it starts the Pods
kubectl: used to deploy and manage applications, inspect resources, and create, delete and update components

[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

4.5.15 Initialize the k8s cluster with kubeadm

Upload the offline image tarball needed for initialization to master, node-1 and node-2 and extract it manually. Use xftp to copy it into root's home directory on master, then scp it to node-1 and node-2 (the passwordless channel already exists):

[root@master ~]# scp k8simage-1-20-6.tar.gz root@node-1:/root
k8simage-1-20-6.tar.gz                       100% 1033MB 129.0MB/s   00:08
[root@master ~]# scp k8simage-1-20-6.tar.gz root@node-2:/root
k8simage-1-20-6.tar.gz                       100% 1033MB 141.8MB/s   00:07

Import the images (all three machines):

[root@master ~]# docker load -i k8simage-1-20-6.tar.gz

Generate a yaml file (on master):

[root@master ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@master ~]# ls
anaconda-ks.cfg  k8simage-1-20-6.tar.gz  kubeadm.yaml
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

The modified version:

[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.182.142
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
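Since the control-plane images came from an offline tarball rather than a pull, it can be worth confirming that everything this config needs is actually present before running init. A quick check along these lines (a sketch, assuming kubeadm.yaml is the file generated above):

kubeadm config images list --config kubeadm.yaml    # images this config will use
docker images | grep registry.aliyuncs.com          # compare against what was loaded
# if something is missing and the network allows it:
# kubeadm config images pull --config kubeadm.yaml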
4.5.16 Initialize k8s from the kubeadm.yaml file

[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, run the join command on node-1 and node-2:

[root@node-1 ~]# kubeadm join 192.168.182.142:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:f655c7887580b8aae5a4b510253c14c76615b0ccc2d8a84aa9759fd02d278f41

Check on master that it worked:

[root@master ~]# kubectl get node
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   8m22s   v1.20.6
node-1   NotReady   <none>                 67s     v1.20.6
node-2   NotReady   <none>                 61s     v1.20.6

4.5.17 Change the node role to worker

[root@master ~]# kubectl label node node-1 node-role.kubernetes.io/worker=worker
node/node-1 labeled
[root@master ~]# kubectl label node node-2 node-role.kubernetes.io/worker=worker
node/node-2 labeled
[root@master ~]# kubectl get node
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   15m     v1.20.6
node-1   NotReady   worker                 8m12s   v1.20.6
node-2   NotReady   worker                 8m6s    v1.20.6

4.5.18 Install the network plugin

First upload calico.yaml to /root/ with xftp:

[root@master ~]# kubectl apply -f calico.yaml
[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   4h27m   v1.20.6
node-1   Ready    worker                 4h25m   v1.20.6
node-2   Ready    worker                 4h25m   v1.20.6

STATUS has changed to Ready - success.

4.5.19 Enable kubectl top node

First install metrics-server, which exposes pod CPU and memory usage. Download the metrics-server yaml bundle, upload it to the VM, and extract it:

[root@master pod]# unzip metrics-server.zip

Enter the metrics-server folder and copy the image tarball to the nodes:

[root@master metrics-server]# ls
components.yaml  metrics-server-v0.6.3.tar
[root@master metrics-server]# scp metrics-server-v0.6.3.tar node-1:/root
metrics-server-v0.6.3.tar                    100%   67MB 150.8MB/s   00:00
[root@master metrics-server]# scp metrics-server-v0.6.3.tar node-2:/root
metrics-server-v0.6.3.tar                    100%   67MB 151.7MB/s   00:00

Import the image on all three machines:

[root@node-1 ~]# docker load -i metrics-server-v0.6.3.tar

Start the metrics-server pod:

[root@master metrics-server]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master metrics-server]# kubectl get pod -n kube-system|grep metrics
metrics-server-769f6c8464-ctxl7   1/1     Running   0          49s

Check that it works:

[root@master metrics-server]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   118m         5%     1180Mi          68%
node-1   128m         6%     985Mi           57%
node-2   60m          3%     634Mi           36%

4.5.20 Let the nodes run kubectl get node too

Copy the kubeconfig from master to the nodes:

[root@master ~]# scp /etc/kubernetes/admin.conf node-1:/root
admin.conf                                   100% 5567   5.4MB/s   00:00
[root@master ~]# scp /etc/kubernetes/admin.conf node-2:/root
admin.conf                                   100% 5567   7.4MB/s   00:00

Then on each node:

mkdir -p $HOME/.kube
sudo cp -i /root/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@node-1 ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   28m   v1.20.6
node-1   Ready    worker                 27m   v1.20.6
node-2   Ready    worker                 27m   v1.20.6
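One operational note: the join command printed by kubeadm init embeds a bootstrap token with a 24-hour TTL (the ttl field in kubeadm.yaml). If another node needs to join after the token expires, a fresh join command can be generated on the master; a sketch (the token and hash in the output are placeholders, not values from this cluster):

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.182.142:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>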
5. Build out the contents inside k8s

5.1 Set up the NFS server to provide website data to the web service, and create the related PV and PVC

Install nfs-utils on every machine. All nodes of the k8s cluster should have nfs-utils installed, because creating volumes on the node servers requires NFS network file system support: node-1 and node-2 need the package (but do not need a running nfs service), purely so they can mount the folder shared by the NFS server.

yum install nfs-utils -y

Only the NFS server itself needs the nfs service started:

[root@nfs ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

The firewall and SELinux are disabled on the NFS server.

5.1.1 Configure the shared directory

[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web   192.168.182.0/24(rw,sync,all_squash)

5.1.2 Create the shared directory

[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web/
[root@nfs web]# echo "welcome to my-web" > index.html
[root@nfs web]# cat index.html
welcome to my-web

Set permissions on /web so others can read and write:

[root@nfs web]# chmod 777 /web
[root@nfs web]# chown nfsnobody:nfsnobody /web
[root@nfs web]# ll -d /web
drwxrwxrwx. 2 nfsnobody nfsnobody 24 Mar 27 18:21 /web

5.1.3 Refresh nfs, i.e. re-export the shared directories

exportfs -a   export all shared directories
exportfs -v   show the exported directories
exportfs -r   re-export all shared directories

[root@nfs web]# exportfs -rv
exporting 192.168.182.0/24:/web

Or simply:

[root@nfs web]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

(A quick client-side mount check is sketched below.)
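Before wiring the export into a PV, a manual check from one of the nodes confirms the share is reachable. A sketch (run on node-1, which already has nfs-utils; /mnt is just a scratch mount point):

[root@node-1 ~]# showmount -e 192.168.182.140        # should list /web
[root@node-1 ~]# mount -t nfs 192.168.182.140:/web /mnt
[root@node-1 ~]# cat /mnt/index.html                 # expect: welcome to my-web
[root@node-1 ~]# umount /mnt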
5.1.4 Create a PV that uses the directory shared by the NFS server

[root@master storage]# vim nfs-pv.yaml
[root@master storage]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv-2
  labels:
    type: sc-nginx-pv-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs        # name of the storage class
  nfs:
    path: /web                 # directory shared by the NFS server
    server: 192.168.182.140    # IP of the NFS server
    readOnly: false            # mounted read-write

5.1.5 Apply it

[root@master storage]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv-2 created
[root@master storage]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
sc-nginx-pv-2    5Gi        RWX            Retain           Bound    default/sc-nginx-pvc-2   nfs                     5s
task-pv-volume   10Gi       RWO            Retain           Bound    default/task-pv-claim    manual                  6h17m
[root@master storage]# kubectl get pvc
NAME             STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sc-nginx-pvc-2   Bound    sc-nginx-pv-2    5Gi        RWX            nfs            9m19s
task-pv-claim    Bound    task-pv-volume   10Gi       RWO            manual         5h58m

5.1.6 Create a PVC that uses the storage class

[root@master storage]# vim pvc-sc.yaml
[root@master storage]# cat pvc-sc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

[root@master storage]# kubectl apply -f pvc-sc.yaml
persistentvolumeclaim/sc-nginx-pvc-2 created
[root@master storage]# kubectl get pvc
NAME             STATUS    VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sc-nginx-pvc-2   Pending                                              nfs            8s
task-pv-claim    Bound     task-pv-volume   10Gi       RWO            manual         5h49m

5.1.7 Create a pod that uses the PVC

[root@master storage]# vim pod-nfs.yaml
[root@master storage]# cat pod-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sc-pv-pod-nfs
spec:
  volumes:
  - name: sc-pv-storage-nfs
    persistentVolumeClaim:
      claimName: sc-nginx-pvc-2
  containers:
  - name: sc-pv-container-nfs
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: http-server
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: sc-pv-storage-nfs

Apply it:

[root@master storage]# kubectl apply -f pod-nfs.yaml
pod/sc-pv-pod-nfs created
[root@master storage]# kubectl get pod -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
sc-pv-pod-nfs   1/1     Running   0          63s   10.244.84.130   node-1   <none>           <none>

5.1.8 Test

[root@master storage]# curl 10.244.84.130
welcome to my-web

Edit index.html on the NFS server and check the effect from master:

[root@nfs web]# vim index.html
welcome to my-web
welcome to changsha

[root@master storage]# curl 10.244.84.130
welcome to my-web
welcome to changsha
5.2 Pull my Go web image from the harbor registry

5.2.1 First build the Go code into an image

[root@docker ~]# mkdir /go
[root@docker ~]# cd /go
[root@docker go]# ls
[root@docker go]# vim server.go
[root@docker go]# cat server.go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "Halou, gaohui 2024 Fighting!",
		})
	})
	r.Run()
}

[root@docker go]# go mod init web
go: creating new go.mod: module web
go: to add module requirements and sums:
        go mod tidy
[root@docker go]# go env -w GOPROXY=https://goproxy.cn,direct
[root@docker go]# go mod tidy
[root@docker go]# go run server.go
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET    /    --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2024/02/01 - 19:05:04 | 200 | 137.998µs | 192.168.153.1 | GET "/"

Compile server.go into a binary (as a test):

[root@docker go]# go build -o ghweb .
[root@docker go]# ls
ghweb  go.mod  go.sum  server.go
[root@docker go]# ./ghweb
[GIN-debug] Listening and serving HTTP on :8080
^C

Write the Dockerfile:

[root@docker go]# vim Dockerfile
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/ghweb"]

Build the image:

[root@docker go]# docker build -t ghweb:1.0 .
[+] Building 29.2s (9/9) FINISHED                                      docker:default
 => [internal] load build definition from Dockerfile                             0.0s
 => => transferring dockerfile: 117B                                             0.0s
 => [internal] load metadata for docker.io/library/centos:7                     21.0s
 => [internal] load .dockerignore                                                0.0s
 => => transferring context: 2B                                                  0.0s
 => [1/4] FROM docker.io/library/centos:7@sha256:9d4bcbbb213dfd745b58be38b13b99  7.8s
 => [internal] load build context                                                0.0s
 => => transferring context: 11.61MB                                             0.0s
 => [2/4] WORKDIR /go                                                            0.1s
 => [3/4] COPY . /go                                                             0.1s
 => [4/4] RUN ls /go && pwd                                                      0.3s
 => exporting to image                                                           0.0s
 => => exporting layers                                                          0.0s
 => => writing image sha256:59a5509da737328cc0dbe6c91a33409b7cdc5e5eeb8a46efa7d  0.0s
 => => naming to docker.io/library/ghweb:1.0                                     0.0s

Check the image:

[root@docker go]# docker images|grep ghweb
ghweb   1.0   458531408d3b   11 seconds ago   216MB

This 216 MB image works, but it is bulky; a slimmer multi-stage variant is sketched below.
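The centos:7 base plus the full build context is what drives the image to 216 MB. A common slimming technique is a multi-stage build that compiles inside a golang image and ships only the binary; a sketch, not what this project used (the golang:1.21 and alpine:3.19 tags are assumptions):

FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# CGO disabled so the static binary runs on musl-based alpine
RUN CGO_ENABLED=0 go build -o ghweb .

FROM alpine:3.19
WORKDIR /app
COPY --from=builder /src/ghweb /app/ghweb
ENTRYPOINT ["/app/ghweb"]

Built the same way (docker build -t ghweb:slim .), this would come out at a few tens of MB instead of 216 MB.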
5.2.2 Then upload it to the harbor registry

https://github.com/goharbor/harbor/releases/tag/v2.1.0

First download version 2.1.0 from the official site, create a harbor folder, and extract it there:

[root@nfs ~]# mkdir /harbor
[root@nfs ~]# cd /harbor/
[root@nfs harbor]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@nfs harbor]# ls
docker-compose  harbor  harbor-offline-installer-v2.1.0.tgz
[root@nfs harbor]# cd harbor
[root@nfs harbor]# ls
common.sh  harbor.v2.1.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@nfs harbor]# cp harbor.yml.tmpl harbor.yml

Edit harbor.yml: set the hostname and port, and comment out the https block to keep things simple.

[root@nfs harbor]# vim harbor.yml

Then drop the docker-compose binary into the current directory, make it executable, and copy it to /usr/bin (so it can be found via PATH):

[root@nfs harbor]# cp ../docker-compose .
[root@nfs harbor]# ls
common.sh  docker-compose  harbor.v2.1.0.tar.gz  harbor.yml  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@nfs harbor]# chmod +x docker-compose
[root@nfs harbor]# cp docker-compose /usr/bin/
[root@nfs harbor]# ./install.sh

Check that it is running:

[root@nfs harbor]# docker compose ls
NAME     STATUS       CONFIG FILES
harbor   running(9)   /harbor/harbor/docker-compose.yml

In the harbor UI, first create a project, then create a user (account: gh, password: Sc123456), add the user as a member of the project, then log in as gh and push the image:

[root@nfs-server docker]# docker login 192.168.182.140:8089
Username: gh
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@nfs-server docker]# docker images
REPOSITORY                                                     TAG      IMAGE ID       CREATED          SIZE
192.168.182.140:8089/gao/ghweb                                 1.0      458531408d3b   12 minutes ago   216MB
ghweb                                                          1.0      458531408d3b   12 minutes ago   216MB
registry.cn-beijing.aliyuncs.com/google_registry/hpa-example   latest   4ca4c13a6d7c   8 years ago      481MB

(The docker tag ghweb:1.0 192.168.182.140:8089/gao/ghweb:1.0 step is implied by the listing above, where both tags share the same image ID.)

[root@nfs-server docker]# docker push 192.168.182.140:8089/gao/ghweb:1.0
The push refers to repository [192.168.182.140:8089/gao/ghweb]
aed658a8d439: Pushed
3e7a541e1360: Pushed
a72a96e845e5: Pushed
174f56854903: Pushed
1.0: digest: sha256:53ad51fdfd846e8494c547609d2f45331150d2da5081c2f7867affdc65c55cfd size: 1153

Have the nodes pull the ghweb image

Every node of the k8s cluster logs in to harbor so it can pull images back. Add the registry as insecure in the docker config:

[root@master ~]# vim /etc/docker/daemon.json
[root@master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com"],
  "insecure-registries": ["192.168.182.140:8089"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Reload the configuration and restart docker:

[root@master ~]# systemctl daemon-reload && systemctl restart docker

Log in to harbor:

[root@master ~]# docker login 192.168.182.140:8089
Username: gh
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Pull the image from harbor:

[root@master ~]# docker pull 192.168.182.140:8089/gao/ghweb:1.0
1.0: Pulling from gao/ghweb
2d473b07cdd5: Pull complete
deb4bb5a3691: Pull complete
880231ee488c: Pull complete
ec220df6aef4: Pull complete
Digest: sha256:53ad51fdfd846e8494c547609d2f45331150d2da5081c2f7867affdc65c55cfd
Status: Downloaded newer image for 192.168.182.140:8089/gao/ghweb:1.0
192.168.182.140:8089/gao/ghweb:1.0
[root@master ~]# docker images|grep ghweb
192.168.182.140:8089/gao/ghweb   1.0   458531408d3b   20 minutes ago   216MB
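Logging every node into harbor works, but k8s can also authenticate per pod via an image pull secret, which avoids touching node-level docker config. A sketch of that alternative (the secret name harbor-login is hypothetical):

kubectl create secret docker-registry harbor-login \
  --docker-server=192.168.182.140:8089 \
  --docker-username=gh \
  --docker-password=Sc123456

# then reference it in the pod spec:
#   spec:
#     imagePullSecrets:
#     - name: harbor-login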
5.3 Enable HPA and deploy my web pod: when CPU utilization reaches 50%, scale horizontally between a minimum of 10 and a maximum of 20 business pods

[root@master ~]# mkdir /hpa
[root@master ~]# cd /hpa
[root@master hpa]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8089
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8089
    protocol: TCP
    targetPort: 8089
    nodePort: 30001

[root@master hpa]# kubectl apply -f my-web.yaml
deployment.apps/myweb created
service/myweb-svc created

Create the HPA:

[root@master hpa]# kubectl autoscale deployment myweb --cpu-percent=50 --min=10 --max=20
horizontalpodautoscaler.autoscaling/myweb autoscaled
[root@master hpa]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myweb-7558d9fbc4-869f5   1/1     Running   0          12s
myweb-7558d9fbc4-c5wdr   1/1     Running   0          12s
myweb-7558d9fbc4-dgdbs   1/1     Running   0          82s
myweb-7558d9fbc4-hmt62   1/1     Running   0          12s
myweb-7558d9fbc4-r84bc   1/1     Running   0          12s
myweb-7558d9fbc4-rld88   1/1     Running   0          82s
myweb-7558d9fbc4-s82vh   1/1     Running   0          82s
myweb-7558d9fbc4-sn5dp   1/1     Running   0          12s
myweb-7558d9fbc4-t9pvl   1/1     Running   0          12s
myweb-7558d9fbc4-vzlnb   1/1     Running   0          12s
sc-pv-pod-nfs            1/1     Running   1          7h27m
[root@master hpa]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myweb   Deployment/myweb   0%/50%    10        20        10         33s

Open port 30001 on a node to verify. (Note: the gin app actually listens on 8080, as the go run output in 5.2.1 shows, which is why the revised manifest in 5.5 switches the container and service ports to 8080.)
5.5 Use probes (liveness, readiness, startup) with the httpGet and exec methods to monitor the web business pods, restarting them as soon as a problem appears, to strengthen the pods' reliability

[root@master hpa]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          exec:
            command:
            - ls
            - /
          failureThreshold: 3
          periodSeconds: 10
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Container started"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 30001

[root@master hpa]# kubectl apply -f my-web.yaml
deployment.apps/myweb configured
service/myweb-svc unchanged

kubectl describe pod now shows the probes:

Liveness:   exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
Readiness:  exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
Startup:    exec [ls /] delay=0s timeout=1s period=10s #success=1 #failure=3
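The heading mentions the httpGet method as well, but the manifest above only uses exec probes. Since the gin app serves GET / on port 8080, an httpGet liveness probe would exercise the actual HTTP path; a sketch of the container-level fragment (an alternative, not what was applied above):

        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

An httpGet probe fails on connection errors or non-2xx/3xx responses, so it catches a hung web process that an ls / check would miss.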
5.6 Set up the ingress controller and ingress rules to load-balance the web service by domain name

The ingress controller is essentially nginx, doing the load balancing. An Ingress is the k8s object that manages the nginx configuration (nginx.conf) - it feeds parameters to the ingress controller.

[root@master ingress]# ls
ingress-controller-deploy.yaml       nfs-pvc.yaml    sc-ingress.yaml
ingress_nginx_controller.tar         nfs-pv.yaml     sc-nginx-svc-1.yaml
kube-webhook-certgen-v1.1.0.tar.gz   nginx-deployment-nginx-svc-2.yaml

ingress-controller-deploy.yaml: yaml file used to deploy the ingress controller
ingress_nginx_controller.tar: the ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz: the kube-webhook-certgen image
sc-ingress.yaml: config file that creates the Ingress
sc-nginx-svc-1.yaml: yaml that starts the sc-nginx-svc service and its pods
nginx-deployment-nginx-svc-2.yaml: yaml that starts the sc-nginx-svc-2 service and its pods

[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-2:/root
kube-webhook-certgen-v1.1.0.tar.gz           100%   47MB 123.6MB/s   00:00
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-1:/root
kube-webhook-certgen-v1.1.0.tar.gz           100%   47MB 144.4MB/s   00:00
[root@master ingress]# scp ingress_nginx_controller.tar node-1:/root
ingress_nginx_controller.tar                 100%  276MB 129.2MB/s   00:02
[root@master ingress]# scp ingress_nginx_controller.tar node-2:/root
ingress_nginx_controller.tar                 100%  276MB 129.8MB/s   00:02
[root@master ingress]# docker load -i ingress_nginx_controller.tar
e2eb06d8af82: Loading layer  5.865MB/5.865MB
ab1476f3fdd9: Loading layer  120.9MB/120.9MB
ad20729656ef: Loading layer  4.096kB/4.096kB
0d5022138006: Loading layer  38.09MB/38.09MB
8f757e3fe5e4: Loading layer  21.42MB/21.42MB
a933df9f49bb: Loading layer  3.411MB/3.411MB
7ce1915c5c10: Loading layer  309.8kB/309.8kB
986ee27cd832: Loading layer  6.141MB/6.141MB
b94180ef4d62: Loading layer  38.37MB/38.37MB
d36a04670af2: Loading layer  2.754MB/2.754MB
2fc9eef73951: Loading layer  4.096kB/4.096kB
1442cff66b8e: Loading layer  51.67MB/51.67MB
1da3c77c05ac: Loading layer  3.584kB/3.584kB
Loaded image: registry.cn-hangzhou.aliyuncs.com/yutao517/ingress_nginx_controller:v1.1.0
[root@master ingress]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
c0d270ab7e0d: Loading layer  3.697MB/3.697MB
ce7a3c1169b6: Loading layer  45.38MB/45.38MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

Start the ingress controller from ingress-controller-deploy.yaml:

[root@master ingress]# kubectl apply -f ingress-controller-deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

[root@master ingress]# kubectl get ns
NAME              STATUS   AGE
default           Active   46h
ingress-nginx     Active   26s
kube-node-lease   Active   46h
kube-public       Active   46h
kube-system       Active   46h
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.32.216     <none>        80:32351/TCP,443:32209/TCP   52s
ingress-nginx-controller-admission   ClusterIP   10.108.207.217   <none>        443/TCP                      52s
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-6wrfc        0/1     Completed   0          59s
ingress-nginx-admission-patch-z4hwb         0/1     Completed   1          59s
ingress-nginx-controller-589dccc958-9cbht   1/1     Running     0          59s
ingress-nginx-controller-589dccc958-r79rt   1/1     Running     0          59s

Next, create the pods and expose their service:

[root@master ingress]# cat sc-nginx-svc-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng
  template:
    metadata:
      labels:
        app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc
  labels:
    app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@master ingress]# kubectl apply -f sc-nginx-svc-1.yaml
deployment.apps/sc-nginx-deploy created
service/sc-nginx-svc created
[root@master ingress]# kubectl get deploy
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
sc-nginx-deploy   3/3     3            3           8m7s
[root@master ingress]# kubectl get svc
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          47h
myweb-svc      NodePort    10.98.10.240   <none>        8080:30001/TCP   22h
sc-nginx-svc   ClusterIP   10.111.4.156   <none>        80/TCP           9m27s

Apply the ingress rules (a plausible reconstruction of sc-ingress.yaml follows below):

[root@master ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS   PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com             80      8s
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                           PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.182.143,192.168.182.144   80      27s
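The post applies sc-ingress.yaml without reproducing its contents. Based on the hosts in the kubectl get ingress output and the two services created in this section, a plausible reconstruction looks like this (a sketch; the exact original file may differ):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.feng.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-2
            port:
              number: 80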
Check whether the ingress rules landed in the ingress controller's nginx.conf:

[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-6wrfc        0/1     Completed   0          68m
ingress-nginx-admission-patch-z4hwb         0/1     Completed   1          68m
ingress-nginx-controller-589dccc958-9cbht   1/1     Running     0          68m
ingress-nginx-controller-589dccc958-r79rt   1/1     Running     0          68m
[root@master ingress]# kubectl exec -it ingress-nginx-controller-589dccc958-9cbht -n ingress-nginx -- bash
bash-5.1$ cat nginx.conf|grep zhang.com
        ## start server www.zhang.com
        server_name www.zhang.com ;
        ## end server www.zhang.com
bash-5.1$

On another machine (the NFS server, or a Windows box), access by domain name. First add hosts entries:

[root@nfs ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.143 www.feng.com
192.168.182.144 www.zhang.com

[root@nfs etc]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Start the second service and its pods, backed by PV, PVC and NFS:

[root@master ingress]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /web                 # directory shared by the NFS server
    server: 192.168.182.140    # IP of the NFS server
    readOnly: false

[root@master ingress]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv created

[root@master ingress]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs        # use the nfs-typed PV
[root@master ingress]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@master ingress]# kubectl get pv,pvc
NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/sc-nginx-pv     10Gi       RWX            Retain           Bound    default/sc-nginx-pvc     nfs                     31s
persistentvolume/sc-nginx-pv-2   5Gi        RWX            Retain           Bound    default/sc-nginx-pvc-2   nfs                     31h

NAME                                   STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/sc-nginx-pvc     Bound    sc-nginx-pv     10Gi       RWX            nfs            28s
persistentvolumeclaim/sc-nginx-pvc-2   Bound    sc-nginx-pv-2   5Gi        RWX            nfs            31h

Start the second pod and second service:

[root@master ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.32.216     <none>        80:32351/TCP,443:32209/TCP   81m
ingress-nginx-controller-admission   ClusterIP   10.108.207.217   <none>        443/TCP                      81m

You can reach the controller via the exposed node port 32351 or via port 80:

[root@nfs ~]# curl www.zhang.com
welcome to my-web
welcome to changsha
Halou-gh
[root@nfs ~]# curl www.feng.com
(the default nginx welcome page, same as above)

5.7 Deploy and access the Kubernetes Dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

The dashboard version used is v2.7.0. Download the yaml file recommended.yaml and set the Service type to NodePort:

[root@master dashboard]# vim recommended.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # specify the type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30088     # pin the host port
  selector:
    k8s-app: kubernetes-dashboard
---

Leave the rest of the file unchanged; apply it to start the dashboard:

[root@master dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check that the dashboard pods started:

[root@master dashboard]# kubectl get pod --all-namespaces|grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-66dd8bdd86-nvwsj   1/1     Running   0          2m20s
kubernetes-dashboard   kubernetes-dashboard-785c75749d-nqsm7        1/1     Running   0          2m20s

Check that the service started:

[root@master dashboard]# kubectl get svc --all-namespaces|grep dash
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.103.48.55   <none>   8000/TCP        3m45s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.62.20    <none>   443:30088/TCP   3m45s

In a browser, access port 30088 over https:

https://192.168.182.142:30088/

A login page appears asking for a token. Get the name of the dashboard secret:

[root@master dashboard]# kubectl get secret -n kubernetes-dashboard|grep dashboard-token
kubernetes-dashboard-token-9hsh5   kubernetes.io/service-account-token   3      6m58s
[root@master dashboard]# kubectl describe secret kubernetes-dashboard-token-9hsh5 -n kubernetes-dashboard
Name:         kubernetes-dashboard-token-9hsh5
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: d05961ce-a39b-4445-bc1b-643439b59f41

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkRNdlRFVE9XeDFPdU95Q3FEcEtYUXJHZ0dvcnJPdlBUdEp3MEVtSzF5MHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi05aHNoNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQwNTk2MWNlLWEzOWItNDQ0NS1iYzFiLTY0MzQzOWI1OWY0MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.sbnbgil-sHV71WF1K4nOKTQKOOXNIam-NbUTFqfCdx6lBNN3IVnQFiISdXsjmDELi3q6kmVfpw000KPdavZ307Em2cGLI2F7aOy281dafcelZzIBjdMhw5KHrlzc0JkbL-jQfDvgk7t6T5zABqKfC8LsdButSsMviw8N0eFC5Iz9gSlxDieZDzzPCXVXUnCBWmAxcpOhUfJn81HyoFk6deVK71lwR5zm_KnbjCoTQAYbaCXfoB8fjn3-cyVFMtHbt0rU3mPyV5kYJEuH4WlGGYYMxQfrm0I8elQbyyENKtlI0DK_15Am_wp0I1Gw81eLg53h67FFQrSKHe9QxPx6Cw

After logging in, the dashboard cannot access any resource objects because it has no permissions; RBAC authorization is needed. Grant the kubernetes-dashboard service account cluster-admin so namespace resources can be found:

[root@master dashboard]# kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --user=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/serviceaccount-cluster-admin created

Refresh the page and the resources appear.

To delete the role binding:

[root@master ~]# kubectl delete clusterrolebinding serviceaccount-cluster-admin
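As noted in the takeaways (section 8), the dashboard login token expires after 15 minutes by default. The usual fix is to pass --token-ttl to the kubernetes-dashboard container in recommended.yaml; a sketch of the relevant fragment (43200 is an arbitrary example value, 12 hours in seconds):

      containers:
        - name: kubernetes-dashboard
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200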
5.8 Stress-test the web service in the k8s cluster with ab

The load-testing tool: ab. ab is the benchmarking tool that ships with Apache; it can stress-test a given URL via the ab command and its options, and is best used on Linux. Its two main options are -n (total requests) and -c (concurrency); see ab -h for the rest. The invocation format is:

ab -n10 -c10 URL

Write the yaml file:

[root@master hpa]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ab-nginx
spec:
  selector:
    matchLabels:
      run: ab-nginx
  template:
    metadata:
      labels:
        run: ab-nginx
    spec:
      nodeName: node-2
      containers:
      - name: ab-nginx
        image: 192.168.182.140:8089/gao/ghweb:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: ab-nginx-svc
  labels:
    run: ab-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31000
  selector:
    run: ab-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ab-nginx
  minReplicas: 5
  maxReplicas: 20
  targetCPUUtilizationPercentage: 50

Start the HPA-enabled nginx deployment:

[root@master hpa]# kubectl apply -f nginx.yaml
deployment.apps/ab-nginx unchanged
service/ab-nginx-svc created
horizontalpodautoscaler.autoscaling/ab-nginx created
[root@master hpa]# kubectl get deploy
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
ab-nginx           5/5     5            5           30s
myweb              3/3     3            3           31m
nginx-deployment   3/3     3            3           44m
sc-nginx-deploy    3/3     3            3           70m

[root@master hpa]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
ab-nginx-6d7db4b69f-2j5dz           1/1     Running   0          27s
ab-nginx-6d7db4b69f-6dwcq           1/1     Running   0          27s
ab-nginx-6d7db4b69f-7wkkd           1/1     Running   0          27s
ab-nginx-6d7db4b69f-8mjp6           1/1     Running   0          27s
ab-nginx-6d7db4b69f-gfmsq           1/1     Running   0          43s
myweb-69786769dc-jhsf8              1/1     Running   0          31m
myweb-69786769dc-kfjgk              1/1     Running   0          31m
myweb-69786769dc-msxrf              1/1     Running   0          31m
nginx-deployment-6c685f999-dkkfg    1/1     Running   0          44m
nginx-deployment-6c685f999-khjsp    1/1     Running   0          44m
nginx-deployment-6c685f999-svcvz    1/1     Running   0          44m
sc-nginx-deploy-7bb895f9f5-pmbcd    1/1     Running   0          70m
sc-nginx-deploy-7bb895f9f5-wf55g    1/1     Running   0          70m
sc-nginx-deploy-7bb895f9f5-zbjr9    1/1     Running   0          70m
sc-pv-pod-nfs                       1/1     Running   1          31h

[root@master hpa]# kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ab-nginx   Deployment/ab-nginx   0%/50%    5         20        5          84s

Open port 31000 in a browser to verify.

Install the ab load-testing tool on the nfs machine - not inside the cluster:

[root@nfs ~]# yum install httpd-tools -y

Keep watching the HPA on the master:

[root@master hpa]# kubectl get hpa --watch
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ab-nginx   Deployment/ab-nginx   0%/50%    5         20        5          5m2s

Open a second session on the master to watch the pods change:

[root@master hpa]# kubectl get pod --watch
NAME                        READY   STATUS    RESTARTS   AGE
ab-nginx-6d7db4b69f-2j5dz   1/1     Running   0          5m13s
ab-nginx-6d7db4b69f-6dwcq   1/1     Running   0          5m13s
ab-nginx-6d7db4b69f-7wkkd   1/1     Running   0          5m13s
ab-nginx-6d7db4b69f-8mjp6   1/1     Running   0          5m13s
ab-nginx-6d7db4b69f-gfmsq   1/1     Running   0          5m29s
...

Start the test:

ab -n1000 -c50 http://192.168.182.142:31000/

CPU utilization keeps climbing:

ab-nginx   Deployment/ab-nginx   60%/50%   5   20   5    7m54s
ab-nginx   Deployment/ab-nginx   86%/50%   5   20   6    8m9s
ab-nginx   Deployment/ab-nginx   83%/50%   5   20   9    8m25s
ab-nginx   Deployment/ab-nginx   69%/50%   5   20   9    8m40s
ab-nginx   Deployment/ab-nginx   55%/50%   5   20   9    8m56s
ab-nginx   Deployment/ab-nginx   55%/50%   5   20   10   9m11s
ab-nginx   Deployment/ab-nginx   14%/50%   5   20   10   9m41s

Once utilization exceeds 50%, new pods get created:

ab-nginx-6d7db4b69f-zdv2h   0/1   ContainerCreating   0   0s
ab-nginx-6d7db4b69f-zdv2h   1/1   Running             0   2s
ab-nginx-6d7db4b69f-l4vbw   0/1   Pending             0   0s
ab-nginx-6d7db4b69f-5qb9p   0/1   Pending             0   0s
ab-nginx-6d7db4b69f-vzcn7   0/1   Pending             0   0s
ab-nginx-6d7db4b69f-l4vbw   0/1   ContainerCreating   0   0s
ab-nginx-6d7db4b69f-5qb9p   0/1   ContainerCreating   0   0s
ab-nginx-6d7db4b69f-vzcn7   0/1   ContainerCreating   0   0s
ab-nginx-6d7db4b69f-l4vbw   1/1   Running             0   2s
ab-nginx-6d7db4b69f-vzcn7   1/1   Running             0   2s
ab-nginx-6d7db4b69f-5qb9p   1/1   Running             0   2s

When the load test stops, the replica count slowly scales back down.
6. Build the Prometheus server

6.1 To make multi-machine operations easier, first deploy ansible on the jump host

Install ansible:

[root@jump ~]# yum install epel-release -y
[root@jump ~]# yum install ansible -y

Edit the hosts file:

[root@jump ~]# cd /etc/ansible/
[root@jump ansible]# ls
ansible.cfg  hosts  roles
[root@jump ansible]# vim hosts
# add the machines to be controlled
[k8s]
192.168.182.142
192.168.182.143
192.168.182.144

[nfs]
192.168.182.140

[firewall]
192.168.182.177

Set up a passwordless channel (one-way trust) between the ansible server and the other servers:

1. Generate a key pair:
   ssh-keygen
2. Upload the public key to the other servers:
   ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.182.142
3. Test that the ansible server can control all of them:
   [root@jump ansible]# ansible all -m shell -a "ip add"
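A lighter-weight connectivity check before pushing anything out is ansible's ping module, which verifies SSH access plus a working Python on every host (just a verification idiom, not a step from the original walkthrough; the output format below is the module's standard response):

[root@jump ansible]# ansible all -m ping
192.168.182.142 | SUCCESS => { "changed": false, "ping": "pong" }
...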
6.2 Set up the Prometheus server and the Grafana dashboarding tool, and monitor all servers

1. Download the required packages in advance:

[root@jump prom]# ls
grafana-enterprise-9.5.1-1.x86_64.rpm  prometheus-2.44.0-rc.1.linux-amd64.tar.gz
node_exporter-1.7.0.linux-amd64.tar.gz

2. Extract the tarball and rename the directory:

[root@jump prom]# tar xf prometheus-2.44.0-rc.1.linux-amd64.tar.gz
[root@jump prom]# mv prometheus-2.44.0-rc.1.linux-amd64 prometheus
[root@jump prom]# ls
grafana-enterprise-9.5.1-1.x86_64.rpm  node_exporter-1.7.0.linux-amd64.tar.gz
prometheus  prometheus-2.44.0-rc.1.linux-amd64.tar.gz

Add the prometheus path to PATH, both for this session and permanently:

[root@jump prom]# PATH=/prom/prometheus:$PATH
[root@jump prom]# echo 'PATH=/prom/prometheus:$PATH' >> /etc/profile
[root@jump prom]# which prometheus
/prom/prometheus/prometheus

Turn prometheus into a systemd service, which makes later maintenance much easier:

[root@jump prom]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

Reload systemd so it picks up the new unit file:

[root@jump prom]# systemctl daemon-reload

Start the Prometheus service:

[root@jump prom]# systemctl start prometheus
[root@jump prom]# systemctl restart prometheus

Check that it is running:

[root@jump prom]# ps aux|grep prome
root  17551  0.1  2.2 796920 42344 ?  Ssl  13:15  0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root  17561  0.0  0.0 112824   972 pts/0  S+  13:16  0:00 grep --color=auto prome

Enable it at boot:

[root@jump prom]# systemctl enable prometheus
Created symlink from /etc/systemd/system/multi-user.target.wants/prometheus.service to /usr/lib/systemd/system/prometheus.service.

Visit port 9090 in a browser to confirm.

6.2.1 Install the exporter

Copy node_exporter to /root on all servers:

[root@jump prom]# ansible all -m copy -a "src=node_exporter-1.7.0.linux-amd64.tar.gz dest=/root/"

Write a script that installs node_exporter on the other machines:

[root@jump prom]# vim install_node_exporter.sh
#!/bin/bash

tar xf /root/node_exporter-1.7.0.linux-amd64.tar.gz -C /
cd /
mv node_exporter-1.7.0.linux-amd64/ node_exporter
cd /node_exporter/
PATH=/node_exporter/:$PATH
echo 'PATH=/node_exporter/:$PATH' >> /etc/profile

# generate the node_exporter.service unit file
cat > /usr/lib/systemd/system/node_exporter.service <<EOF
[Unit]
Description=node_exporter
[Service]
ExecStart=/node_exporter/node_exporter --web.listen-address 0.0.0.0:9090
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

# make systemd recognize the node_exporter service
systemctl daemon-reload
# enable at boot
systemctl enable node_exporter
# start node_exporter
systemctl start node_exporter

Run the installation script from the ansible server:

[root@jump prom]# ansible all -m script -a "/prom/install_node_exporter.sh"

Check on the other servers that node_exporter installed successfully:

[root@firewalld ~]# ps aux|grep node
root  24717  0.0  0.4 1240476 9200 ?  Ssl  13:24  0:00 /node_exporter/node_exporter --web.listen-address 0.0.0.0:9090
root  24735  0.0  0.0  112824  972 pts/0  S+  13:29  0:00 grep --color=auto node
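Note that the script binds node_exporter to port 9090 rather than its usual default of 9100; the Prometheus targets configured in the next step match this. A quick per-host sanity check that the exporter is answering, run from the jump host (just a verification idiom):

[root@jump prom]# curl -s http://192.168.182.142:9090/metrics | head -3
# expect a few lines of "# HELP" / "# TYPE" metric metadata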
6.2.2 Add the monitored servers on the Prometheus server

Add scrape configuration on the Prometheus server: register the node servers so the scraped data goes into the time-series database.

[root@jump prometheus]# ls
console_libraries  consoles  LICENSE  NOTICE  prometheus  prometheus.yml  promtool
[root@jump prometheus]# vim prometheus.yml
# add the following scrape jobs
  - job_name: "master"
    static_configs:
      - targets: ["192.168.182.142:9090"]
  - job_name: "node1"
    static_configs:
      - targets: ["192.168.182.143:9090"]
  - job_name: "node2"
    static_configs:
      - targets: ["192.168.182.144:9090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.182.140:9090"]
  - job_name: "firewalld"
    static_configs:
      - targets: ["192.168.182.177:9090"]

Restart the Prometheus service:

[root@jump prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
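Prometheus ships a promtool binary alongside the server (visible in the ls output above); it can validate prometheus.yml before a restart and catches YAML indentation slips in the scrape config. A sketch of the check:

[root@jump prometheus]# ./promtool check config prometheus.yml
Checking prometheus.yml
  SUCCESS: prometheus.yml is valid prometheus config file syntax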
6.2.3 Install Grafana for dashboards

Install it:

[root@jump prom]# yum install grafana-enterprise-9.5.1-1.x86_64.rpm

Start grafana:

[root@jump prom]# systemctl start grafana-server

Enable it at boot:

[root@jump prom]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

Check that it is running:

[root@jump prom]# ps aux|grep grafana
grafana  17968  2.8  5.5 1288920 103588 ?  Ssl  13:41  0:02 /usr/share/grafana/bin/grafana server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root     18030  0.0  0.0  112824   972 pts/0  S+  13:43  0:00 grep --color=auto grafana
[root@jump prom]# netstat -antplu|grep grafana
tcp   0  0 192.168.182.141:42942  34.120.177.193:443  ESTABLISHED  17968/grafana
tcp6  0  0 :::3000                :::*                LISTEN       17968/grafana

Log in from a browser on port 3000; the account and password are both admin. Add Prometheus as a data source, then pick a dashboard template.

7. Configure the jump host and the firewall

7.1 Configure TCP wrappers on the k8s cluster machines and the NFS server, so only the jump host may ssh in and every other machine is rejected

[root@jump ~]# cd /prom
[root@jump prom]# vim set_tcp_wrappers.sh
[root@jump prom]# cat set_tcp_wrappers.sh
#!/bin/bash

# /etc/hosts.allow: only the jump host may reach the sshd service
echo 'sshd:192.168.182.141' >> /etc/hosts.allow
# additionally allow my Windows machine
echo 'sshd:192.168.40.93' >> /etc/hosts.allow
# deny sshd to all other machines
echo 'sshd:ALL' >> /etc/hosts.deny

[root@jump prom]# ansible k8s -m script -a "/prom/set_tcp_wrappers.sh"
[root@jump prom]# ansible nfs -m script -a "/prom/set_tcp_wrappers.sh"

Test that it took effect: only the jump host may ssh in. Try hopping to master from nfs:

[root@nfs ~]# ssh root@192.168.182.142
ssh_exchange_identification: read: Connection reset by peer

Now from the jump host:

[root@jump prom]# ssh root@192.168.182.142
Last login: Sat Mar 30 13:55:24 2024 from 192.168.182.141
[root@master ~]# exit
logout
Connection to 192.168.182.142 closed.
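TCP wrappers only filters services that are linked against libwrap, so it is worth confirming that sshd on these CentOS 7 hosts qualifies (by default it does); a quick check (the load address in the output is machine-specific):

[root@master ~]# ldd /usr/sbin/sshd | grep libwrap
        libwrap.so.0 => /lib64/libwrap.so.0 (0x...)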
7.2 Build the firewall server

Disable the firewall and SELinux:

service firewalld stop
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Configure the IP addresses: one WAN interface (192.168.40.87) and one LAN interface (192.168.182.177):

[root@firewalld network-scripts]# cat ifcfg-ens33
BOOTPROTO=none
DEFROUTE=yes
NAME=ens33
UUID=e3072a9e-9e43-4855-9941-cabf05360e32
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.40.87
PREFIX=24
GATEWAY=192.168.40.166
DNS1=114.114.114.114
[root@firewalld network-scripts]# cat ifcfg-ens34
BOOTPROTO=none
DEFROUTE=yes
NAME=ens34
UUID=0d04766d-7a98-4a68-b9a9-eb7377a4df80
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.182.177
PREFIX=24

Enable routing permanently:

[root@firewalld ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@firewalld ~]# sysctl -p      # make the kernel read the config and enable routing
net.ipv4.ip_forward = 1

7.3 Write the DNAT and SNAT policies

[root@firewalld ~]# mkdir /nat
[root@firewalld ~]# cd /nat/
[root@firewalld nat]# vim set_snat_dnat.sh
[root@firewalld nat]# cat set_snat_dnat.sh
#!/bin/bash

# enable routing
echo 1 > /proc/sys/net/ipv4/ip_forward
# for persistence, also add the following to /etc/sysctl.conf:
# net.ipv4.ip_forward = 1

# clear the firewall rules
iptables=/usr/sbin/iptables

$iptables -F
$iptables -t nat -F

# set snat policy
$iptables -t nat -A POSTROUTING -s 192.168.182.0/24 -o ens33 -j MASQUERADE

# set dnat policy: publish my web services
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 30001 -j DNAT --to-destination 192.168.182.142:30001
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 31000 -j DNAT --to-destination 192.168.182.142:31000

# publish the jump host: port 2233 on the firewall forwards to port 22 on the jump host
$iptables -t nat -A PREROUTING -d 192.168.40.87 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.182.141:22

Run it:

[root@firewalld nat]# bash set_snat_dnat.sh

Check the effect of the script:

[root@firewalld nat]# iptables -L -t nat -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            192.168.40.87        tcp dpt:30001 to:192.168.182.142:30001
DNAT       tcp  --  0.0.0.0/0            192.168.40.87        tcp dpt:31000 to:192.168.182.142:31000
DNAT       tcp  --  0.0.0.0/0            192.168.40.87        tcp dpt:2233 to:192.168.182.141:22

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  192.168.182.0/24     0.0.0.0/0

Save the rules:

[root@firewalld nat]# iptables-save > /etc/sysconfig/iptables_rules

Restore the SNAT and DNAT policies at boot:

[root@firewalld nat]# vim /etc/rc.local
iptables-restore < /etc/sysconfig/iptables_rules
[root@firewalld nat]# chmod +x /etc/rc.d/rc.local

7.4 Point the gateway of every server in the k8s cluster at the firewall server's LAN IP, 192.168.182.177

Taking master as the example:

[root@master network-scripts]# cat ifcfg-ens33
BOOTPROTO=none
DEFROUTE=yes
NAME=ens33
UUID=e2cd1765-6b1c-4ff5-88e0-a2bf8bd4203e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.182.142
PREFIX=24
GATEWAY=192.168.182.177
DNS1=114.114.114.114
[root@master network-scripts]# service network restart
Restarting network (via systemctl):  [ OK ]

7.5 Test the SNAT function

[root@master ~]# ping www.baidu.com
PING www.a.shifen.com (183.2.172.185) 56(84) bytes of data.
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=1 ttl=50 time=40.0 ms
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=2 ttl=50 time=33.0 ms
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=3 ttl=50 time=34.7 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 33.048/35.938/40.021/2.972 ms

7.6 Test the DNAT function

(Screenshot of the DNAT test.)

7.7 Test publishing the jump host

Success: connecting to port 2233 on the firewall reaches the jump host.

8. Project takeaways

While configuring SNAT/DNAT I thought about switching the whole environment to host-only networking; the services promptly broke, so I switched back to NAT. During the project, suspending the VMs broke the connection between the master and the nodes, and I had to restore snapshots and redo the work. Whether deploying k8s or the firewall, be careful, careful and careful again: miss a single step and troubleshooting becomes very painful. Many images are hosted abroad and cannot be pulled; the workaround is to find a domestic substitute, or buy a Singapore server, pull there, and export the image. When deploying the Kubernetes Dashboard, login requires a token that is only valid for 15 minutes; recommended.yaml needs to be modified with an extra parameter to extend the lifetime. The master node runs no business pods (it carries a taint); when starting new pods, delete the unimportant ones first, otherwise the worker nodes fill up and the new pods cannot start.