Using calico ipam

2023-11-01 16:36


In an earlier article on how a pod obtains its IP address, I mentioned that Calico manages IP addresses with its own IPAM module, calico-ipam. This article walks through how to use it.

1. Environment information

  • Version information
This environment runs Kubernetes v1.25.3.
[root@node1 ~]# kubectl get node 
NAME    STATUS   ROLES                  AGE    VERSION
node1   Ready    control-plane,worker   206d   v1.25.3
node2   Ready    worker                 206d   v1.25.3
node3   Ready    worker                 206d   v1.25.3
### The Calico CNI is already deployed in the cluster
[root@node1 ~]# kubectl get po -n kube-system   | grep calico 
calico-kube-controllers-75c594996d-x49mw   1/1     Running   5 (12d ago)    206d
calico-node-htq5b                          1/1     Running   1 (12d ago)    206d
calico-node-x6xwl                          1/1     Running   1 (12d ago)    206d
calico-node-xdx46                          1/1     Running   1 (12d ago)    206d
[root@node1 ~]# #### View Calico's default CNI configuration
[root@node1 ~]# cat /etc/cni/net.d/10-calico.conflist 
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",          ### plugin type
      "log_level": "info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "datastore_type": "kubernetes",
      "nodename": "node1",
      "mtu": 0,
      "ipam": {
        "type": "calico-ipam"    #### the ipam type is calico-ipam
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "bandwidth",
      "capabilities": {
        "bandwidth": true
      }
    }
  ]
}
[root@node1 ~]# 
  • Network mode
### View the IP pool currently in use
[root@node1 ~]# calicoctl  get ippool -o wide 
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       false      false              all()
### View the pool's detailed configuration
[root@node1 ~]# calicoctl  get ippool default-ipv4-ippool -o yaml 
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  creationTimestamp: "2023-04-08T17:18:59Z"
  name: default-ipv4-ippool
  resourceVersion: "647"
  uid: 7b9d84e1-ac69-4660-b298-c52e2267ab08
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 24        ### each block is a /24
  cidr: 10.233.64.0/18
  ipipMode: Always     ### the network mode is IPIP
  natOutgoing: true
  nodeSelector: all()  ### usable by all nodes
  vxlanMode: Never
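Because blockSize is 24, calico-ipam splits the /18 pool into /24 blocks and gives each node affinity to one or more blocks as pods are scheduled there. A minimal way to inspect that allocation (a sketch; the exact output columns vary by calicoctl version) is:

# Summary of configured pools and how many of their IPs are in use
calicoctl ipam show

# Per-block view: which blocks exist and how many addresses each has handed out
calicoctl ipam show --show-blocks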

2. Physical topology of the environment

Each physical machine sits in a different rack, and each node is assigned its own ippool.

               -------------------
               |     router      |
               -------------------
---------------   ---------------   ---------------
| rack-1      |   | rack-2      |   | rack-3      |
---------------   ---------------   ---------------
| node-1      |   | node-2      |   | node-3      |
- - - - - - - -   - - - - - - - -   - - - - - - - -

3. Assign networks to the nodes

  • Label the nodes
[root@node1 ~]# kubectl label node node1 rack=1
node/node1 labeled
[root@node1 ~]# kubectl label node node2 rack=2
node/node2 labeled
[root@node1 ~]# kubectl label node node3 rack=3
node/node3 labeled
[root@node1 ~]# 
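Before creating the per-rack pools, it is worth confirming the labels landed where expected. A minimal check (a sketch using standard kubectl options):

# Show the rack label as an extra column for every node
kubectl get nodes -L rack

# Or list only the nodes that carry a rack label
kubectl get nodes -l rack --show-labels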
  • Create an ippool for each node
1: First, disable the default ippool. Pods in my environment still use it, so I disable it rather than delete it.
[root@node1 ~]# calicoctl  get ippool -o wide 
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       false      false              all()
########################
2: Use the patch command to set disabled=true.
[root@node1 ~]# calicoctl patch ipPool default-ipv4-ippool --patch '{"spec":{"disabled": true}}'
Successfully patched 1 'IPPool' resource
[root@node1 ~]# calicoctl  get ippool -o wide
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       true       false              all()      
[root@node1 ~]# ########################
3: Create an ippool for each of the three nodes. Make sure the CIDRs do not conflict with any other networks (an equivalent single-manifest form is sketched after step 4's listing).
[root@node1 ~]# calicoctl create -f -<<EOF
> apiVersion: projectcalico.org/v3
> kind: IPPool
> metadata:
>   name: rack-1-ippool
> spec:
>   cidr: 172.16.1.0/24
>   ipipMode: Always
>   natOutgoing: true
>   nodeSelector: rack == "1"     ##### this selector matches the label applied to the node earlier
> EOF
Successfully created 1 'IPPool' resource(s)
[root@node1 ~]# calicoctl create -f -<<EOF
> apiVersion: projectcalico.org/v3
> kind: IPPool
> metadata:
>   name: rack-2-ippool
> spec:
>   cidr: 172.16.2.0/24
>   ipipMode: Always
>   natOutgoing: true
>   nodeSelector: rack == "2"
> EOF
Successfully created 1 'IPPool' resource(s)
[root@node1 ~]# calicoctl create -f -<<EOF
> apiVersion: projectcalico.org/v3
> kind: IPPool
> metadata:
>   name: rack-3-ippool
> spec:
>   cidr: 172.16.3.0/24
>   ipipMode: Always
>   natOutgoing: true
>   nodeSelector: rack == "3"
> EOF
Successfully created 1 'IPPool' resource(s)
[root@node1 ~]# ###########
4: View the newly created ippools.
[root@node1 ~]# calicoctl  get ippool -o wide 
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR      
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       true       false              all()         
rack-1-ippool         172.16.1.0/24    true   Always     Never       false      false              rack == "1"   
rack-2-ippool         172.16.2.0/24    true   Always     Never       false      false              rack == "2"   
rack-3-ippool         172.16.3.0/24    true   Always     Never       false      false              rack == "3" 
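The three pools above can also be kept in one declarative manifest instead of three interactive heredocs. A sketch of an equivalent one-shot apply, assuming the same CIDRs and selectors as above:

calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-1-ippool
spec:
  cidr: 172.16.1.0/24
  ipipMode: Always
  natOutgoing: true
  nodeSelector: rack == "1"
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-2-ippool
spec:
  cidr: 172.16.2.0/24
  ipipMode: Always
  natOutgoing: true
  nodeSelector: rack == "2"
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack-3-ippool
spec:
  cidr: 172.16.3.0/24
  ipipMode: Always
  natOutgoing: true
  nodeSelector: rack == "3"
EOF

Using apply rather than create keeps the operation idempotent if the pools already exist.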

4. Verify the network

1: Write the YAML manifest.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
#############################
2: Start the pods and check which IPs they receive.
[root@node1 ~]# kubectl apply -f yaml/nginx.yaml 
deployment.apps/nginx created
[root@node1 ~]# kubectl get po -o wide 
NAME                                      READY   STATUS    RESTARTS       AGE    IP             NODE    NOMINATED NODE   READINESS GATES
nginx-5977dc5756-22kfl                    1/1     Running   0              7s     172.16.1.131   node1   <none>           <none>
nginx-5977dc5756-4lvpq                    1/1     Running   0              7s     172.16.3.129   node3   <none>           <none>
nginx-5977dc5756-59jkh                    1/1     Running   0              7s     172.16.1.129   node1   <none>           <none>
nginx-5977dc5756-9lm7p                    1/1     Running   0              7s     172.16.3.132   node3   <none>           <none>
nginx-5977dc5756-jdcqf                    1/1     Running   0              7s     172.16.1.130   node1   <none>           <none>
nginx-5977dc5756-jvwkf                    1/1     Running   0              7s     172.16.2.1     node2   <none>           <none>
nginx-5977dc5756-nq46g                    1/1     Running   0              7s     172.16.2.3     node2   <none>           <none>
nginx-5977dc5756-tsjf7                    1/1     Running   0              7s     172.16.3.131   node3   <none>           <none>
nginx-5977dc5756-xqmwz                    1/1     Running   0              7s     172.16.2.2     node2   <none>           <none>
nginx-5977dc5756-xt648                    1/1     Running   0              7s     172.16.3.130   node3   <none>           <none>
[root@node1 ~]# 
As shown above, each pod gets an IP from the ippool that matches its node.
##############################
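To confirm which block and pool a single address came from, calico-ipam can be queried for one IP. A sketch using one of the addresses above (the level of detail in the output depends on the Calico version):

# Report whether this address is allocated and show its allocation attributes
calicoctl ipam show --ip=172.16.1.131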
3: Test network connectivity. From node1, ping a pod on the local node and the pod IPs on the other two nodes.
[root@node1 ~]# ping 172.16.1.131
PING 172.16.1.131 (172.16.1.131) 56(84) bytes of data.
64 bytes from 172.16.1.131: icmp_seq=1 ttl=64 time=0.306 ms
^C
--- 172.16.1.131 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
[root@node1 ~]# ping 172.16.2.1
PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.
64 bytes from 172.16.2.1: icmp_seq=1 ttl=63 time=1.25 ms
64 bytes from 172.16.2.1: icmp_seq=2 ttl=63 time=0.906 ms
^C
--- 172.16.2.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.906/1.080/1.255/0.177 ms
[root@node1 ~]# ping 172.16.3.131
PING 172.16.3.131 (172.16.3.131) 56(84) bytes of data.
64 bytes from 172.16.3.131: icmp_seq=1 ttl=63 time=2.26 ms
64 bytes from 172.16.3.131: icmp_seq=2 ttl=63 time=1.52 ms
^C
--- 172.16.3.131 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 1.528/1.898/2.269/0.373 ms
[root@node1 ~]# ######################################
Note: Calico IPAM does not reassign IP addresses to pods that are already running. To pick up addresses from a newly configured IP pool, running pods have to be recreated, as sketched below.
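For pods managed by a Deployment, one way to recreate them (a minimal sketch; section 5 below uses the same rollout approach) is:

# Roll the deployment so new pods are created and calico-ipam assigns fresh addresses
kubectl rollout restart deployment/nginx

# Or delete the pods by label and let the deployment controller replace them
kubectl delete pod -l app=nginx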

5. Migrate IPs to a new ippool

1: Create pods that use the old IPs, for later testing.
[root@node1 ~]# kubectl apply -f yaml/nginx.yaml 
deployment.apps/nginx created
[root@node1 ~]# kubectl get po -o wide 
NAME                                      READY   STATUS    RESTARTS       AGE    IP             NODE    NOMINATED NODE   READINESS GATES
nginx-5977dc5756-8jm8g                    1/1     Running   0              48s    10.233.90.36   node1   <none>           <none>
nginx-5977dc5756-8vz6r                    1/1     Running   0              48s    10.233.96.68   node2   <none>           <none>
nginx-5977dc5756-c6ltc                    1/1     Running   0              48s    10.233.92.61   node3   <none>           <none>
nginx-5977dc5756-gmr27                    1/1     Running   0              48s    10.233.96.69   node2   <none>           <none>
nginx-5977dc5756-h7tz5                    1/1     Running   0              48s    10.233.92.60   node3   <none>           <none>
nginx-5977dc5756-k7jpx                    1/1     Running   0              48s    10.233.92.59   node3   <none>           <none>
nginx-5977dc5756-kzfpm                    1/1     Running   0              48s    10.233.92.62   node3   <none>           <none>
nginx-5977dc5756-nnzxt                    1/1     Running   0              48s    10.233.90.34   node1   <none>           <none>
nginx-5977dc5756-ppcxz                    1/1     Running   0              48s    10.233.90.35   node1   <none>           <none>
nginx-5977dc5756-rk9nk                    1/1     Running   0              48s    10.233.96.70   node2   <none>           <none>
###########################
2: Create the new ippool.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-ipv4-ippool
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 24
  cidr: 172.16.0.0/16
  ipipMode: Always
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never
[root@node1 ~]# calicoctl apply -f  ippool.yaml 
Successfully applied 1 'IPPool' resource(s)
[root@node1 ~]# calicoctl  get ippool -o wide 
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       false      false              all()      
new-ipv4-ippool       172.16.0.0/16    true   Always     Never       false      false              all()
###############################
3: Disable the old ippool. This does not affect the networking of existing pods.
[root@node1 ~]# calicoctl patch ipPool default-ipv4-ippool --patch '{"spec":{"disabled":  true}}'
Successfully patched 1 'IPPool' resource
Check that DISABLED is now true for the default ippool:
[root@node1 ~]# calicoctl  get ippool -o wide 
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv4-ippool   10.233.64.0/18   true   Always     Never       true       false              all()      
new-ipv4-ippool       172.16.0.0/16    true   Always     Never       false      false              all()
[root@node1 ~]# ###################################
4: Restart the pods created earlier and check which IPs they receive.
[root@node1 ~]# kubectl rollout restart  deploy nginx
deployment.apps/nginx restarted
[root@node1 ~]# kubectl get po -owide 
NAME                                      READY   STATUS    RESTARTS       AGE    IP             NODE    NOMINATED NODE   READINESS GATES
nginx-8499ccc976-6q5d8                    1/1     Running   0              8s     172.16.154.1   node1   <none>           <none>
nginx-8499ccc976-8mw42                    1/1     Running   0              8s     172.16.44.1    node2   <none>           <none>
nginx-8499ccc976-9x84p                    1/1     Running   0              8s     172.16.28.2    node3   <none>           <none>
nginx-8499ccc976-f8n28                    1/1     Running   0              8s     172.16.44.2    node2   <none>           <none>
nginx-8499ccc976-fxfft                    1/1     Running   0              6s     172.16.28.3    node3   <none>           <none>
nginx-8499ccc976-jj8hg                    1/1     Running   0              6s     172.16.44.3    node2   <none>           <none>
nginx-8499ccc976-kjf75                    1/1     Running   0              8s     172.16.28.1    node3   <none>           <none>
nginx-8499ccc976-rms74                    1/1     Running   0              6s     172.16.154.2   node1   <none>           <none>
nginx-8499ccc976-trcn8                    1/1     Running   0              5s     172.16.28.4    node3   <none>           <none>
nginx-8499ccc976-z28fw                    1/1     Running   0              5s     172.16.154.3   node1   <none>           <none>
After the restart, the pods receive IPs from the new ippool.
################################
5: Test network connectivity.
[root@node1 ~]# ping 172.16.44.1
PING 172.16.44.1 (172.16.44.1) 56(84) bytes of data.
64 bytes from 172.16.44.1: icmp_seq=1 ttl=63 time=1.32 ms
^C
--- 172.16.44.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.327/1.327/1.327/0.000 ms
[root@node1 ~]# ping 172.16.28.2
PING 172.16.28.2 (172.16.28.2) 56(84) bytes of data.
64 bytes from 172.16.28.2: icmp_seq=1 ttl=63 time=2.66 ms
64 bytes from 172.16.28.2: icmp_seq=2 ttl=63 time=1.07 ms
^C
--- 172.16.28.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.076/1.868/2.660/0.792 ms
[root@node1 ~]# ping 172.16.154.2
PING 172.16.154.2 (172.16.154.2) 56(84) bytes of data.
64 bytes from 172.16.154.2: icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from 172.16.154.2: icmp_seq=2 ttl=64 time=0.125 ms
^C
--- 172.16.154.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.125/0.194/0.263/0.069 ms
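As an optional cleanup step that is not performed in this walkthrough: once no addresses from the old pool are in use anymore, the disabled default pool could be removed entirely. A hedged sketch:

# Confirm the old pool no longer has IPs in use before deleting it
calicoctl ipam show

# Delete the disabled default pool (only after double-checking; this cannot be undone)
calicoctl delete ippool default-ipv4-ippool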



