Installing Kubernetes on CentOS 7.2 with yum

2024-04-18 00:32

This article walks through installing Kubernetes on CentOS 7.2 with yum, as a practical reference for anyone who needs to stand up a small cluster.

CentOS added Kubernetes to its official repositories on September 1, 2015, so installation is now much more convenient.

The master runs four components: kube-apiserver, kube-scheduler, kube-controller-manager, and etcd.
Each node runs three components: kube-proxy, kubelet, and flannel.

  1. kube-apiserver: runs on the master node and accepts user requests.
  2. kube-scheduler: runs on the master node and handles resource scheduling, i.e. deciding which node each pod is placed on.
  3. kube-controller-manager: runs on the master node and hosts controllers such as the ReplicationManager, EndpointsController, NamespaceController, and NodeController.
  4. etcd: a distributed key-value store holding the resource objects shared by the whole cluster.
  5. kubelet: runs on each node and maintains the pods scheduled to that host.
  6. kube-proxy: runs on each node and acts as a service proxy, forwarding Service traffic to the backing pods.

1. Preparation

Run the steps below on every server.
master: 192.168.52.130
node: 192.168.52.132

1. Disable the firewall

Disable firewalld on every machine to avoid conflicts with the iptables rules that Docker manages:

#systemctl stop firewalld
#systemctl disable firewalld
#iptables -P FORWARD ACCEPT
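
As an optional sanity check (not part of the original steps), you can confirm the firewall really is out of the picture; systemctl should report inactive and the FORWARD chain policy should read ACCEPT:

#systemctl is-active firewalld
inactive
#iptables -L FORWARD -n | head -1
Chain FORWARD (policy ACCEPT)

Note that iptables -P FORWARD ACCEPT does not persist across reboots, so you may need to reapply it after a restart.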

2. Install NTP

To keep the clocks on all servers in sync, install NTP on each of them:

#yum -y install ntp
#systemctl start ntpd
#systemctl enable ntpd
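
To make sure ntpd is actually synchronizing, you can list its peers; an asterisk in the first column marks the server it is currently synced to:

#ntpq -p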

3. Disable SELinux

#vi /etc/selinux/config

#SELINUX=enforcing
SELINUX=disabled
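
Editing /etc/selinux/config only takes effect after a reboot. To relax SELinux immediately for the current boot as well, you can run:

#setenforce 0
#getenforce
Permissive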

2. Deploying the master

1. Install etcd and kubernetes-master (this pulls in Docker automatically)

[root@localhost etc]# yum -y install etcd kubernetes-master

2. Edit etcd.conf

[root@localhost etc]# vi /etc/etcd/etcd.conf
ETCD_NAME=node1
# Data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# Address to listen on for traffic from other etcd members
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# Address to listen on for client traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
# Peer URL advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.52.130:2380"
# if you use a different ETCD_NAME (e.g. test), set the ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
# Initial list of cluster members
ETCD_INITIAL_CLUSTER="node1=http://192.168.52.130:2380,node2=http://192.168.52.132:2380"
# Initial cluster state; "new" means bootstrapping a brand-new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# Initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# Client URLs advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.52.130:2379,http://192.168.52.130:4001"

3. Edit the kube-master configuration files

[root@localhost kubernetes]# vi /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1-65535"
# Default admission control policies. ServiceAccount has been removed from the
# default list here, which avoids the "No resources found." problem with kubectl get pods.
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
[root@localhost /]# vi /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
#KUBE_CONTROLLER_MANAGER_ARGS=""
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
[root@localhost /]# vi /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.52.130:8080"

If port 8080 is already occupied, or you simply prefer a different port, you can change it here (remember to update every other place that references it).

4. Start the services

Enable etcd, kube-apiserver, kube-scheduler, and kube-controller-manager to start at boot:

[root@localhost /]# systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager

Then start them:

[root@localhost /]# systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
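
An optional sanity check that everything came up cleanly; the curl call assumes the default insecure port 8080 configured above:

[root@localhost /]# for s in etcd kube-apiserver kube-scheduler kube-controller-manager; do echo -n "$s: "; systemctl is-active $s; done
[root@localhost /]# etcdctl cluster-health
[root@localhost /]# curl http://127.0.0.1:8080/version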

5. Configure the overlay network in etcd

Define the network configuration in etcd; the flannel service on each node pulls this key at startup:

[root@localhost /]# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
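
Reading the key back confirms that it was written:

[root@localhost /]# etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}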

3. Deploying the minions (nodes)

1. Install kubernetes-node and flannel (this pulls in Docker automatically)

[root@localhost ~]# yum -y install kubernetes-node flannel

2. Edit the node configuration files

[root@localhost ~]# vi /etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://192.168.52.130:8080"

In the kubelet config, set the hostname override to the node's own IP address or hostname (a bare address, without an http:// prefix):

[root@localhost ~]# vi /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
#KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.52.132"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.52.130:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
#KUBELET_ARGS=""
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"

3. Edit the flannel configuration

Point flannel at the master's etcd by editing /etc/sysconfig/flanneld:

[root@localhost ~]# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
#FLANNEL_ETCD="http://127.0.0.1:2379"
FLANNEL_ETCD="http://192.168.52.130:2379"
# etcd config key.  This is the configuration key that flannel queries
# for address range assignment
#FLANNEL_ETCD_KEY="/atomic.io/network"
FLANNEL_ETCD_KEY="/coreos.com/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS=" -iface=ens33"

Here ens33 is the network interface name (you can look it up with ifconfig; on CentOS 7, if you have not renamed the interface, it may be something like enoXXXXX).
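
If you are not sure of the interface name, listing the IPv4 addresses shows which interface carries the node's IP:

[root@localhost ~]# ip -o -4 addr show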

4. Start the services

[root@localhost ~]# systemctl restart flanneld docker
[root@localhost ~]# systemctl start kubelet kube-proxy
[root@localhost ~]# systemctl enable flanneld kubelet kube-proxy

Run ifconfig on each minion (node) and you should now see two additional interfaces, docker0 and flannel0; their subnets differ from node to node.
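
Flannel also records the subnet it leased for the node in an environment file (the path below is the flanneld default on CentOS 7); checking it is a quick alternative to ifconfig:

[root@localhost ~]# cat /run/flannel/subnet.env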

4. Verification

1. Create the first pod

On the master, create an Nginx deployment:

[root@localhost ~]#  kubectl create deployment nginx --image=nginx
[root@localhost ~]#  kubectl describe deployment nginx
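
The image pull can take a minute, so it may be worth watching the pod until it reaches Running; -o wide also shows the node it was scheduled to and its pod IP:

[root@localhost ~]# kubectl get pods -o wide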

Expose it through a NodePort service:

[root@localhost ~]# kubectl create service nodeport nginx --tcp=80:80
[root@localhost ~]# kubectl describe service nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.
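
Optionally, you can also curl the service's cluster IP; cluster IPs are only routable through kube-proxy on machines that run it (i.e. the nodes), so this will not work from an outside host:

[root@localhost ~]# curl http://10.254.4.244/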

2. On the node, check that the Nginx pod's container IP matches the Endpoints address

[root@localhost ~]# docker inspect 423a3b8b26b2
[{"Id": "423a3b8b26b2f511ceed97cdc5c5c14e0c4ce69dae5f5818406f0013566da67b""Created": "2019-02-26T01:02:22.4188594Z","Path": "/pause","Args": [],"State": {"Status": "running","Running": true,"Paused": false,"Restarting": false,"OOMKilled": false,"Dead": false,"Pid": 25352,"ExitCode": 0,"Error": "","StartedAt": "2019-02-26T01:02:24.196708758Z","FinishedAt": "0001-01-01T00:00:00Z"},"Image": "sha256:f9d5de0795395db6c50cb1ac82ebed1bd8eb3eefcebb1aa724e0123"ResolvConfPath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5013566da67b/resolv.conf","HostnamePath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5c53566da67b/hostname","HostsPath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5c5c146da67b/hosts","LogPath": "","Name": "/k8s_POD.c73fd98d_nginx-3121059884-4k8vd_default_baed9fe9-38b9-406c","RestartCount": 0,"Driver": "overlay2","MountLabel": "","ProcessLabel": "","AppArmorProfile": "","ExecIDs": null,"HostConfig": {"Binds": null,"ContainerIDFile": "","LogConfig": {"Type": "journald","Config": {}},"NetworkMode": "default","PortBindings": {},"RestartPolicy": {"Name": "","MaximumRetryCount": 0},"AutoRemove": false,"VolumeDriver": "","VolumesFrom": null,"CapAdd": null,"CapDrop": null,"Dns": ["192.168.52.2"],"DnsOptions": null,"DnsSearch": ["localdomain"],"ExtraHosts": null,"GroupAdd": null,"IpcMode": "","Cgroup": "","Links": null,"OomScoreAdj": -998,"PidMode": "","Privileged": false,"PublishAllPorts": false,"ReadonlyRootfs": false,"SecurityOpt": ["seccomp=unconfined"],"UTSMode": "","UsernsMode": "","ShmSize": 67108864,"Runtime": "docker-runc","ConsoleSize": [0,0],"Isolation": "","CpuShares": 2,"Memory": 0,"NanoCpus": 0,"CgroupParent": "","BlkioWeight": 0,"BlkioWeightDevice": null,"BlkioDeviceReadBps": null,"BlkioDeviceWriteBps": null,"BlkioDeviceReadIOps": null,"BlkioDeviceWriteIOps": null,"CpuPeriod": 0,"CpuQuota": 0,"CpuRealtimePeriod": 0,"CpuRealtimeRuntime": 0,"CpusetCpus": "","CpusetMems": "","Devices": [],"DiskQuota": 0,"KernelMemory": 0,"MemoryReservation": 0,"MemorySwap": -1,"MemorySwappiness": -1,"OomKillDisable": false,"PidsLimit": 0,"Ulimits": null,"CpuCount": 0,"CpuPercent": 0,"IOMaximumIOps": 0,"IOMaximumBandwidth": 0},"GraphDriver": {"Name": "overlay2","Data": {"LowerDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2729e3e6f43b-init/diff:/var/lib/docker/overlay2/8b2860fbde3dec06a9b19e127c49cc9ac62c5/diff:/var/lib/docker/overlay2/5558c6c8eb694182c22e68a223223ff03cd64c70c6612ar/lib/docker/overlay2/602d9c3d734dba42cceddf7e88775efed7e477b95894775a7772149cc"MergedDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdead729e3e6f43b/merged","UpperDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2729e3e6f43b/diff","WorkDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2b29e3e6f43b/work"}},"Mounts": [],"Config": {"Hostname": "nginx-3121059884-4k8vd","Domainname": "","User": "","AttachStdin": false,"AttachStdout": false,"AttachStderr": false,"Tty": false,"OpenStdin": false,"StdinOnce": false,"Env": 
["KUBERNETES_SERVICE_PORT=443","NGINX_SERVICE_HOST=10.254.4.244","NGINX_PORT=tcp://10.254.4.244:80","NGINX_PORT_80_TCP_PORT=80","NGINX_PORT_80_TCP_ADDR=10.254.4.244","KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1","NGINX_PORT_80_TCP=tcp://10.254.4.244:80","KUBERNETES_SERVICE_HOST=10.254.0.1","KUBERNETES_SERVICE_PORT_HTTPS=443","NGINX_SERVICE_PORT_80_80=80","NGINX_PORT_80_TCP_PROTO=tcp","KUBERNETES_PORT=tcp://10.254.0.1:443","KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443","KUBERNETES_PORT_443_TCP_PROTO=tcp","KUBERNETES_PORT_443_TCP_PORT=443","NGINX_SERVICE_PORT=80","HOME=/","PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/b],"Cmd": null,"Image": "kubernetes/pause","Volumes": null,"WorkingDir": "","Entrypoint": ["/pause"],"OnBuild": null,"Labels": {"io.kubernetes.container.hash": "c73fd98d","io.kubernetes.container.name": "POD","io.kubernetes.container.restartCount": "0","io.kubernetes.container.terminationMessagePath": "","io.kubernetes.pod.name": "nginx-3121059884-4k8vd","io.kubernetes.pod.namespace": "default","io.kubernetes.pod.terminationGracePeriod": "30","io.kubernetes.pod.uid": "baed9fe9-38b9-11e9-bd0c-000c291410a9"}},"NetworkSettings": {"Bridge": "","SandboxID": "0fe4522e3b99b460d78ed3fea3beddef34044ce8e339f134998ca4"HairpinMode": false,"LinkLocalIPv6Address": "","LinkLocalIPv6PrefixLen": 0,"Ports": {},"SandboxKey": "/var/run/docker/netns/0fe4522e3b99","SecondaryIPAddresses": null,"SecondaryIPv6Addresses": null,"EndpointID": "e31d625288e553a87e5e7bae1da03badbfeddf83305dc3d43196e"Gateway": "172.17.48.1","GlobalIPv6Address": "","GlobalIPv6PrefixLen": 0,"IPAddress": "172.17.48.2","IPPrefixLen": 24,"IPv6Gateway": "","MacAddress": "02:42:ac:11:30:02","Networks": {"bridge": {"IPAMConfig": null,"Links": null,"Aliases": null,"NetworkID": "901845a1f83d292f893a37bfe735ab0ca022ed0be45817"EndpointID": "e31d625288e553a87e5e7bae1da03badbfeddf83305dc"Gateway": "172.17.48.1","IPAddress": "172.17.48.2","IPPrefixLen": 24,"IPv6Gateway": "","GlobalIPv6Address": "","GlobalIPv6PrefixLen": 0,"MacAddress": "02:42:ac:11:30:02"}}}}
]
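
Rather than scanning the full JSON, docker inspect accepts a Go-template -f flag that extracts a single field; this should print the same address as the Endpoints line:

[root@localhost ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' 423a3b8b26b2
172.17.48.2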

3. On the master, inspect the cluster state

[root@localhost ~]# kubectl get nodes
NAME              STATUS    AGE
192.168.52.132   Ready     20m
[root@localhost ~]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-3121059884-4k8vd   1/1       Running   12         21h
[root@localhost /]# kubectl describe pods nginx-3121059884-4k8vd
Name:		nginx-3121059884-4k8vd
Namespace:	default
Node:		192.168.52.132/192.168.52.132
Start Time:	Mon, 25 Feb 2019 12:56:48 +0800
Labels:		app=nginx
		pod-template-hash=3121059884
Status:		Running
IP:		172.17.48.2
Controllers:	ReplicaSet/nginx-3121059884
Containers:
nginx:
    Container ID:	docker://b1f59f8025255f03c5f7f1a9c5c7847fc9e178d5d4bf5c51b6855db328894a70
    Image:		nginx
    Image ID:		docker-pullable://docker.io/nginx@sha256:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
    Port:
    State:		Running
      Started:		Tue, 26 Feb 2019 09:02:30 +0800
    Last State:		Terminated
      Reason:		Completed
      Exit Code:	0
      Started:		Mon, 25 Feb 2019 17:03:29 +0800
      Finished:		Tue, 26 Feb 2019 09:02:08 +0800
    Ready:		True
    Restart Count:	12
    Volume Mounts:	<none>
    Environment Variables:	<none>
Conditions:
Type		Status
Initialized 	True 
Ready 	True 
PodScheduled 	True 
No volumes.
QoS Class:	BestEffort
Tolerations:	<none>
No events.
[root@localhost ~]# kubectl get svc
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1     <none>        443/TCP        4d
nginx        10.254.4.244   <nodes>       80:30862/TCP   4d
[root@localhost ~]# kubectl describe svc nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.

4. Test on the node

[root@localhost ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b1f59f802525        nginx               "nginx -g 'daemon ..."   About an hour ago   Up About an hour                        k8s_nginx.9179dbd3_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_e251e27f
423a3b8b26b2        kubernetes/pause    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.c73fd98d_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_bcb3406c

Check the ports kube-proxy is listening on:
[root@localhost ~]# netstat -lnpt|grep kube-proxy

tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      25180/kube-proxy    
tcp6       0      0 :::30862                :::*                    LISTEN      25180/kube-proxy    
[root@localhost ~]# curl http://192.168.52.132:30862/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>body {width: 35em;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also open the page in a browser:

http://192.168.52.132:30862/

With that, etcd + flannel + Kubernetes is up and running on CentOS 7.

That wraps up this walkthrough of installing Kubernetes on CentOS 7.2 with yum; hopefully it proves useful.


