This article introduces deploying an OceanBase cluster with OBD using a configuration file.
The previous article covered deploying an OceanBase cluster through OBD's web GUI; under the hood, the GUI turns the visual settings into a configuration file and then deploys the cluster with OBD commands.
This article uses the command line plus a configuration file to deploy only two components: OceanBase and OBProxy.
For server parameter tuning and downloading the oceanbase-all-in-one-*.tar.gz package, see the previous article.
All three servers (192.168.113.161, 162, 163) need the following settings added to /etc/sysctl.conf:
vm.max_map_count=655360
fs.file-max=6573688
Apply the settings: sysctl -p
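To verify the settings took effect, read them back (sysctl accepts multiple keys on one command line):
sysctl vm.max_map_count fs.file-max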
1. Install the all-in-one package
Install the all-in-one package on 192.168.113.161; this also installs the OBD tool automatically:
tar -xzf oceanbase-all-in-one-*.tar.gz
cd oceanbase-all-in-one/bin/
./install.sh
source ~/.oceanbase-all-in-one/bin/env.sh
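With env.sh sourced, the obd command should be on your PATH. A quick sanity check (obd cluster list is a standard OBD subcommand; at this point it should report no deployments yet):
which obd
obd cluster list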
After extracting the tarball, sample configuration files live under the oceanbase-all-in-one/obd/usr/obd/example/ directory. Since I only want to deploy OceanBase and OBProxy, I used distributed-with-obproxy-example.yaml and copied it to the root directory: cp distributed-with-obproxy-example.yaml /root/zycluster-deploy.yaml
# Extracted under the /mnt/software/oceanbase-all-in-one directory
[root@db1 ~]# cd /mnt/software/oceanbase-all-in-one/obd/usr/obd/example/
[root@db1 example]# ll
total 112
-rw-r--r-- 1 root root 15398 Dec 6 08:34 all-components-min.yaml
-rw-r--r-- 1 root root 15601 Dec 6 08:34 all-components.yaml
drwxrwxrwx 2 root root 262 Dec 6 08:34 autodeploy
-rw-r--r-- 1 root root 7193 Dec 6 08:34 default-components-min.yaml
-rw-r--r-- 1 root root 7396 Dec 6 08:34 default-components.yaml
-rw-r--r-- 1 root root 4240 Dec 6 08:34 distributed-example.yaml
-rw-r--r-- 1 root root 5765 Feb 18 08:48 distributed-with-obproxy-example.yaml
drwxrwxrwx 2 root root 129 Dec 6 08:34 grafana
-rw-r--r-- 1 root root 2289 Dec 6 08:34 local-example.yaml
-rw-r--r-- 1 root root 4226 Dec 6 08:34 mini-distributed-example.yaml
-rw-r--r-- 1 root root 5736 Dec 6 08:34 mini-distributed-with-obproxy-example.yaml
-rwxr-xr-x 1 root root 2453 Dec 6 08:34 mini-local-example.yaml
-rwxr-xr-x 1 root root 2721 Dec 6 08:34 mini-single-example.yaml
-rw-r--r-- 1 root root 4197 Dec 6 08:34 mini-single-with-obproxy-example.yaml
drwxrwxrwx 2 root root 135 Dec 6 08:34 obagent
drwxrwxrwx 2 root root 109 Dec 6 08:34 ob-configserver
drwxrwxrwx 2 root root 84 Dec 6 08:34 obproxy
drwxrwxrwx 2 root root 4096 Dec 6 08:34 oceanbase-3.x
drwxrwxrwx 2 root root 35 Dec 6 08:34 ocp-express
drwxrwxrwx 2 root root 102 Dec 6 08:34 prometheus
-rw-r--r-- 1 root root 2557 Dec 6 08:34 single-example.yaml
-rw-r--r-- 1 root root 4068 Dec 6 08:34 single-with-obproxy-example.yaml
- Mini development mode, for low-spec machines:
- Local single-node deployment sample: mini-local-example.yaml
- Single-node deployment sample: mini-single-example.yaml
- Single-node + ODP sample: mini-single-with-obproxy-example.yaml
- Distributed + ODP sample: mini-distributed-with-obproxy-example.yaml
- Distributed + ODP + OCP Express sample: default-components-min.yaml
- Distributed with all components: all-components-min.yaml
- Professional development mode, for high-spec ECS instances or physical servers (at least 16 cores and 64 GB of available resources):
- Local single-node deployment sample: local-example.yaml
- Single-node deployment sample: single-example.yaml
- Single-node + ODP sample: single-with-obproxy-example.yaml
- Distributed + ODP sample: distributed-with-obproxy-example.yaml
- Distributed + ODP + OCP Express sample: default-components.yaml
- Distributed with all components: all-components.yaml
I adjusted zycluster-deploy.yaml to match my servers; the contents are as follows:
## Only need to configure when remote login is required
user:
  username: root
  password: 123456
  key_file: /root/.ssh/id_rsa
  # port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: server1
      # Please don't use hostname, only IP can be supported
      ip: 192.168.113.161
    - name: server2
      ip: 192.168.113.162
    - name: server3
      ip: 192.168.113.163
  global:
    # Starting from observer version 4.2, the network selection for the observer is based on the 'local_ip' parameter, and the 'devname' parameter is no longer mandatory.
    # If the 'local_ip' parameter is set, the observer will first use this parameter for the configuration, regardless of the 'devname' parameter.
    # If only the 'devname' parameter is set, the observer will use the 'devname' parameter for the configuration.
    # If neither the 'devname' nor the 'local_ip' parameters are set, the 'local_ip' parameter will be automatically assigned the IP address configured above.
    # devname: eth0
    # if current hardware's memory capacity is smaller than 50G, please use the setting of "mini-single-example.yaml" and do a small adjustment.
    memory_limit: 10G # The maximum running memory for an observer
    # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    system_memory: 3G
    datafile_size: 12G # Size of the data file.
    datafile_next: 2G
    datafile_maxsize: 20G
    log_disk_size: 12G # The size of disk space used by the clog files.
    cpu_count: 8
    mysql_port: 2881
    rpc_port: 2882
    production_mode: false
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
    max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: zycluster
    root_password: /aVi*H8(0%FS_YwZ-|dmo&[hjlT7pe@E # root user password, can be empty
    proxyro_password: /aVi*H8(0%FS_YwZ-|dmo&[hjlT7pe@E # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  # In this example, support multiple ob process in single node, so different process use different ports.
  # If deploy ob cluster in multiple nodes, the port and path setting can be same.
  server1:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/zycluster
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone1
  server2:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/zycluster
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone2
  server3:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /root/zycluster
    # The directory for data storage. The default value is $home_path/store.
    # data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    # redo_dir: /redo
    zone: zone3
obproxy-ce:
  # Set dependent components for the component.
  # When the associated configurations are not done, OBD will automatically get these configurations from the dependent components.
  depends:
    - oceanbase-ce
  servers:
    - 192.168.113.161
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /root/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    rs_list: 192.168.113.161:2881;192.168.113.162:2881;192.168.113.163:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    cluster_name: zycluster
    skip_proxy_sys_private_check: true
    enable_strict_kernel_release: false
    obproxy_sys_password: /aVi*H8(0%FS_YwZ-|dmo&[hjlT7pe@E # obproxy sys user password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
    observer_sys_password: /aVi*H8(0%FS_YwZ-|dmo&[hjlT7pe@E # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends.
2. Deploy the cluster
Deploy: obd cluster deploy zycluster -c zycluster-deploy.yaml
Destroy: obd cluster destroy zycluster, then delete the directory: rm -rf xxxx
After a successful deploy, a zycluster directory is created under /root. Data and logs are stored under home_path by default (i.e., under zycluster/store). In a real deployment, put data_dir and redo_dir on separate dedicated disks to improve I/O performance and availability.
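Before starting, you can confirm OBD registered the deployment (a standard OBD subcommand; the status column should show the cluster as deployed but not yet running, and the exact output varies by OBD version):
obd cluster list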
3. Start the cluster
Start: obd cluster start zycluster
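Once the start succeeds, OBD can print the cluster summary and connection info (a standard OBD subcommand; it lists each observer's IP, port, and status):
obd cluster display zycluster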
4. Use Navicat to connect to the cluster's sys tenant
use oceanbase
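As an alternative to Navicat, the sys tenant is also reachable from the command line through OBProxy; the password is the root_password set in the YAML above:
obclient -h192.168.113.161 -P2883 -uroot@sys -p'/aVi*H8(0%FS_YwZ-|dmo&[hjlT7pe@E' -Doceanbase -A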
View resources and create the mq_t1 tenant:
1.1. View all resource unit specs
SELECT * FROM DBA_OB_UNIT_CONFIGS;
1.2. Drop a resource unit spec
drop resource unit S1_unit_config;
1.3. Create a resource unit spec (size it according to your servers' actual capacity and business needs)
CREATE RESOURCE UNIT S1_unit_config MEMORY_SIZE = '4G', MAX_CPU = 1, MIN_CPU = 1, LOG_DISK_SIZE = '2G', MAX_IOPS = 10000, MIN_IOPS = 10000, IOPS_WEIGHT=1;
2.1. View all resource pools
SELECT * FROM DBA_OB_RESOURCE_POOLS;
2.2. Create a resource pool
CREATE RESOURCE POOL mq_pool_01 UNIT='S1_unit_config', UNIT_NUM=1, ZONE_LIST=('zone1','zone2','zone3');
3.1. View all tenants; the LOCALITY column shows each tenant's replica distribution
SELECT * FROM DBA_OB_TENANTS;
3.2. Create tenant mq_t1 with primary_zone=zone1
CREATE TENANT IF NOT EXISTS mq_t1 PRIMARY_ZONE='zone1', RESOURCE_POOL_LIST=('mq_pool_01') set OB_TCP_INVITED_NODES='%';
3.3. Create tenant mq_t1 with primary_zone='zone1,zone2,zone3' (comma-separated zones share the same priority, so leaders spread across all three zones and all of them serve reads and writes concurrently, improving throughput; a semicolon-separated list would rank the zones by priority instead)
CREATE TENANT IF NOT EXISTS mq_t1 PRIMARY_ZONE='zone1,zone2,zone3', RESOURCE_POOL_LIST=('mq_pool_01') set OB_TCP_INVITED_NODES='%';
3.4. Drop the tenant
drop tenant mq_t1;
3.5. Query the tenant
SELECT * FROM DBA_OB_TENANTS WHERE TENANT_NAME = 'mq_t1';
4. Join tenants with their resource pool and unit configuration
SELECT c.TENANT_ID, e.TENANT_NAME,
       concat(c.NAME, ': ', d.NAME) `pool:conf`,
       concat(c.UNIT_COUNT, ' unit: ', d.min_cpu, 'C/', ROUND(d.MEMORY_SIZE/1024/1024/1024,0), 'G') unit_info
FROM DBA_OB_RESOURCE_POOLS c, DBA_OB_UNIT_CONFIGS d, DBA_OB_TENANTS e
WHERE c.UNIT_CONFIG_ID=d.UNIT_CONFIG_ID AND c.TENANT_ID=e.TENANT_ID AND c.TENANT_ID>1000
ORDER BY c.TENANT_ID;
5. View where each tenant's resource units are deployed
SELECT a.TENANT_NAME,a.TENANT_ID,b.SVR_IP FROM DBA_OB_TENANTS a,GV$OB_UNITS b WHERE a.TENANT_ID=b.TENANT_ID;
6. View Unit information on each node
SELECT * FROM GV$OB_UNITS;
SELECT * FROM GV$OB_UNITS where TENANT_ID=1002;
7. View OBServer information
SELECT * FROM GV$OB_SERVERS;
-- Stop or start a server node
-- alter system start server '192.168.113.162:2882';
-- ALTER SYSTEM MINOR FREEZE SERVER = ('192.168.113.162:2882');
Tenant mq_t1 is now created successfully. Its initial password is empty; enter the tenant through OBProxy with: obclient -h192.168.113.161 -P2883 -uroot@mq_t1 -Doceanbase -A
You can also reach the mq_t1 tenant from Navicat (initial password empty); after connecting to oceanbase, set a password for the tenant.
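A minimal sketch of setting that password from the command line (ALTER USER is ordinary MySQL-mode SQL; 'NewPassw0rd!' is only a placeholder):
obclient -h192.168.113.161 -P2883 -uroot@mq_t1 -A -e "ALTER USER root IDENTIFIED BY 'NewPassw0rd!';"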
5. Create a database and add data
Under the mq_t1 tenant, create the zypcy database, create a person table, and insert two rows, for example as sketched below.
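A minimal end-to-end sketch (zypcy and person come from the step above; the column definitions and the two sample rows are illustrative assumptions; add -p'...' if you set a tenant password earlier):
obclient -h192.168.113.161 -P2883 -uroot@mq_t1 -A <<'SQL'
CREATE DATABASE IF NOT EXISTS zypcy;
USE zypcy;
-- illustrative schema for the person table
CREATE TABLE IF NOT EXISTS person (
  id   INT PRIMARY KEY,
  name VARCHAR(50)
);
-- insert the two sample rows, then read them back
INSERT INTO person VALUES (1, 'zhangsan'), (2, 'lisi');
SELECT * FROM person;
SQL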