Ob quick start (#499)
* Add Chapter One of the tutorial.
* Update some descriptions.
* Add some URLs.
* Add chapter02: how to deploy an oceanbase-ce cluster.
* Add all chapters of the tutorial, waiting for review.
* Modify some files as the first review suggested.
* Modify some files as the first review suggested.
* Temporarily remove some chapters from this PR; they will be added back in a later PR.
* Change OB to OceanBase.
* Remove pictures; add some description about Prometheus and Grafana.
* Relocate the images.
* Change OceanBase Deploy to OBD.
* Fix some format errors.
This commit is contained in:
parent fd0fb2433c
commit 9a80911e62
154  docs/docs/junior-training/ob-quick-start/chapter01/1.md  Normal file
@@ -0,0 +1,154 @@
# Chapter 1: OceanBase Database Overview

OceanBase Database is a native distributed relational database developed entirely in-house by Alibaba and Ant Group. It runs on clusters of commodity servers and, based on the Paxos protocol and a distributed architecture, provides financial-grade high availability and linear scalability without depending on any specific hardware. Its core technical strengths are high availability, linear scalability, high performance, and low cost.

OceanBase Database has the following characteristics:

+ High availability

    A single-server failure recovers automatically. Cross-city, multi-datacenter disaster recovery is supported with zero data loss, meeting the financial industry's level-6 disaster-recovery standard (RPO = 0, RTO <= 30 seconds).

+ Linear scalability

    Transparent scale-out with automatic load balancing, fully transparent to applications. A cluster can exceed 1,500 nodes, hold petabytes of data, and store trillions of rows in a single table.

+ High compatibility with MySQL/Oracle

    The Community Edition is compatible with the MySQL protocol, syntax, and usage habits (starting from MySQL 5.6), so MySQL client tools can access OceanBase Database directly.

    The Enterprise Edition is compatible with both the MySQL and Oracle protocols (Oracle compatibility starts from Oracle 11g). OceanBase's own drivers are required to access an OceanBase Oracle tenant.

+ High performance

    Near in-memory data modification and a proprietary encoding/compression technology, combined with linear scale-out, achieved 707 million tpmC in the TPC-C benchmark.

+ Low cost

    Runs on commodity PC servers with low-end SSDs. High storage compression lowers storage cost, high performance lowers compute cost, and multi-tenant co-location makes full use of system resources.

+ Multi-tenancy

    Native multi-tenant architecture: one database cluster serves multiple independent businesses with data isolation between tenants, reducing deployment and operations cost.

OceanBase Database supports all of Alipay's core business, as well as the core systems of hundreds of customers in banking, insurance, securities, telecom, and other industries.

## History of OceanBase

Before using OceanBase, let's take a quick look at its history.



+ Birth: In 2010, Dr. Yang Zhenkun, founder of OceanBase, started the project with a small team. The first application was Taobao's "favorites" service, which is still an OceanBase customer today. The favorites table holds a very large amount of data, and OceanBase used an original approach to handle its highly concurrent joins between a huge table and a small table.
+ Relational database: In early versions, applications accessed OceanBase through a custom API library. In 2012, OceanBase released a SQL-capable version and became a functionally complete general-purpose relational database.
+ First financial workloads: OceanBase entered Alipay (later Ant Group) and began serving financial-grade scenarios. During the 2014 Double 11 shopping festival it carried part of the transaction database traffic. Later, the newly founded MYbank ran all of its core transaction databases on OceanBase.
+ Financial-grade core databases: In 2016, OceanBase released the re-architected version 1.0, which added distributed transactions, improved scalability for high-concurrency writes, and introduced the multi-tenant architecture that remains in place today. By Double 11 of 2016, 100% of Alipay's core database traffic ran on OceanBase, including transactions, payments, membership, and most importantly the accounting database.
+ External market: In 2017, OceanBase began piloting external business and was successfully adopted by the Bank of Nanjing.
+ Accelerated commercialization: In 2018, OceanBase 2.0 introduced Oracle compatibility mode, which lowered application migration cost and sped up adoption among external customers.
+ Reaching the top: In 2019, OceanBase 2.2 took part in TPC-C, the most authoritative OLTP database benchmark, and topped the ranking with 60 million tpmC. In 2020 it set a new record of over 700 million tpmC, which it still holds today, a clear demonstration of OceanBase's scalability and stability. OceanBase is the first, and so far the only, Chinese database product on the TPC-C list.
+ HTAP mixed workloads: In 2021, OceanBase 3.0, built on a new vectorized execution engine, set a new record of 15.26 million QphH in the TPC-H 30,000 GB benchmark, a fundamental breakthrough in handling AP and TP mixed workloads with a single engine.
+ Open source: On June 1, 2021, OceanBase announced it was fully open source, open for collaboration, to build the ecosystem together.

OceanBase officially released the Community Edition and opened its source code in June 2021, starting from version 3.1.0. The source code is hosted at [github.com/oceanbase](https://github.com/oceanbase) and is also mirrored on Gitee: [gitee.com/oceanbase](https://gitee.com/oceanbase).

The open-source components include:

+ The OceanBase database kernel
+ The reverse proxy for database access, `obproxy`
+ The command-line database client `obclient`
+ The automated deployment tool `OBD`
+ The C driver `obconnector-c`
+ The CDC component `oblogproxy` and the `canal` plugin
+ The OceanBase monitoring agent `obagent`
+ The Spark plugin `obspark` (not yet open sourced)

## OceanBase Customer Cases

Unlike other open-source databases, OceanBase had an enterprise edition before the community edition, and large enterprise cases before community cases. The Community and Enterprise Editions share the same core capabilities.

Typical customers include:

+ Internal use: Ant Group (including Alipay and MYbank).
+ Banking: ICBC; Bank of Nanjing, Bank of Dongguan, Bank of Tianjin, Bank of Suzhou; Changshu Rural Commercial Bank.
+ Insurance: PICC, China United Insurance.
+ Securities: China Merchants Securities, CIFM (上投摩根).
+ Non-financial industries: Zhejiang Mobile, Shandong Mobile; Digital Jiangxi; Sinopec.

For detailed case studies, see `https://www.oceanbase.com/customer/home`. Many more industries and customers are not listed here.

OceanBase is essentially a single-process piece of software that is deployed independently and is not tied to any particular hardware or cloud platform; it can run on the cloud servers of any cloud vendor. OceanBase is also offered as a managed database service on Alibaba Cloud (`https://www.aliyun.com/product/oceanbase`).

Customer cases on public clouds (including independent deployments on ECS) include:

+ China United Property Insurance
+ GCash, the Philippine payment service
+ DANA, the Indonesian e-wallet

## About OceanBase Community Edition

OceanBase Community Edition is released under the [MulanPubL-2.0 license](http://license.coscl.org.cn/MulanPubL-2.0/index.html). You may copy and use the source code free of charge. When you modify or distribute the source code, please comply with the Mulan license.
The official website of the Community Edition is [open.oceanbase.com](https://open.oceanbase.com).

### How to download

+ Official website: [https://open.oceanbase.com/softwareCenter/community](https://open.oceanbase.com/softwareCenter/community)
+ GitHub: [https://github.com/oceanbase/oceanbase/releases/](https://github.com/oceanbase/oceanbase/releases/)
+ Alibaba Cloud YUM repository: [https://mirrors.aliyun.com/oceanbase/OceanBase.repo](https://mirrors.aliyun.com/oceanbase/OceanBase.repo)

### Supported operating systems

OceanBase Community Edition supports the following operating systems:

+ CentOS: 7.2 or later is recommended.
+ Debian: 9.8 and 10.9 are recommended.
+ openSUSE: 15.2 is recommended.
+ OpenAnolis: 8.2 is recommended.
+ SUSE: 15.2 is recommended.
+ Ubuntu: 16.04, 18.04, and 20.04 are recommended.

### Differences from MySQL

OceanBase Community Edition is compatible with MySQL syntax and features (most of MySQL 5.6 plus some MySQL 8.0 features), but its internals are completely unrelated to MySQL: it does not depend on open-source MySQL components and has no InnoDB engine.
OceanBase's own storage engine compresses data far better than MySQL's storage; in the Community Edition the on-disk footprint can be about a quarter of the equivalent MySQL storage.

OceanBase is a distributed database cluster product. In production it keeps three replicas of data by default, and the replicas are synchronized not with asynchronous or semi-synchronous replication but with the Paxos protocol for transaction logs. A cluster can span data centers and cities; when a machine or a data center fails, the replicas inside the cluster fail over automatically without losing data. OceanBase is therefore a natural fit for "two regions, three data centers" disaster recovery and multi-active deployments.

An OceanBase cluster supports multiple tenants (also called instances). Tenants are allocated on demand, scale elastically, and are highly available, much like a cloud database service. Operators only need to maintain a few clusters to provide many instances to the business, which makes it very easy to run.

OceanBase supports horizontal partitioning through partitioned tables, so there is no need for application-level sharding; SQL and transactions are fully transparent to the business, with no functional restrictions. Partitioned tables also scale out very well: the largest known single-tenant deployment has 1,500 nodes.

OceanBase's SQL engine is far more capable than MySQL's. It supports SQL parsing and execution plan caching, complex SQL operations, and outline-based intervention in execution plans. A single SQL engine over a single copy of the data serves both OLTP and real-time OLAP workloads, which is what is commonly called HTAP.
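To make the MySQL compatibility and partitioned-table points above concrete, here is a minimal sketch. The host, port, tenant, and user below (`127.0.0.1:2881`, `root@test`, empty password) are assumptions for illustration and should be adjusted to your environment; the DDL itself uses standard MySQL partitioning syntax accepted by an OceanBase MySQL tenant.

```bash
# Connect with the stock MySQL client (no OceanBase-specific driver needed)
# and create a hash-partitioned table in a MySQL tenant.
mysql -h127.0.0.1 -P2881 -uroot@test -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE demo.orders (
  order_id BIGINT NOT NULL,
  user_id  BIGINT NOT NULL,
  amount   DECIMAL(10,2),
  PRIMARY KEY (order_id, user_id)
) PARTITION BY HASH(user_id) PARTITIONS 8;
SHOW CREATE TABLE demo.orders\G"
```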
### Core features of the Community Edition

OceanBase Community Edition includes all the core features of the Enterprise Edition:

+ Multi-replica high availability with strong (synchronous) consistency.
+ Multi-tenancy.
+ Online elastic scaling.
+ Geo disaster recovery and multi-active deployment (including two regions with three data centers, three regions with five data centers, and so on).
+ Distributed features such as partitioned tables and replicated tables.
+ HTAP.
+ MySQL compatibility.
+ Backup and restore.
+ CDC (change data capture).

The difference between the Community and Enterprise Editions is that the Enterprise Edition adds more advanced features, such as commercial compatibility, graphical management tools, operation auditing, security encryption, and extended high-availability options. For Enterprise Edition information, see the official website ([oceanbase.com](https://oceanbase.com)).

## Scenarios that suit the Community Edition

+ Large fleets of MySQL 5.6/5.7 instances.

    A large MySQL fleet needs an automated operations platform, and such a platform often still needs a DBA to step in when MySQL crashes unexpectedly or the primary and standby become inconsistent. High availability and strong consistency are MySQL's biggest risks.
    OceanBase's multi-tenancy, high availability, and strong consistency remove this pain point completely.

+ Very large MySQL 5.6/5.7 datasets with high storage cost.

    Once a MySQL database grows to several terabytes, query and write performance can degrade and long-running DDL on large tables becomes riskier, while a single machine's disks may reach their capacity limit.
    Online DDL in an OceanBase MySQL tenant and OceanBase's high storage compression address these pain points.

+ Heavy or rapidly changing workloads.

    Under heavy load, distributed database middleware built on MySQL can spread out the load and the storage to some extent, but it lacks strongly consistent cross-node queries, needs a distributed transaction coordinator, and may require logical data resharding (commonly called "splitting databases and tables") during scale-out, which is costly and risky to operate.
    An OceanBase MySQL tenant offers horizontal partitioning with native SQL and transactions that are transparent to the application, supports online scale-out and scale-in with asynchronous internal data migration, and stays highly available even if a failure occurs during scaling, which solves the pain points above.

+ Complex queries on transactional databases.

    Transactional databases sometimes run a small number of complex queries over large amounts of data; the traditional solution is to synchronize the data into a data warehouse for querying. OceanBase's SQL engine serves both OLTP and OLAP scenarios and uses an advanced SQL optimizer proven on complex Oracle workloads, so complex queries can run directly on the transactional database and unnecessary data synchronization can be avoided. In addition, OceanBase provides several degrees of read/write separation to limit the impact of complex queries on transactional workloads.

More scenarios will be summarized as practice accumulates; stay tuned.

## How to contact us

OceanBase enthusiasts, users, and customers are welcome to contact us with any questions:

+ Enterprise Edition website: [https://oceanbase.com](https://oceanbase.com) .
+ Community Edition website: [https://open.oceanbase.com](https://open.oceanbase.com) .
+ File an `Issue` on the Community Edition project: [https://github.com/oceanbase/oceanbase/issues](https://github.com/oceanbase/oceanbase/issues) .
+ DingTalk group: `33254054` .
27  docs/docs/junior-training/ob-quick-start/chapter02/2.0.md  Normal file
@@ -0,0 +1,27 @@
# Chapter 2: How to Deploy OceanBase Community Edition

This chapter explains how to deploy an OceanBase Community Edition cluster manually or automatically, covering both single-replica and three-replica clusters.

## Contents

+ [Deployment preparation](2.1.md)
+ [How to quickly try out OceanBase](2.2.md)
+ [How to plan an OceanBase cluster deployment](2.3.md)
+ [How to initialize the server environment](2.4.md)
+ [How to install the OBD automated deployment tool](2.5.md)
+ [How to deploy a single-node cluster with OBD](2.6.md)
+ [How to deploy a multi-node cluster with OBD](2.7.md)
+ [How to view and modify OceanBase cluster parameters](2.8.md)
+ [How to deploy OBAgent](2.9.md)
+ [How to restart an OceanBase cluster](2.10.md)
+ [(Advanced) How to deploy an OceanBase cluster manually](2.11.md)
+ [FAQ](2.12.md)
+ [Appendix](2.13.md)

## How to contact us

OceanBase enthusiasts, users, and customers are welcome to contact us with any questions:

+ Community Edition forum: [https://open.oceanbase.com/answer](https://open.oceanbase.com/answer) .
+ File an `Issue` on the Community Edition project: [https://github.com/oceanbase/oceanbase/issues](https://github.com/oceanbase/oceanbase/issues) .
+ DingTalk group: `33254054` .
66  docs/docs/junior-training/ob-quick-start/chapter02/2.1.md  Normal file
@@ -0,0 +1,66 @@
# Deployment Preparation

OceanBase is a distributed cluster product; a production deployment needs at least three machines. For learning, a single-node deployment is enough.
Deploying OceanBase has much in common with deploying a traditional database: there are best-practice recommendations for the hardware, operating-system settings, and file systems, and they are the foundation for OceanBase to run stably at high performance. The Community Edition also provides tools that automate part of this work.

<!-- more -->

## Software

OceanBase is essentially a single-process piece of software whose executable is named `observer`. It can be installed from RPM packages or built and installed from source; this course uses the RPM packages.

Download locations:

+ Official website: [https://open.oceanbase.com/softwareCenter/community](https://open.oceanbase.com/softwareCenter/community)
+ GitHub: [https://github.com/oceanbase/oceanbase/releases/](https://github.com/oceanbase/oceanbase/releases/)
+ Alibaba Cloud YUM repository: [https://mirrors.aliyun.com/oceanbase/OceanBase.repo](https://mirrors.aliyun.com/oceanbase/OceanBase.repo)

| Package | Process | Purpose |
|------------------------------------------|----------|------------------------------------|
| oceanbase-ce-3.1.1-1.el7.x86_64.rpm | observer | The OceanBase database process; runs as a resident daemon. |
| oceanbase-ce-libs-3.1.1-1.el7.x86_64.rpm | | Libraries required at runtime; not a running process. |
| obproxy-3.1.0-1.el7.x86_64.rpm | obproxy | The OceanBase reverse proxy; a single process that runs as a resident daemon. |
| ob-deploy-1.1.1-1.el7.x86_64 | obd | The OceanBase automated deployment tool; provides a command line and does not stay resident. |
| obclient-2.0.0-2.el8.x86_64.rpm | obclient | The official OceanBase command-line client. |

Note: the version numbers will change over time; use the actual versions.

If the machine has Internet access, you can add the Alibaba Cloud YUM repository to the local repository list and install with `yum`.

```bash
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
yum -y install ob-deploy oceanbase obclient
```

## Resource requirements

OceanBase needs certain host resources at runtime, mainly CPU, memory, and disk space; how much depends on what you are installing it for.

| Purpose | CPU (cores) | Available memory | Disk | Notes |
|-----|---------|------|----|----|
| Learning the features | 2 | 10 GB | 10 GB | No data is preloaded. |
| Performance testing | 24 | 128 GB | SSD, 500 GB or more | Keep the data disk and log disk separate. |
| Production | 32 | 256 GB | SSD, 2 TB or more | Keep the data disk and log disk separate. Size the log disk at 3 to 4 times the memory, and grow the data disk as data grows. |

Note: the figures for performance testing and production are recommendations. Later Community Edition releases will further reduce the memory requirement.

OceanBase also has operating-system requirements; the following systems are currently supported:

+ Red Hat / CentOS 7.x / 8.x
+ SUSE / openSUSE 15.x
+ Anolis OS 7.x / 8.x
+ Debian 9.x
+ Ubuntu 20.x

## Deployment process overview

In short, automated deployment takes the following steps (a sketch of the SSH-trust step follows this list):

+ Initialize the environment on each OceanBase node, including OS parameters and file-system directories.
+ Set up passwordless SSH from the central control machine to each OceanBase node.
+ Prepare the OBD deployment configuration file.
+ Use OBD to deploy the cluster directories on each node.
+ Use OBD to start and bootstrap the cluster.

The following sections describe single-node and three-node cluster deployment in detail, as well as the manual deployment steps.
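As a concrete example of the SSH-trust step above, the sketch below generates a key on the central control machine and pushes it to every node. The IP list and the `admin` user are assumptions taken from the examples later in this tutorial; adjust them to your environment.

```bash
# On the central control machine, as the deployment user (admin is assumed here).
# Generate a key pair if one does not exist yet.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every OceanBase node so OBD can log in without a password.
IPS="172.20.249.52 172.20.249.49 172.20.249.51"
for ip in $IPS; do
    ssh-copy-id admin@$ip        # prompts once for the admin password on each node
done

# Verify that passwordless login works.
for ip in $IPS; do
    ssh admin@$ip "hostname; date"
done
```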
96  docs/docs/junior-training/ob-quick-start/chapter02/2.10.md  Normal file
@@ -0,0 +1,96 @@
# How to Restart an OceanBase Cluster

OceanBase itself does not provide a "restart cluster" command. Its core capability is high availability, which relies on a three-replica deployment: when a minority of nodes fails, OceanBase fails over internally and can keep serving reads and writes. What OceanBase does provide is the ability to stop and start an individual replica (at the `zone` or `server` level), and only a minority of the nodes may be stopped.
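The zone-level stop and start mentioned above are issued as SQL from the sys tenant. The following is a minimal sketch; the connection details (obproxy address `172.20.249.50:2883`, cluster name `obdemo`, `root@sys` password) are borrowed from the manual-deployment example later in this chapter and must be adjusted to your cluster.

```bash
# Stop one zone (only a minority of zones may be stopped), then bring it back.
mysql -h172.20.249.50 -P2883 -uroot@sys#obdemo -p4S9wDbSr -c -A oceanbase \
      -e "ALTER SYSTEM STOP ZONE 'zone1';"

# ... perform maintenance on the zone1 server(s) ...

mysql -h172.20.249.50 -P2883 -uroot@sys#obdemo -p4S9wDbSr -c -A oceanbase \
      -e "ALTER SYSTEM START ZONE 'zone1';"
```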
So restarting an OceanBase cluster is driven from the outside, for example by killing the process with `kill` and then starting the `observer` process again.

The parameter-tuning section earlier already showed how to kill the node process in a single-replica cluster; below is how to restart a three-replica cluster. In production, to keep the unavailable window as short as possible, a conservative strategy is used: restart the cluster nodes one `zone` or one `server` at a time. The whole process can take from a few minutes to more than ten minutes. While you are just getting started with OceanBase, learn the simple restart method first; the safe, production-grade procedure is covered later in the OceanBase operations chapters.

## Restarting an OceanBase cluster node by hand

```bash
# ssh to node 1
ssh 172.20.249.52
# Kill the process normally. Do not use `kill -9` unless this is a test or you have assessed the risk.
kill `pidof observer`
# Wait 60s for the process to exit completely
sleep 60
# Confirm (repeatedly if necessary) that the process has fully exited
ps -ef | grep observer
# Set the library path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase-ce/lib/
# Start the process
cd /home/admin/oceanbase-ce && bin/observer
# Wait 10s for the process to start
sleep 10
# Confirm (repeatedly if necessary) that the process started and has not exited
ps -ef | grep observer | grep -v grep
# Wait 60s for the process to finish starting and recovering
sleep 60
# Check that the process is listening (ports 2881 and 2882 by default)
netstat -ntlp

# In the cluster, check that the node status (`status`) and service start time (`start_service_time`) are normal.
select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free, round(mem_total/1024/1024/1024) mem_total_gb, round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb, usec_to_time(b.last_offline_time) last_offline_time, usec_to_time(b.start_service_time) start_service_time, b.status, usec_to_time(b.stop_time) stop_time, b.build_version from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port) order by a.zone, a.svr_ip;

```

Only after the first node has restarted successfully should you repeat the procedure on the second node.
Of course, if this is only a test and availability does not matter, you can skip the checks above, kill the processes on all cluster nodes, and then start them again. In that case the cluster may need a few minutes after the nodes come up to recover data and re-establish communication. If there were many reads and writes before the restart, recovery on a node can take much longer, from ten-odd minutes up to tens of minutes.

## Restarting the cluster with OBD

The steps above show what a manual node restart involves; OBD automates the same operation. Note, however, that OBD's cluster restart may not include all the necessary checks yet, so it is fine for test environments but should be used with caution in production.

The OBD command to restart a cluster is `obd cluster restart <deploy name>`.

```bash
obd cluster restart obce-3zones

Output:
[admin@obce00 oceanbase-ce]$ obd cluster restart obce-3zones
Get local repositories and plugins ok
Open ssh connection ok
Stop observer ok
Stop obproxy ok
obce-3zones stopped
Get local repositories and plugins ok
Open ssh connection ok
Cluster param config check ok
Check before start observer ok
Check before start obproxy ok
Start observer ok
observer program health check ok
Connect to observer ok
Wait for observer init ok
+-------------------------------------------------+
|                     observer                    |
+---------------+---------+------+-------+--------+
| ip            | version | port | zone  | status |
+---------------+---------+------+-------+--------+
| 172.20.249.49 | 3.1.0   | 2881 | zone2 | active |
| 172.20.249.51 | 3.1.0   | 2881 | zone3 | active |
| 172.20.249.52 | 3.1.0   | 2881 | zone1 | active |
+---------------+---------+------+-------+--------+

Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize cluster
+-------------------------------------------------+
|                     obproxy                     |
+---------------+------+-----------------+--------+
| ip            | port | prometheus_port | status |
+---------------+------+-----------------+--------+
| 172.20.249.52 | 2883 | 2884            | active |
| 172.20.249.49 | 2883 | 2884            | active |
| 172.20.249.51 | 2883 | 2884            | active |
+---------------+------+-----------------+--------+
obce-3zones running
```

By default OBD restarts all components (both `observer` and `obproxy`) when it restarts the cluster. You can also restart specific components with the `-c` option.
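For example, to restart only the database component and leave obproxy alone, something like the following should work. The component name `oceanbase-ce` and deploy name `obce-3zones` match the configuration used in this tutorial; treat the exact flag spelling as an assumption and check `obd cluster restart -h` on your OBD version.

```bash
# Restart only the oceanbase-ce component of the obce-3zones deployment.
obd cluster restart obce-3zones -c oceanbase-ce

# Restart only obproxy.
obd cluster restart obce-3zones -c obproxy
```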
OBPROXY restarts work the same way as OBSERVER restarts: the process also has a working directory and startup parameters. We will not go into it here; OBPROXY operations are covered later in the OceanBase operations chapters.
459  docs/docs/junior-training/ob-quick-start/chapter02/2.11.md  Normal file
@@ -0,0 +1,459 @@
# (Advanced) How to Deploy an OceanBase Cluster Manually

Once you are familiar with how OBD deploys an OceanBase cluster, you can try deploying one by hand. The benefit is that when OBD does not meet your needs you can write your own deployment scripts, and when the cluster misbehaves you can perform emergency handling manually.

## Deployment plan

This section describes a manual three-node deployment: from the central control machine you log in to each OceanBase node to deploy and start the `observer` process, and on the central control machine itself you deploy the `obproxy` process.

+ Machine information:

| Machine type | ECS cloud server |
|------|-------------------------------|
| IP | 172.20.249.50 |
| NIC name | eth0 |
| OS | CentOS Linux release 8.4.2105 |
| CPU | 4 cores |
| Memory | 14 GB total, 11 GB available |
| Disk 1 | cloud disk /dev/vda, 100 GB |
| Disk 2 | cloud disk /dev/vdb, 100 GB |

+ Role assignment:

| Role | Machine | Notes |
|----------|---------------|---------------------|
| OBD | 172.20.249.50 | Central control machine, automated deployment tool |
| OBSERVER | 172.20.249.52 | OceanBase database, zone1 |
| | 172.20.249.49 | OceanBase database, zone2 |
| | 172.20.249.51 | OceanBase database, zone3 |
| OBPROXY | 172.20.249.50 | OceanBase reverse proxy |
| OBCLIENT | 172.20.249.50 | OceanBase command-line client |

Before deploying, initialize the server environment as described in the earlier section "How to Initialize the Server Environment".

+ Check clock synchronization between the three nodes

A common command for checking the clock offset between the local machine and a target node is `clockdiff`.
Example:

```bash
[admin@obce02 oceanbase]$ sudo clockdiff 172.20.249.52
[sudo] password for admin:
.
host=172.20.249.52 rtt=750(187)ms/0ms delta=0ms/0ms Sun Sep 12 14:52:24 2021
[admin@obce02 oceanbase]$ sudo clockdiff 172.20.249.51
.
host=172.20.249.51 rtt=750(187)ms/0ms delta=0ms/0ms Sun Sep 12 14:52:30 2021
```

On some machines `clockdiff` fails with an error. In that case you can estimate the clock offset with the following command instead.

```bash
[admin@obce02 oceanbase]$ ping -T tsandaddr 172.20.249.52 -c 2
PING 172.20.249.52 (172.20.249.52) 56(124) bytes of data.
64 bytes from 172.20.249.52: icmp_seq=1 ttl=64 time=0.161 ms
TS:     172.20.249.49   24851014 absolute
        172.20.249.52   -1
        172.20.249.52   0
        172.20.249.49   1

64 bytes from 172.20.249.52: icmp_seq=2 ttl=64 time=0.172 ms
TS:     172.20.249.49   24852054 absolute
        172.20.249.52   -1
        172.20.249.52   0
        172.20.249.49   1

```

If the clock offset between the three nodes exceeds 50 ms, cluster bootstrap later is guaranteed to fail.
Also note that the offset between nodes tends to creep up slowly: the cluster may work fine right now, yet a day later the offset may have drifted beyond 50 ms and that node will drop out of the cluster.
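Because of this drift, it is worth confirming that every node runs a time-synchronization service rather than relying on a one-off check. A minimal sketch, assuming the nodes use chrony (the default on CentOS 8; on older systems substitute `ntpq -p`):

```bash
# Run on every OceanBase node (or via ssh from the central control machine).
# Confirm the NTP daemon is active and the current offset is well below 50 ms.
systemctl is-active chronyd
chronyc tracking | grep -E 'System time|Last offset'
chronyc sources -v
```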
## Installing the OceanBase software package

For a manual deployment, install the OceanBase OBSERVER package.

```bash
[admin@obce02 ~]$ ls -lrth /tmp/oceanbase-ce-*.rpm
-rw-r--r-- 1 admin admin 45M Sep 12 13:36 /tmp/oceanbase-ce-3.1.0-3.el8.x86_64.rpm

[admin@obce02 ~]$ sudo rpm -ivh /tmp/oceanbase-ce-*.rpm
warning: /tmp/oceanbase-ce-3.1.0-3.el8.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e9b4a7aa: NOKEY
Verifying...                          ################# [100%]
Preparing...                          ################# [100%]
Updating / installing...
   1:oceanbase-ce-libs-3.1.0-3.el8    ################# [ 50%]
   2:oceanbase-ce-3.1.0-3.el8         ################# [100%]

```

The package installs to `/home/admin/oceanbase` by default, with the following directory structure:

```bash
[admin@obce01 ~]$ tree oceanbase
oceanbase
├── bin
│   ├── import_time_zone_info.py
│   └── observer
├── etc
│   └── timezone_V1.log
└── lib
    ├── libaio.so -> libaio.so.1.0.1
    ├── libaio.so.1 -> libaio.so.1.0.1
    ├── libaio.so.1.0.1
    ├── libmariadb.so -> libmariadb.so.3
    └── libmariadb.so.3
```

Tip: you can also extract the RPM directly into a directory of your choice instead of installing it to the default location.
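A minimal sketch of that extraction approach, using the standard `rpm2cpio` and `cpio` tools; the target directory `/opt/oceanbase` is just an example.

```bash
# Extract the RPM payload into a custom directory without installing it system-wide.
mkdir -p /opt/oceanbase && cd /opt/oceanbase
rpm2cpio /tmp/oceanbase-ce-3.1.0-3.el8.x86_64.rpm | cpio -idmv
# The files appear under ./home/admin/oceanbase/ relative to the extraction directory.
```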
## (Optional) Clearing directories and data

**Skip this step for a first deployment.**
It is mainly used when a deployment has failed and you need to wipe the directories and data before deploying again.

```bash
kill -9 `pidof observer`
/bin/rm -rf ~/oceanbase/store/obdemo/*/*
```

Check that the directory structure matches the following.

```bash
tree ~/oceanbase/store/ /data/ /redo/

Output:
[admin@obce02 ~]$ tree ~/oceanbase/store/ /data/ /redo/
/home/admin/oceanbase/store/
└── obdemo
    ├── clog -> /redo/obdemo/clog
    ├── etc2 -> /redo/obdemo/etc2
    ├── etc3 -> /data/obdemo/etc3
    ├── ilog -> /redo/obdemo/ilog
    ├── slog -> /redo/obdemo/slog
    └── sstable -> /data/obdemo/sstable
/data/
└── obdemo
    ├── etc3
    └── sstable
/redo/
└── obdemo
    ├── clog
    ├── etc2
    ├── ilog
    └── slog

15 directories, 0 files
```

## Initializing the data directories

**Run this step only for a first deployment. If you are redeploying and the directories already exist, skip it.**
In a manual deployment, all the relevant directories on an OceanBase node have to be created by hand.

```bash
su - admin
mkdir -p ~/oceanbase/store/obdemo /data/obdemo/{sstable,etc3} /redo/obdemo/{clog,ilog,slog,etc2}
for f in {clog,ilog,slog,etc2}; do ln -s /redo/obdemo/$f ~/oceanbase/store/obdemo/$f ; done
for f in {sstable,etc3}; do ln -s /data/obdemo/$f ~/oceanbase/store/obdemo/$f; done

```

Notes:

+ First create the top-level data directory under the working directory, `~/oceanbase/store/obdemo`, the data-file directory `/data/obdemo`, and the log directory `/redo/obdemo`.
    Note that, slightly unlike the node directories created by an OBD deployment, the cluster name (`obdemo`) is included in the directory names here.
+ The second difference is that `~/oceanbase/store/obdemo` is a real directory whose subdirectories are symlinks into the other two file-system paths (`/data/` and `/redo/`). In production these two file systems should ideally be two separate physical disks, or at minimum two separate logical disks.

Let's look at the directory structure after initialization. This structure matters: a failed `observer` start is often caused by a wrong directory structure or wrong permissions.

```bash
[admin@obce02 ~]$ tree ~/oceanbase/store/ /data/ /redo/
/home/admin/oceanbase/store/
└── obdemo
    ├── clog -> /redo/obdemo/clog
    ├── etc2 -> /redo/obdemo/etc2
    ├── etc3 -> /data/obdemo/etc3
    ├── ilog -> /redo/obdemo/ilog
    ├── slog -> /redo/obdemo/slog
    └── sstable -> /data/obdemo/sstable
/data/
└── obdemo
    ├── etc3
    └── sstable
/redo/
└── obdemo
    ├── clog
    ├── etc2
    ├── ilog
    └── slog

15 directories, 0 files

```

## Starting the OBSERVER processes

The startup parameters are mostly the same on every machine; only a few differ, so pay close attention.

+ `172.20.249.52`

```bash
su - admin
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase/lib' >> ~/.bash_profile
. ~/.bash_profile

cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone1 -d ~/oceanbase/store/obdemo -r '172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881' -c 20210912 -n obdemo -o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,system_memory=3G,memory_chunk_cache_size=128M,cpu_count=16,net_thread_count=4,datafile_size=50G,stack_size=1536K,config_additional_dir=/data/obdemo/etc3;/redo/obdemo/etc2"

```

+ `172.20.249.49`

```bash
su - admin
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase/lib' >> ~/.bash_profile
. ~/.bash_profile

cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone2 -d ~/oceanbase/store/obdemo -r '172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881' -c 20210912 -n obdemo -o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,system_memory=3G,memory_chunk_cache_size=128M,cpu_count=16,net_thread_count=4,datafile_size=50G,stack_size=1536K,config_additional_dir=/data/obdemo/etc3;/redo/obdemo/etc2"

```

+ `172.20.249.51`

```bash
su - admin
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase/lib' >> ~/.bash_profile
. ~/.bash_profile

cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z zone3 -d ~/oceanbase/store/obdemo -r '172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881' -c 20210912 -n obdemo -o "memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,system_memory=3G,memory_chunk_cache_size=128M,cpu_count=16,net_thread_count=4,datafile_size=50G,stack_size=1536K,config_additional_dir=/data/obdemo/etc3;/redo/obdemo/etc2"

```

If the three nodes have identical hardware, only one startup parameter differs between them: `-z`, which tells each node which `zone` it belongs to. The three nodes in the three zones are then bootstrapped into one three-replica cluster. The trailing `-o` option is not mandatory; it is used here because the test machines are short on memory, so several memory-related parameters have to be set. If your machines have enough memory (for example more than 64 GB), you can omit the `-o` part.
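If you prefer to drive all three nodes from the central control machine instead of logging in to each one, a sketch like the following can wrap the same start command in a loop. The IP-to-zone mapping mirrors the commands above; this is only a convenience wrapper under those assumptions, not part of the official procedure.

```bash
# Run on the central control machine. Maps each node IP to its zone and starts observer remotely.
declare -A ZONES=( [172.20.249.52]=zone1 [172.20.249.49]=zone2 [172.20.249.51]=zone3 )
RS_LIST='172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881'
OPTS="memory_limit=8G,cache_wash_threshold=1G,__min_full_resource_pool_memory=268435456,system_memory=3G,memory_chunk_cache_size=128M,cpu_count=16,net_thread_count=4,datafile_size=50G,stack_size=1536K,config_additional_dir=/data/obdemo/etc3;/redo/obdemo/etc2"

for ip in "${!ZONES[@]}"; do
    ssh admin@$ip "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:~/oceanbase/lib; \
        cd ~/oceanbase && bin/observer -i eth0 -p 2881 -P 2882 -z ${ZONES[$ip]} \
        -d ~/oceanbase/store/obdemo -r '$RS_LIST' -c 20210912 -n obdemo -o \"$OPTS\""
done
```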
Check that the processes on all three nodes started properly, mainly by checking that the ports are listening. You can query all nodes in one go from the central control machine:

```bash
[admin@obce00 oceanbase-ce]$ for ob in $IPS;do echo $ob; ssh $ob "netstat -ntlp"; done
172.20.249.52
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN   -
tcp        0      0 0.0.0.0:2881     0.0.0.0:*          LISTEN   10084/bin/observer
tcp        0      0 0.0.0.0:2882     0.0.0.0:*          LISTEN   10084/bin/observer
172.20.249.49
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:2881     0.0.0.0:*          LISTEN   10213/bin/observer
tcp        0      0 0.0.0.0:2882     0.0.0.0:*          LISTEN   10213/bin/observer
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN   -
172.20.249.51
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:2881     0.0.0.0:*          LISTEN   10103/bin/observer
tcp        0      0 0.0.0.0:2882     0.0.0.0:*          LISTEN   10103/bin/observer
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN   -

```

## Cluster bootstrap (initialization)

When all three nodes of the OceanBase cluster have started and are listening normally, connect to any one of them (directly on port 2881) and run the bootstrap (cluster initialization).
The initial password is empty.

```bash
mysql -h 172.20.249.49 -u root -P 2881 -p -c -A

set session ob_query_timeout=1000000000; alter system bootstrap ZONE 'zone1' SERVER '172.20.249.52:2882', ZONE 'zone2' SERVER '172.20.249.49:2882', ZONE 'zone3' SERVER '172.20.249.51:2882' ;

Output:
[admin@obce00 ~]$ mysql -h 172.20.249.49 -u root -P 2881 -p -c -A
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3221225472
Server version: 5.7.25 OceanBase 3.1.0 (r3-b20901e8c84d3ea774beeaca963c67d7802e4b4e) (Built Aug 10 2021 08:10:38)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> set session ob_query_timeout=1000000000; alter system bootstrap ZONE 'zone1' SERVER '172.20.249.52:2882', ZONE 'zone2' SERVER '172.20.249.49:2882', ZONE 'zone3' SERVER '172.20.249.51:2882' ;
Query OK, 0 rows affected (0.001 sec)

Query OK, 0 rows affected (28.839 sec)

MySQL [(none)]> Bye
[admin@obce00 ~]$ mysql -h 172.20.249.49 -u root@sys -P 2881 -p -c -A
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3221751629
Server version: 5.7.25 OceanBase 3.1.0 (r3-b20901e8c84d3ea774beeaca963c67d7802e4b4e) (Built Aug 10 2021 08:10:38)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| oceanbase          |
| information_schema |
| mysql              |
| SYS                |
| LBACSYS            |
| ORAAUDITOR         |
| test               |
+--------------------+
7 rows in set (0.016 sec)
```

As long as you follow the directory structure, permissions, and startup parameters above exactly, the bootstrap normally succeeds. If it fails, the common causes are:

+ The clock offset between cluster nodes exceeds 50 ms.
+ The network latency between cluster nodes exceeds 100 ms.
+ The OBSERVER directory structure or directory permissions on a node are wrong.
+ The `observer` startup parameters on a node are wrong. Watch the names of hidden parameters (such as `__min_full_resource_pool_memory`), whether the `-d` directory is correct, whether `-z` matches the node's IP, and whether there are stray spaces or wrong separators in the parameters (some use `,`, some use `;`).
+ The available memory on a node is lower than the `memory_limit` value in the `observer` startup parameters.

## Setting the passwords

+ Cluster administrator (`root@sys`) password

The cluster administrator (`root@sys`) password is empty by default; set one here.

```sql
alter user root identified by '4S9wDbSr' ;
```

+ OBPROXY user (`proxyro`) password

By default OBPROXY connects to the OceanBase cluster as the user `proxyro`. This user does not exist yet and must be created.

```sql
grant select on oceanbase.* to proxyro identified by 'SWoLCQRH' ;
```

## Installing the OBPROXY package

For a manual deployment, install the OceanBase OBPROXY package.

```bash
sudo rpm -ivh /tmp/obproxy-3.1.0-1.el8.x86_64.rpm

```

The community OBPROXY package installs to `/home/admin/obproxy-<version>` by default.

```bash
[admin@obce00 ~]$ tree ~/obproxy-3.1.0/
/home/admin/obproxy-3.1.0/
└── bin
    ├── obproxy
    └── obproxyd.sh

1 directory, 2 files
```

The files installed by the community OBPROXY package are still quite minimal and may be adjusted slightly in later releases.

## Starting the OBPROXY process

It is also recommended to start the OBPROXY process from its installation directory; the `obproxy` process creates an `etc` directory there for its runtime parameters and a `log` directory for its logs.

```bash
cd ~/obproxy-3.1.0/ && bin/obproxy -r "172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881" -p 2883 -o "enable_strict_kernel_release=false,enable_cluster_checkout=false,enable_metadb_used=false" -c obdemo

Output:
[admin@obce00 obproxy-3.1.0]$ cd ~/obproxy-3.1.0/ && bin/obproxy -r "172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881" -p 2883 -o "enable_strict_kernel_release=false,enable_cluster_checkout=false,enable_metadb_used=false" -c obdemo
bin/obproxy -r 172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881 -p 2883 -o enable_strict_kernel_release=false,enable_cluster_checkout=false,enable_metadb_used=false -c obdemo
rs list: 172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881
listen port: 2883
optstr: enable_strict_kernel_release=false,enable_cluster_checkout=false,enable_metadb_used=false
cluster_name: obdemo
[admin@obce00 obproxy-3.1.0]$ ps -ef|grep obproxy
admin      38206       1  2 15:11 ?        00:00:00 bin/obproxy -r 172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881 -p 2883 -o enable_strict_kernel_release=false,enable_cluster_checkout=false,enable_metadb_used=false -c obdemo
admin      38229   28904  0 15:11 pts/2    00:00:00 grep --color=auto obproxy
[admin@obce00 obproxy-3.1.0]$
```

+ Check that OBPROXY is listening

The `obproxy` process listens on two ports by default: 2883 and 2884.

```bash
[admin@obce00 obproxy-3.1.0]$ netstat -ntlp |grep obproxy
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:2883     0.0.0.0:*          LISTEN   38206/bin/obproxy
tcp        0      0 0.0.0.0:2884     0.0.0.0:*          LISTEN   38206/bin/obproxy

```

+ Log in to OBPROXY and change the passwords

Login user: `root@proxysys`, port: `2883`, initial password: empty.

```bash
mysql -h 172.20.249.50 -u root@proxysys -P 2883 -p

MySQL [(none)]> show proxyconfig like '%sys_password%';
+-----------------------+-------+--------------------------------+-------------+---------------+
| name                  | value | info                           | need_reboot | visible_level |
+-----------------------+-------+--------------------------------+-------------+---------------+
| observer_sys_password |       | password for observer sys user | false       | SYS           |
| obproxy_sys_password  |       | password for obproxy sys user  | false       | SYS           |
+-----------------------+-------+--------------------------------+-------------+---------------+
2 rows in set (0.000 sec)

```

The OBPROXY user password is changed by setting a parameter; the command is `alter proxyconfig set`.

```sql
alter proxyconfig set obproxy_sys_password = 'wPhGddup' ;
```

You also need to set the password that OBPROXY uses to connect to the OceanBase cluster as the user `proxyro`, otherwise OBPROXY cannot reach the cluster. This is the password of the `proxyro` user created after the cluster was bootstrapped above.

```sql
alter proxyconfig set observer_sys_password = 'SWoLCQRH' ;
```

Exit and connect to the OceanBase cluster through OBPROXY. If you can list all sessions, OBPROXY has been deployed successfully.

```bash
mysql -h172.20.249.50 -uroot@sys#obdemo -P2883 -p4S9wDbSr -c -A oceanbase

Output:
[admin@obce00 obproxy-3.1.0]$ mysql -h172.20.249.50 -uroot@sys#obdemo -P2883 -p4S9wDbSr -c -A oceanbase
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.6.25 OceanBase 3.1.0 (r3-b20901e8c84d3ea774beeaca963c67d7802e4b4e) (Built Aug 10 2021 08:10:38)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]> show processlist;
+------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
| Id   | Tenant | User | Host                | db        | trans_count | svr_session_count | state             | tid   | pid   |
+------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
|    5 | sys    | root | 172.20.249.50:41524 | oceanbase |           0 |                 1 | MCS_ACTIVE_READER | 38206 | 38206 |
+------+--------+------+---------------------+-----------+-------------+-------------------+-------------------+-------+-------+
1 row in set (0.000 sec)

MySQL [oceanbase]> show full processlist;
+------------+---------+--------+---------------------+-----------+---------+------+--------+-----------------------+---------------+------+--------------+
| Id         | User    | Tenant | Host                | db        | Command | Time | State  | Info                  | Ip            | Port | Proxy_sessid |
+------------+---------+--------+---------------------+-----------+---------+------+--------+-----------------------+---------------+------+--------------+
| 3222013775 | root    | sys    | 172.20.249.50:57436 | oceanbase | Query   |    0 | ACTIVE | show full processlist | 172.20.249.51 | 2881 |            4 |
| 3221751633 | proxyro | sys    | 172.20.249.50:49344 | oceanbase | Sleep   |    2 | SLEEP  | NULL                  | 172.20.249.49 | 2881 |            3 |
+------------+---------+--------+---------------------+-----------+---------+------+--------+-----------------------+---------------+------+--------------+
2 rows in set (0.022 sec)
```
138  docs/docs/junior-training/ob-quick-start/chapter02/2.12.md  Normal file
@@ -0,0 +1,138 @@
# FAQ

## Machine initialization problems

### `ulimit` settings do not take effect

+ Symptom

```bash
ulimit -a
...
stack size              (kbytes, -s) 1024
...
```

When the admin user then tries to change the stack size with `ulimit -s`, the operating system reports `cannot modify limit: Operation not permitted`.

Wrong `ulimit` settings can prevent the OBSERVER process from starting.

+ Cause

The usual reason the admin user's ulimit configuration does not take effect is that PAM is disabled for SSH. PAM applies the per-user ulimit configuration at login; if it is not enabled, the SSHD default (1024) is used.

+ Fix

Edit the SSHD configuration file `sshd_config` and uncomment `UsePAM yes`.

```bash
sudo vim /etc/ssh/sshd_config
UsePAM yes

```

Restart the SSHD service.

```bash
sudo systemctl restart sshd
```

Then edit the `limits.conf` file again:

```
vim /etc/security/limits.conf

* soft nofile 655360
* hard nofile 655360
* soft nproc 655360
* hard nproc 655360
* soft core unlimited
* hard core unlimited
* soft stack unlimited
* hard stack unlimited
```

Log in again and check the effective values with `ulimit -a`.

## OBD deployment problems

### Directory not empty

+ Symptom

```bash
Initializes cluster work home x
[ERROR] fail to init zone1(172.20.249.53) data path: /data is not empty
```

+ Cause

On CentOS 8.0, a freshly created file system contains a default `lost+found` directory.

+ Fix

Empty the newly created file-system directories.

`sudo /bin/rm -rf /data/* /redo/*`

### Other generic errors

+ Symptom

An `obd` command fails.

+ Cause

Check the `obd` command log for the reason:

```bash
vim -R ~/.obd/log/obd    # open the OBD log read-only
```

+ Fix

Resolve the problem according to the error description in the log.

## OBSERVER fails to start

### Shared library not found

+ Symptom

Starting the OBSERVER process by hand fails because a shared library cannot be found.

```bash
[admin@obce02 ~]$ cd oceanbase-ce/
[admin@obce02 oceanbase-ce]$ bin/observer
bin/observer: error while loading shared libraries: libmariadb.so.3: cannot open shared object file: No such file or directory
```

+ Cause

The OceanBase `lib` directory has not been added to the `LD_LIBRARY_PATH` environment variable.

The `lib` directory looks like this:

```bash
[admin@obce02 ~]$ tree oceanbase-ce/
oceanbase-ce/
├── admin
├── bin
│   └── observer -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/bin/observer
<....>
├── lib
│   ├── libaio.so -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so
│   ├── libaio.so.1 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so.1
│   ├── libaio.so.1.0.1 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so.1.0.1
│   ├── libmariadb.so -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libmariadb.so
│   └── libmariadb.so.3 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libmariadb.so.3
```

+ Fix

Add the OceanBase `lib` directory to `LD_LIBRARY_PATH`; you can also persist it in `.bash_profile`.

```bash
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase-ce/lib/' >> ~/.bash_profile
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/oceanbase-ce/lib/

```
240  docs/docs/junior-training/ob-quick-start/chapter02/2.13.md  Normal file
@@ -0,0 +1,240 @@
# Appendix

## A1. OBD configuration file for a production three-node OceanBase cluster

Use the following configuration file as a reference when the production machines have more than 256 GB of memory.

```yaml
# Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa.pub
  port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: obce01
      # Please don't use hostname, only IP can be supported
      ip: 172.20.249.53
    - name: obce02
      ip: 172.20.249.55
    - name: obce03
      ip: 172.20.249.56
  global:
    # Please set devname as the network adapter's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adapter's name is "eth0", please use "eth0"
    devname: bond0
    cluster_id: 2
    # please set memory limit to a suitable value which is matching resource.
    # memory_limit: 200G # The maximum running memory for an observer
    # system_memory: 30G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    minor_freeze_times: 100
    minor_warm_up_duration_time: 0
    freeze_trigger_percentage: 40
    enable_merge_by_turn: FALSE
    datafile_disk_percentage: 50 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
    # datafile_size: 500G
    syslog_level: INFO # System log level. The default value is INFO.
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
    max_syslog_file_count: 50 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obce-3zones
    root_password: 0EI5N08d # root user password, can be empty
    proxyro_password: uY7Yf8zx # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  obce01:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    zone: zone1
  obce02:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    zone: zone2
  obce03:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo
    zone: zone3
obproxy:
  servers:
    - 172.20.249.53
    - 172.20.249.55
    - 172.20.249.56
  # Set dependent components for the component.
  # When the associated configurations are not done, OBD will automatically get these configurations from the dependent components.
  depends:
    - oceanbase-ce
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /home/admin/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port
    rs_list: 172.20.249.53:2881;172.20.249.55:2881;172.20.249.56:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname
    cluster_name: obce-3zones
    obproxy_sys_password: 0MdTv1tm # obproxy sys user password, can be empty
    observer_sys_password: uY7Yf8zx # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty
```
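Assuming you save the file above as, say, `obce-3zones.yaml` (the file name is just an example), the usual OBD workflow to bring the cluster up with it looks like this:

```bash
# Deploy the cluster directories described in the configuration file.
obd cluster deploy obce-3zones -c obce-3zones.yaml

# Start and bootstrap the cluster, then show its status.
obd cluster start obce-3zones
obd cluster display obce-3zones
```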
## A2. Configuration file for simulating a 6-node cluster on 3 ECS machines (test environment)

Each machine runs two nodes, listening on 2881/2882 and 3881/3882 respectively.

```yaml
# Only need to configure when remote login is required
user:
  username: admin
  # password: your password if need
  key_file: /home/admin/.ssh/id_rsa.pub
  port: your ssh port, default 22
  # timeout: ssh connection timeout (second), default 30
oceanbase-ce:
  servers:
    - name: obce01
      # Please don't use hostname, only IP can be supported
      ip: 172.20.249.53
    - name: obce02
      ip: 172.20.249.55
    - name: obce03
      ip: 172.20.249.56
    - name: obce04
      # Please don't use hostname, only IP can be supported
      ip: 172.20.249.53
    - name: obce05
      ip: 172.20.249.55
    - name: obce06
      ip: 172.20.249.56
  global:
    # Please set devname as the network adapter's name whose ip is in the setting of servers.
    # if set servers as "127.0.0.1", please set devname as "lo"
    # if current ip is 192.168.1.10, and the ip's network adapter's name is "eth0", please use "eth0"
    devname: eth0
    cluster_id: 2
    # please set memory limit to a suitable value which is matching resource.
    memory_limit: 10G # The maximum running memory for an observer
    system_memory: 3G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
    stack_size: 512K
    cpu_count: 16
    cache_wash_threshold: 1G
    __min_full_resource_pool_memory: 268435456
    workers_per_cpu_quota: 10
    schema_history_expire_time: 1d
    # The value of net_thread_count had better be same as cpu's core number.
    net_thread_count: 4
    major_freeze_duty_time: Disable
    minor_warm_up_duration_time: 0
    freeze_trigger_percentage: 40
    enable_separate_sys_clog: 0
    enable_merge_by_turn: FALSE
    #datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
    datafile_size: 50G
    syslog_level: WARN # System log level. The default value is INFO.
    enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
    enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
    max_syslog_file_count: 10 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
    # observer cluster name, consistent with obproxy's cluster_name
    appname: obce-3zones
    root_password: 0EI5N08d # root user password, can be empty
    proxyro_password: uY7Yf8zx # proxyro user password, consistent with obproxy's observer_sys_password, can be empty
  obce01:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/1
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/1
    zone: zone1
  obce02:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/1
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/1
    zone: zone2
  obce03:
    mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/1
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/1
    zone: zone3
  obce04:
    mysql_port: 3881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 3882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce2
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/2
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/2
    zone: zone1
  obce05:
    mysql_port: 3881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 3882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce2
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/2
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/2
    zone: zone2
  obce06:
    mysql_port: 3881 # External port for OceanBase Database. The default value is 2881.
    rpc_port: 3882 # Internal port for OceanBase Database. The default value is 2882.
    # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
    home_path: /home/admin/oceanbase-ce2
    # The directory for data storage. The default value is $home_path/store.
    data_dir: /data/2
    # The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
    redo_dir: /redo/2
    zone: zone3
obproxy:
  servers:
    - 172.20.249.54
  # Set dependent components for the component.
  # When the associated configurations are not done, OBD will automatically get these configurations from the dependent components.
  depends:
    - oceanbase-ce
  global:
    listen_port: 2883 # External port. The default value is 2883.
    prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
    home_path: /home/admin/obproxy
    # oceanbase root server list
    # format: ip:mysql_port;ip:mysql_port
    rs_list: 172.20.249.53:2881;172.20.249.55:2881;172.20.249.56:2881
    enable_cluster_checkout: false
    # observer cluster name, consistent with oceanbase-ce's appname
    # cluster_name: obce-3zones
    obproxy_sys_password: 0MdTv1tm # obproxy sys user password, can be empty
    # observer_sys_password: uY7Yf8zx # proxyro user password, consistent with oceanbase-ce's proxyro_password, can be empty

```
271  docs/docs/junior-training/ob-quick-start/chapter02/2.2.md  Normal file
@@ -0,0 +1,271 @@
## 如何快速体验 OceanBase
|
||||
|
||||
在部署 OceanBase 社区版之前,建议您快速通过 Docker 环境看一下一个部署好的 OceanBase 社区版环境。我们提供了一个 OceanBase 社区版 Docker 镜像,您可以在您的笔记本或电脑上使用 Docker 技术快速部署并启动 OceanBase 社区版的 Docker 容器。
|
||||
|
||||
### 机器资源要求
|
||||
|
||||
OceanBase Docker 容器对资源的要求如下:
|
||||
|
||||
+ 机器可用内存不少于 10G 。 注意,是剩余可用内存。
|
||||
+ 机器磁盘目录空间不少于 10G 。少于 10G 后面使用可能会不是很方便。如遭遇空间目录问题。
|
||||
+ CPU 建议至少有 2个 逻辑 CPU 。
|
||||
|
||||
### 安装 Docker
|
||||
|
||||
Docker 是免费软件,在 Windows、Linux、Mac 系统里都可以安装运行。下载和安装地址请参考 : [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/) 。
|
||||
|
||||
Docker 安装后,对默认的容器资源有限制,这里需要手动调整一下。下面以 Mac电脑上的 Docker 设置为例说明。
|
||||
|
||||

|
||||
|
||||
+ 常用 Docker 命令参考
|
||||
|
||||
```bash
|
||||
# 查看docker版本
|
||||
docker version
|
||||
# 显示docker系统的信息
|
||||
docker info
|
||||
# 日志信息
|
||||
docker logs
|
||||
# 故障检查
|
||||
service docker status
|
||||
# 启动关闭docker
|
||||
service docker start | stop
|
||||
|
||||
# 查看容器日志
|
||||
docker logs -f <容器名orID>
|
||||
|
||||
# 清理命令,危险!!!
|
||||
# 清理不用的容器
|
||||
docker container prune
|
||||
# 清理不用的镜像
|
||||
docker image prune
|
||||
# 清理不用的卷
|
||||
docker volume prune
|
||||
|
||||
```
|
||||
|
||||
### 下载镜像并启动
|
||||
|
||||
OceanBase Docker 镜像地址:[https://hub.docker.com/r/oceanbase/obce-mini](https://hub.docker.com/r/oceanbase/obce-mini) 。
|
||||
镜像的源码地址在 Github 上:[https://github.com/oceanbase/oceanbase/tree/master/tools/docker/mini](https://github.com/oceanbase/oceanbase/tree/master/tools/docker/mini) 。有兴趣的朋友可以直接看看。
|
||||
|
||||
```bash
|
||||
docker search oceanbase # 搜索 oceanbase 相关镜像
|
||||
|
||||
docker pull oceanbase/obce-mini
|
||||
|
||||
```
|
||||
|
||||
启动 OceanBase Docker 容器。
|
||||
|
||||
```bash
|
||||
docker run -p 2881:2881 --name obce-mini -d -e OB_HOME_PATH="/root/obce/" -e OB_TENANT_NAME="obmysql" oceanbase/obce-mini
|
||||
|
||||
输出:
|
||||
➜ ~ docker run -p 2881:2881 --name obce-mini -d -e OB_HOME_PATH="/root/obce/" -e OB_TENANT_NAME="obmysql" oceanbase/obce-mini
|
||||
45180d71f504981ed588b7de0e5abf952511f2c2f9ee5eac0446b6cf0d4dc02c
|
||||
➜ ~ docker ps
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
45180d71f504 oceanbase/obce-mini "/bin/sh -c _boot" 4 seconds ago Up 2 seconds 0.0.0.0:2881->2881/tcp, :::2881->2881/tcp obce-mini
|
||||
➜ ~
|
||||
```
|
||||
|
||||
### 查看容器启动日志
|
||||
|
||||
刚启动的 OceanBase 需要几分钟初始化集群。可以查看容器启动日志。
|
||||
|
||||
```bash
|
||||
docker logs obce-mini
|
||||
|
||||
输出:
|
||||
➜ ~ docker logs obce-mini
|
||||
generate boot.yaml ...
|
||||
create boot dirs and deploy OceanBase cluster ...
|
||||
Package oceanbase-ce-3.1.0 is available.
|
||||
install oceanbase-ce-3.1.0 for local ok
|
||||
+-----------------------------------------------------------------------------+
|
||||
| Packages |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
| Repository | Version | Release | Md5 |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
| oceanbase-ce | 3.1.0 | 2.el7 | afd11d52f83eef4b456d77969fde620c4bfba85e |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
Open ssh connection ok
|
||||
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository install ok
|
||||
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository lib check !!
|
||||
[WARN] 127.0.0.1 oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e require: libaio.so.1
|
||||
[WARN] 127.0.0.1 oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e require: libmariadb.so.3
|
||||
|
||||
Try to get lib-repository
|
||||
Package oceanbase-ce-libs-3.1.0 is available.
|
||||
install oceanbase-ce-libs-3.1.0 for local ok
|
||||
Use oceanbase-ce-libs-3.1.0-47300ca1ac4c62493caf3e9235b105e242e533b5 for oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e
|
||||
Remote oceanbase-ce-libs-3.1.0-47300ca1ac4c62493caf3e9235b105e242e533b5 repository install ok
|
||||
Remote oceanbase-ce-3.1.0-afd11d52f83eef4b456d77969fde620c4bfba85e repository lib check ok
|
||||
Cluster status check ok
|
||||
127.0.0.1 initializes cluster work home
|
||||
mini-ce deployed
|
||||
start OceanBase cluster ...
|
||||
Get local repositories and plugins ok
|
||||
Open ssh connection ok
|
||||
Cluster param config check ok
|
||||
Check before start observer ok
|
||||
Start observer ok
|
||||
observer program health check ok
|
||||
Connect to observer ok
|
||||
Initialize cluster
|
||||
Cluster bootstrap ok
|
||||
Wait for observer init ok
|
||||
+---------------------------------------------+
|
||||
| observer |
|
||||
+-----------+---------+------+-------+--------+
|
||||
| ip | version | port | zone | status |
|
||||
+-----------+---------+------+-------+--------+
|
||||
| 127.0.0.1 | 3.1.0 | 2881 | zone1 | active |
|
||||
+-----------+---------+------+-------+--------+
|
||||
|
||||
mini-ce running
|
||||
generate init_tenant.sql ...
|
||||
init tenant and sysbench database ...
|
||||
boot success!
|
||||
```
|
||||
|
||||
分析上面日志可以看出几点信息:
|
||||
|
||||
+ 会安装两个软件包:`oceanbase-ce-libs` 和 `oceanbase-ce-3.1.0` 。
|
||||
+ 先初始化集群目录。
|
||||
+ 然后初始化集群(`bootstrap`)。
|
||||
+ 再初始化业务租户(`tenant`)。
|
||||
|
||||
### 分析OB 进程特点
|
||||
|
||||
进入容器
|
||||
|
||||
```bash
|
||||
docker exec -it obce-mini bash
|
||||
|
||||
```
|
||||
|
||||
查看 OceanBase 社区版的 YUM 仓库
|
||||
|
||||
```bash
|
||||
[root@45180d71f504 ~]# cat /etc/yum.repos.d/OceanBase.repo
|
||||
输出:
|
||||
# OceanBase.repo
|
||||
|
||||
[oceanbase.community.stable]
|
||||
name=OceanBase-community-stable-el$releasever
|
||||
baseurl=http://mirrors.aliyun.com/oceanbase/community/stable/el/$releasever/$basearch/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=http://mirrors.aliyun.com/oceanbase/RPM-GPG-KEY-OceanBase
|
||||
|
||||
[oceanbase.development-kit]
|
||||
name=OceanBase-development-kit-el$releasever
|
||||
baseurl=http://mirrors.aliyun.com/oceanbase/development-kit/el/$releasever/$basearch/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=http://mirrors.aliyun.com/oceanbase/RPM-GPG-KEY-OceanBase
|
||||
```
|
||||
|
||||
查看 OBSERVER 进程特点。分析一个陌生环境的 OceanBase 集群节点进程,首先通过下面命令确定其启动位置、启动文件和启动参数等。
|
||||
|
||||
```bash
|
||||
yum -y install sysvinit-tools
|
||||
|
||||
[root@45180d71f504 ~]# ps -ef|grep observer
|
||||
root 85 1 99 01:50 ? 15:27:38 /root/.obd/repository/oceanbase-ce/3.1.0/afd11d52f83eef4b456d77969fde620c4bfba85e/bin/observer -r 127.0.0.1:2882:2881 -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,sys_bkgd_migration_retry_num=3,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,enable_auto_leader_switch=False,enable_one_phase_commit=False,weak_read_version_refresh_interval=5s,trace_log_slow_query_watermark=10s,large_query_threshold=1s,clog_sync_time_warn_threshold=2000ms,syslog_io_bandwidth_limit=10M,enable_sql_audit=False,enable_perf_event=False,clog_max_unconfirmed_log_count=5000,autoinc_cache_refresh_interval=86400s,cpu_quota_concurrency=2,datafile_size=5G,enable_syslog_recycle=True,max_syslog_file_count=2,enable_early_lock_release=false tenant=all,default_compress_func=lz4_1.0,root_password=None -z zone1 -p 2881 -P 2882 -c 1 -d /root/obce//store -i lo -l WARN
|
||||
root 663 606 0 04:41 pts/0 00:00:00 grep --color=auto observer
|
||||
|
||||
[root@45180d71f504 ~]# ll /proc/`pidof observer`/{cwd,exe,cmdline}
|
||||
-r--r--r-- 1 root root 0 Sep 11 01:47 /proc/85/cmdline
|
||||
lrwxrwxrwx 1 root root 0 Sep 11 01:47 /proc/85/cwd -> /root/obce
|
||||
lrwxrwxrwx 1 root root 0 Sep 11 01:47 /proc/85/exe -> /root/.obd/repository/oceanbase-ce/3.1.0/afd11d52f83eef4b456d77969fde620c4bfba85e/bin/observer
|
||||
[root@45180d71f504 ~]# cat /proc/`pidof observer`/cmdline
|
||||
/root/.obd/repository/oceanbase-ce/3.1.0/afd11d52f83eef4b456d77969fde620c4bfba85e/bin/observer-r127.0.0.1:2882:2881-o__min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,sys_bkgd_migration_retry_num=3,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,enable_auto_leader_switch=False,enable_one_phase_commit=False,weak_read_version_refresh_interval=5s,trace_log_slow_query_watermark=10s,large_query_threshold=1s,clog_sync_time_warn_threshold=2000ms,syslog_io_bandwidth_limit=10M,enable_sql_audit=False,enable_perf_event=False,clog_max_unconfirmed_log_count=5000,autoinc_cache_refresh_interval=86400s,cpu_quota_concurrency=2,datafile_size=5G,enable_syslog_recycle=True,max_syslog_file_count=2,enable_early_lock_release=false tenant=all,default_compress_func=lz4_1.0,root_password=None-zzone1-p2881-P2882-c1-d
|
||||
/root/obce//store-ilo-lWARN
|
||||
[root@45180d71f504 ~]#
|
||||
|
||||
```
|
||||
|
||||
从上面可以看出 `observer` 进程几点信息:
|
||||
|
||||
+ 进程启动目录是在 `/root/obce` 下。
|
||||
+ 进程可执行文件目录在 `/root/.obd/repository/oceanbase-ce/3.1.0/afd11d52f83eef4b456d77969fde620c4bfba85e/bin/` 下。这个目录是 OBD 安装 OceanBase 软件的目录,里面带了具体的版本号。目录比较长,OBD 后面版本已经将这个目录映射到 `/root/obce/bin/` 下了。
|
||||
+ 进程的启动参数很长。部分参数含义后面再详细介绍。
|
||||
|
||||
查看进程监听端口。`observer` 进程会监听 2 个端口:一个是连接端口 2881,一个是 RPC 通信端口 2882 。
|
||||
|
||||
```bash
|
||||
yum install -y net-tools
|
||||
|
||||
netstat -ntlp
|
||||
|
||||
输出:
|
||||
[root@45180d71f504 85]# netstat -ntlp
|
||||
Active Internet connections (only servers)
|
||||
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
|
||||
tcp 0 0 0.0.0.0:2881 0.0.0.0:* LISTEN 85/observer
|
||||
tcp 0 0 0.0.0.0:2882 0.0.0.0:* LISTEN 85/observer
|
||||
```
|
||||
|
||||
查看 OceanBase 工作目录结构,这个很有必要。
|
||||
|
||||
```bash
|
||||
yum -y install tree
|
||||
tree /root/ob
|
||||
|
||||
[root@45180d71f504 ~]# tree /root/ob
|
||||
/root/ob [error opening dir]
|
||||
|
||||
0 directories, 0 files
|
||||
[root@45180d71f504 ~]# tree /root/obce/
|
||||
/root/obce/
|
||||
|-- admin
|
||||
|-- etc
|
||||
| |-- observer.config.bin
|
||||
| `-- observer.config.bin.history
|
||||
|-- etc2
|
||||
| |-- observer.conf.bin
|
||||
| `-- observer.conf.bin.history
|
||||
|-- etc3
|
||||
| |-- observer.conf.bin
|
||||
| `-- observer.conf.bin.history
|
||||
|-- log
|
||||
| |-- election.log
|
||||
| |-- election.log.wf
|
||||
| |-- observer.log
|
||||
| |-- observer.log.wf
|
||||
| |-- rootservice.log
|
||||
| `-- rootservice.log.wf
|
||||
|-- run
|
||||
| |-- mysql.sock
|
||||
| `-- observer.pid
|
||||
`-- store
|
||||
|-- clog
|
||||
| `-- 1
|
||||
|-- clog_shm
|
||||
|-- ilog
|
||||
| `-- 1
|
||||
|-- ilog_shm
|
||||
|-- slog
|
||||
| `-- 1
|
||||
`-- sstable
|
||||
`-- block_file
|
||||
```
|
||||
|
||||
如果您是手动部署 OceanBase 节点,这个工作目录下的子目录结构需要手动维护好(可以参考下面表格之后的示例命令提前创建),否则 `observer` 可能启动失败。使用自动化部署软件 OBD 的时候,会自动创建相应目录。
|
||||
|
||||
| 目录路径(相对于工作目录) | 备注 |
|
||||
|---------------|---------------------|
|
||||
| etc etc2 etc3 | 配置文件所在目录 |
|
||||
| log | 运行日志目录 |
|
||||
| run | 运行输出目录,输出pid文件 |
|
||||
| store | 数据(包括日志)所在总目录 |
|
||||
| store/clog | commit log所在目录 |
|
||||
| store/ilog | ilog 所在目录 |
|
||||
| store/slog | slog所在目录 |
|
||||
| store/sstable | 数据文件block file所在目录。 |
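下面给出一个手动创建这些子目录的示意命令,工作目录以本文 Docker 示例中的 `/root/obce` 为例,实际请按您自己的目录规划调整:

```bash
# 以工作目录 /root/obce 为例,提前创建 observer 需要的子目录(示意)
mkdir -p /root/obce/{etc,etc2,etc3,log,run}
mkdir -p /root/obce/store/{clog,ilog,slog,sstable}
```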
|
||||
|
||||
注意:这个 Docker 示例把 OceanBase 安装在 `root` 用户目录下,并以 `root` 用户运行,这个只是用于学习。生产环境不要以 `root` 用户部署和运行 OceanBase 。
|
54
docs/docs/junior-training/ob-quick-start/chapter02/2.3.md
Normal file
@ -0,0 +1,54 @@
|
||||
# 如何规划 OceanBase 集群部署
|
||||
|
||||
## 集群架构规划
|
||||
|
||||
OceanBase 以集群形态运行,生产环境最小规模是 3 台服务器(节点)。整个集群里,业务数据会有三份,所以也叫三副本。
|
||||
学习测试的时候,可以部署单副本单节点 OceanBase 集群。
|
||||
这里特别说明的是,单副本跟单节点并不完全对等。单副本单节点是最小集群规模,单副本也可以扩容为多个节点,整个集群里数据依然只有一份,所以叫单副本。
|
||||
|
||||
生产环境,每个机器上启动一个 `observer` 进程,所以一台机器就对应一个节点。学习环境,一个机器可以启动多个 `observer` 进程,模拟多个节点。每个节点的监听端口(默认是 2881 和 2882 )、数据总目录是独立的,互不冲突。每个节点进程启动的最小内存是 10G ,磁盘空间也至少需要 10G 。
|
||||
|
||||
所以,如果只有一台服务器:机器可用内存不足 10G 时,不能启动 `observer` 进程;可用内存在 10G ~ 20G 之间时,只可以启动一个 `observer` 进程;可用内存在 20G ~ 30G 之间时,可以启动 2 个 `observer` 进程;可用内存超过 30G 时,则可以启动 3 个 `observer` 进程。当然,内存充足的时候,也可以调大每个 `observer` 进程能获取的内存,内存越大,节点的资源能力就越大。如果有三台机器,就没必要在一个机器上模拟多个节点了。
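规划节点数之前,可以先看一下机器当前的可用内存,下面的命令仅供参考:

```bash
# 查看可用内存(available 列),据此估算可以启动几个 observer 进程
free -g
grep MemAvailable /proc/meminfo
```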
|
||||
|
||||
除了要部署 `observer` 进程,还需要部署 `obproxy` 。 `obproxy` 也是单进程软件,是访问 OceanBase 的反向代理。虽然 `observer` 节点都可以直接访问,生产环境还是建议通过 `obproxy` 访问 OceanBase 集群。
|
||||
`obproxy` 进程部署位置没有要求。可以部署在应用服务器上,也可以部署在独立的机器上,或者部署在 OceanBase 机器上。`obproxy` 可以部署多个,生产环境建议至少部署两个。
|
||||
|
||||
## 用户规划
|
||||
|
||||
OceanBase 本质上是一个软件,可以运行在任意用户下。OceanBase 软件包默认解压目录是在 `/home/admin/` 下,生产环境默认也是安装在用户 `admin` 下。社区版的软件 RPM 包也是这个特点,支持部署在任意用户的任意目录下。
|
||||
|
||||
为了安全起见,我们不建议在 `root` 用户下直接部署。所以后面都以部署在用户 `admin` 下为前提。在部署之前初始化环境的时候,可能需要修改操作系统的配置,或者设置目录的权限等,这些操作需要 `root` 用户权限。不同客户内部主机登录规范不一样,可以通过 `su` 切换到 `root` 用户,或者给 `admin` 用户增加 `sudo` 权限。
|
||||
|
||||
## 目录规划
|
||||
|
||||
跟 `observer` 有关的目录有好几个:
|
||||
|
||||
+ 软件安装目录。
|
||||
|
||||
如果是安装 OceanBase 的 RPM 包,则需要提前创建好用户 `admin` ,软件会被自动安装在目录 `/home/admin/oceanbase` 下。
|
||||
|
||||
```bash
|
||||
[root@obce00 ~]# useradd admin
|
||||
[root@obce00 ~]# rpm -ivh rpm/*
|
||||
准备中... ################################# [100%]
|
||||
正在升级/安装...
|
||||
1:oceanbase-ce-libs-3.1.0-1.el7 ################################# [ 33%]
|
||||
2:oceanbase-ce-3.1.0-1.el7 ################################# [ 67%]
|
||||
3:obproxy-3.1.0-1.el7 ################################# [100%]
|
||||
|
||||
[root@obce00 ~]# rpm -ql oceanbase-ce-3.1.0-1.el7.x86_64
|
||||
/home/admin/oceanbase
|
||||
/home/admin/oceanbase/bin
|
||||
/home/admin/oceanbase/bin/import_time_zone_info.py
|
||||
/home/admin/oceanbase/bin/observer
|
||||
/home/admin/oceanbase/etc
|
||||
/home/admin/oceanbase/etc/timezone_V1.log
|
||||
[root@obce00 ~]# rpm -ql obproxy-3.1.0-1.el7
|
||||
/home/admin/obproxy-3.1.0/bin
|
||||
/home/admin/obproxy-3.1.0/bin/obproxy
|
||||
/home/admin/obproxy-3.1.0/bin/obproxyd.sh
|
||||
```
|
||||
|
||||
如果是通过 OBD 软件自动化安装,则会将 RPM 包解压到用户 HOME 目录的隐藏文件夹 `.obd` 下,如:`~/.obd/repository/oceanbase-ce/3.1.0/afd11d52f83eef4b456d77969fde620c4bfba85e` 。这种方式可以同时部署多个版本。
|
||||
|
||||
后面讲解部署方法会首先介绍 OBD 软件自动化部署方法。手动部署方法留在最后,供感兴趣的朋友参考。
|
437
docs/docs/junior-training/ob-quick-start/chapter02/2.4.md
Normal file
@ -0,0 +1,437 @@
|
||||
# 如何初始化服务器环境
|
||||
|
||||
OceanBase 数据库是单进程软件,需要访问网络,需要打开多个文件以及开启很多 TCP 连接,所以需要修改内核参数和用户会话设置。
|
||||
|
||||
注意:OBProxy 软件如果独立服务器部署的话,也按这个要求初始化服务器环境。
|
||||
|
||||
## 内核参数修改
|
||||
|
||||
修改配置文件。
|
||||
|
||||
```bash
|
||||
vim /etc/sysctl.conf
|
||||
|
||||
net.core.somaxconn = 2048
|
||||
net.core.netdev_max_backlog = 10000
|
||||
net.core.rmem_default = 16777216
|
||||
net.core.wmem_default = 16777216
|
||||
net.core.rmem_max = 16777216
|
||||
net.core.wmem_max = 16777216
|
||||
|
||||
net.ipv4.ip_local_port_range = 3500 65535
|
||||
net.ipv4.ip_forward = 0
|
||||
net.ipv4.conf.default.rp_filter = 1
|
||||
net.ipv4.conf.default.accept_source_route = 0
|
||||
net.ipv4.tcp_syncookies = 0
|
||||
net.ipv4.tcp_rmem = 4096 87380 16777216
|
||||
net.ipv4.tcp_wmem = 4096 65536 16777216
|
||||
net.ipv4.tcp_max_syn_backlog = 16384
|
||||
net.ipv4.tcp_fin_timeout = 15
|
||||
net.ipv4.tcp_max_syn_backlog = 16384
|
||||
net.ipv4.tcp_tw_reuse = 1
|
||||
net.ipv4.tcp_tw_recycle = 1
|
||||
net.ipv4.tcp_slow_start_after_idle=0
|
||||
|
||||
vm.swappiness = 0
|
||||
vm.min_free_kbytes = 2097152
|
||||
vm.max_map_count=655360
|
||||
fs.aio-max-nr=1048576
|
||||
|
||||
```
|
||||
|
||||
让配置生效
|
||||
|
||||
```bash
|
||||
sysctl -p
|
||||
|
||||
```
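可以用下面的命令抽查几个关键内核参数是否已经生效(仅为示例,参数名以上面配置文件为准):

```bash
# 抽查几个关键内核参数的当前值
sysctl net.core.somaxconn net.ipv4.ip_local_port_range fs.aio-max-nr vm.max_map_count
```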
|
||||
|
||||
## 修改会话变量设置
|
||||
|
||||
您可以通过配置 `limits.conf` 修改会话级别的资源限制。 OceanBase 数据库的进程涉及的限制包括线程最大栈空间大小(Stack)、最大文件句柄数(Open Files)和 core 文件大小 (Core File Size)。
|
||||
|
||||
您可以使用以下两种方法修改资源限制:
|
||||
|
||||
+ 通过启动时在会话级别修改。如:` ulimit -c unlimited ` , 只影响当前会话。如果会话断开重连了,则又是默认配置。
|
||||
+ 通过配置文件 `/etc/security/limits.conf` 在全局级别修改。注意修改后,已经登录的会话需要退出重登录才生效。
|
||||
|
||||
更改配置文件说明。
|
||||
|
||||
将会话级别的最大栈空间大小设置为 `unlimited`,最大文件句柄数设置为 655360,Core 文件大小设置为 `unlimited` 。
修改 `/etc/security/limits.conf` 配置文件。如果已有的设置值低于这里的设置值,请调整为下面的设置值。
|
||||
|
||||
```bash
|
||||
vi /etc/security/limits.conf
|
||||
|
||||
* soft nofile 655360
|
||||
* hard nofile 655360
|
||||
* soft nproc 655360
|
||||
* hard nproc 655360
|
||||
* soft core unlimited
|
||||
* hard core unlimited
|
||||
* soft stack unlimited
|
||||
* hard stack unlimited
|
||||
```
|
||||
|
||||
查看配置方法。退出当前会话,重新登录。执行以下命令,查看配置是否生效:
|
||||
|
||||
```bash
|
||||
ulimit -a
|
||||
```
|
||||
|
||||
## 关闭防火墙和 SELinux
|
||||
|
||||
不同操作系统的防火墙设置可能有点不同,下面以 CentOS 系统为例。
|
||||
|
||||
+ 关闭防火墙
|
||||
|
||||
查看防火墙状态
|
||||
|
||||
```bash
|
||||
systemctl status firewalld
|
||||
```
|
||||
|
||||
如果是 `inactive` 那就不用管。如果是 `active`,那就永久关闭
|
||||
|
||||
```bash
|
||||
systemctl disable firewalld
|
||||
systemctl stop firewalld
|
||||
systemctl status firewalld
|
||||
|
||||
```
|
||||
|
||||
+ 关闭 SELinux
|
||||
|
||||
修改 SELinux 配置文件中的 `SELINUX` 选项。
|
||||
注意:必须使用注释中的三个值之一。如果写错了,机器重启后操作系统会报错起不来,那时候就只能进入单用户模式修改了。
|
||||
|
||||
```bash
|
||||
vi /etc/selinux/config
|
||||
|
||||
# This file controls the state of SELinux on the system.
|
||||
# SELINUX= can take one of these three values:
|
||||
# enforcing - SELinux security policy is enforced.
|
||||
# permissive - SELinux prints warnings instead of enforcing.
|
||||
# disabled - No SELinux policy is loaded.
|
||||
SELINUX=disabled
|
||||
|
||||
```
|
||||
|
||||
配置文件修改后要重启主机才会生效。如需立即生效,还需要使用下面的命令。
|
||||
|
||||
```bash
|
||||
setenforce 0
|
||||
|
||||
```
|
||||
|
||||
## 配置时间同步服务
|
||||
|
||||
OceanBase 是分布式数据库产品,是一个集群软件,对各个节点之间的时间同步性有要求。技术上要求所有节点之间的时间误差控制在 50ms 以内。实际生产环境为了稳定性和性能考虑,建议时间误差控制在 10ms 以内。通常只要节点配置时间同步服务器跟公网时间保持同步即可。实际上在企业机房里,企业会有统一的时间服务器跟机房提供的时间服务器或者直接跟公网时间服务器同步,OceanBase 节点只需要跟机房统一的时间服务器进行同步即可。
|
||||
|
||||
CentOS 或 RedHat 7.x 版本推荐使用 `chrony` 服务做时间源。`Chrony` 是 NTP(`Network Time Protocol`,网络时间协议,服务器时间同步的一种协议)的另一种实现,与 `ntpd` 不同,它可以更快且更准确地同步系统时钟,最大程度的减少时间和频率误差。
|
||||
|
||||
+ 判断是否使用 `ntpd` 同步时间。
|
||||
|
||||
```bash
|
||||
systemctl status ntpd
|
||||
Unit ntpd.service could not be found.
|
||||
```
|
||||
|
||||
如果提示上面这个信息,表示没有使用 `ntpd`,那就继续。
|
||||
如果提示有 ntpd 服务,就卸载 `ntpd` 软件。
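下面是停用并卸载 `ntpd` 的一个参考做法,假设 `ntpd` 由 `ntp` 软件包提供,请按实际环境调整:

```bash
# 停止并禁用 ntpd 服务,然后卸载 ntp 软件包
systemctl stop ntpd
systemctl disable ntpd
yum remove -y ntp
```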
|
||||
|
||||
+ 安装 `chrony` 服务
|
||||
|
||||
这里采用 YUM 安装方法。您也可以下载相应的 RPM 包安装。
|
||||
|
||||
```bash
|
||||
yum -y install chrony
|
||||
|
||||
```
|
||||
|
||||
+ `chrony` 配置说明
|
||||
|
||||
`chrony` 服务守护进程名是 `chronyd`,`chronyc` 是用来监控 `chronyd` 性能和配置参数的命令行工具。
|
||||
`chrony` 的主配置文件: `/etc/chrony.conf` 。配置方法如下:
|
||||
|
||||
```bash
|
||||
vi /etc/chrony.conf
|
||||
|
||||
# server 后面跟时间同步服务器
|
||||
# 使用pool.ntp.org 项目中的公共服务器。按 server 配置,理论上您想添加多少时间服务器都可以。
|
||||
# 或者使用 阿里云的 ntp 服务器
|
||||
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
|
||||
server ntp.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
|
||||
server ntp.aliyun.com minpoll 4 maxpoll 10 iburst
|
||||
server ntp1.aliyun.com minpoll 4 maxpoll 10 iburst
|
||||
server ntp1.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
|
||||
server ntp10.cloud.aliyuncs.com minpoll 4 maxpoll 10 iburst
|
||||
|
||||
# 如果是测试环境,没有时间同步服务器,那就选取一台配置为时间同步服务器。
|
||||
# 如果选中的是本机,则取消下面 server 注释
|
||||
#server 127.127.1.0
|
||||
|
||||
# 根据实际时间计算出服务器增减时间的比率,然后记录到一个文件中,在系统重启后为系统做出最佳时间补偿调整。
|
||||
driftfile /var/lib/chrony/drift
|
||||
|
||||
# chronyd 根据需求减慢或加速时间调整,
|
||||
# 在某些情况下系统时钟可能漂移过快,导致时间调整用时过长。
|
||||
# 该指令强制 chronyd 在时间偏差大于指定阈值时,以步进方式调整系统时钟。
# 该步进调整只在 chronyd 启动后的前几次时钟更新内生效(次数可配置,使用负值可禁用该限制)。
|
||||
makestep 1.0 3
|
||||
|
||||
# 将启用一个内核模式,在该模式中,系统时间每11分钟会拷贝到实时时钟(RTC)。
|
||||
rtcsync
|
||||
|
||||
# Enable hardware timestamping on all interfaces that support it.
|
||||
# 通过使用hwtimestamp指令启用硬件时间戳
|
||||
#hwtimestamp eth0
|
||||
#hwtimestamp eth1
|
||||
#hwtimestamp *
|
||||
|
||||
# Increase the minimum number of selectable sources required to adjust
|
||||
# the system clock.
|
||||
#minsources 2
|
||||
|
||||
# 指定一台主机、子网,或者网络以允许或拒绝NTP连接到扮演时钟服务器的机器
|
||||
#allow 192.168.0.0/16
|
||||
#deny 192.168/16
|
||||
|
||||
# 即使没有同步到时间源,也要服务时间
|
||||
local stratum 10
|
||||
|
||||
# 指定包含NTP验证密钥的文件。
|
||||
#keyfile /etc/chrony.keys
|
||||
|
||||
# 指定日志文件的目录。
|
||||
logdir /var/log/chrony
|
||||
|
||||
|
||||
|
||||
# Select which information is logged.
|
||||
#log measurements statistics tracking
|
||||
```
|
||||
|
||||
最简单的配置文件如下:
|
||||
|
||||
```bash
|
||||
server 127.127.1.0
|
||||
allow 172.20.0.0/16
|
||||
local stratum 10
|
||||
```
|
||||
|
||||
+ 常用一些命令
|
||||
|
||||
使用 `chrony` 时间服务是为了让 OceanBase 集群各个节点的时间尽可能保持同步,下面这些命令供参考。具体使用请查看 `chrony` 官方使用说明:[Chronyc Frequently Asked Questions](https://chrony.tuxfamily.org/faq.html)
|
||||
|
||||
```bash
|
||||
查看时间同步活动
|
||||
chronyc activity
|
||||
|
||||
查看时间服务器
|
||||
chronyc sources
|
||||
|
||||
查看同步状态
|
||||
chronyc sources -v
|
||||
|
||||
校准时间服务器:
|
||||
chronyc tracking
|
||||
```
|
||||
|
||||
使用 `clockdiff` 命令可以检查本机跟目标机器的时间同步误差,以这个结果为准。
|
||||
|
||||
```bash
# clockdiff 来自 iputils 软件包,用法为:clockdiff <目标机器IP>(可能需要 root 权限)
# 下面的 IP 仅为示例,请替换为实际要对比的节点 IP
clockdiff 172.20.249.52
```
|
||||
|
||||
## (可选)时区设置
|
||||
|
||||
如果时间显示跟当前实际时间差异很大的时候,请查看确认当前系统时区。
|
||||
|
||||
```bash
|
||||
timedatectl
|
||||
|
||||
输出:
|
||||
[root@obce00 ~]# timedatectl
|
||||
|
||||
Local time: 六 2021-09-11 07:37:22 CST
|
||||
Universal time: 五 2021-09-10 23:37:22 UTC
|
||||
RTC time: 六 2021-09-11 07:37:22
|
||||
Time zone: Asia/Shanghai (CST, +0800)
|
||||
System clock synchronized: yes
|
||||
NTP service: active
|
||||
RTC in local TZ: yes
|
||||
|
||||
Warning: The system is configured to read the RTC time in the local time zone.
|
||||
This mode cannot be fully supported. It will create various problems
|
||||
with time zone changes and daylight saving time adjustments. The RTC
|
||||
time is never updated, it relies on external facilities to maintain it.
|
||||
If at all possible, use RTC in UTC by calling
|
||||
'timedatectl set-local-rtc 0'.
|
||||
```
|
||||
|
||||
查看所有可用时区。
|
||||
|
||||
```bash
|
||||
timedatectl list-timezones
|
||||
```
|
||||
|
||||
设置当前系统时区方法如下。设置完时区后,强制同步下系统时钟。
|
||||
|
||||
```bash
|
||||
timedatectl set-timezone Asia/Shanghai
|
||||
|
||||
chronyc -a makestep
|
||||
|
||||
输出:
|
||||
[root@obce00 ~]# chronyc -a makestep
|
||||
200 OK
|
||||
```
|
||||
|
||||
## 配置安装用户
|
||||
|
||||
前面分析过,建议安装部署在普通用户下,后面都以用户 `admin` 为例。
|
||||
|
||||
注意:给用户 `admin` 赋 `sudo` 权限不是必须的,只是为了某些时候方便。您可以结合企业安全规范决定是否执行。
|
||||
|
||||
下面是创建用户 `admin` 并授予 `sudo` 权限的方法,供参考。
|
||||
|
||||
```bash
|
||||
# 新增普通用户 admin
|
||||
useradd admin
|
||||
|
||||
# 改用户密码
|
||||
passwd admin
|
||||
|
||||
# 或下面命令指定密码,密码修改为自己的。
|
||||
|
||||
echo 'admin:adminPWD123' | chpasswd
|
||||
|
||||
```
|
||||
|
||||
在 CentOS 上面给 `admin` 用户增加 `sudo` 权限有两个方法:
|
||||
|
||||
+ 把用户加到 用户组 `wheel` 里。
|
||||
+ 把用户加到 `/etc/sudoers` 文件里。
|
||||
|
||||
```bash
|
||||
# 如果sudo 不存在,就安装 sudo
|
||||
yum install -y sudo
|
||||
|
||||
# 方法一:admin 加到用户组 wheel 里。
|
||||
[root@obce00 ~]# usermod admin -G wheel
|
||||
[root@obce00 ~]# id admin
|
||||
uid=1000(admin) gid=1000(admin) groups=1000(admin),10(wheel)
|
||||
|
||||
|
||||
# 方法二:admin 添加到 /etc/sudoers 文件中
|
||||
[root@obce00 ~]# cat /etc/sudoers |grep wheel
|
||||
## Allows people in group wheel to run all commands
|
||||
%wheel ALL=(ALL) ALL
|
||||
# %wheel ALL=(ALL) NOPASSWD: ALL
|
||||
|
||||
vim /etc/sudoers
|
||||
## Allow root to run any commands anywhere
|
||||
admin ALL=(ALL) ALL
|
||||
|
||||
```
|
||||
|
||||
验证方法,切换到 `admin` 用户下,执行命令:`sudo date` 。输入密码后能返回结果。
|
||||
|
||||
## 配置 SSH 免密登录
|
||||
|
||||
如果您是完全手动部署 OceanBase 集群,即登录到相应节点上安装相关软件包,并启动 `observer` 或 `obproxy` 进程,则不需要配置 SSH 免密登录。
如果您是使用自动化技术部署 OceanBase 集群,则需要一台中控机,所有的命令通过中控机向 OceanBase 集群节点发出。这时需要配置从中控机上运行 OBD 的用户到 OceanBase 集群节点上安装 OBSERVER 的用户的 SSH 免密登录。本文示例是中控机的用户 `admin` 到 OBSERVER 节点的用户 `admin` 的免密登录。
|
||||
|
||||
配置 SSH 免密登录的方法有很多,这里选择将中控机的 RSA 或 DSA 公钥复制到目标节点的 SSH 授权文件(`authorized_keys`)中。
|
||||
|
||||
+ 在中控机生成 RSA 或 DSA 公钥和私钥
|
||||
|
||||
```bash
|
||||
ssh-keygen -t rsa
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ ssh-keygen -t rsa
|
||||
Generating public/private rsa key pair.
|
||||
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
|
||||
Created directory '/home/admin/.ssh'.
|
||||
Enter passphrase (empty for no passphrase):
|
||||
Enter same passphrase again:
|
||||
Your identification has been saved in /home/admin/.ssh/id_rsa.
|
||||
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
|
||||
The key fingerprint is:
|
||||
SHA256:7yCIks5NT8j7L1XIq+gRL3qm04cvHTSQmlaNr4gdHqc admin@obce00
|
||||
The key's randomart image is:
|
||||
+---[RSA 3072]----+
|
||||
| + |
|
||||
| = . |
|
||||
| + o . . |
|
||||
| +o .+ o . |
|
||||
|oo.*o . S |
|
||||
|.oEo+o o . |
|
||||
|o o*=o= . . |
|
||||
|oo+B*= . o |
|
||||
| =*+=+o. . |
|
||||
+----[SHA256]-----+
|
||||
[admin@obce00 ~]$
|
||||
|
||||
[admin@obce00 ~]$ ls -al .ssh/
|
||||
total 8
|
||||
drwx------ 2 admin admin 38 Sep 11 14:43 .
|
||||
drwx------ 4 admin admin 115 Sep 11 14:43 ..
|
||||
-rw------- 1 admin admin 2602 Sep 11 14:43 id_rsa
|
||||
-rw-r--r-- 1 admin admin 569 Sep 11 14:43 id_rsa.pub
|
||||
```
|
||||
|
||||
上面命令会在用户的 HOME 目录生成文件夹 `.ssh` 。注意,不要改变文件夹以及里面文件的访问权限。
|
||||
|
||||
+ 打通到本机的 SSH 免密登录
|
||||
|
||||
复制 RSA 或 DSA 公钥到目标节点,推荐使用命令 `ssh-copy-id` 。
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ ssh-copy-id `hostname -i`
|
||||
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/admin/.ssh/id_rsa.pub"
|
||||
The authenticity of host '172.20.249.50 (172.20.249.50)' can't be established.
|
||||
ECDSA key fingerprint is SHA256:Zyyq5dY+05pkdqGCm6K43s97l8DUGv0LjY5t+zrdVkE.
|
||||
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
|
||||
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
|
||||
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
|
||||
admin@172.20.249.50's password:
|
||||
|
||||
Number of key(s) added: 1
|
||||
|
||||
Now try logging into the machine, with: "ssh '172.20.249.50'"
|
||||
and check to make sure that only the key(s) you wanted were added.
|
||||
|
||||
[admin@obce00 ~]$ ls -al .ssh
|
||||
total 16
|
||||
drwx------ 2 admin admin 80 Sep 11 14:44 .
|
||||
drwx------ 4 admin admin 115 Sep 11 14:43 ..
|
||||
-rw------- 1 admin admin 569 Sep 11 14:44 authorized_keys
|
||||
-rw------- 1 admin admin 2602 Sep 11 14:43 id_rsa
|
||||
-rw-r--r-- 1 admin admin 569 Sep 11 14:43 id_rsa.pub
|
||||
-rw-r--r-- 1 admin admin 175 Sep 11 14:44 known_hosts
|
||||
[admin@obce00 ~]$
|
||||
|
||||
```
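配置完成后,可以验证一下免密登录是否生效,不再提示输入密码即为成功:

```bash
# 验证免密登录:如果直接返回主机名和日期而不提示输入密码,说明配置成功
ssh `hostname -i` "hostname; date"
```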
|
||||
|
||||
## 磁盘文件系统划分
|
||||
|
||||
OceanBase 读写磁盘主要是三类文件:
|
||||
|
||||
+ 运行日志。在启动目录下的 `log` 目录里。主要记录进程 `observer` 的运行日志、选举服务的运行日志和 `rootservice` 的运行日志。主要读写特点是顺序写。
|
||||
+ 数据文件。主要是指数据文件 `block_file` ,一次性初始化大小,后面可以在线扩容,但是不能缩容。主要读写特点是随机读、顺序写。偶尔密集的随机写。
|
||||
+ 事务日志文件。主要是指事务和 `sstable` 相关的日志 ,包括 `clog`、`ilog` 和 `slog` 等。主要读写特点是顺序写。
|
||||
|
||||
这三个文件尽可能的分散在不同的磁盘上存储。如果物理上只有一块盘,则可以使用 `fdisk` 或 `lvm` 划分为多个逻辑盘。
|
||||
下面针对机器提供的裸盘(`/dev/vdb`) 演示如何分盘。
|
||||
|
||||
+ 方法一是使用 `fdisk` 直接将 `/dev/vdb` 划分为两个逻辑盘 (`/dev/vdb1` 和 `/dev/vdb2` )。
|
||||
这个方法的缺陷是这里 `/dev/vdb` 是云盘,后期还可以扩容,使用 `fdisk` 分盘后,扩容比较麻烦。
|
||||
+ 方法二是对 `/dev/vdb` 使用 LVM 技术,划分出两个 LV 出来,一个给数据文件用,一个给日志文件。
|
||||
|
||||
`fdisk` 或者 `parted`,以及 LVM 技术都是磁盘划分组合的手段。这里就不详细描述方法。
|
||||
不管是哪种办法,优先考虑事务日志文件的大小,生产环境建议是可用内存大小的 3-4 倍。剩余的大小再留给数据文件。如果是学习环境,总的盘大小本身就很小,可以不遵守这个规则,日志文件大小比内存大 1-2 倍也可以。
|
||||
|
||||
注意: OBProxy 独立部署的服务器就不用做这个文件系统划分了。OBProxy 只有运行日志目录。
|
167
docs/docs/junior-training/ob-quick-start/chapter02/2.5.md
Normal file
@ -0,0 +1,167 @@
|
||||
# 如何安装 OBD
|
||||
|
||||
OBD 全称是 OceanBase Deploy,是 OceanBase 社区版的命令行下自动化部署软件。
|
||||
根据中控机器能否连接公网,提供两个安装方法:离线和在线。二选一。
|
||||
|
||||
## 安装 OBD 软件(离线)
|
||||
|
||||
首先在中控机上部署 OBD 软件。如果中控机不能上网,则需要提前下载好 OBD 、 OBSERVER 和 OBPROXY 相关软件包。
|
||||
|
||||
+ 下载相关软件包
|
||||
|
||||
软件包地址:[https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/](https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/)
|
||||
|
||||
```bash
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/ob-deploy-1.1.0-1.el8.x86_64.rpm
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/oceanbase-ce-3.1.0-3.el8.x86_64.rpm
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/oceanbase-ce-libs-3.1.0-3.el8.x86_64.rpm
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/obclient-2.0.0-2.el8.x86_64.rpm
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/libobclient-2.0.0-2.el8.x86_64.rpm
|
||||
wget https://mirrors.aliyun.com/oceanbase/community/stable/el/8/x86_64/obproxy-3.1.0-1.el8.x86_64.rpm
|
||||
|
||||
```
|
||||
|
||||
将上面文件都复制到中控机上的临时目录(如后面示例中使用的 `/tmp/obd/` 目录)。
|
||||
|
||||
+ 离线安装 OBD
|
||||
|
||||
```bash
|
||||
[admin@obce00 obd]$ sudo rpm -ivh ob-deploy-1.1.0-1.el8.x86_64.rpm
|
||||
[sudo] password for admin:
|
||||
warning: ob-deploy-1.1.0-1.el8.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID e9b4a7aa: NOKEY
|
||||
Verifying... ################# [100%]
|
||||
Preparing... ################# [100%]
|
||||
Updating / installing...
|
||||
1:ob-deploy-1.1.0-1.el8 ################# [100%]
|
||||
Installation of obd finished successfully
|
||||
Please source /etc/profile.d/obd.sh to enable it
|
||||
|
||||
[admin@obce00 obd]$ source /etc/profile.d/obd.sh
|
||||
[admin@obce00 obd]$ which obd
|
||||
/usr/bin/obd
|
||||
```
|
||||
|
||||
`ob-deploy` 软件默认安装在 `/usr/obd` 下。不同版本可能有点变化。可以通过下面命令查看位置。
|
||||
|
||||
```bash
|
||||
rpm -ql `rpm -qa|grep ob-deploy`
|
||||
```
|
||||
|
||||
但是 OBD 工作的文件都在当前用户 HOME 目录下:`~/.obd/`
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ tree ~/.obd -L 1
|
||||
/home/admin/.obd
|
||||
├── cluster
|
||||
├── log
|
||||
├── mirror
|
||||
├── obd.conf
|
||||
├── plugins
|
||||
├── repository
|
||||
└── version
|
||||
|
||||
5 directories, 2 files
|
||||
|
||||
```
|
||||
|
||||
命令 `obd` 使用帮助,可以直接用 `-h` 查看。
|
||||
|
||||
```bash
|
||||
obd -h
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ obd -h
|
||||
Usage: obd <command> [options]
|
||||
|
||||
Available commands:
|
||||
|
||||
cluster Deploy and manage a cluster.
|
||||
|
||||
mirror Manage a component repository for OBD.
|
||||
|
||||
repo Manage local repository for OBD.
|
||||
|
||||
test Run test for a running deploy deployment.
|
||||
|
||||
update Update OBD.
|
||||
|
||||
|
||||
Options:
|
||||
--version show program's version number and exit
|
||||
-h, --help Show help and exit.
|
||||
-v, --verbose Activate verbose output.
|
||||
|
||||
```
|
||||
|
||||
+ 将软件包加到离线仓库
|
||||
|
||||
首先要删除远程仓库,使用下面命令。
|
||||
注意:下面命令要在部署运行 OBD 的操作系统用户下操作。这里是用户 `admin` 。
|
||||
|
||||
```bash
|
||||
/bin/rm -rf ~/.obd/mirror/remote/OceanBase.repo
|
||||
```
|
||||
|
||||
然后将前面的软件包都复制到本地仓库,使用下面命令。
|
||||
|
||||
```bash
|
||||
obd mirror clone /tmp/obd/*.rpm
|
||||
|
||||
```
|
||||
|
||||
查看仓库的RPM列表。
|
||||
|
||||
```bash
|
||||
obd mirror list local
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ obd mirror list local
|
||||
+-------------------------------------------------------------------------------------------+
|
||||
| local Package List |
|
||||
+-------------------+---------+---------+--------+------------------------------------------+
|
||||
| name | version | release | arch | md5 |
|
||||
+-------------------+---------+---------+--------+------------------------------------------+
|
||||
| libobclient | 2.0.0 | 2.el8 | x86_64 | 358a90b4a47da193140c3bee023b2450126de4c6 |
|
||||
| obclient | 2.0.0 | 2.el8 | x86_64 | 71753559d82e9f6c0b8a6d949b9a5194c6c53dc6 |
|
||||
| ob-deploy | 1.1.0 | 1.el8 | x86_64 | 0c84129b699aca0b43fdfb01fb2c4439f36ff856 |
|
||||
| obproxy | 3.1.0 | 1.el8 | x86_64 | d242ea5fe45222b8f61c3135ba2aaa778c61ea22 |
|
||||
| oceanbase-ce | 3.1.0 | 3.el8 | x86_64 | 84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 |
|
||||
| oceanbase-ce-libs | 3.1.0 | 3.el8 | x86_64 | 1c20be0df8929f843e9bdd509de4916f883d62f8 |
|
||||
+-------------------+---------+---------+--------+------------------------------------------+
|
||||
[admin@obce00 ~]$
|
||||
```
|
||||
|
||||
## 安装 OBD 软件(在线)
|
||||
|
||||
首先在中控机上部署 OBD 软件。如果中控机能上网,则可以直接添加 OceanBase 的仓库,使用 YUM 安装。
|
||||
|
||||
```bash
|
||||
yum install -y yum-utils
|
||||
yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo
|
||||
yum install -y ob-deploy
|
||||
|
||||
```
|
||||
|
||||
查看一下 `OceanBase.repo` 内容。
|
||||
|
||||
```bash
|
||||
cat /etc/yum.repos.d/OceanBase.repo
|
||||
|
||||
输出:
|
||||
# OceanBase.repo
|
||||
|
||||
[oceanbase.community.stable]
|
||||
name=OceanBase-community-stable-el$releasever
|
||||
baseurl=http://mirrors.aliyun.com/oceanbase/community/stable/el/$releasever/$basearch/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=http://mirrors.aliyun.com/oceanbase/RPM-GPG-KEY-OceanBase
|
||||
|
||||
[oceanbase.development-kit]
|
||||
name=OceanBase-development-kit-el$releasever
|
||||
baseurl=http://mirrors.aliyun.com/oceanbase/development-kit/el/$releasever/$basearch/
|
||||
enabled=1
|
||||
gpgcheck=1
|
||||
gpgkey=http://mirrors.aliyun.com/oceanbase/RPM-GPG-KEY-OceanBase
|
||||
|
||||
```
|
387
docs/docs/junior-training/ob-quick-start/chapter02/2.6.md
Normal file
@ -0,0 +1,387 @@
|
||||
# 如何使用 OBD 自动化部署单节点集群
|
||||
|
||||
OBD 对 OceanBase 的管理权限很高,所以 OBD 要部署在数据库服务器的中控机上,需要 DBA 有完全的控制权限。
|
||||
|
||||
## 部署规划
|
||||
|
||||
这里我使用一台机器。
|
||||
|
||||
+ 机器信息如下:
|
||||
|
||||
| 机器类型 | 云主机 ECS |
|
||||
|------|-------------------------------|
|
||||
| IP | 172.20.249.50 |
|
||||
| 网卡名 | eth0 |
|
||||
| OS | CentOS Linux release 8.4.2105 |
|
||||
| CPU | 4C |
|
||||
| 内存 | 总内存 14G,可用内存 11G |
|
||||
| 磁盘1 | 云盘 /dev/vda 100G |
|
||||
| 磁盘2 | 云盘 /dev/vdb 100G |
|
||||
|
||||
+ 机器和角色划分:
|
||||
|
||||
| 角色 | 机器 | 备注 |
|
||||
|----------|---------------|------------------|
|
||||
| OBD | 172.20.249.50 | 中控机,自动化部署软件 |
|
||||
| OBSERVER | 172.20.249.50 | OceanBase 数据库 |
|
||||
| OBPROXY | 172.20.249.50 | OceanBase 访问反向代理 |
|
||||
| OBCLIENT | 172.20.249.50 | OceanBase 命令行客户端 |
|
||||
|
||||
磁盘划分,这里就使用 LVM 技术对 `/dev/vdb` 进行划分。LVM 划分 LV 大小时请根据实际磁盘大小调整参数。
|
||||
|
||||
```bash
|
||||
# lvm 分盘
|
||||
pvcreate /dev/vdb
|
||||
vgcreate obvg /dev/vdb
|
||||
lvcreate -L 20G obvg -n lvredo
|
||||
lvcreate -l 100%FREE obvg -n lvdata
|
||||
|
||||
# 格式化文件系统
|
||||
mkfs.ext4 /dev/obvg/lvdata
|
||||
mkfs.ext4 /dev/obvg/lvredo
|
||||
|
||||
# 修改 mount 参数文件
|
||||
vim /etc/fstab
|
||||
/dev/obvg/lvredo /redo ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
|
||||
/dev/obvg/lvdata /data ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
|
||||
|
||||
# 挂载文件系统
|
||||
mkdir -p /data /redo
|
||||
vim /etc/fstab
|
||||
mount /data
|
||||
mount /redo
|
||||
chown -R admin.admin /data /redo
|
||||
|
||||
# 检查
|
||||
df -h
|
||||
|
||||
输出:
|
||||
文件系统 容量 已用 可用 已用% 挂载点
|
||||
/dev/mapper/obvg-lvdata 59G 53M 56G 1% /data
|
||||
/dev/mapper/obvg-lvredo 20G 45M 19G 1% /redo
|
||||
|
||||
```
|
||||
|
||||
## 编辑 OBD 配置文件
|
||||
|
||||
OBD 针对不同的部署场景提供不同的配置文件。这些配置文件示例在 OceanBase 开源项目地址里,具体是:[https://github.com/oceanbase/obdeploy/tree/master/example](https://github.com/oceanbase/obdeploy/tree/master/example) 。
|
||||
|
||||
如果是部署单节点版本,就下载其中两个配置文件:
|
||||
+ 部署单节点 `observer` 进程: [https://github.com/oceanbase/obdeploy/blob/master/example/mini-single-example.yaml](https://github.com/oceanbase/obdeploy/blob/master/example/mini-single-example.yaml)
|
||||
+ 部署单节点 `observer` 和 `obproxy` 进程:[https://github.com/oceanbase/obdeploy/blob/master/example/mini-single-with-obproxy-example.yaml](https://github.com/oceanbase/obdeploy/blob/master/example/mini-single-with-obproxy-example.yaml)
|
||||
|
||||
|
||||
这里简单起见,只部署单节点 `observer` 进程,所以下载第一个配置文件。
|
||||
注意,后续版本的配置文件格式可能会有些变化,请参考 OBD 工具具体使用说明。
|
||||
|
||||
```yaml
|
||||
[admin@obce00 ~]$ cat obce-single.yaml
|
||||
# Only need to configure when remote login is required
|
||||
# user:
|
||||
# username: your username
|
||||
# password: your password if need
|
||||
# key_file: your ssh-key file path if need
|
||||
# port: your ssh port, default 22
|
||||
# timeout: ssh connection timeout (second), default 30
|
||||
oceanbase-ce:
|
||||
servers:
|
||||
# Please don't use hostname, only IP can be supported
|
||||
- 172.20.249.50
|
||||
global:
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo
|
||||
# Please set devname as the network adaptor's name whose ip is in the setting of severs.
|
||||
# if set severs as "127.0.0.1", please set devname as "lo"
|
||||
# if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
|
||||
devname: eth0
|
||||
mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
|
||||
rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
|
||||
zone: zone1
|
||||
cluster_id: 1
|
||||
# please set memory limit to a suitable value which is matching resource.
|
||||
memory_limit: 8G # The maximum running memory for an observer
|
||||
system_memory: 3G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
|
||||
stack_size: 512K
|
||||
cpu_count: 16
|
||||
cache_wash_threshold: 1G
|
||||
__min_full_resource_pool_memory: 268435456
|
||||
workers_per_cpu_quota: 10
|
||||
schema_history_expire_time: 1d
|
||||
# The value of net_thread_count had better be same as cpu's core number.
|
||||
net_thread_count: 4
|
||||
major_freeze_duty_time: Disable
|
||||
minor_freeze_times: 10
|
||||
enable_separate_sys_clog: 0
|
||||
enable_merge_by_turn: FALSE
|
||||
# datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
|
||||
datafile_size: 50G
|
||||
syslog_level: WARN # System log level. The default value is INFO.
|
||||
enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
|
||||
enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
|
||||
max_syslog_file_count: 10 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
|
||||
root_password: bzNvgyhB # root user password, can be empty
|
||||
```
|
||||
|
||||
这个配置文件是专门针对最小内存(可用内存大于 8G)的节点配置,里面指定了很多进程 `observer` 的启动参数。注意 `yaml` 的格式,每个配置项后面冒号(`:`) 跟后面的值之间必须有个空格(`' '`)。
|
||||
下面就关键的几个参数补充说明如下:
|
||||
|
||||
| 配置项名 | 配置值 | 备注 |
|
||||
|------------------------------------------|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| servers | 172.20.249.50 | 本示例是在中控机上部署 OBSERVER,所以写中控机IP。可以写实际IP,也可以写 127.0.0.1(仅学习用)。 |
|
||||
| home_path | /home/admin/oceanbase-ce | 指定到普通用户(admin)的目录下,为区别于企业版,文件名叫 `oceanbase-ce` 。 |
|
||||
| data_dir | /data | 指向独立的磁盘,这里使用前面分配的 LV(`lvdata`)。实际存储 OceanBase 的数据文件目录(`sstable`)。 |
|
||||
| redo_dir | /redo | 指向独立的磁盘,这里使用前面分配的LV(`lvredo`)。实际存储 OceanBase 的事务日志目录(`clog、slog和 ilog`)。 |
|
||||
| devname | eth0 | 这个是跟 servers 里指定的 IP 对应的网卡。如果前面 IP 是 127.0.0.1 ,那么这里就填 lo 。通过 `ip addr` 命令可以查看 IP 和网卡对应关系。 |
|
||||
| mysql_port | 2881 | 进程 `observer` 的连接端口,默认是 2881 。后面 OceanBase 客户端直连这个端口可以访问该节点。 |
|
||||
| rpc_port | 2882 | 进程 `observer` 跟其他节点进程之间的 RPC 通信端口,默认是 2882 。 |
|
||||
| zone | zone1 | `zone` 是逻辑机房的概念。单副本集群下只有一个 `zone`,默认取名 `zone1`。三副本集群会有三个 `zone`,名字随意,不要重复即可。 |
|
||||
| cluster_id | 1 | OceanBase 集群ID 标识,不同集群不要重复即可。 |
|
||||
| memory_limit | 8G | 进程 `observer` 能从OS 获取的最大内存,最小不少于 8G 。如果机器内存丰富的话,这个参数可以大一些。 |
|
||||
| system_memory | 4G | 进程 `observer` 留给集群内部用的保留内存,这个会占用上面 `memory_limit` 的内存,留给业务租户的就更少。 |
|
||||
| datafile_size datafile_disk_percentage | | 这两个参数 2 选 1。用来指定该节点数据文件(`block_file`)大小的。可以按大小指定,或者按磁盘空间的百分比指定。这里示例磁盘空间很小,为精确控制,指定数据文件大小。当数据文件和事务日志文件是共用一个磁盘的时候,则必须指定数据文件具体大小,以避免日志文件的目录空间不够。 |
|
||||
| syslog_level | WARN 或 ERROR | 运行日志的日志级别,有 INFO 、WARN、 ERROR 等几个级别。级别越低,日志量越大。进程 `observer` 的日志量非常大,如果磁盘空间不大的话,就调整为 WARN 或 ERROR 。 |
|
||||
| enable_syslog_recycle | TRUE | 指定运行日志是否以滚动方式输出,最多保留 指定数量的运行日志。 |
|
||||
| max_syslog_file_count | 10 | 根据磁盘空间大小定,这里默认保留最多 10 个历史运行日志文件。 |
|
||||
| root_password | 随机字符串 | OceanBase 集群的超级管理员 `root@sys` 的密码,默认是空,建议设置复杂的密码。 |
|
||||
|
||||
当上面部署成功后,OBD 会把配置文件 `obce-single.yaml` 复制到自己的工作目录里(`~/.obd/cluster/obce-single/config.yaml` ),后期再改外面这个 `obce-single.yaml` 文件,是不生效的。
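如果部署后需要调整配置,应该修改 OBD 工作目录里的这份配置。较新版本的 OBD 提供了 `edit-config` 子命令(是否可用取决于您的 OBD 版本),修改后按命令提示重启或重新加载集群使之生效,下面仅为示意:

```bash
# 编辑 OBD 工作目录中实际生效的集群配置(需要 OBD 支持 edit-config 子命令)
obd cluster edit-config obce-single

# 按 OBD 的提示执行,例如重启集群使配置生效
obd cluster restart obce-single
```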
|
||||
|
||||
注意:如果你的机器内存大于 64G,上面跟内存有关的参数可以不设置。
|
||||
|
||||
## OBD 开始部署集群
|
||||
|
||||
配置文件准备好后,就可以部署这个配置文件对应的集群了,部署内容主要包含:
|
||||
+ 复制软件到相应节点,并安装软件。
|
||||
+ 在相应节点创建相关目录。
|
||||
|
||||
部署使用命令:`obd cluster deploy [集群名] -c 集群配置文件 ` 。
|
||||
这个集群名只是这个配置文件在 OBD 里的唯一标识,可以跟配置文件中的集群名一样,也可以跟文件名一样,这个不强要求。
|
||||
|
||||
|
||||
```bash
|
||||
obd cluster deploy obce-single -c obce-single.yaml
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ obd cluster deploy obce-single -c obce-single.yaml
|
||||
oceanbase-ce-3.1.0 already installed.
|
||||
+-----------------------------------------------------------------------------+
|
||||
| Packages |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
| Repository | Version | Release | Md5 |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
| oceanbase-ce | 3.1.0 | 3.el8 | 84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 |
|
||||
+--------------+---------+---------+------------------------------------------+
|
||||
Repository integrity check ok
|
||||
Parameter check ok
|
||||
Open ssh connection ok
|
||||
Remote oceanbase-ce-3.1.0-84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 repository install ok
|
||||
Remote oceanbase-ce-3.1.0-84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 repository lib check ok
|
||||
Cluster status check ok
|
||||
Initializes cluster work home ok
|
||||
obce-single deployed
|
||||
[admin@obce00 ~]$
|
||||
|
||||
```
|
||||
|
||||
检查一下部署的结果。
|
||||
+ 首先看部署状态,用命令 `obd cluster list` 。
|
||||
|
||||
```bash
|
||||
obd cluster list
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ obd cluster list
|
||||
|
||||
+----------------------------------------------------------------------+
|
||||
| Cluster List |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
| Name | Configuration Path | Status (Cached) |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
| obce-single | /home/admin/.obd/cluster/obce-single | deployed |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
```
|
||||
|
||||
+ 第二主要看目录结构。其中工作目录下的 `store` 目录跟 `/data` 、`/redo` 之间的映射关系是重点。总体结构不变,后期映射关系可能会细微调整。
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ tree /home/admin/oceanbase-ce/
|
||||
/home/admin/oceanbase-ce/
|
||||
├── admin
|
||||
├── bin
|
||||
│ └── observer -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/bin/observer
|
||||
├── etc
|
||||
├── lib
|
||||
│ ├── libaio.so -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so
|
||||
│ ├── libaio.so.1 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so.1
|
||||
│ ├── libaio.so.1.0.1 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libaio.so.1.0.1
|
||||
│ ├── libmariadb.so -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libmariadb.so
|
||||
│ └── libmariadb.so.3 -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/lib/libmariadb.so.3
|
||||
├── log
|
||||
└── store -> /data
|
||||
[admin@obce00 ~]$ tree /data
|
||||
/data
|
||||
├── clog -> /redo/clog
|
||||
├── ilog -> /redo/ilog
|
||||
├── slog -> /redo/slog
|
||||
└── sstable
|
||||
|
||||
4 directories, 0 files
|
||||
[admin@obce00 ~]$ tree /redo
|
||||
/redo
|
||||
├── clog
|
||||
├── ilog
|
||||
└── slog
|
||||
|
||||
3 directories, 0 files
|
||||
```
|
||||
|
||||
## OBD 开始启动和初始化集群
|
||||
|
||||
上面 `deploy` 操作只是安装了软件和准备初始化目录,还需要启动集群节点并初始化集群,使用 `obd cluster start` 命令。
|
||||
第一次运行 `start` 会对集群进行初始化(`bootstrap`),以后再 `start` 就只会启动集群中的节点进程。
|
||||
|
||||
```bash
|
||||
obd cluster start obce-single
|
||||
|
||||
输出:
|
||||
[admin@obce00 ~]$ obd cluster start obce-single
|
||||
Get local repositories and plugins ok
|
||||
Open ssh connection ok
|
||||
Cluster param config check ok
|
||||
Check before start observer ok
|
||||
Start observer ok
|
||||
observer program health check ok
|
||||
Connect to observer ok
|
||||
Initialize cluster
|
||||
Cluster bootstrap ok
|
||||
Wait for observer init ok
|
||||
+-------------------------------------------------+
|
||||
| observer |
|
||||
+---------------+---------+------+-------+--------+
|
||||
| ip | version | port | zone | status |
|
||||
+---------------+---------+------+-------+--------+
|
||||
| 172.20.249.50 | 3.1.0 | 2881 | zone1 | active |
|
||||
+---------------+---------+------+-------+--------+
|
||||
|
||||
obce-single running
|
||||
|
||||
```
|
||||
|
||||
这个命令在 `bootstrap` 环节会花费几分钟。当可用内存不足 8G 或者日志目录剩余可用空间比例不足 5% 的时候,这个 `bootstrap` 很可能会失败。
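启动之前,可以先确认一下数据目录和日志目录的剩余空间(可用内存可按前面章节的方法检查):

```bash
# 检查数据目录和日志目录的剩余空间
df -h /data /redo
```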
|
||||
|
||||
接下来进一步确认集群初始化成功。这个步骤是可选的。第一次学习或生产部署的时候,建议检查一下。
|
||||
|
||||
+ 首先查看启动后的集群状态。
|
||||
|
||||
```bash
|
||||
|
||||
[admin@obce00 ~]$ obd cluster list
|
||||
+----------------------------------------------------------------------+
|
||||
| Cluster List |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
| Name | Configuration Path | Status (Cached) |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
| obce-single | /home/admin/.obd/cluster/obce-single | running |
|
||||
+-------------+--------------------------------------+-----------------+
|
||||
|
||||
[admin@obce00 ~]$ obd cluster display obce-single
|
||||
Get local repositories and plugins ok
|
||||
Open ssh connection ok
|
||||
Cluster status check ok
|
||||
Connect to observer ok
|
||||
Wait for observer init ok
|
||||
+-------------------------------------------------+
|
||||
| observer |
|
||||
+---------------+---------+------+-------+--------+
|
||||
| ip | version | port | zone | status |
|
||||
+---------------+---------+------+-------+--------+
|
||||
| 172.20.249.50 | 3.1.0 | 2881 | zone1 | active |
|
||||
+---------------+---------+------+-------+--------+
|
||||
|
||||
```
|
||||
|
||||
+ 检查数据文件大小
|
||||
|
||||
进程 `observer` 启动后会初始化数据文件(`block_file`)大小,根据参数 `datafile_size 或 datafile_disk_percentage` 控制。
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ ls -lrth /data/sstable/block_file
|
||||
-rw-r--r-- 1 admin admin 50G Sep 11 17:31 /data/sstable/block_file
|
||||
```
|
||||
|
||||
+ 检查进程
|
||||
|
||||
OceanBase 是单进程软件,进程名叫 `observer` ,可以用下面命令查看这个进程。
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ ps -ef | grep observer | grep -v grep
|
||||
admin 30616 1 68 17:30 ? 00:02:54 /home/admin/oceanbase-ce/bin/observer -r 172.20.249.50:2882:2881 -o __min_full_resource_pool_memory=268435456,redo_dir=/redo,memory_limit=8G,system_memory=4G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_size=50G,enable_syslog_wf=False,enable_syslog_recycle=True,max_syslog_file_count=10,root_password=bzNvgyhB -z zone1 -p 2881 -P 2882 -c 1 -d /data -i eth0 -l WARN
|
||||
[admin@obce00 ~]$
|
||||
|
||||
```
|
||||
|
||||
从进程里看,可执行文件是 `/home/admin/oceanbase-ce/bin/observer` ,实际上它是个软链接。
|
||||
|
||||
```bash
|
||||
[admin@obce00 oceanbase-ce]$ ll /home/admin/oceanbase-ce/bin/observer
|
||||
lrwxrwxrwx 1 admin admin 100 Sep 11 17:16 /home/admin/oceanbase-ce/bin/observer -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/bin/observer
|
||||
```
|
||||
|
||||
进程启动的时候,通过 `-o` 指定了很多参数,这些参数都是在前面 OBD 集群部署配置文件里指定的。
|
||||
|
||||
+ 检查进程监听端口
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ sudo netstat -ntlp |grep observer
|
||||
[sudo] password for admin:
|
||||
tcp 0 0 0.0.0.0:2881 0.0.0.0:* LISTEN 30616/observer
|
||||
tcp 0 0 0.0.0.0:2882 0.0.0.0:* LISTEN 30616/observer
|
||||
|
||||
```
|
||||
|
||||
## 连接 OceanBase 集群的内部实例(`sys`)
|
||||
|
||||
传统的 mysql 客户端可以连接 OceanBase 社区版,前提是 mysql 的版本是 5.5/5.6/5.7 。OceanBase 也提供自己的客户端工具 `obclient` ,需要安装后使用。
跟传统 MySQL 不一样的地方是:OBSERVER 连接端口是 2881 ,连接用户名是 `root@sys` ,密码是前面 OBD 配置文件里指定的。
|
||||
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ mysql -h 172.20.249.50 -uroot@sys -P2881 -pbzNvgyhB -c -A oceanbase
|
||||
Welcome to the MariaDB monitor. Commands end with ; or \g.
|
||||
Your MySQL connection id is 3221488586
|
||||
Server version: 5.7.25 OceanBase 3.1.0 (r3-b20901e8c84d3ea774beeaca963c67d7802e4b4e) (Built Aug 10 2021 08:10:38)
|
||||
|
||||
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
|
||||
|
||||
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
|
||||
|
||||
MySQL [oceanbase]> show databases;
|
||||
+--------------------+
|
||||
| Database |
|
||||
+--------------------+
|
||||
| oceanbase |
|
||||
| information_schema |
|
||||
| mysql |
|
||||
| SYS |
|
||||
| LBACSYS |
|
||||
| ORAAUDITOR |
|
||||
| test |
|
||||
+--------------------+
|
||||
7 rows in set (0.002 sec)
|
||||
|
||||
```
|
||||
|
||||
在数据库列表里看到 `oceanbase` 这个数据库,就表示集群初始化成功。
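也可以进一步查看集群各节点的状态,`status` 为 `active` 说明节点正常。下面的 SQL 查询的是 sys 租户的内部表,仅为示例:

```bash
mysql -h 172.20.249.50 -uroot@sys -P2881 -pbzNvgyhB -c -A oceanbase \
  -e "select svr_ip, zone, with_rootserver, status from __all_server;"
```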
|
||||
|
||||
`obclient` 安装和使用示例。
|
||||
|
||||
```bash
|
||||
sudo rpm -ivh /tmp/obd/obclient-2.0.0-2.el8.x86_64.rpm /tmp/obd/libobclient-2.0.0-2.el8.x86_64.rpm
|
||||
|
||||
obclient -h 172.20.249.50 -uroot@sys -P2881 -pbzNvgyhB -c -A oceanbase
|
||||
|
||||
```
|
466
docs/docs/junior-training/ob-quick-start/chapter02/2.7.md
Normal file
@ -0,0 +1,466 @@
|
||||
# 如何使用 OBD 自动化部署多节点集群
|
||||
|
||||
## 部署规划
|
||||
|
||||
这一节介绍 OceanBase 集群三节点部署方法,需要通过中控机直接远程登录到 OceanBase 节点上部署启动 `observer` 和 `obproxy` 进程。
|
||||
|
||||
+ 机器信息如下:
|
||||
|
||||
| 机器类型 | 云主机 ECS |
|
||||
|------|-------------------------------|
|
||||
| IP | 172.20.249.50 172.20.249.52 172.20.249.49 172.20.249.51 |
|
||||
| 网卡名 | eth0 |
|
||||
| OS | CentOS Linux release 8.4.2105 |
|
||||
| CPU | 4C |
|
||||
| 内存 | 总内存 14G,可用内存 11G |
|
||||
| 磁盘1 | 云盘 /dev/vda 100G |
|
||||
| 磁盘2 | 云盘 /dev/vdb 100G |
|
||||
|
||||
+ 机器划分如下:
|
||||
|
||||
| 角色 | 机器 | 备注 |
|
||||
|----------|---------------|---------------------|
|
||||
| OBD | 172.20.249.50 | 中控机,自动化部署软件 |
|
||||
| OBSERVER | 172.20.249.52 | OceanBase 数据库 zone1 |
|
||||
| | 172.20.249.49 | OceanBase 数据库 zone2 |
|
||||
| | 172.20.249.51 | OceanBase 数据库 zone3 |
|
||||
| OBPROXY | 172.20.249.52 | OceanBase 访问反向代理 |
|
||||
| | 172.20.249.49 | OceanBase 访问反向代理 |
|
||||
| | 172.20.249.51 | OceanBase 访问反向代理 |
|
||||
| OBCLIENT | 172.20.249.50 | OceanBase 命令行客户端 |
|
||||
|
||||
+ 磁盘划分
|
||||
磁盘划分,这里就使用 LVM 技术对 `/dev/vdb` 进行划分。需要登录到每个节点上手动初始化。
|
||||
|
||||
```bash
|
||||
# lvm 分盘
|
||||
pvcreate /dev/vdb
|
||||
vgcreate obvg /dev/vdb
|
||||
|
||||
lvcreate -L 20G obvg -n lvredo
|
||||
lvcreate -l 100%FREE obvg -n lvdata
|
||||
|
||||
# 格式化文件系统
|
||||
mkfs.ext4 /dev/obvg/lvdata
|
||||
mkfs.ext4 /dev/obvg/lvredo
|
||||
|
||||
# 修改 mount 参数文件
|
||||
vim /etc/fstab
|
||||
/dev/obvg/lvredo /redo ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
|
||||
/dev/obvg/lvdata /data ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
|
||||
|
||||
# 挂载文件系统
|
||||
mkdir -p /data /redo
|
||||
vim /etc/fstab
|
||||
mount /data
|
||||
mount /redo
|
||||
chown -R admin.admin /data /redo
|
||||
|
||||
# 检查
|
||||
df -h
|
||||
|
||||
输出:
|
||||
文件系统 容量 已用 可用 已用% 挂载点
|
||||
/dev/mapper/obvg-lvdata 59G 53M 56G 1% /data
|
||||
/dev/mapper/obvg-lvredo 20G 45M 19G 1% /redo
|
||||
|
||||
```
|
||||
|
||||
## 编辑 OBD 配置文件
|
||||
|
||||
OBD 针对不同的部署场景提供不同的配置文件。这些配置文件示例在 OceanBase 开源项目地址里,具体是:[https://github.com/oceanbase/obdeploy/tree/master/example](https://github.com/oceanbase/obdeploy/tree/master/example) 。
|
||||
|
||||
如果是部署三节点版本,就下载其中两个配置文件:
|
||||
|
||||
+ 部署三节点 `observer` 进程: [https://github.com/oceanbase/obdeploy/blob/master/example/mini-distributed-example.yaml](https://github.com/oceanbase/obdeploy/blob/master/example/mini-distributed-example.yaml)
|
||||
+ 部署三节点 `observer` 和 `obproxy` 进程:[https://github.com/oceanbase/obdeploy/blob/master/example/mini-distributed-with-obproxy-example.yaml](https://github.com/oceanbase/obdeploy/blob/master/example/mini-distributed-with-obproxy-example.yaml)
|
||||
|
||||
这里仿照生产环境,选择第二种部署配置文件。
|
||||
|
||||
```bash
|
||||
[admin@obce00 ~]$ cat obce-3zones.yaml
|
||||
# Only need to configure when remote login is required
|
||||
user:
|
||||
username: admin
|
||||
# password: your password if need
|
||||
key_file: /home/admin/.ssh/id_rsa.pub
|
||||
port: 22 # your ssh port, default 22
|
||||
# timeout: ssh connection timeout (second), default 30
|
||||
oceanbase-ce:
|
||||
servers:
|
||||
- name: obce01
|
||||
# Please don't use hostname, only IP can be supported
|
||||
ip: 172.20.249.52
|
||||
- name: obce02
|
||||
ip: 172.20.249.49
|
||||
- name: obce03
|
||||
ip: 172.20.249.51
|
||||
global:
|
||||
# Please set devname as the network adaptor's name whose ip is in the setting of severs.
|
||||
# if set severs as "127.0.0.1", please set devname as "lo"
|
||||
# if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
|
||||
devname: eth0
|
||||
cluster_id: 2
|
||||
# please set memory limit to a suitable value which is matching resource.
|
||||
memory_limit: 8G # The maximum running memory for an observer
|
||||
system_memory: 3G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
|
||||
stack_size: 512K
|
||||
cpu_count: 16
|
||||
cache_wash_threshold: 1G
|
||||
__min_full_resource_pool_memory: 268435456
|
||||
workers_per_cpu_quota: 10
|
||||
schema_history_expire_time: 1d
|
||||
# The value of net_thread_count had better be same as cpu's core number.
|
||||
net_thread_count: 4
|
||||
major_freeze_duty_time: Disable
|
||||
minor_freeze_times: 10
|
||||
enable_separate_sys_clog: 0
|
||||
enable_merge_by_turn: FALSE
|
||||
#datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
|
||||
datafile_size: 50G
|
||||
syslog_level: WARN # System log level. The default value is INFO.
|
||||
enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
|
||||
enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
|
||||
max_syslog_file_count: 10 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
|
||||
# observer cluster name, consistent with obproxy's cluster_name
|
||||
appname: obce-3zones
|
||||
root_password: 0EI5N08d # root user password, can be empty
|
||||
proxyro_password: uY7Yf8zx # proxyro user pasword, consistent with obproxy's observer_sys_password, can be empty
|
||||
obce01:
|
||||
mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
|
||||
rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo
|
||||
zone: zone1
|
||||
obce02:
|
||||
mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
|
||||
rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo
|
||||
zone: zone2
|
||||
obce03:
|
||||
mysql_port: 2881 # External port for OceanBase Database. The default value is 2881.
|
||||
rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo
|
||||
zone: zone3
|
||||
obproxy:
|
||||
servers:
|
||||
- 172.20.249.52
|
||||
- 172.20.249.49
|
||||
- 172.20.249.51
|
||||
# Set dependent components for the component.
|
||||
# When the associated configurations are not done, OBD will automatically get the these configurations from the dependent components.
|
||||
depends:
|
||||
- oceanbase-ce
|
||||
global:
|
||||
listen_port: 2883 # External port. The default value is 2883.
|
||||
prometheus_listen_port: 2884 # The Prometheus port. The default value is 2884.
|
||||
home_path: /home/admin/obproxy
|
||||
# oceanbase root server list
|
||||
# format: ip:mysql_port;ip:mysql_port
|
||||
rs_list: 172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881
|
||||
enable_cluster_checkout: false
|
||||
# observer cluster name, consistent with oceanbase-ce's appname
|
||||
cluster_name: obce-3zones
|
||||
obproxy_sys_password: 0MdTv1tm # obproxy sys user password, can be empty
|
||||
observer_sys_password: uY7Yf8zx # proxyro user pasword, consistent with oceanbase-ce's proxyro_password, can be empty
|
||||
|
||||
```
|
||||
|
||||
这个配置文件是专门针对最小内存(可用内存大于 8G)的节点配置,里面指定了很多进程 `observer` 的启动参数。注意 `yaml` 的格式,每个配置项后面冒号(`:`) 跟后面的值之间必须有个空格(`' '`)。
|
||||
下面就关键的几个参数补充说明如下:
|
||||
|
||||
| 配置类 | 配置项名 | 配置值 | 备注 |
|
||||
|--------------|------------------------------------------|----------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| user | username | admin | 中控机连接OceanBase 节点的用户名,也是 OceanBase 要部署的用户名。 |
|
||||
| | key_file | /home/admin/.ssh/id_rsa.pub | 中控机上 SSH用的RSA 公钥。 |
|
||||
| | port | 22 | OceanBase 集群节点的 SSH 端口,默认是22。如果不是就修改这里。 |
|
||||
| oceanbase-ce | servers | 指定所有机器列表 | 每个机器是用 `- name 机器标识名 (换行)ip: 机器ip` 指定。多个机器就指定多次,以后会优化 |
|
||||
| | home_path | /home/admin/oceanbase-ce | 指定到普通用户(admin)的目录下,为区别于企业版,文件名叫 `oceanbase-ce` 。 |
|
||||
| | data_dir | /data | 指向独立的磁盘,这里使用前面分配的 LV(`lvdata`)。实际存储 OceanBase 的数据文件(`block_file`)。 |
|
||||
| | redo_dir | /redo | 指向独立的磁盘,这里使用前面分配的LV(`lvredo`)。实际存储 OceanBase 的事务日志、`sstable` 日志等。 |
|
||||
| | devname | eth0 | 这个是跟 servers 里指定的 IP 对应的网卡。如果前面 IP 是 127.0.0.1 ,那么这里就填 lo 。通过 ip addr 命令可以查看 IP 和网卡对应关系。 |
|
||||
| | mysql_port | 2881 | 进程 `observer` 的连接端口,默认是 2881 。后面 OceanBase 客户端直连这个端口可以访问该节点。 |
|
||||
| | rpc_port | 2882 | 进程 `observer` 跟其他节点进程之间的 RPC 通信端口,默认是 2882 。 |
|
||||
| | zone | zone1 zone2 zone3 | `zone` 是逻辑机房的概念。三副本集群下有三个 `zone` 。 |
|
||||
| | cluster_id | 2 | OceanBase 集群ID 标识,不同集群不要重复即可。 |
|
||||
| | memory_limit | 8G | 进程 `observer` 能从OS 获取的最大内存,最小不少于 8G 。如果机器内存丰富的话,这个参数可以大一些。 |
|
||||
| | system_memory | 4G | 进程 `observer` 留给集群内部用的保留内存,这个会占用上面 `memory_limit` 的内存,留给业务租户的就更少。 |
|
||||
| | datafile_size datafile_disk_percentage | | 这两个参数 2 选 1。用来指定该节点数据文件(`block_file`)大小的。可以按大小指定,或者按磁盘空间的百分比指定。这里示例磁盘空间很小,为精确控制,指定数据文件大小。当数据文件和事务日志文件是共用一个磁盘的时候,则必须指定数据文件具体大小,以避免日志文件的目录空间不够。 |
|
||||
| | syslog_level | WARN 或 ERROR | 运行日志的日志级别,有 INFO 、WARN、 ERROR 等几个级别。级别越低,日志量越大。进程 `observer` 的日志量非常大,如果磁盘空间不大的话,就调整为 WARN 或 ERROR 。 |
|
||||
| | enable_syslog_recycle | TRUE | 指定运行日志是否以滚动方式输出,最多保留 指定数量的运行日志。 |
|
||||
| | max_syslog_file_count | 10 | 根据磁盘空间大小定,这里默认保留最多 10 个历史运行日志文件。 |
|
||||
| | root_password | 随机字符串 | OceanBase 集群的超级管理员 `root@sys` 的密码,默认是空,建议设置复杂的密码。 |
|
||||
| | proxyro_password | 随机字符串 | OBPROXY 连接 OB集群使用的账户名(proxyro) 的密码 |
|
||||
| obproxy | servers | 任意机器IP | OBPROXY 可以部署在应用服务器、中控机或者 OceanBase 机器上。这里选择 OceanBase 机器。 |
|
||||
| | depends | 依赖的配置节| 通常指定依赖的集群配置,会自动复用集群的 `proxyro` 密码、集群名 `cluster_name`、`rs_list` 等等。|
|
||||
| | listen_port | 2883 | OBPROXY 监听端口,默认 2883 。 |
|
||||
| | prometheus_listen_port | 2884 | prometheus 监听端口,默认 2884。 |
|
||||
| | home_path | /home/admin/obproxy | OBPROXY 默认安装路径,建议在普通用户 admin 下。 |
|
||||
| | rs_list | 172.20.249.52:2881;172.20.249.49:2881;172.20.249.51:2881 | OceanBase 集群 rootservice 服务地址,由 sys 租户的三副本所在节点IP 组成。 可以手动指定,也可以不指定,依赖前面的 `depends` 配置节自动从 OceanBase 集群配置里获取。 |
|
||||
| | enable_cluster_checkout | FALSE | |
|
||||
| | cluster_name | obce-3zones | OceanBase 集群名字 |
|
||||
| | obproxy_sys_password | 随机字符串 | OBPROXY 管理员账户(`proxysys`)的密码。 |
|
||||
| | observer_sys_password | 跟 proxyro_password 一致 | OBPROXY 连接 OB集群使用的账户名(proxyro) 的密码 |
|
||||
|
||||
当上面部署成功后,OBD 会把配置文件 `obce-3zones.yaml` 复制到自己的工作目录里(`~/.obd/cluster/obce-3zones/config.yaml` ),后期再改外面这个 `obce-3zones.yaml` 文件,是不生效的。
|
||||
|
||||
## OBD 部署集群
|
||||
|
||||
配置文件准备好后,就可以部署这个配置文件对应的集群了,部署内容主要包含:
|
||||
|
||||
+ 复制软件到相应节点,并安装软件。
|
||||
+ 在相应节点创建相关目录。
|
||||
|
||||
部署使用命令:`obd cluster deploy [集群名] -c 集群配置文件`
|
||||
|
||||
```bash
[admin@obce00 ~]$ obd cluster deploy obce-3zones -c obce-3zones.yaml
oceanbase-ce-3.1.0 already installed.
obproxy-3.1.0 already installed.
+-----------------------------------------------------------------------------+
|                                   Packages                                  |
+--------------+---------+---------+------------------------------------------+
| Repository   | Version | Release | Md5                                      |
+--------------+---------+---------+------------------------------------------+
| oceanbase-ce | 3.1.0   | 3.el8   | 84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 |
| obproxy      | 3.1.0   | 1.el8   | d242ea5fe45222b8f61c3135ba2aaa778c61ea22 |
+--------------+---------+---------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote oceanbase-ce-3.1.0-84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 repository install ok
Remote oceanbase-ce-3.1.0-84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80 repository lib check ok
Remote obproxy-3.1.0-d242ea5fe45222b8f61c3135ba2aaa778c61ea22 repository install ok
Remote obproxy-3.1.0-d242ea5fe45222b8f61c3135ba2aaa778c61ea22 repository lib check ok
Cluster status check ok
Initializes cluster work home ok
Initializes cluster work home ok
obce-3zones deployed
```

检查集群部署状态。

```bash
[admin@obce00 ~]$ obd cluster list
+----------------------------------------------------------------------+
|                              Cluster List                            |
+-------------+--------------------------------------+-----------------+
| Name        | Configuration Path                   | Status (Cached) |
+-------------+--------------------------------------+-----------------+
| obce-3zones | /home/admin/.obd/cluster/obce-3zones | deployed        |
+-------------+--------------------------------------+-----------------+
```

## OBD 启动和初始化集群

上面 `deploy` 操作只是安装了软件和准备初始化目录,还需要启动集群节点并初始化集群,使用 `obd cluster start` 命令。

```bash
obd cluster start obce-3zones

输出:
[admin@obce00 ~]$ obd cluster start obce-3zones
Get local repositories and plugins ok
Open ssh connection ok
Cluster param config check ok
Check before start observer ok
[WARN] (172.20.249.52) The recommended value of fs.aio-max-nr is 1048576 (Current value: 65536)
[WARN] (172.20.249.52) The recommended number of open files is 655350 (Current value: 65535)
[WARN] (172.20.249.49) The recommended value of fs.aio-max-nr is 1048576 (Current value: 65536)
[WARN] (172.20.249.49) The recommended number of open files is 655350 (Current value: 65535)
[WARN] (172.20.249.51) The recommended value of fs.aio-max-nr is 1048576 (Current value: 65536)
[WARN] (172.20.249.51) The recommended number of open files is 655350 (Current value: 65535)

Check before start obproxy ok
Start observer ok
observer program health check ok
Connect to observer ok
Initialize cluster
Cluster bootstrap ok
Wait for observer init ok
+-------------------------------------------------+
|                     observer                    |
+---------------+---------+------+-------+--------+
| ip            | version | port | zone  | status |
+---------------+---------+------+-------+--------+
| 172.20.249.49 | 3.1.0   | 2881 | zone2 | active |
| 172.20.249.51 | 3.1.0   | 2881 | zone3 | active |
| 172.20.249.52 | 3.1.0   | 2881 | zone1 | active |
+---------------+---------+------+-------+--------+

Start obproxy ok
obproxy program health check ok
Connect to obproxy ok
Initialize cluster
+-------------------------------------------------+
|                     obproxy                     |
+---------------+------+-----------------+--------+
| ip            | port | prometheus_port | status |
+---------------+------+-----------------+--------+
| 172.20.249.52 | 2883 | 2884            | active |
| 172.20.249.49 | 2883 | 2884            | active |
| 172.20.249.51 | 2883 | 2884            | active |
+---------------+------+-----------------+--------+
obce-3zones running
```

如果集群节点内核参数和会话限制参数不符合要求,启动时会像上面那样给出警告提示。
这个命令在 `bootstrap`(集群初始化)阶段会耗时几分钟。当可用内存不足 8G,或者日志目录剩余可用空间比例不足 5% 时,`bootstrap` 很可能会失败。
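
启动前可以先粗略检查一下各节点的可用内存和数据、日志盘的剩余空间。下面是一个检查示意,IP 列表和挂载点是本文示例环境的假设值,请按实际情况修改:

```bash
# 示意:启动前检查各节点可用内存和 /data、/redo 的剩余空间
# IP 列表、挂载点均为本文示例环境,请按实际情况替换
IPS="172.20.249.52 172.20.249.49 172.20.249.51"
for ob in $IPS; do
    echo "=== $ob ==="
    ssh $ob "free -g; df -h /data /redo"
done
```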

接下来进一步确认集群初始化成功。这个步骤是可选的。第一次学习或生产部署的时候,建议检查一下。

+ 首先查看启动后的集群状态。

```bash
[admin@obce00 ~]$ obd cluster list
+----------------------------------------------------------------------+
|                              Cluster List                            |
+-------------+--------------------------------------+-----------------+
| Name        | Configuration Path                   | Status (Cached) |
+-------------+--------------------------------------+-----------------+
| obce-3zones | /home/admin/.obd/cluster/obce-3zones | running         |
+-------------+--------------------------------------+-----------------+
```

+ 检查 OceanBase 集群各个节点进程信息。

OceanBase 是单进程软件,进程名叫 `observer`,可以用下面命令查看这个进程。

```bash
IPS="172.20.249.52 172.20.249.49 172.20.249.51"
for ob in $IPS;do echo $ob; ssh $ob "ps -ef | grep observer | grep -v grep "; done

输出:
[admin@obce00 oceanbase-ce]$ for ob in $IPS;do echo $ob; ssh $ob "ps -ef | grep observer | grep -v grep "; done
172.20.249.52
admin       6987       1 69 08:35 ?        01:38:26 /home/admin/oceanbase-ce/bin/observer -r 172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881 -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=3G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_size=50G,enable_syslog_wf=False,enable_syslog_recycle=True,max_syslog_file_count=10,root_password=0EI5N08d,redo_dir=/redo -z zone1 -p 2881 -P 2882 -n obce-3zones -c 2 -d /data -i eth0 -l WARN
172.20.249.49
admin       7064       1 87 08:35 ?        02:02:59 /home/admin/oceanbase-ce/bin/observer -r 172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881 -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=3G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_size=50G,enable_syslog_wf=False,enable_syslog_recycle=True,max_syslog_file_count=10,root_password=0EI5N08d,redo_dir=/redo -z zone2 -p 2881 -P 2882 -n obce-3zones -c 2 -d /data -i eth0 -l WARN
172.20.249.51
admin       6920       1 72 08:35 ?        01:42:42 /home/admin/oceanbase-ce/bin/observer -r 172.20.249.52:2882:2881;172.20.249.49:2882:2881;172.20.249.51:2882:2881 -o __min_full_resource_pool_memory=268435456,memory_limit=8G,system_memory=3G,stack_size=512K,cpu_count=16,cache_wash_threshold=1G,workers_per_cpu_quota=10,schema_history_expire_time=1d,net_thread_count=4,major_freeze_duty_time=Disable,minor_freeze_times=10,enable_separate_sys_clog=0,enable_merge_by_turn=False,datafile_size=50G,enable_syslog_wf=False,enable_syslog_recycle=True,max_syslog_file_count=10,root_password=0EI5N08d,redo_dir=/redo -z zone3 -p 2881 -P 2882 -n obce-3zones -c 2 -d /data -i eth0 -l WARN
```

从进程里看,可执行文件是 `/home/admin/oceanbase-ce/bin/observer`,实际上它是个软链接。

```bash
[admin@obce00 oceanbase-ce]$ ll /home/admin/oceanbase-ce/bin/observer
lrwxrwxrwx 1 admin admin 100 Sep 11 17:16 /home/admin/oceanbase-ce/bin/observer -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/bin/observer
```

进程启动的时候,通过 `-o` 指定了很多参数,这些参数都是在前面 OBD 集群部署配置文件里指定的。

+ 检查 OceanBase 集群各个节点监听状况。

```bash
IPS="172.20.249.52 172.20.249.49 172.20.249.51"
for ob in $IPS;do echo $ob; ssh $ob "netstat -ntlp"; done

输出:
[admin@obce00 ~]$ for ob in $IPS;do echo $ob; ssh $ob "netstat -ntlp"; done
172.20.249.52
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      6987/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      6987/observer
tcp        0      0 0.0.0.0:2883            0.0.0.0:*               LISTEN      7640/obproxy
tcp        0      0 0.0.0.0:2884            0.0.0.0:*               LISTEN      7640/obproxy
172.20.249.49
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      7064/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      7064/observer
tcp        0      0 0.0.0.0:2883            0.0.0.0:*               LISTEN      7718/obproxy
tcp        0      0 0.0.0.0:2884            0.0.0.0:*               LISTEN      7718/obproxy
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
172.20.249.51
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      6920/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      6920/observer
tcp        0      0 0.0.0.0:2883            0.0.0.0:*               LISTEN      7574/obproxy
tcp        0      0 0.0.0.0:2884            0.0.0.0:*               LISTEN      7574/obproxy
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
```

## 连接 OceanBase 集群的内部实例(`sys`)

传统的 mysql 客户端可以连接 OceanBase 社区版,前提是 mysql 客户端的版本是 5.5/5.6/5.7 。OceanBase 也提供自己的客户端工具 `obclient`,需要安装后使用。
跟传统 MySQL 不一样的地方是:OBPROXY 的连接端口是 2883,连接用户名是 `root@sys#集群名`,密码是前面 OBD 配置文件里指定的。

```bash
mysql -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase

输出:
[admin@obce00 ~]$ mysql -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.6.25 OceanBase 3.1.0 (r3-b20901e8c84d3ea774beeaca963c67d7802e4b4e) (Built Aug 10 2021 08:10:38)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [oceanbase]> show databases;
+--------------------+
| Database           |
+--------------------+
| oceanbase          |
| information_schema |
| mysql              |
| SYS                |
| LBACSYS            |
| ORAAUDITOR         |
| test               |
+--------------------+
7 rows in set (0.002 sec)

MySQL [oceanbase]> select a.zone,concat(a.svr_ip,':',a.svr_port) observer, cpu_total, (cpu_total-cpu_assigned) cpu_free, round(mem_total/1024/1024/1024) mem_total_gb, round((mem_total-mem_assigned)/1024/1024/1024) mem_free_gb, usec_to_time(b.last_offline_time) last_offline_time, usec_to_time(b.start_service_time) start_service_time, b.status, usec_to_time(b.stop_time) stop_time, b.build_version
from __all_virtual_server_stat a join __all_server b on (a.svr_ip=b.svr_ip and a.svr_port=b.svr_port)
order by a.zone, a.svr_ip
;

+-------+--------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------+
| zone  | observer           | cpu_total | cpu_free | mem_total_gb | mem_free_gb | last_offline_time          | start_service_time         | status | stop_time                  | build_version                                                          |
+-------+--------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------+
| zone1 | 172.20.249.52:2882 |        14 |     11.5 |            5 |           4 | 1970-01-01 08:00:00.000000 | 2021-09-12 08:36:06.357140 | active | 1970-01-01 08:00:00.000000 | 3.1.0_3-b20901e8c84d3ea774beeaca963c67d7802e4b4e(Aug 10 2021 08:10:38) |
| zone2 | 172.20.249.49:2882 |        14 |     11.5 |            5 |           4 | 1970-01-01 08:00:00.000000 | 2021-09-12 08:36:07.605244 | active | 1970-01-01 08:00:00.000000 | 3.1.0_3-b20901e8c84d3ea774beeaca963c67d7802e4b4e(Aug 10 2021 08:10:38) |
| zone3 | 172.20.249.51:2882 |        14 |     11.5 |            5 |           4 | 1970-01-01 08:00:00.000000 | 2021-09-12 08:36:07.631981 | active | 1970-01-01 08:00:00.000000 | 3.1.0_3-b20901e8c84d3ea774beeaca963c67d7802e4b4e(Aug 10 2021 08:10:38) |
+-------+--------------------+-----------+----------+--------------+-------------+----------------------------+----------------------------+--------+----------------------------+------------------------------------------------------------------------+
3 rows in set (0.004 sec)
```

在数据库列表里看到 `oceanbase` 这个数据库,就表示集群初始化成功。

`obclient` 安装和使用示例。

```bash
sudo rpm -ivh /tmp/obd/obclient-2.0.0-2.el8.x86_64.rpm /tmp/obd/libobclient-2.0.0-2.el8.x86_64.rpm

obclient -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase
```
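
除了通过 OBPROXY 的 2883 端口连接,也可以直连某个 observer 节点的 2881 端口。下面是一个示意:直连时用户名一般写成 `用户名@租户名`,不再带集群名,密码沿用本文示例值:

```bash
# 示意:obclient 直连 observer 的 2881 端口(不经过 OBPROXY)
# 直连时用户名格式为 用户名@租户名,不需要带集群名
obclient -h 172.20.249.52 -uroot@sys -P2881 -p0EI5N08d -c -A oceanbase
```
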
# 如何查看和修改 OceanBase 集群参数

OceanBase 以集群形态运行,提供多租户(也叫多实例)能力。集群初始化成功后,默认会有一个租户 `sys`,保存集群的所有元数据、参数等。管理 OceanBase 集群,就是登录 `sys` 租户进行操作。

## 查看和修改 OceanBase 集群参数

查看 OceanBase 集群参数的命令是:`show parameters [ like '%参数名特征%' ];` 或 `show parameters where name in ( '参数名1' , '参数名2' );` 。
不带 `like` 子句就是查看所有参数。

下面以查看参数 `memory_limit` 和 `memory_limit_percentage` 为例。

首先,这两个参数指定进程 `observer` 启动后能获取的最大内存,如果分配不出来,进程可能会启动失败或运行异常。这个内存可以指定大小,也可以指定总可用内存的比例。不管哪种方法,要确保实际可以拿到的内存不少于 8G 。
这两个参数实际只有一个生效:`memory_limit` 设置为 0 表示不按大小限制,此时按 `memory_limit_percentage` 的比例控制。使用哪个参数控制进程 `observer` 的内存大小由运维决定。生产环境机器内存很大的时候,通常通过 `memory_limit_percentage` 控制,默认值是 80(表示总可用内存的 80%)。

```sql
MySQL [oceanbase]> show parameters like 'memory_limit%';
+-------+----------+---------------+----------+-------------------------+-----------+-------+--------------------------------------------------------------------------------------------------------------------------------+----------+---------+---------+-------------------+
| zone  | svr_type | svr_ip        | svr_port | name                    | data_type | value | info                                                                                                                             | section  | scope   | source  | edit_level        |
+-------+----------+---------------+----------+-------------------------+-----------+-------+--------------------------------------------------------------------------------------------------------------------------------+----------+---------+---------+-------------------+
| zone1 | observer | 172.20.249.50 | 2882     | memory_limit_percentage | NULL      | 80    | the size of the memory reserved for internal use(for testing purpose). Range: [10, 90]                                          | OBSERVER | CLUSTER | DEFAULT | DYNAMIC_EFFECTIVE |
| zone1 | observer | 172.20.249.50 | 2882     | memory_limit            | NULL      | 8G    | the size of the memory reserved for internal use(for testing purpose), 0 means follow memory_limit_percentage. Range: 0, [8G,)  | OBSERVER | CLUSTER | DEFAULT | DYNAMIC_EFFECTIVE |
+-------+----------+---------------+----------+-------------------------+-----------+-------+--------------------------------------------------------------------------------------------------------------------------------+----------+---------+---------+-------------------+
2 rows in set (0.002 sec)

MySQL [oceanbase]> show parameters where name in ('memory_limit','memory_limit_percentage')\G
*************************** 1. row ***************************
      zone: zone1
  svr_type: observer
    svr_ip: 172.20.249.50
  svr_port: 2882
      name: memory_limit_percentage
 data_type: NULL
     value: 80
      info: the size of the memory reserved for internal use(for testing purpose). Range: [10, 90]
   section: OBSERVER
     scope: CLUSTER
    source: DEFAULT
edit_level: DYNAMIC_EFFECTIVE
*************************** 2. row ***************************
      zone: zone1
  svr_type: observer
    svr_ip: 172.20.249.50
  svr_port: 2882
      name: memory_limit
 data_type: NULL
     value: 8G
      info: the size of the memory reserved for internal use(for testing purpose), 0 means follow memory_limit_percentage. Range: 0, [8G,)
   section: OBSERVER
     scope: CLUSTER
    source: DEFAULT
edit_level: DYNAMIC_EFFECTIVE
2 rows in set (0.002 sec)
```

上面参数输出结果简单说明:

| 列名 | 列值 | 备注 |
|------------|-----------------------------------------------------------------------------------------|--------------------|
| zone | zone1 | 节点的 zone 名称 |
| svr_type | observer | 节点类型 |
| svr_ip | 172.20.249.50 | 节点 IP |
| svr_port | 2882 | 节点 RPC 端口 |
| name | memory_limit_percentage | 参数名 |
| data_type | NULL | 参数类型 |
| value | 80 | 参数值 |
| info | the size of the memory reserved for internal use(for testing purpose). Range: [10, 90] | 参数的描述。这个描述不是很准确,它实际限制的是进程 `observer` 能分配的最大内存。 |
| section | OBSERVER | 参数归类 |
| scope | CLUSTER | 参数生效范围 |
| edit_level | DYNAMIC_EFFECTIVE | 参数生效时机:动态生效 / 需要重启 |

OceanBase 集群参数的修改可以通过命令:`alter system set 参数名='参数值' [ server='节点IP:节点RPC端口' ];` 。不指定 `server` 子句,表示参数修改应用于所有 OceanBase 集群节点。
比如说下面调整参数 `syslog_level` 的值为 `USER_ERR` 。

```sql
MySQL [oceanbase]> alter system set syslog_level = 'USER_ERR' server='172.20.249.50:2882' ;
Query OK, 0 rows affected (0.021 sec)

MySQL [oceanbase]> show parameters like 'syslog_level'\G
*************************** 1. row ***************************
      zone: zone1
  svr_type: observer
    svr_ip: 172.20.249.50
  svr_port: 2882
      name: syslog_level
 data_type: NULL
     value: USER_ERR
      info: specifies the current level of logging. There are DEBUG, TRACE, INFO, WARN, USER_ERR, ERROR, six different log levels.
   section: OBSERVER
     scope: CLUSTER
    source: DEFAULT
edit_level: DYNAMIC_EFFECTIVE
1 row in set (0.002 sec)
```
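
上面示例通过 `server` 子句只修改了一个节点。如果要让参数修改对集群所有节点生效,去掉 `server` 子句即可。下面是一个示意(通过 OBPROXY 连接 `sys` 租户执行,`16G` 只是假设的示例值,请按机器实际内存调整):

```bash
# 示意:不带 server 子句,修改对集群所有节点生效(16G 仅为示例值)
mysql -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase \
  -e "alter system set memory_limit='16G'; show parameters like 'memory_limit'\G"
```
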

## OceanBase 集群参数文件

上面这些参数修改都是立即生效的,并且参数修改也会持久化到 OceanBase 集群节点自己的参数文件里。注意,这个参数文件不是指前面提到的 OBD 集群部署配置文件。
通常 OceanBase 集群每个节点的启动目录下会有一个目录 `etc`,里面保存了该节点进程的参数文件 `observer.config.bin` 。这是一个 `binary` 类型的文件,不能直接用 `cat` 命令读取,需要使用 `strings` 命令查看。这个文件也不建议直接修改,而是通过上面提到的参数修改命令来改。

```bash
[admin@obce00 oceanbase-ce]$ pwd
/home/admin/oceanbase-ce
[admin@obce00 oceanbase-ce]$ tree -L 2
.
├── bin
│   └── observer -> /home/admin/.obd/repository/oceanbase-ce/3.1.0/84bd2fe27f8b8243cc57d8a3f68b4c50f94aab80/bin/observer
├── etc
│   ├── observer.config.bin
│   └── observer.config.bin.history
├── etc2
│   ├── observer.conf.bin
│   └── observer.conf.bin.history
├── etc3
│   ├── observer.conf.bin
│   └── observer.conf.bin.history

<省略掉无关内容>

9 directories, 20 files
```

从上面的输出看,启动目录下有三个文件夹:`etc`、`etc2`、`etc3`,下面都有参数文件及其历史备份。进程 `observer` 默认读取文件夹 `etc` 中的参数文件,其他两个目录是参数文件的备份,备份路径通过参数 `config_additional_dir` 指定,默认值是同一个启动目录下的 `etc2` 和 `etc3` 。生产环境一般会把备份目录设置到其他磁盘上,更安全一些。当前 OBD 版本还是把它们放在同一块盘上,意义不大,用户可以自己修改这个目录。
此外,要注意的是 `etc2` 和 `etc3` 下的参数文件名(`observer.conf.bin`)跟 `etc` 下的参数文件名(`observer.config.bin`)并不完全一致,推测是早期开发遗留的问题。

```bash
MySQL [oceanbase]> show parameters like 'config_additional_dir'\G
*************************** 1. row ***************************
      zone: zone1
  svr_type: observer
    svr_ip: 172.20.249.50
  svr_port: 2882
      name: config_additional_dir
 data_type: NULL
     value: etc2;etc3
      info: additional directories of configure file
   section: OBSERVER
     scope: CLUSTER
    source: DEFAULT
edit_level: DYNAMIC_EFFECTIVE
1 row in set (0.002 sec)

[admin@obce00 oceanbase-ce]$ strings etc/observer.config.bin | grep -n memory_limit
25:memory_limit=8G
[admin@obce00 oceanbase-ce]$ strings etc2/observer.conf.bin | grep -n memory_limit
25:memory_limit=8G
[admin@obce00 oceanbase-ce]$ strings etc3/observer.conf.bin | grep -n memory_limit
25:memory_limit=8G
```
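
如果想把参数文件的备份目录放到其他磁盘上,可以修改参数 `config_additional_dir` 。下面是一个示意:`/data/etc2`、`/redo/etc3` 只是假设的目录,需要提前创建好,并保证 `admin` 用户有写权限:

```bash
# 示意:把参数文件备份目录改到数据盘和日志盘(目录为假设值,需提前创建)
mysql -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase \
  -e "alter system set config_additional_dir='/data/etc2;/redo/etc3';"
```
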

查看实际参数文件内容可以看出,不是所有参数都在这个参数文件里。只有那些被 `alter system set` 命令修改过的参数,以及在进程 `observer` 启动时通过 `-o` 指定的参数,才会记录在参数文件里。其他参数都取默认值(写在进程 `observer` 的代码里)。

## 使用 OBD 修改 OceanBase 集群参数

注意:上面直接在 OceanBase 集群里修改参数后,会立即同步到集群节点自身的参数文件中,但是不会同步到 OBD 的集群部署配置文件中(后期 OBD 可能会改进这个功能)。所以,使用 OBD 工具重启 OceanBase 集群的时候,默认又会带参数启动进程 `observer` 。如果前面在 OceanBase 集群里修改的参数在 OBD 集群部署配置文件中也有,并且后者的值还是老的,那就意味着那个参数又被调整回原来的设置值了。
运维需要理解这里变化的原理。
针对这个问题,OBD 提供两个解决思路:

+ 手动同步修改 OBD 集群部署配置文件中的参数值(以后工具可能会自动同步)。
+ OBD 重启集群的时候不带参数启动节点进程。

OBD 提供命令编辑集群部署配置文件:`obd cluster edit-config`,退出时会保存到上面提到的 OBD 工作目录中。

```bash
obd cluster edit-config obce-single

保存时输出:
oceanbase-ce-3.1.0 already installed.
Search param plugin and load ok
Parameter check ok
Save deploy "obce-single" configuration
deploy "need reload"
```

`edit-config` 命令退出后会提示 `reload` 集群配置。

```bash
[admin@obce00 ~]$ obd cluster reload obce-single
Get local repositories and plugins ok
Open ssh connection ok
Cluster status check ok
Connect to observer ok
obce-single reload
```

提示:
如果 OBD 命令运行出错,可以查看 OBD 的运行日志,查看方法:`tail -n 50 ~/.obd/log/obd` 。

## 进程启动时指定参数

前面介绍过,OBD 在启动集群节点进程 `observer` 的时候,会在命令行下通过 `-o` 指定参数。对于运维来说,如果某个节点的进程 `observer` 因为某种原因退出了,启动进程是当务之急,并且可能还需要调整某个参数再启动一次,这时再走 OBD 工具就有点效率低下了。
所以,掌握 OceanBase 集群节点进程 `observer` 的启动方法还是很有必要的。

首先要进入到工作目录,必须在上一次启动进程 `observer` 的工作目录(假设它是正确的)下再次尝试。前面分析过,工作目录就是 OBD 集群部署配置文件中指定的 `home_path`,本课程里工作目录都默认是 `/home/admin/oceanbase-ce` 。进程 `observer` 启动后会在这个目录下找目录 `etc`,读取默认的参数文件 `observer.config.bin`;启动后的日志默认写到 `log/{observer.log, rootservice.log, election.log}` 。所以,工作目录不能错,目录的权限也不能错。

下面示例不带参数启动进程 `observer` 的方法。为了模拟故障,先强行杀掉进程 `observer` 。

```bash
[admin@obce00 ~]$ cd
[admin@obce00 ~]$ cd oceanbase-ce/
[admin@obce00 oceanbase-ce]$ kill -9 `pidof observer`
[admin@obce00 oceanbase-ce]$ sleep 3
[admin@obce00 oceanbase-ce]$ ps -ef|grep observer
admin      35278   28904  0 11:26 pts/2    00:00:00 grep --color=auto observer
[admin@obce00 oceanbase-ce]$ pwd
/home/admin/oceanbase-ce
[admin@obce00 oceanbase-ce]$ bin/observer
bin/observer
[admin@obce00 oceanbase-ce]$ ps -ef|grep observer
admin      35280       1 99 11:26 ?        00:00:06 bin/observer
admin      35848   28904  0 11:26 pts/2    00:00:00 grep --color=auto observer
[admin@obce00 oceanbase-ce]$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      35280/bin/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      35280/bin/observer
```

下面示例带参数启动进程 `observer` 的方法。同样为了模拟故障,先强行杀掉进程 `observer` 。

```bash
[admin@obce00 oceanbase-ce]$ kill -9 `pidof observer`
[admin@obce00 oceanbase-ce]$ sleep 3
[admin@obce00 oceanbase-ce]$ bin/observer -o "max_syslog_file_count=15,datafile_size=60G"
bin/observer -o max_syslog_file_count=15,datafile_size=60G
optstr: max_syslog_file_count=15,datafile_size=60G
[admin@obce00 oceanbase-ce]$ ps -ef|grep observer
admin      35867       1 99 11:34 ?        00:00:09 bin/observer -o max_syslog_file_count=15,datafile_size=60G
admin      36435   28904  0 11:34 pts/2    00:00:00 grep --color=auto observer
[admin@obce00 oceanbase-ce]$ netstat -ntlp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:2881            0.0.0.0:*               LISTEN      35867/bin/observer
tcp        0      0 0.0.0.0:2882            0.0.0.0:*               LISTEN      35867/bin/observer
```
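
启动成功后,可以沿用前面用 `strings` 查看参数文件的方法,确认 `-o` 指定的参数已经持久化到参数文件里:

```bash
# 确认 -o 指定的参数已经写进了参数文件
cd /home/admin/oceanbase-ce
strings etc/observer.config.bin | grep -E "max_syslog_file_count|datafile_size"
```
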
# 如何部署 OBAgent

## OBAgent 简介

OBAgent 是用 GO 语言开发的监控采集框架,通常部署在 OBServer 节点上。OBAgent 支持推、拉两种数据采集模式,可以满足不同的应用场景。OBAgent 默认支持的插件包括主机数据采集、OceanBase 数据库指标采集、监控数据标签处理和 Prometheus 协议的 HTTP 服务。要使 OBAgent 支持其他数据源的采集,或者自定义数据的处理流程,只需要开发对应的插件即可。

## 编辑 OBAgent 部署配置文件

OBAgent 的部署配置可以跟 OceanBase 集群部署配置文件写在一起,也可以后期单独部署。附录 A.1 展示了同时部署 OceanBase 集群和 OBAgent 的配置文件。

下面示例采用单独的配置文件部署 OBAgent 。OBAgent 的部署配置文件风格跟 OceanBase 集群部署配置文件一样:
首先指定部署节点,包括节点名称和 IP 。节点名称保持唯一就行,可以是主机名(假设主机名是唯一的)。
然后指定全局配置,各个节点共同的配置都放在 `global` 节下,节点定制化的配置不放在这个节下面。
最后指定各个节点定制化的配置,比如说每个节点的 `zone` 名称是不一样的,其他的根据实际情况填写。

```yaml
vim obagent-only.yaml

obagent:
  servers:
    - name: obce01
      # Please don't use hostname, only IP can be supported
      ip: 172.20.249.53
    - name: obce02
      ip: 172.20.249.55
    - name: obce03
      ip: 172.20.249.56
  global:
    # The working directory for obagent. obagent is started under this directory. This is a required field.
    home_path: /home/admin/obagent
    # The port that pulls and manages the metrics. The default port number is 8088.
    server_port: 8088
    # Debug port for pprof. The default port number is 8089.
    pprof_port: 8089
    sql_port: 2881
    rpc_port: 2882
    # Log level. The default value is INFO.
    log_level: INFO
    # Log path. The default value is log/monagent.log.
    log_path: log/monagent.log
    # Encryption method. OBD supports aes and plain. The default value is plain.
    crypto_method: plain
    # Path to store the crypto key. The default value is conf/.config_secret.key.
    # crypto_path: conf/.config_secret.key
    # Size for a single log file. Log size is measured in Megabytes. The default value is 30M.
    log_size: 30
    # Expiration time for logs. The default value is 7 days.
    log_expire_day: 7
    # The maximum number for log files. The default value is 10.
    log_file_count: 10
    # Whether to use local time for log files. The default value is true.
    # log_use_localtime: true
    # Whether to enable log compression. The default value is true.
    # log_compress: true
    # Username for HTTP authentication. The default value is admin.
    http_basic_auth_user: admin
    # Password for HTTP authentication. The default value is root.
    http_basic_auth_password: eIYf7NAZeT
    # Username for debug service. The default value is admin.
    pprof_basic_auth_user: admin
    # Password for debug service. The default value is root.
    pprof_basic_auth_password: eIYf7NAZeT

    # 以下配置必须与 OceanBase 数据库一致
    # Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
    monitor_user: monitor
    # Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
    monitor_password: fLyaqjrp2R
    # Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
    cluster_name: obce-3zones
    # Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
    cluster_id: 1

  obce01:
    zone: zone1
  obce02:
    zone: zone2
  obce03:
    zone: zone3
```

注意:

+ 指定节点的连接端口用的是 `sql_port` 而不是 `mysql_port`,这点跟 OBSERVER 节点的配置不一样。
+ 监控用户(`monitor_user` 对应的用户)和密码需要事先在 `sys` 租户下创建:`grant select on oceanbase.* to monitor identified by 'fLyaqjrp2R';` 。创建方法可以参考下面的示意。
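
下面是在 `sys` 租户下创建监控用户的一个示意(通过 OBPROXY 连接,密码沿用本文示例值,请按实际环境替换):

```bash
# 示意:在 sys 租户下创建 OBAgent 使用的监控用户 monitor
# 连接信息、密码均为本文示例值,请按实际环境替换
mysql -h 172.20.249.52 -uroot@sys#obce-3zones -P2883 -p0EI5N08d -c -A oceanbase \
  -e "grant select on oceanbase.* to monitor identified by 'fLyaqjrp2R';"
```
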
## OBD 部署 OBAgent

第一次使用 `deploy` 命令,指定 OBAgent 的配置文件。

```bash
[admin@obce00 ~]$ obd cluster deploy obagent-only -c obagent-only.yaml
obagent-1.0.0 already installed.
+---------------------------------------------------------------------------+
|                                  Packages                                 |
+------------+---------+---------+------------------------------------------+
| Repository | Version | Release | Md5                                      |
+------------+---------+---------+------------------------------------------+
| obagent    | 1.0.0   | 2.el8   | 1d65fc3d2cd08b26d6142b6149eb6806260aa7db |
+------------+---------+---------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Remote obagent-1.0.0-1d65fc3d2cd08b26d6142b6149eb6806260aa7db repository install ok
Remote obagent-1.0.0-1d65fc3d2cd08b26d6142b6149eb6806260aa7db repository lib check ok
Cluster status check ok
Initializes obagent work home ok
obagent-only deployed
[admin@obce00 ~]$

[admin@obce00 ~]$ obd cluster list
+------------------------------------------------------------------------+
|                               Cluster List                              |
+--------------+---------------------------------------+-----------------+
| Name         | Configuration Path                    | Status (Cached) |
+--------------+---------------------------------------+-----------------+
| obce-3zones  | /home/admin/.obd/cluster/obce-3zones  | running         |
| obagent-only | /home/admin/.obd/cluster/obagent-only | deployed        |
+--------------+---------------------------------------+-----------------+
```

上面 `deploy` 命令运行后,配置文件就被复制到 `~/.obd/cluster/obagent-only/config.yaml` 了,后续再修改 `obagent-only.yaml` 文件不会生效。此时可以用 `edit-config` 命令编辑实际使用的配置文件,或者使用 `destroy` 命令清理部署后重新读取 `obagent-only.yaml` 开始部署,取决于改动的影响范围。

`deploy` 命令只是在各个节点上部署 OBAgent 软件(直接解压缩方式,不是 RPM 安装),目录如下:

```bash
[admin@obce01 ~]$ pwd
/home/admin
[admin@obce01 ~]$ tree obagent/
obagent/
├── bin
│   └── monagent -> /home/admin/.obd/repository/obagent/1.0.0/1d65fc3d2cd08b26d6142b6149eb6806260aa7db/bin/monagent
├── conf
│   ├── config_properties
│   │   ├── monagent_basic_auth.yaml
│   │   └── monagent_pipeline.yaml
│   ├── module_config
│   │   ├── monagent_basic_auth.yaml
│   │   ├── monagent_config.yaml
│   │   ├── monitor_node_host.yaml
│   │   └── monitor_ob.yaml
│   ├── monagent.yaml
│   └── prometheus_config
│       ├── prometheus.yaml
│       └── rules
│           ├── host_rules.yaml
│           └── ob_rules.yaml
├── lib
├── log
│   └── monagent.log
└── run

9 directories, 12 files
[admin@obce01 ~]$
```

## OBD 启动 OBAgent

启动命令是 `start` 。

```bash
[admin@obce00 ~]$ obd cluster start obagent-only
Get local repositories and plugins ok
Open ssh connection ok
Cluster param config check ok
Check before start obagent ok
obagent program health check ok
+---------------------------------------------------+
|                       obagent                     |
+---------------+-------------+------------+--------+
| ip            | server_port | pprof_port | status |
+---------------+-------------+------------+--------+
| 172.20.249.53 | 8088        | 8089       | active |
| 172.20.249.55 | 8088        | 8089       | active |
| 172.20.249.56 | 8088        | 8089       | active |
+---------------+-------------+------------+--------+
obagent-only running
[admin@obce00 ~]$
[admin@obce00 ~]$ obd cluster list
+------------------------------------------------------------------------+
|                               Cluster List                              |
+--------------+---------------------------------------+-----------------+
| Name         | Configuration Path                    | Status (Cached) |
+--------------+---------------------------------------+-----------------+
| obce-3zones  | /home/admin/.obd/cluster/obce-3zones  | running         |
| obagent-only | /home/admin/.obd/cluster/obagent-only | running         |
+--------------+---------------------------------------+-----------------+
[admin@obce00 ~]$
```

OBAgent 启动后,节点上会多出监控进程,其中进程 `monagent` 会监听前面指定的端口。

```bash
[admin@obce01 ~]$ ps -ef|grep agent | grep -v grep
admin      90855       1  0 12:08 ?        00:00:00 /home/admin/obagent/bin/monagent -c conf/monagent.yaml
[admin@obce01 ~]$
[admin@obce01 ~]$ netstat -ntlp |grep 90855
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::8088                 :::*                    LISTEN      90855/monagent
tcp6       0      0 :::8089                 :::*                    LISTEN      90855/monagent
[admin@obce01 ~]$
```

事后也可以通过 OBD 查看 OBAgent 部署情况。

```bash
[admin@obce00 ~]$ obd cluster display obagent-only
Get local repositories and plugins ok
Open ssh connection ok
Cluster status check ok
+---------------------------------------------------+
|                       obagent                     |
+---------------+-------------+------------+--------+
| ip            | server_port | pprof_port | status |
+---------------+-------------+------------+--------+
| 172.20.249.53 | 8088        | 8089       | active |
| 172.20.249.55 | 8088        | 8089       | active |
| 172.20.249.56 | 8088        | 8089       | active |
+---------------+-------------+------------+--------+
[admin@obce00 ~]$
```

## Prometheus 配置

OBAgent 启动后会在节点上自动生成 Prometheus 配置文件,位置在 OBAgent 安装目录下,如 `/home/admin/obagent/conf/prometheus_config/` 。这个配置文件可以直接给 Prometheus 使用。

示例如下:

```yaml
vim prometheus_config/prometheus.yaml

global:
  scrape_interval: 1s
  evaluation_interval: 10s

rule_files:
  - "rules/*rules.yaml"

scrape_configs:
  - job_name: prometheus
    metrics_path: /metrics
    scheme: http
    static_configs:
      - targets:
          - 'localhost:9090'
  - job_name: node
    basic_auth:
      username: admin
      password: eIYf7NAZeT
    metrics_path: /metrics/node/host
    scheme: http
    static_configs:
      - targets:
          - 172.20.249.53:8088
          - 172.20.249.55:8088
          - 172.20.249.56:8088
  - job_name: ob_basic
    basic_auth:
      username: admin
      password: eIYf7NAZeT
    metrics_path: /metrics/ob/basic
    scheme: http
    static_configs:
      - targets:
          - 172.20.249.53:8088
          - 172.20.249.55:8088
          - 172.20.249.56:8088
  - job_name: ob_extra
    basic_auth:
      username: admin
      password: eIYf7NAZeT
    metrics_path: /metrics/ob/extra
    scheme: http
    static_configs:
      - targets:
          - 172.20.249.53:8088
          - 172.20.249.55:8088
          - 172.20.249.56:8088
  - job_name: agent
    basic_auth:
      username: admin
      password: eIYf7NAZeT
    metrics_path: /metrics/stat
    scheme: http
    static_configs:
      - targets:
          - 172.20.249.53:8088
          - 172.20.249.55:8088
          - 172.20.249.56:8088
```

稍加说明如下:

| 配置项 | 值 | 说明 |
|---------------------|-------------------|--------|
| scrape_interval | 1s | 抓取间隔 |
| evaluation_interval | 10s | 评估规则间隔 |
| rule_files | rules/*rules.yaml | 报警规则 |
| scrape_configs | | 抓取配置 |

下载并解压缩 Prometheus 后,启动方法如下:

```bash
cd prometheus-2.30.3.linux-amd64 && ./prometheus --config.file=./prometheus.yaml
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:400 msg="No time or size retention was set so using the default time retention" duration=15d
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:438 msg="Starting Prometheus" version="(version=2.30.3, branch=HEAD, revision=f29caccc42557f6a8ec30ea9b3c8c089391bd5df)"
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:443 build_context="(go=go1.17.1, user=root@5cff4265f0e3, date=20211005-16:10:52)"
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:444 host_details="(Linux 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 obce00 (none))"
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:445 fd_limits="(soft=65535, hard=65535)"
level=info ts=2021-11-19T05:41:57.789Z caller=main.go:446 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2021-11-19T05:41:57.791Z caller=web.go:541 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2021-11-19T05:41:57.792Z caller=main.go:822 msg="Starting TSDB ..."
level=info ts=2021-11-19T05:41:57.792Z caller=tls_config.go:191 component=web msg="TLS is disabled." http2=false
level=info ts=2021-11-19T05:41:57.795Z caller=head.go:479 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
level=info ts=2021-11-19T05:41:57.795Z caller=head.go:513 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=7.579µs
level=info ts=2021-11-19T05:41:57.795Z caller=head.go:519 component=tsdb msg="Replaying WAL, this may take a while"
level=info ts=2021-11-19T05:41:57.795Z caller=head.go:590 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2021-11-19T05:41:57.795Z caller=head.go:596 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=30.235µs wal_replay_duration=183.559µs total_replay_duration=238.771µs
level=info ts=2021-11-19T05:41:57.796Z caller=main.go:849 fs_type=EXT4_SUPER_MAGIC
level=info ts=2021-11-19T05:41:57.796Z caller=main.go:852 msg="TSDB started"
level=info ts=2021-11-19T05:41:57.796Z caller=main.go:979 msg="Loading configuration file" filename=prometheus.yaml
level=info ts=2021-11-19T05:41:57.802Z caller=main.go:1016 msg="Completed loading of configuration file" filename=prometheus.yaml totalDuration=6.363415ms db_storage=1.001µs remote_storage=4.404µs web_handler=3.04µs query_engine=886ns scrape=418.015µs scrape_sd=129.011µs notify=5.433µs notify_sd=4.872µs rules=5.284804ms
level=info ts=2021-11-19T05:41:57.802Z caller=main.go:794 msg="Server is ready to receive web requests."
```

启动后通过浏览器访问:[http://172.24.50.39:9090/graph](http://172.24.50.39:9090/graph) 。

具体 Prometheus 使用方法可以参考 [Prometheus 官方文档](https://prometheus.io/docs/introduction/overview/) 。

## OBAgent 重启方法

直接重启某个节点的 OBAgent 的方法是:

```bash
kill -9 `pidof monagent`

cd /home/admin/obagent && nohup bin/monagent -c conf/monagent.yaml &
```

如果是集中重启,那就使用 OBD 命令:

```bash
obd cluster restart obagent-only
```

如果 OBAgent 是跟 OceanBase 集群部署在同一个 OBD 配置文件里的,那就需要用 `-c` 指定只重启组件 `obagent` 。

```bash
obd cluster restart obce-3zones-obagent -c obagent

[admin@obce00 ~]$ obd cluster restart obce-3zones-obagent -c obagent
Get local repositories and plugins ok
Open ssh connection ok
Stop obagent ok
succeed
Get local repositories and plugins ok
Open ssh connection ok
Cluster param config check ok
Check before start obagent ok
obagent program health check ok
+--------------------------------------------------+
|                      obagent                     |
+--------------+-------------+------------+--------+
| ip           | server_port | pprof_port | status |
+--------------+-------------+------------+--------+
| 172.24.50.37 | 8088        | 8089       | active |
| 172.24.50.40 | 8088        | 8089       | active |
| 172.24.50.38 | 8088        | 8089       | active |
+--------------+-------------+------------+--------+
succeed
```

## Grafana 使用

首先,从 Grafana 官网下载最新版本并安装启动。下载地址:[https://grafana.com/grafana/download?pg=get&plcmt=selfmanaged-box1-cta1](https://grafana.com/grafana/download?pg=get&plcmt=selfmanaged-box1-cta1) 。安装和启动可以参考下面的示意。
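
下面是在 CentOS/RHEL 类系统上用 rpm 包安装并启动 Grafana 的一个示意(包名和版本号只是示例,请以官网下载页面给出的为准):

```bash
# 示意:rpm 方式安装并启动 Grafana(版本号仅为示例,请以官网下载页为准)
sudo yum install -y https://dl.grafana.com/oss/release/grafana-8.2.3-1.x86_64.rpm
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl status grafana-server
# 默认监听 3000 端口,浏览器访问 http://<部署机IP>:3000 ,初始账号密码一般是 admin/admin
```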

然后在 Grafana 里新增 Datasource,填入 Prometheus 地址。

第三,从 Grafana 官网下载 OceanBase 提交的主机性能模板和 OceanBase 性能模板文件,文件是 json 格式。

+ [主机性能模板](https://grafana.com/grafana/dashboards/15216)
+ [OceanBase 性能模板](https://grafana.com/grafana/dashboards/15215)

下载到本机后,在 Grafana 里 Import 这两个 json 文件。

## 附录

### A.1 OceanBase 和 OBAgent 一起的配置文件

```yaml
|
||||
# Only need to configure when remote login is required
|
||||
user:
|
||||
username: admin
|
||||
# password: your password if need
|
||||
key_file: /home/admin/.ssh/id_rsa.pub
|
||||
port: your ssh port, default 22
|
||||
# timeout: ssh connection timeout (second), default 30
|
||||
oceanbase-ce:
|
||||
servers:
|
||||
- name: obce01
|
||||
# Please don't use hostname, only IP can be supported
|
||||
ip: 172.24.50.37
|
||||
- name: obce02
|
||||
ip: 172.24.50.40
|
||||
- name: obce03
|
||||
ip: 172.24.50.38
|
||||
global:
|
||||
# Please set devname as the network adaptor's name whose ip is in the setting of severs.
|
||||
# if set severs as "127.0.0.1", please set devname as "lo"
|
||||
# if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0"
|
||||
devname: eth0
|
||||
cluster_id: 2
|
||||
# please set memory limit to a suitable value which is matching resource.
|
||||
memory_limit: 8G # The maximum running memory for an observer
|
||||
system_memory: 3G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G.
|
||||
stack_size: 512K
|
||||
cpu_count: 16
|
||||
cache_wash_threshold: 1G
|
||||
__min_full_resource_pool_memory: 268435456
|
||||
workers_per_cpu_quota: 10
|
||||
schema_history_expire_time: 1d
|
||||
# The value of net_thread_count had better be same as cpu's core number.
|
||||
net_thread_count: 4
|
||||
major_freeze_duty_time: Disable
|
||||
minor_freeze_times: 10
|
||||
enable_separate_sys_clog: 0
|
||||
enable_merge_by_turn: FALSE
|
||||
#datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90.
|
||||
datafile_size: 50G
|
||||
syslog_level: WARN # System log level. The default value is INFO.
|
||||
enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true.
|
||||
enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false.
|
||||
max_syslog_file_count: 10 # The maximum number of reserved log files before enabling auto recycling. The default value is 0.
|
||||
# observer cluster name, consistent with obproxy's cluster_name
|
||||
appname: obce-3zones
|
||||
root_password: 0EI5N08d # root user password, can be empty
|
||||
proxyro_password: uY7Yf8zx # proxyro user pasword, consistent with obproxy's observer_sys_password, can be empty
|
||||
obce01:
|
||||
mysql_port: 3881 # External port for OceanBase Database. The default value is 3881.
|
||||
rpc_port: 3882 # Internal port for OceanBase Database. The default value is 3882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data/obce
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo/obce
|
||||
zone: zone1
|
||||
obce02:
|
||||
mysql_port: 3881 # External port for OceanBase Database. The default value is 3881.
|
||||
rpc_port: 3882 # Internal port for OceanBase Database. The default value is 3882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data/obce
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo/obce
|
||||
zone: zone2
|
||||
obce03:
|
||||
mysql_port: 3881 # External port for OceanBase Database. The default value is 3881.
|
||||
rpc_port: 3882 # Internal port for OceanBase Database. The default value is 3882.
|
||||
# The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field.
|
||||
home_path: /home/admin/oceanbase-ce
|
||||
# The directory for data storage. The default value is $home_path/store.
|
||||
data_dir: /data/obce
|
||||
# The directory for clog, ilog, and slog. The default value is the same as the data_dir value.
|
||||
redo_dir: /redo/obce
|
||||
zone: zone3
|
||||
obproxy:
|
||||
servers:
|
||||
- 172.24.50.39
|
||||
- 172.24.50.37
|
||||
# Set ependent components for the component.
|
||||
# When the associated configurations are not done, OBD will automatically get the these configurations from the dependent components.
|
||||
depends:
|
||||
- oceanbase-ce
|
||||
global:
|
||||
listen_port: 3883 # External port. The default value is 3883.
|
||||
prometheus_listen_port: 3884 # The Prometheus port. The default value is 3884.
|
||||
home_path: /home/admin/obproxy
|
||||
# oceanbase root server list
|
||||
# format: ip:mysql_port;ip:mysql_port
|
||||
rs_list: 172.24.50.37:3881;172.24.50.40:3881;172.24.50.38:3881
|
||||
enable_cluster_checkout: false
|
||||
# observer cluster name, consistent with oceanbase-ce's appname
|
||||
cluster_name: obce-3zones
|
||||
obproxy_sys_password: 0MdTv1tm # obproxy sys user password, can be empty
|
||||
observer_sys_password: uY7Yf8zx # proxyro user pasword, consistent with oceanbase-ce's proxyro_password, can be empty
|
||||
obagent:
|
||||
servers:
|
||||
- name: obce01
|
||||
# Please don't use hostname, only IP can be supported
|
||||
ip: 172.24.50.37
|
||||
- name: obce02
|
||||
ip: 172.24.50.40
|
||||
- name: obce03
|
||||
ip: 172.24.50.38
|
||||
depends:
|
||||
- oceanbase-ce
|
||||
global:
|
||||
# The working directory for obagent. obagent is started under this directory. This is a required field.
|
||||
home_path: /home/admin/obagent
|
||||
# The port that pulls and manages the metrics. The default port number is 8088.
|
||||
server_port: 8088
|
||||
# Debug port for pprof. The default port number is 8089.
|
||||
pprof_port: 8089
|
||||
sql_port: 3881
|
||||
rpc_port: 3882
|
||||
# Log level. The default value is INFO.
|
||||
log_level: INFO
|
||||
# Log path. The default value is log/monagent.log.
|
||||
log_path: log/monagent.log
|
||||
# Encryption method. OBD supports aes and plain. The default value is plain.
|
||||
crypto_method: plain
|
||||
# Path to store the crypto key. The default value is conf/.config_secret.key.
|
||||
# crypto_path: conf/.config_secret.key
|
||||
# Size for a single log file. Log size is measured in Megabytes. The default value is 30M.
|
||||
log_size: 30
|
||||
# Expiration time for logs. The default value is 7 days.
|
||||
log_expire_day: 7
|
||||
# The maximum number for log files. The default value is 10.
|
||||
log_file_count: 10
|
||||
# Whether to use local time for log files. The default value is true.
|
||||
# log_use_localtime: true
|
||||
# Whether to enable log compression. The default value is true.
|
||||
# log_compress: true
|
||||
# Username for HTTP authentication. The default value is admin.
|
||||
http_basic_auth_user: admin
|
||||
# Password for HTTP authentication. The default value is root.
|
||||
http_basic_auth_password: eIYf7NAZeT
|
||||
# Username for debug service. The default value is admin.
|
||||
pprof_basic_auth_user: admin
|
||||
# Password for debug service. The default value is root.
|
||||
pprof_basic_auth_password: eIYf7NAZeT
|
||||
|
||||
# 以下配置必须与 OceanBase 数据库一致
|
||||
# Monitor username for OceanBase Database. The user must have read access to OceanBase Database as a system tenant. The default value is root.
|
||||
monitor_user: monitor
|
||||
# Monitor password for OceanBase Database. The default value is empty. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the root_password in oceanbase-ce.
|
||||
monitor_password: fLyaqjrp2R
|
||||
# Cluster name for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the appname in oceanbase-ce.
|
||||
# cluster_name: obce-3zones
|
||||
# Cluster ID for OceanBase Database. When a depends exists, OBD gets this value from the oceanbase-ce of the depends. The value is the same as the cluster_id in oceanbase-ce.
|
||||
# cluster_id: 2
|
||||
```

### A.2 `obagent` 输出的性能数据

示例数据。

```bash
|
||||
[admin@obce00 ~]$ curl --user admin:eIYf7NAZeT -L 'http://172.24.50.40:8088/metrics/ob/basic'
|
||||
# HELP ob_active_session_num monitor collected metric
|
||||
# TYPE ob_active_session_num untyped
|
||||
ob_active_session_num{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 6
|
||||
# HELP ob_cache_size_bytes monitor collected metric
|
||||
# TYPE ob_cache_size_bytes untyped
|
||||
ob_cache_size_bytes{app="OB",cache_name="location_cache",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 4.193216e+06
|
||||
ob_cache_size_bytes{app="OB",cache_name="user_tab_col_stat_cache",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 6.290304e+06
|
||||
ob_cache_size_bytes{app="OB",cache_name="user_table_stat_cache",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 6.290304e+06
|
||||
# HELP ob_partition_num monitor collected metric
|
||||
# TYPE ob_partition_num untyped
|
||||
ob_partition_num{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",role="1",svr_ip="172.24.50.40",tenant_name="sys"} 1198
|
||||
# HELP ob_plan_cache_access_total monitor collected metric
|
||||
# TYPE ob_plan_cache_access_total untyped
|
||||
ob_plan_cache_access_total{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 22984
|
||||
# HELP ob_plan_cache_hit_total monitor collected metric
|
||||
# TYPE ob_plan_cache_hit_total untyped
|
||||
ob_plan_cache_hit_total{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 8645
|
||||
# HELP ob_plan_cache_memory_bytes monitor collected metric
|
||||
# TYPE ob_plan_cache_memory_bytes untyped
|
||||
ob_plan_cache_memory_bytes{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 1.3404239e+07
|
||||
# HELP ob_server_num monitor collected metric
|
||||
# TYPE ob_server_num untyped
|
||||
ob_server_num{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",server_ips="172.24.50.37,172.24.50.38,172.24.50.40",status="active",svr_ip="172.24.50.40"} 3
|
||||
# HELP ob_sysstat monitor collected metric
|
||||
# TYPE ob_sysstat untyped
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 47136
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 3.0078186e+07
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 46863
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10003",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 4.291008e+07
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10005",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} -2.050408e+06
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="10006",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 4096
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="130000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 8.59442e+07
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="130001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 8.8080384e+07
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="130002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 4.0265315e+08
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="130004",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 8.0530635e+08
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="140002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 1.610612736e+09
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="140003",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 3.81681664e+08
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="140005",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 500
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="140006",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 1
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="20001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 4122
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="20002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 104938
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="30000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="30001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="30002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="30005",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 15330
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="30006",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 7.566136e+06
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 463
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 1.863916e+06
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40003",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40004",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
|
||||
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40005",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40006",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40007",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40008",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40009",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40010",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 108
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40011",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 8339
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="40012",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 329
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="50000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="50001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="50008",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="50009",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60000",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60001",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60002",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60003",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 4
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60004",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 43325
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60005",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 8.388608e+06
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60019",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60020",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60021",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60022",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60023",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="60024",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="80040",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 11980
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="80041",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 2.0937394e+07
ob_sysstat{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",stat_id="80057",svr_ip="172.24.50.40",tenant_id="1",tenant_name="sys"} 0
# HELP ob_table_num monitor collected metric
# TYPE ob_table_num untyped
ob_table_num{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 1013
# HELP ob_waitevent_wait_seconds_total monitor collected metric
# TYPE ob_waitevent_wait_seconds_total untyped
ob_waitevent_wait_seconds_total{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 49578.6026
# HELP ob_waitevent_wait_total monitor collected metric
# TYPE ob_waitevent_wait_total untyped
ob_waitevent_wait_total{app="OB",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",tenant_name="sys"} 787473
# HELP ob_zone_current_timestamp monitor collected metric
# TYPE ob_zone_current_timestamp untyped
ob_zone_current_timestamp{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="cluster",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="config_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="frozen_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="gc_schema_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="global_broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merge_error",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="lease_info_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="privilege_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="proposal_frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="snapshot_gc_ts",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="storage_format_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="time_zone_info_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="try_frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="warm_up_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303541e+15
ob_zone_current_timestamp{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303541e+15
# HELP ob_zone_stat monitor collected metric
# TYPE ob_zone_stat untyped
ob_zone_stat{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1
ob_zone_stat{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1
ob_zone_stat{app="OB",name="all_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1
ob_zone_stat{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1
ob_zone_stat{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1
ob_zone_stat{app="OB",name="broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1
ob_zone_stat{app="OB",name="cluster",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="config_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.63730315985165e+15
ob_zone_stat{app="OB",name="frozen_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1
ob_zone_stat{app="OB",name="gc_schema_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="global_broadcast_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1
ob_zone_stat{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="idc",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="is_merge_error",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="is_merge_timeout",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="is_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303150866399e+15
ob_zone_stat{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303150867716e+15
ob_zone_stat{app="OB",name="last_merged_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303150868863e+15
ob_zone_stat{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1
ob_zone_stat{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1
ob_zone_stat{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1
ob_zone_stat{app="OB",name="last_merged_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1
ob_zone_stat{app="OB",name="lease_info_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1.63730315985492e+15
ob_zone_stat{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 1.637303150866399e+15
ob_zone_stat{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 1.637303150867716e+15
ob_zone_stat{app="OB",name="merge_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 1.637303150868863e+15
ob_zone_stat{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="merge_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="privilege_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="proposal_frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1
ob_zone_stat{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="recovery_status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="region",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="snapshot_gc_ts",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 2
ob_zone_stat{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 2
ob_zone_stat{app="OB",name="status",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 2
ob_zone_stat{app="OB",name="storage_format_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 4
ob_zone_stat{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="storage_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="suspend_merging",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
ob_zone_stat{app="OB",name="time_zone_info_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="try_frozen_version",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 1
ob_zone_stat{app="OB",name="warm_up_start_time",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone=""} 0
ob_zone_stat{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone1"} 0
ob_zone_stat{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone2"} 0
ob_zone_stat{app="OB",name="zone_type",ob_cluster_id="2",ob_cluster_name="obce-3zones",obzone="zone1",svr_ip="172.24.50.40",zone="zone3"} 0
```
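The output above is in the Prometheus exposition format, so a Prometheus server can scrape it directly. The following is only a minimal sketch of what a scrape job for this endpoint could look like; the port `8088`, the metrics path `/metrics/ob/basic`, and the single target address are assumptions based on common obagent defaults and the server IP shown in the output, so adjust them to match your actual obagent configuration (and add `basic_auth` if obagent has HTTP basic authentication enabled).

```
# Hypothetical prometheus.yml fragment: scrape one obagent metrics endpoint.
# Port 8088 and path /metrics/ob/basic are assumed obagent defaults; change
# them if your obagent configuration uses different values.
scrape_configs:
  - job_name: "ob_basic"
    metrics_path: "/metrics/ob/basic"
    scheme: "http"
    static_configs:
      - targets:
          - "172.24.50.40:8088"   # add the other OBServer nodes here as well
```

Once Prometheus is scraping these targets, Grafana can use it as a data source to chart series such as `ob_sysstat` and `ob_zone_stat` shown above.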
24
docs/docs/junior-training/ob-quick-start/readme.md
Normal file
@ -0,0 +1,24 @@
# About

This directory contains the OceanBase Community Edition tutorial, from getting started to hands-on practice. You are welcome to use it for reference and practice.

Because our test environments are limited and cannot cover every customer scenario, a few places may have problems; additions and corrections are welcome.

This directory will be updated continuously.

## Table of Contents

+ [Chapter 1: Overview of the OceanBase Database](chapter01/1.md)
+ [Chapter 2: How to Deploy OceanBase Community Edition](chapter02/2.0.md)
+ [Chapter 3: How to Use OceanBase Community Edition](chapter03/3.0.md)
+ [Chapter 4: How to Migrate MySQL Data to OceanBase](chapter04/4.0.md)
+ [Chapter 5: How to Operate and Maintain OceanBase Community Edition](chapter05/5.0.md)
+ [Chapter 6: How to Test OceanBase Community Edition Performance](chapter06/6.0.md)
+ [Chapter 7: How to Diagnose and Tune OceanBase Community Edition Performance](chapter07/7.0.md)
+ [Chapter 8: Introduction to OceanBase Community Edition Ecosystem Tools](chapter08/8.0.md)

## How to Contact Us

OceanBase enthusiasts, users, and customers are welcome to contact us with any questions or feedback through:

+ The Community Edition official forum: [https://open.oceanbase.com/answer](https://open.oceanbase.com/answer)
+ An `Issue` on the Community Edition project repository: [https://github.com/oceanbase/oceanbase/issues](https://github.com/oceanbase/oceanbase/issues)
+ The DingTalk group: `33254054`