Case description:
Routine cluster and database administration for KingbaseES RAC after deployment.
Applicable version:
KingbaseES V008R006C008M030B0010
Operating system version:
[root@node201 KingbaseHA]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
Cluster architecture:
As shown below, node201 and node202 are the cluster nodes:
Node information:
[root@node201 KingbaseHA]# vi /etc/hosts
192.168.1.201 node201
192.168.1.202 node202
192.168.1.203 node203 iscsi_Srv
I. Cluster database architecture
1. Database server processes
As shown below, each cluster node runs one instance with access to the shared database. The instance on each node is started manually with sys_ctl, and each instance has its own PID file:
[root@node201 KingbaseHA]# ps -ef |grep kingbase
kingbase 23496 1 0 11:05 ? 00:00:00 /opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/Server/bin/kingbase -D /sharedata/data_gfs2/kingbase/data -c config_file=/sharedata/data_gfs2/kingbase/data/kingbase.conf -c log_directory=sys_log -h 0.0.0.0
kingbase 24164 23496 0 11:06 ? 00:00:00 kingbase: logger
kingbase 24165 23496 0 11:06 ? 00:00:00 kingbase: lmon
kingbase 24166 23496 0 11:06 ? 00:00:00 kingbase: lms 1
kingbase 24167 23496 0 11:06 ? 00:00:00 kingbase: lms 2
kingbase 24168 23496 0 11:06 ? 00:00:00 kingbase: lms 3
kingbase 24169 23496 0 11:06 ? 00:00:00 kingbase: lms 4
kingbase 24170 23496 0 11:06 ? 00:00:00 kingbase: lms 5
kingbase 24171 23496 0 11:06 ? 00:00:00 kingbase: lms 6
kingbase 24172 23496 0 11:06 ? 00:00:00 kingbase: lms 7
kingbase 24393 23496 0 11:06 ? 00:00:00 kingbase: checkpointer
kingbase 24394 23496 0 11:06 ? 00:00:00 kingbase: background writer
kingbase 24395 23496 0 11:06 ? 00:00:00 kingbase: global deadlock checker
kingbase 24396 23496 0 11:06 ? 00:00:00 kingbase: transaction syncer
kingbase 24397 23496 0 11:06 ? 00:00:00 kingbase: walwriter
kingbase 24398 23496 0 11:06 ? 00:00:00 kingbase: autovacuum launcher
kingbase 24399 23496 0 11:06 ? 00:00:00 kingbase: archiver last was 00000001000000000000000E
kingbase 24402 23496 0 11:06 ? 00:00:00 kingbase: stats collector
kingbase 24403 23496 0 11:06 ? 00:00:00 kingbase: kwr collector
kingbase 24404 23496 0 11:06 ? 00:00:00 kingbase: ksh writer
kingbase 24405 23496 0 11:06 ? 00:00:00 kingbase: ksh collector
kingbase 24406 23496 0 11:06 ? 00:00:00 kingbase: logical replication launcher
Tips:
The lms processes handle communication with the other nodes for cluster requests.
lms occupies 7 ports.
# One instance per node; the instance PID files:
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/kingbase*.pid
-rw------- 1 kingbase kingbase 100 Aug 12 11:06 /sharedata/data_gfs2/kingbase/data/kingbase_1.pid
-rw------- 1 kingbase kingbase 100 Aug 12 11:06 /sharedata/data_gfs2/kingbase/data/kingbase_2.pid
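The text above notes that each instance is started manually with sys_ctl. A hedged sketch of that step (the binary and data paths are taken from this deployment's ps output; the -l logfile option is an assumption based on sys_ctl's pg_ctl lineage, and the guard lets the snippet no-op on a host without KingbaseES):

```shell
# A minimal sketch of starting one instance by hand (run as the kingbase OS user).
BIN=/opt/Kingbase/ES/V8/KESRealPro/V008R006C008M030B0010/Server/bin
DATA=/sharedata/data_gfs2/kingbase/data
if [ -x "$BIN/sys_ctl" ]; then
    # Assumed pg_ctl-style invocation: start the instance against the shared data dir.
    "$BIN/sys_ctl" start -D "$DATA" -l /home/kingbase/log/kingbase1.log
    STARTED=yes
else
    echo "sys_ctl not found under $BIN; nothing to start on this host"
    STARTED=no
fi
```

Run this once per node; each instance then registers its own kingbase_N.pid file in the shared data directory, as listed above.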
2. Data storage architecture
1) Database storage directory data (on the gfs2 shared filesystem)
test=# show data_directory;
           data_directory
------------------------------------
 /sharedata/data_gfs2/kingbase/data
(1 row)
2) Per-node configuration files
By default every instance reads data/kingbase.conf; a separate configuration file can also be created for each node, and it takes precedence over the shared database configuration:
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/kingbase*.conf
-rw------- 1 kingbase kingbase 0 Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_1.conf
-rw------- 1 kingbase kingbase 0 Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_2.conf
-rw------- 1 kingbase kingbase 0 Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_3.conf
-rw------- 1 kingbase kingbase 0 Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase_4.conf
-rw------- 1 kingbase kingbase 88 Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase.auto.conf
-rw------- 1 kingbase kingbase 28K Aug 2 11:45 /sharedata/data_gfs2/kingbase/data/kingbase.conf
# To enable the per-node configuration file, set in kingbase.conf:
sub_config_file='/sharedata/data_gfs2/kingbase/data/kingbase_node.conf'
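For illustration, a per-node override file might contain a single diverging parameter (the parameter below is a hypothetical example, not part of this deployment; anything set in kingbase_1.conf shadows the shared kingbase.conf for instance 1 only):

```conf
# /sharedata/data_gfs2/kingbase/data/kingbase_1.conf  -- applies to instance 1 only
# Hypothetical override: log statements slower than 1s on this node.
log_min_duration_statement = 1000
```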
3) Per-node WAL and sys_log files
As shown below, each node's WAL and sys_log files are kept in subdirectories named after the node id under sys_wal and sys_log respectively:
# sys_wal
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/sys_wal
total 16K
drwx------ 3 kingbase kingbase 3.8K Aug 12 11:11 1
drwx------ 3 kingbase kingbase 3.8K Aug 12 11:11 2
# sys_log
[root@node201 ~]# ls -lh /sharedata/data_gfs2/kingbase/data/sys_log
total 8.0K
drwx------ 2 kingbase kingbase 3.8K Aug 12 11:06 1
drwx------ 2 kingbase kingbase 3.8K Aug 12 11:05 2
II. Starting the cluster and database
1. Start the cluster (all nodes)
[root@node201 ~]# cd /opt/KingbaseHA/
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[ OK ]
Starting Corosync Cluster Engine (corosync): [WARNING]
clean qdisk fence flag start
clean qdisk fence flag success
Starting Qdisk Fenced daemon (qdisk-fenced): [ OK ]
Starting Corosync Qdevice daemon (corosync-qdevice): [ OK ]
Waiting for quorate:.....................................................................................................................................[ OK ]
Starting Pacemaker Cluster Manager[ OK ]
2. Check resource status
# Check cluster service status
[root@node201 KingbaseHA]# ./cluster_manager.sh status
corosync (pid 2937) is running...
pacemakerd (pid 3277) is running...
corosync-qdevice (pid 2955) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh --status_pacemaker
pacemakerd (pid 11521) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh --status_corosync
corosync (pid 9924) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh --status_qdevice
corosync-qdevice (pid 11499) is running...
[root@node201 KingbaseHA]# ./cluster_manager.sh --status_qdisk_fenced
qdisk-fenced is stopped
# As shown below, the dlm and gfs2 resources are not yet loaded:
[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Aug 9 18:05:20 2024
  * Last change: Fri Aug 9 18:01:06 2024 by hacluster via crmd on node201
  * 2 nodes configured
  * 0 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:   # no resources loaded
  * No resources
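The separate --status_* invocations above can be wrapped in a single loop; a sketch (the script path is this deployment's, and the loop simply prints a note on hosts where KingbaseHA is not installed):

```shell
# Run every component status check in one pass.
MGR=/opt/KingbaseHA/cluster_manager.sh
STATUS_FLAGS="--status_corosync --status_pacemaker --status_qdevice --status_qdisk_fenced"
for flag in $STATUS_FLAGS; do
    if [ -x "$MGR" ]; then
        "$MGR" "$flag"                                # real check on a cluster node
    else
        echo "skip (KingbaseHA not installed here): $MGR $flag"
    fi
done
```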
3. Start the dlm and gfs2 resources
[root@node201 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
3e934629-a2b8-4b7d-a153-ded2dbec7a28
config dlm and gfs2 resource success
As shown below, the dlm and gfs2 resources are started:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:31:41 2024
  * Last change: Mon Aug 12 15:31:31 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 4 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:   # dlm and gfs2 resources loaded and started
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
4. Start the database resource
1) Start the database resource
[root@node201 KingbaseHA]# ./cluster_manager.sh --config_rac_resource
crm configure DB resource start
crm configure DB resource end
2) Check the cluster resource configuration
The database resource DB is shown below:
[root@node201 ~]# crm config show
node 1: node201
node 2: node202
primitive DB ocf:kingbase:kingbase \
    params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/home/kingbase/log/kingbase1.log" \
    op start interval=0 timeout=120 \
    op stop interval=0 timeout=120 \
    op monitor interval=9s timeout=30 on-fail=stop \
    meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
    params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" allow_stonith_disabled=true \
    op start interval=0 \
    op stop interval=0 \
    op monitor interval=60 timeout=60
primitive gfs2 Filesystem \
    params device="-U 3e934629-a2b8-4b7d-a153-ded2dbec7a28" directory="/sharedata/data_gfs2" fstype=gfs2 \
    op start interval=0 timeout=60 \
    op stop interval=0 timeout=60 \
    op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
    meta failure-timeout=5min
clone clone-DB DB \
    meta target-role=Started
clone clone-dlm dlm \
    meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
    meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=2.0.3-4b1f869f0f \
    cluster-infrastructure=corosync \
    cluster-name=krac \
    no-quorum-policy=freeze \
    stonith-enabled=false
3) Check the database service status
As shown below, in the cluster resource status the DB resource has been started:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:32:50 2024
  * Last change: Mon Aug 12 15:32:43 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:   # the DB database resource is loaded and started
    * Started: [ node201 node202 ]
4) Database service status
[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp 0 0 0.0.0.0:55321 0.0.0.0:* LISTEN 29041/kingbase
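Beyond netstat, the listener on each node can be probed with sys_isready (the binary path comes from the DB resource definition above; the -h/-p flags are an assumption based on its pg_isready lineage, and the guard lets the snippet degrade on a host without KingbaseES):

```shell
# Probe the database listener on both cluster nodes.
ISREADY=/opt/Kingbase/ES/V8/Server/bin/sys_isready
PORT=55321
for host in 192.168.1.201 192.168.1.202; do
    if [ -x "$ISREADY" ]; then
        "$ISREADY" -h "$host" -p "$PORT"              # assumed pg_isready-style flags
    else
        echo "skip (sys_isready not installed): $host:$PORT"
    fi
done
```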
5) Real-time cluster status monitoring
[root@node201 ~]# crm_mon -1
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 11:20:47 2024
  * Last change: Mon Aug 12 10:55:34 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Active Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node201 node202 ]
5. Stop the cluster
[root@node201 KingbaseHA]# ./cluster_manager.sh stop
Signaling Pacemaker Cluster Manager to terminate[ OK ]
Waiting for cluster services to unload.......[ OK ]
Signaling Qdisk Fenced daemon (qdisk-fenced) to terminate: [ OK ]
Waiting for qdisk-fenced services to unload:..[ OK ]
Signaling Corosync Qdevice daemon (corosync-qdevice) to terminate: [ OK ]
Waiting for corosync-qdevice services to unload:.[ OK ]
Signaling Corosync Cluster Engine (corosync) to terminate: [ OK ]
Waiting for corosync services to unload:..[ OK ]
# Check resource status from the other node:
[root@node202 KingbaseHA]# crm resource status
fence_qdisk_0 (stonith:fence_qdisk): Started
fence_qdisk_1 (stonith:fence_qdisk): Started
Clone Set: clone-dlm [dlm]
    Started: [ node201 node202 ]
Clone Set: clone-gfs2 [gfs2]
    Started: [ node201 node202 ]
Clone Set: clone-DB [DB]
    Stopped (disabled): [ node201 node202 ]
III. Automatic resource recovery
KingbaseES RAC manages the database as a cluster resource: after the database service is stopped with sys_ctl stop or killed, pacemaker automatically restarts the resource:
1. Stop the database service
[kingbase@node201 bin]$ ./sys_ctl stop -D /sharedata/data_gfs2/kingbase/data/
waiting for server to shut down................... done
server stopped
2. Check resource status
As shown below, pacemaker detects that the DB resource is in an abnormal state:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 11:56:05 2024
  * Last change: Mon Aug 12 11:53:25 2024 by root via cibadmin on node202
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * DB (ocf::kingbase:kingbase): Stopping node202
    * DB (ocf::kingbase:kingbase): FAILED node201
Failed Resource Actions:
  * DB_monitor_9000 on node201 'not running' (7): call=35, status='complete', exitreason='', last-rc-change='2024-08-12 11:56:04 +08:00', queued=0ms, exec=0ms
3. The database resource recovers
As shown below, after a while pacemaker has restarted the database resource:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node202 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 13:56:02 2024
  * Last change: Mon Aug 12 11:53:25 2024 by root via cibadmin on node202
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:
    * Started: [ node201 node202 ]
# The database service is running normally
[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp 0 0 0.0.0.0:55321 0.0.0.0:* LISTEN 20963/kingbase
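To watch a recovery like this as it happens, crm_mon can be polled until the clone-DB set reports Started again; a rough sketch (the retry count and interval are arbitrary, and the loop only prints a note on a host without pacemaker tools):

```shell
# Poll cluster status a few times, printing the DB clone set's state each pass.
for i in 1 2 3 4 5; do
    if command -v crm_mon >/dev/null 2>&1; then
        crm_mon -1 | grep -A1 'Clone Set: clone-DB' && break
    else
        echo "poll $i: crm_mon not available on this host"
    fi
    sleep 1
done
```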
IV. Accessing the database
[kingbase@node201 bin]$ ./ksql -U system test -p 55321
Type "help" for help.
prod=# select * from t1 limit 10;
 id | name
----+-------
  1 | usr1
  2 | usr2
  3 | usr3
  4 | usr4
  5 | usr5
  6 | usr6
  7 | usr7
  8 | usr8
  9 | usr9
 10 | usr10
(10 rows)
[kingbase@node202 bin]$ ./ksql -U system test -p 55321
Type "help" for help.
test=# \c prod
prod=# select count(*) from t1;
 count
-------
  1000
(1 row)
V. Appendix
Fault 1: cluster service fails to start
As shown below, cluster service start-up is abnormal:
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[ OK ]
Starting Corosync Cluster Engine (corosync): [WARNING]
clean qdisk fence flag start
Check the cluster configuration:
[root@node201 ~]# cat /opt/KingbaseHA/cluster_manager.conf|grep fence
################# fence #################
enable_fence=1
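One way to flip the flag is a one-line sed edit; demonstrated below on a scratch copy of the file (on a real node, point CONF at /opt/KingbaseHA/cluster_manager.conf instead):

```shell
# Toggle enable_fence from 1 to 0, shown on a temp copy of cluster_manager.conf.
CONF=$(mktemp)
printf '################# fence #################\nenable_fence=1\n' > "$CONF"
sed -i 's/^enable_fence=1$/enable_fence=0/' "$CONF"
grep '^enable_fence=' "$CONF"   # prints: enable_fence=0
```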
After setting enable_fence=0, start the cluster:
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[ OK ]
Starting Corosync Cluster Engine (corosync): [WARNING]
Starting Corosync Qdevice daemon (corosync-qdevice): [ OK ]
Waiting for quorate:...........[ OK ]
Starting Pacemaker Cluster Manager[ OK ]
Fault 2: crm resource start clone-DB fails
1) Start the cluster services
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[ OK ]
Starting Corosync Cluster Engine (corosync): [WARNING]
Starting Corosync Qdevice daemon (corosync-qdevice): [ OK ]
Waiting for quorate:...........[ OK ]
Starting Pacemaker Cluster Manager[ OK ]
2) Check cluster resource status
As shown below, the dlm and gfs2 resources are not loaded:
[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Fri Aug 9 18:05:20 2024
  * Last change: Fri Aug 9 18:01:06 2024 by hacluster via crmd on node201
  * 2 nodes configured
  * 0 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:   # no resources loaded
  * No resources
3) Start the dlm and gfs2 resources
[root@node201 KingbaseHA]# ./cluster_manager.sh --config_gfs2_resource
config dlm and gfs2 resource start
3e934629-a2b8-4b7d-a153-ded2dbec7a28
config dlm and gfs2 resource success
As shown below, the dlm and gfs2 resources are started, but the database resource DB is still missing:
[root@node201 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 15:31:41 2024
  * Last change: Mon Aug 12 15:31:31 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 4 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:   # dlm and gfs2 resources loaded and started
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
4) Configure the database DB resource
[root@node201 KingbaseHA]# crm configure primitive DB ocf:kingbase:kingbase \
> params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" \
> ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" \
> sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" \
> kb_data="/sharedata/data_gfs2/kingbase/data" \
> kb_dba="kingbase" kb_host="0.0.0.0" \
> kb_user="system" \
> kb_port="55321" \
> kb_db="template1" \
> logfile="/home/kingbase/log/kingbase1.log" \
> op start interval="0" timeout="120" \
> op stop interval="0" timeout="120" \
> op monitor interval="9s" timeout="30" on-fail=stop \
> meta failure-timeout=5min target-role=Stopped
# Configure as a clone resource and set the resource start order
[root@node201 KingbaseHA]# crm configure clone clone-DB DB
[root@node201 KingbaseHA]# crm configure order cluster-order2 clone-dlm clone-gfs2 clone-DB
5) Check cluster resources
As shown below, the DB resource has been added to the cluster configuration:
[root@node201 KingbaseHA]# crm config show
node 1: node201
node 2: node202
primitive DB ocf:kingbase:kingbase \
    params sys_ctl="/opt/Kingbase/ES/V8/Server/bin/sys_ctl" ksql="/opt/Kingbase/ES/V8/Server/bin/ksql" sys_isready="/opt/Kingbase/ES/V8/Server/bin/sys_isready" kb_data="/sharedata/data_gfs2/kingbase/data" kb_dba=kingbase kb_host=0.0.0.0 kb_user=system kb_port=55321 kb_db=template1 logfile="/home/kingbase/log/kingbase1.log" \
    op start interval=0 timeout=120 \
    op stop interval=0 timeout=120 \
    op monitor interval=9s timeout=30 on-fail=stop \
    meta failure-timeout=5min
primitive dlm ocf:pacemaker:controld \
    params daemon="/opt/KingbaseHA/dlm-dlm/sbin/dlm_controld" dlm_tool="/opt/KingbaseHA/dlm-dlm/sbin/dlm_tool" args="-s 0 -f 0" allow_stonith_disabled=true \
    op start interval=0 \
    op stop interval=0 \
    op monitor interval=60 timeout=60
primitive gfs2 Filesystem \
    params device="-U 3e934629-a2b8-4b7d-a153-ded2dbec7a28" directory="/sharedata/data_gfs2" fstype=gfs2 \
    op start interval=0 timeout=60 \
    op stop interval=0 timeout=60 \
    op monitor interval=30s timeout=60 OCF_CHECK_LEVEL=20 \
    meta failure-timeout=5min
clone clone-DB DB \
    meta target-role=Started
clone clone-dlm dlm \
    meta interleave=true target-role=Started
clone clone-gfs2 gfs2 \
    meta interleave=true target-role=Started
colocation cluster-colo1 inf: clone-gfs2 clone-dlm
order cluster-order1 clone-dlm clone-gfs2
order cluster-order2 clone-dlm clone-gfs2
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=2.0.3-4b1f869f0f \
    cluster-infrastructure=corosync \
    cluster-name=krac \
    no-quorum-policy=freeze \
    stonith-enabled=false
[root@node201 KingbaseHA]# crm config verify
[root@node201 KingbaseHA]# crm config commit
6) Start the database resource service
[root@node201 KingbaseHA]# crm resource start clone-DB
[root@node201 KingbaseHA]# crm resource status clone-DB
resource clone-DB is running on: node201
resource clone-DB is running on: node202
# The database service is started
[root@node201 KingbaseHA]# netstat -antlp |grep 553
tcp 0 0 0.0.0.0:55321 0.0.0.0:* LISTEN 3240/kingbase
Check the database service status:
[root@node202 KingbaseHA]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: node201 (version 2.0.3-4b1f869f0f) - partition with quorum
  * Last updated: Mon Aug 12 14:57:06 2024
  * Last change: Mon Aug 12 14:56:00 2024 by root via cibadmin on node201
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ node201 node202 ]
Full List of Resources:
  * Clone Set: clone-dlm [dlm]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-gfs2 [gfs2]:
    * Started: [ node201 node202 ]
  * Clone Set: clone-DB [DB]:   # the database resource service is loaded and started
    * Started: [ node201 node202 ]
VI. Cleaning up and uninstalling the cluster
1. Clean up the cluster environment (all nodes)
[root@node201 KingbaseHA]# ./cluster_manager.sh --clean_all
clean all start
Pacemaker Cluster Manager is already stopped[ OK ]
clean env variable start
clean env variable success
clean host start
clean host success
remove pacemaker daemon user start
remove pacemaker daemon user success
clean all success
# As shown below, the cluster configuration has been cleaned up:
[root@node201 KingbaseHA]# crm config show
ERROR: running cibadmin -Ql: Connection to the CIB manager failed: Transport endpoint is not connected
Init failed, could not perform requested operations
ERROR: configure: Missing requirements
[root@node201 KingbaseHA]# ./cluster_manager.sh start
Waiting for node failover handling:[ OK ]
./cluster_manager.sh: line 1143: /etc/init.d/corosync: No such file or directory
2. Uninstall the cluster (all nodes)
As shown below, uninstalling the cluster removes the /opt/KingbaseHA directory:
[root@node202 KingbaseHA]# ./cluster_manager.sh --uninstall
uninstall start
./cluster_manager.sh: line 1276: /etc/init.d/pacemaker: No such file or directory
./cluster_manager.sh: line 1335: /etc/init.d/corosync-qdevice: No such file or directory
./cluster_manager.sh: line 1148: /etc/init.d/corosync: No such file or directory
clean env variable start
clean env variable success
clean host start
clean host success
remove pacemaker daemon user start
userdel: user 'hacluster' does not exist
groupdel: group 'haclient' does not exist
remove pacemaker daemon user success
uninstall success
# The /opt/KingbaseHA directory has been removed
[root@node202 KingbaseHA]# ls -lh /opt/KingbaseHA/
ls: cannot access /opt/KingbaseHA/: No such file or directory