Environment:
OS: CentOS 4
ES: 6.8.5
Source: a 3-node cluster
Target: a single node
1. Install ES on the target
The version installed on the target must match the source, and password authentication is enabled on the target.
The installation steps are covered here:
https://www.cnblogs.com/hxlasky/p/13361631.html
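Once the target node is up, it is worth confirming that it really runs the same version as the source (host, port and the elastic password here are the ones used in the later steps):
curl -u elastic:elastic -X GET 'http://192.168.1.100:19200/'
The version.number field in the response should report 6.8.5, matching the source.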
2. Install the same plugins on the target as on the source
Install the plugin (the IK analyzer in this case):
su - elasticsearch
[elasticsearch@elasticsearch-backup001 ~]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@elasticsearch-backup001 bin]$ ./elasticsearch-plugin install file:///soft/es_fenci/elasticsearch-analysis-ik-6.8.5.zip
-> Downloading file:///soft/es_fenci/elasticsearch-analysis-ik-6.8.5.zip
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.net.SocketPermission * connect,resolve
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
-> Installed analysis-ik
The node needs to be restarted after installing the plugin:
[elasticsearch@elasticsearch-backup001 bin]$ kill 7885
[elasticsearch@elasticsearch-backup001 bin]$ cd /usr/local/services/elasticsearch/bin
[elasticsearch@elasticsearch-backup001 bin]$ ./elasticsearch -d
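To confirm that the target now carries the same plugins as the source, the installed plugins can be listed on both sides and compared (a suggested check, not part of the original steps):
[elasticsearch@elasticsearch-backup001 bin]$ ./elasticsearch-plugin list
curl -u elastic:elastic -X GET 'http://192.168.1.100:19200/_cat/plugins?v'
Running the same two commands on a source node should produce the same plugin list.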
3. Confirm the snapshot repository paths on the source and the target
Source: path.repo: /home/middle/esbak/backup
Target: path.repo: /nasdata/middle/elasticsearch/esbak/backup
Note: the source and target repository paths do not have to be the same.
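path.repo is a static setting in elasticsearch.yml, so on the target it must be in place before the node starts (or the node restarted after adding it). A minimal sketch of the target's setting (the config file path is assumed from the install layout above):
# /usr/local/services/elasticsearch/config/elasticsearch.yml (path assumed)
path.repo: /nasdata/middle/elasticsearch/esbak/backup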
4. Copy the compressed backup file to the target machine
[elasticsearch@tarfile]$ cd /home/middle/esbak/tarfile
[elasticsearch@tarfile]$ ls -al
total 355565029
drwxr-xr-x 2 elasticsearch elasticsearch 4096 Mar 17 05:27 .
drwxrwxrwt 4 root root 4096 Jan 6 2022 ..
-rw-r--r-- 1 root root 51600734611 Mar 11 05:26 esbak_20250311.tar.gz
-rw-r--r-- 1 root root 51768466359 Mar 12 05:26 esbak_20250312.tar.gz
-rw-r--r-- 1 root root 51887303301 Mar 13 05:26 esbak_20250313.tar.gz
-rw-r--r-- 1 root root 52025550514 Mar 14 05:27 esbak_20250314.tar.gz
-rw-r--r-- 1 root root 52208586000 Mar 15 05:27 esbak_20250315.tar.gz
-rw-r--r-- 1 root root 52237266593 Mar 16 05:27 esbak_20250316.tar.gz
-rw-r--r-- 1 root root 52370671775 Mar 17 05:27 esbak_20250317.tar.gz
[elasticsearch@tarfile]$ scp esbak_20250317.tar.gz scpuser@192.168.1.100:/hxl-scp-databackup/
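Each archive is around 50 GB, so it can be worth verifying the copy before extracting it; a simple check with sha256sum (compute on the source, then compare on the target side):
sha256sum esbak_20250317.tar.gz
Run the same command against the copy under /hxl-scp-databackup/ on 192.168.1.100 and make sure the hashes match.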
5. Fix the ownership of the transferred file on the target machine
[root@data-scp]# cd /temp01/data-scp
[root@data-scp]# chown elasticsearch:elasticsearch ./esbak_20250317.tar.gz
6. Extract into the backup directory
Check the directory layout inside the archive:
[elasticsearch@elasticsearch-backup001 data-scp]$ pwd
/temp01/data-scp
-rw-rw-r-- elasticsearch/elasticsearch 8664 2025-03-17 01:00 ./backup/indices/6_NCHiUURoORo_fN5BomoA/4/__hNn1StymSJyWNLRsvKu6SQ
-rw-rw-r-- elasticsearch/elasticsearch 331 2025-03-17 01:00 ./backup/indices/6_NCHiUURoORo_fN5BomoA/4/__iF9iqzP4TDiQt46Cm2MTOA
-rw-rw-r-- elasticsearch/elasticsearch 826208 2025-03-17 01:00 ./backup/indices/6_NCHiUURoORo_fN5BomoA/4/__iFFiaNbATkKKi-ALBhZdxA
The archive already contains a backup directory, so we extract it directly into /nasdata/middle/elasticsearch/esbak.
The target's ES backup directory is therefore set as follows:
path.repo: /nasdata/middle/elasticsearch/esbak/backup
[root@elasticsearch]# su - elasticsearch
[elasticsearch@elasticsearch]$ cd /temp01/data-scp
[elasticsearch@elasticsearch]$ time tar --use-compress-program=pigz -xvf esbak_20250317.tar.gz -C /nasdata/middle/elasticsearch/esbak
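The extracted repository has to be readable and writable by the user that runs ES; if the extraction had been done as root instead of elasticsearch, the ownership would need fixing first (a precautionary check):
chown -R elasticsearch:elasticsearch /nasdata/middle/elasticsearch/esbak/backup
ls -al /nasdata/middle/elasticsearch/esbak/backup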
7. Register the snapshot repository on the new machine
Register a repository pointing at the backup directory:
curl -u elastic:elastic -H "Content-Type: application/json" -XPUT http://192.168.1.100:19200/_snapshot/esbackup -d'{
"type": "fs",
"settings": {
"location": "/nasdata/middle/elasticsearch/esbak/backup"
}
}'
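Once the repository is registered, ES can confirm that the node can actually access the location with the repository verify API:
curl -u elastic:elastic -XPOST 'http://192.168.1.100:19200/_snapshot/esbackup/_verify?pretty'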
Check the snapshots in the repository:
[elasticsearch@elasticsearch-hangzhou-db-backup001 esbak]$ curl -u elastic:elastic -X GET "http://172.16.119.201:19200/_snapshot/esbackup/_all?pretty"
{"snapshots" : [{"snapshot" : "snapshot_20250317","uuid" : "z0FaDPjDTAasm0h7reQhsg","version_id" : 6080599,"version" : "6.8.5","indices" : [".monitoring-kibana-6-2025.03.15",".monitoring-kibana-6-2025.03.11",".monitoring-es-6-2025.03.10",
".monitoring-kibana-6-2025.03.14","hxl_inocue_examine",".monitoring-es-6-2025.03.16",".monitoring-es-6-2025.03.12",".kibana_task_manager",".monitoring-kibana-6-2025.03.10",
".monitoring-es-6-2025.03.14",".kibana_1",".monitoring-kibana-6-2025.03.16",".monitoring-es-6-2025.03.15",".monitoring-es-6-2025.03.11",".monitoring-es-6-2025.03.13",
".security-6",".monitoring-kibana-6-2025.03.13",
".monitoring-kibana-6-2025.03.12"],"include_global_state" : true,"state" : "SUCCESS","start_time" : "2025-03-16T17:00:19.226Z","start_time_in_millis" : 1742144419226,"end_time" : "2025-03-16T17:19:48.111Z","end_time_in_millis" : 1742145588111,"duration_in_millis" : 1168885,"failures" : [ ],"shards" : {"total" : 18,"failed" : 0,"successful" : 18}}]
}
This lists all the indices contained in the snapshot, including the .kibana and .security system indices.
8. Restore
time curl -u elastic:elastic -XPOST http://192.168.1.100:19200/_snapshot/esbackup/snapshot_20250317/_restore?wait_for_completion=true
[elasticsearch@db-backup001 esbak]$ curl -u elastic:elastic -XPOST http://192.168.1.100:19200/_snapshot/esbackup/snapshot_20250317/_restore?wait_for_completion=true
{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[esbackup:snapshot_20250317/z0FaDPjDTAasm0h7reQhsg]
cannot restore index [.security-6] because an open index with same name already exists in the cluster.
Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"}],"type":"snapshot_restore_exception","reason":"[esbackup:snapshot_20250317/z0FaDPjDTAasm0h7reQhsg]
cannot restore index [.security-6] because an open index with same name already exists in the cluster.
Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"},"status":500}
The error occurs because a .security-6 index already exists in the current environment.
Workaround (close the existing index first):
curl -u elastic:elastic -XPOST "http://192.168.1.100:19200/.security-6/_close"
Run the restore again:
time curl -u elastic:elastic -XPOST http://192.168.1.100:19200/_snapshot/esbackup/snapshot_20250317/_restore?wait_for_completion=true
The wait_for_completion option makes the call return only after the restore has finished.
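As an alternative to closing .security-6, the restore request can also name just the business indices (or rename them), so the existing system indices are never touched; a sketch restoring only hxl_inocue_examine (index name taken from the snapshot listing above):
curl -u elastic:elastic -H "Content-Type: application/json" -XPOST 'http://192.168.1.100:19200/_snapshot/esbackup/snapshot_20250317/_restore?wait_for_completion=true' -d'{
"indices": "hxl_inocue_examine",
"include_global_state": false
}'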
If the matching analysis plugin (the IK analyzer here) has not been installed, errors like the following appear in the target's log:
[2025-03-17T13:48:11,983][WARN ][o.e.c.r.a.AllocationService] [ytB7yEC] failing shard [failed shard, shard [threegene_content_base][0], node[ytB7yECbS_SwTIT6kg2ujw], [P], recovery_source[snapshot recovery [ZU2ZUtojSmiTWrIhLztlQg] from esbackup:snapshot_20250317/z0FaDPjDTAasm0h7reQhsg], s[INITIALIZING], a[id=6-yNz-MAQaGKbTvrdQzQXw], unassigned_info[[reason=NEW_INDEX_RESTORED], at[2025-03-17T05:47:23.002Z], delayed=false, details[restore_source[esbackup/snapshot_20250317]], allocation_status[deciders_throttled]], message [failed to update mapping for index], failure [MapperParsingException[Failed to parse mapping [content]: analyzer [ik_max_word] not found for field [content]]; nested: MapperParsingException[analyzer [ik_max_word] not found for field [content]]; ], markAsStale [true]]
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping [content]: analyzer [ik_max_word] not found for field [content]
If the data volume is large, the restore takes some time.
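While a restore is still running (for example when wait_for_completion is not used), shard-level progress can be followed with the recovery cat API:
curl -u elastic:elastic -X GET 'http://192.168.1.100:19200/_cat/recovery?v&active_only=true'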
9. Check the restore
Note that since .security-6 has been restored, you now have to log in with the source cluster's passwords; to make the restore easier to work with, it can be done on an environment without authentication enabled.
Index status:
curl -u elastic:<source cluster password> -X GET 'http://192.168.1.100:19200/_cat/indices?v'
Shard status:
curl -u elastic:<source cluster password> -X GET "192.168.1.100:19200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&v"
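The overall status the next note refers to can also be read directly from the cluster health API:
curl -u elastic:<source cluster password> -X GET 'http://192.168.1.100:19200/_cluster/health?pretty'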
Since we restored from a cluster to a single-node environment, any index that had replicas configured in the cluster will show up as yellow on the single node.
The replica count can be set to 0:
curl -u elastic:<source cluster password> -H "Content-Type: application/json" -XPUT 'http://192.168.1.100:19200/<index_name>/_settings' -d '{
"number_of_replicas" : 0
}'
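If many indices need this, the same setting can be applied to every index in a single call instead of index by index (note this also touches the restored system indices, which is normally fine on a single-node copy):
curl -u elastic:<source cluster password> -H "Content-Type: application/json" -XPUT 'http://192.168.1.100:19200/_all/_settings' -d '{
"index" : { "number_of_replicas" : 0 }
}'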
The final shard status looks like this:
[elasticsearch@elasticsearch-backup001 data-scp]$ curl -u elastic:<source cluster password> -X GET "192.168.1.100:19200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&v"
.kibana_1 0 p STARTED
hxl_inocue_examine 1 p STARTED
hxl_inocue_examine 3 p STARTED
hxl_inocue_examine 2 p STARTED
hxl_inocue_examine 4 p STARTED
hxl_inocue_examine 0 p STARTED
.security-6 0 p STARTED
.monitoring-kibana-6-2025.03.15 0 p STARTED
.kibana_task_manager 0 p STARTED
threegene_content_heatcontent 0 p STARTED
.monitoring-kibana-6-2025.03.14 0 p STARTED
.monitoring-es-6-2025.03.16 0 p STARTED
.monitoring-es-6-2025.03.14 0 p STARTED
.monitoring-kibana-6-2025.03.13 0 p STARTED
.monitoring-es-6-2025.03.10 0 p STARTED
.monitoring-kibana-6-2025.03.16 0 p STARTED
.monitoring-es-6-2025.03.15 0 p STARTED
.monitoring-kibana-6-2025.03.10 0 p STARTED
.monitoring-kibana-6-2025.03.12 0 p STARTED
.monitoring-kibana-6-2025.03.11 0 p STARTED
.monitoring-es-6-2025.03.11 0 p STARTED
.monitoring-es-6-2025.03.12 0 p STARTED
.monitoring-es-6-2025.03.13 0 p STARTED
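As a final sanity check (not part of the original steps), the document counts of the business indices can be compared between source and target; they should match once the restore is complete:
curl -u elastic:<source cluster password> -X GET 'http://192.168.1.100:19200/_cat/count/hxl_inocue_examine?v'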