PVE RAID Failure Simulation and Recovery Case Study



October 10, 2024, 14:28

https://www.bilibili.com/read/cv32149324/

1. Healthy baseline: inspect the disks and the RAID (ZFS mirror) status

root@pve:/var/lib/vz/dump# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 238.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0     1G  0 part
└─sda3   8:3    0 237.5G  0 part
sdb      8:16   0 238.5G  0 disk
├─sdb1   8:17   0  1007K  0 part
├─sdb2   8:18   0     1G  0 part
└─sdb3   8:19   0 237.5G  0 part
sdc      8:32   1  57.6G  0 disk
├─sdc1   8:33   1  57.6G  0 part
└─sdc2   8:34   1    32M  0 part
root@pve:/var/lib/vz/dump# zpool status
  pool: rpool
 state: ONLINE
config:

        NAME                                        STATE     READ WRITE CKSUM
        rpool                                       ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3  ONLINE       0     0     0
            ata-HS-SSD-C260_256G_30148251328-part3  ONLINE       0     0     0

errors: No known data errors
root@pve:/var/lib/vz/dump# cd /dev/disk/by-id/
root@pve:/dev/disk/by-id# ls -l
total 0
lrwxrwxrwx 1 root root  9 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Oct 10 12:20 ata-HS-SSD-C260_256G_30148251328 -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30148251328-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30148251328-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30148251328-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  9 Oct 10 12:20 wwn-0x5000000123456789 -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456789-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456789-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456789-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 10 12:20 wwn-0x5000000123456819 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part3 -> ../../sda3
root@pve:/dev/disk/by-id#
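With both mirror members ONLINE, the `state:` line is the easiest thing to script against for routine monitoring. A minimal health-check sketch, demonstrated on captured `zpool status` text so it runs without a live pool (on a real host you would pipe the actual command output in instead):

```shell
# Extract the "state:" line from zpool status output and flag anything
# that is not ONLINE. `sample` is captured text, not a live query.
sample='  pool: rpool
 state: ONLINE
errors: No known data errors'
state=$(printf '%s\n' "$sample" | awk '$1 == "state:" {print $2}')
if [ "$state" = "ONLINE" ]; then
    echo "pool healthy"
else
    echo "pool needs attention: $state"
fi
```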

 

2. Simulate a failure: pull one disk while the machine is running, reboot, then check the disk information and pool status again

root@pve:/dev/disk/by-id# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
config:

        NAME                                        STATE     READ WRITE CKSUM
        rpool                                       DEGRADED     0     0     0
          mirror-0                                  DEGRADED     0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3  ONLINE       0     0     0
            ata-HS-SSD-C260_256G_30148251328-part3  UNAVAIL      3   160     0

errors: No known data errors
root@pve:/dev/disk/by-id# ls -l
total 0
lrwxrwxrwx 1 root root  9 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 ata-HS-SSD-C260_256G_30147651612-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  9 Oct 10 12:20 wwn-0x5000000123456819 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 12:20 wwn-0x5000000123456819-part3 -> ../../sda3
root@pve:/dev/disk/by-id# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 238.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0     1G  0 part
└─sda3   8:3    0 237.5G  0 part
sdc      8:32   1  57.6G  0 disk
├─sdc1   8:33   1  57.6G  0 part
└─sdc2   8:34   1    32M  0 part
root@pve:/dev/disk/by-id#
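Note that the pulled disk's `/dev/disk/by-id` symlinks are gone after the reboot. A quick way to spot which expected mirror member disappeared is to check each name against the directory listing; this sketch runs against a captured listing (a hypothetical sample) rather than the live directory:

```shell
# links: captured output of `ls /dev/disk/by-id` after the disk was pulled;
# on the host, substitute links=$(ls /dev/disk/by-id).
links='ata-HS-SSD-C260_256G_30147651612-part3'
out=$(for dev in ata-HS-SSD-C260_256G_30147651612-part3 \
                 ata-HS-SSD-C260_256G_30148251328-part3; do
    case "$links" in
        *"$dev"*) echo "$dev present" ;;
        *)        echo "$dev MISSING" ;;
    esac
done)
printf '%s\n' "$out"
```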

 

3. Shut down, and with the failed disk removed, insert a new disk into its slot. Power on and check the disk information: the newly added blank disk is detected as /dev/sdb.

Note: if the new disk already carries a system (stale, invalid data), you can attach it to a healthy PVE server first, refresh the disk information in the web UI, and wipe/initialize the newly detected disk there.

root@pve:/dev/disk/by-id# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 238.5G  0 disk
├─sda1     8:1    0  1007K  0 part
├─sda2     8:2    0     1G  0 part
└─sda3     8:3    0 237.5G  0 part
sdb        8:16   0 238.5G  0 disk
sdc        8:32   1  57.6G  0 disk
├─sdc1     8:33   1  57.6G  0 part
└─sdc2     8:34   1    32M  0 part
zd0      230:0    0     4M  0 disk
zd16     230:16   0     4M  0 disk
zd32     230:32   0    60G  0 disk
├─zd32p1 230:33   0   100M  0 part
├─zd32p2 230:34   0    16M  0 part
└─zd32p3 230:35   0  39.9G  0 part
zd48     230:48   0   100G  0 disk
├─zd48p1 230:49   0   100M  0 part
├─zd48p2 230:50   0    16M  0 part
└─zd48p3 230:51   0  19.2G  0 part
zd64     230:64   0   100G  0 disk
├─zd64p1 230:65   0   100M  0 part
├─zd64p2 230:66   0    16M  0 part
└─zd64p3 230:67   0  19.2G  0 part
zd80     230:80   0     1M  0 disk
root@pve:/dev/disk/by-id#


root@pve:/dev/disk/by-id# ls -l
total 0
lrwxrwxrwx 1 root root  9 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651609 -> ../../sdb
lrwxrwxrwx 1 root root  9 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  9 Oct 10 13:26 wwn-0x5000000123456816 -> ../../sdb
lrwxrwxrwx 1 root root  9 Oct 10 13:26 wwn-0x5000000123456819 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part3 -> ../../sda3
root@pve:/dev/disk/by-id# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
config:

        NAME                                        STATE     READ WRITE CKSUM
        rpool                                       DEGRADED     0     0     0
          mirror-0                                  DEGRADED     0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3  ONLINE       0     0     0
            17753432011831091831                    UNAVAIL      0     0     0  was /dev/disk/by-id/ata-HS-SSD-C260_256G_30148251328-part3

errors: No known data errors
root@pve:/dev/disk/by-id#
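The number 17753432011831091831 is the GUID placeholder ZFS keeps for the vanished device, and it is exactly the first argument `zpool replace` will need in step 5. It can be pulled out of the status output mechanically; a sketch against the captured lines (on the host, pipe live `zpool status` output in instead):

```shell
# Grab the name of the UNAVAIL vdev (here a numeric GUID placeholder)
# from captured status lines, rather than reading it off by eye.
sample='            ata-HS-SSD-C260_256G_30147651612-part3  ONLINE   0 0 0
            17753432011831091831                    UNAVAIL  0 0 0  was /dev/disk/by-id/ata-HS-SSD-C260_256G_30148251328-part3'
guid=$(printf '%s\n' "$sample" | awk '$2 == "UNAVAIL" {print $1}')
echo "missing vdev: $guid"
```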

 

 

 

4. Copy the partition table from the healthy disk to the new disk.

/dev/sda is the device path of the healthy disk; /dev/sdb is the device path of the new disk.

After the partition table is replicated, the new disk has the same layout as the healthy disk, including the partition sizes.

root@pve:/dev/disk/by-id# sgdisk /dev/sda -R /dev/sdb
The operation has completed successfully.
root@pve:/dev/disk/by-id#

root@pve:/dev/disk/by-id# ls -l
total 0
lrwxrwxrwx 1 root root  9 Oct 10 13:28 ata-HS-SSD-C260_256G_30147651609 -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 10 13:28 ata-HS-SSD-C260_256G_30147651609-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 10 13:28 ata-HS-SSD-C260_256G_30147651609-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 10 13:28 ata-HS-SSD-C260_256G_30147651609-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 13:26 ata-HS-SSD-C260_256G_30147651612-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 usb-Kingston_DataTraveler_3.0_E0D55EA574C1F510C8320163-0:0-part2 -> ../../sdc2
lrwxrwxrwx 1 root root  9 Oct 10 13:28 wwn-0x5000000123456816 -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 10 13:28 wwn-0x5000000123456816-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 10 13:28 wwn-0x5000000123456816-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 10 13:28 wwn-0x5000000123456816-part3 -> ../../sdb3
lrwxrwxrwx 1 root root  9 Oct 10 13:26 wwn-0x5000000123456819 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 10 13:26 wwn-0x5000000123456819-part3 -> ../../sda3
root@pve:/dev/disk/by-id#

 

Randomize the new disk's disk and partition GUIDs (`sgdisk -G`), since `-R` replicates the source disk's GUIDs verbatim and duplicates would clash:

root@pve:/dev/disk/by-id# sgdisk -G /dev/sdb
The operation has completed successfully.
root@pve:/dev/disk/by-id#
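In `sgdisk`'s replicate syntax the SOURCE disk is the positional argument and the TARGET follows `-R`, which is easy to swap by accident (and swapping it would clone the blank disk over the good one). A hypothetical wrapper that spells the order out once, shown here in dry-run mode (`DRYRUN=echo` prints the commands instead of running them; drop it on a real host):

```shell
# clone_table SRC DST: copy SRC's partition table onto DST, then
# randomize DST's GUIDs. Dry-run by default via $DRYRUN.
clone_table() {
    src=$1; dst=$2
    $DRYRUN sgdisk "$src" -R "$dst"   # source first, target after -R
    $DRYRUN sgdisk -G "$dst"          # new GUIDs for the copy
}
DRYRUN=echo
clone_table /dev/sda /dev/sdb
```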

 

 

5. Resilver the data from the healthy disk onto the new partitions, replacing the old failed device name with the new disk's name.

17753432011831091831 is the placeholder name of the failed device.

ata-HS-SSD-C260_256G_30147651609-part3 is the new disk's partition name (after the partition table has been copied, it can be looked up under /dev/disk/by-id as the sdb partition corresponding to the healthy disk's layout).

root@pve:/dev/disk/by-id# zpool replace -f rpool 17753432011831091831 ata-HS-SSD-C260_256G_30147651609-part3
root@pve:/dev/disk/by-id# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Oct 10 13:29:24 2024
        49.7G scanned at 5.52G/s, 513M issued at 57.0M/s, 49.7G total
        528M resilvered, 1.01% done, 00:14:44 to go
config:

        NAME                                          STATE     READ WRITE CKSUM
        rpool                                         DEGRADED     0     0     0
          mirror-0                                    DEGRADED     0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3    ONLINE       0     0     0
            replacing-1                               DEGRADED     0     0     0
              17753432011831091831                    UNAVAIL      0     0     0  was /dev/disk/by-id/ata-HS-SSD-C260_256G_30148251328-part3
              ata-HS-SSD-C260_256G_30147651609-part3  ONLINE       0     0     0  (resilvering)

errors: No known data errors
root@pve:/dev/disk/by-id#

 

While the data is being copied, the resilver progress can be followed with zpool status:

root@pve:/dev/disk/by-id# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Oct 10 13:29:24 2024
        49.7G scanned at 117M/s, 45.7G issued at 107M/s, 49.7G total
        46.1G resilvered, 91.91% done, 00:00:38 to go
config:

        NAME                                          STATE     READ WRITE CKSUM
        rpool                                         DEGRADED     0     0     0
          mirror-0                                    DEGRADED     0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3    ONLINE       0     0     0
            replacing-1                               DEGRADED     0     0     0
              17753432011831091831                    UNAVAIL      0     0     0  was /dev/disk/by-id/ata-HS-SSD-C260_256G_30148251328-part3
              ata-HS-SSD-C260_256G_30147651609-part3  ONLINE       0     0     0  (resilvering)

errors: No known data errors
root@pve:/dev/disk/by-id# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 50.1G in 00:08:37 with 0 errors on Thu Oct 10 13:38:01 2024
config:

        NAME                                        STATE     READ WRITE CKSUM
        rpool                                       ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            ata-HS-SSD-C260_256G_30147651612-part3  ONLINE       0     0     0
            ata-HS-SSD-C260_256G_30147651609-part3  ONLINE       0     0     0

errors: No known data errors
root@pve:/dev/disk/by-id#
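Instead of eyeballing repeated `zpool status` runs, the completion percentage can be pulled out of the scan line directly. A sketch against the captured line (a live host would pipe real `zpool status` output in, e.g. inside a `watch` or a sleep loop):

```shell
# Extract "NN.NN% done" from the resilver scan line; `sample` is captured
# text from the run above, not a live query.
sample='        46.1G resilvered, 91.91% done, 00:00:38 to go'
pct=$(printf '%s\n' "$sample" | grep -o '[0-9.]*% done' | cut -d'%' -f1)
echo "resilver ${pct}% complete"
```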

 

 

 

6. Rebuild the EFI boot partition on the new disk so that both disks remain bootable. (The relevant device can also be found in the web UI's disk information, which shows which disk paths carry the ZFS and EFI partitions.)

ata-HS-SSD-C260_256G_30147651609-part2 is the new disk's EFI partition (again found under /dev/disk/by-id as the sdb partition matching the healthy disk's layout after the partition table copy).

root@pve:/dev/disk/by-id# proxmox-boot-tool format ata-HS-SSD-C260_256G_30147651609-part2
UUID="" SIZE="1073741824" FSTYPE="" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdb" MOUNTPOINT=""
Formatting 'ata-HS-SSD-C260_256G_30147651609-part2' as vfat..
mkfs.fat 4.2 (2021-01-31)
Done.

root@pve:/dev/disk/by-id# proxmox-boot-tool init ata-HS-SSD-C260_256G_30147651609-part2
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
UUID="8936-46A7" SIZE="1073741824" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="sdb" MOUNTPOINT=""
Mounting 'ata-HS-SSD-C260_256G_30147651609-part2' on '/var/tmp/espmounts/8936-46A7'.
Installing grub i386-pc target..
Installing for i386-pc platform.
Installation finished. No error reported.
Unmounting 'ata-HS-SSD-C260_256G_30147651609-part2'.
Adding 'ata-HS-SSD-C260_256G_30147651609-part2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Copying and configuring kernels on /dev/disk/by-uuid/5162-40DC
        Copying kernel 5.15.102-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.102-1-pve
Found initrd image: /boot/initrd.img-5.15.102-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
WARN: /dev/disk/by-uuid/5162-CE50 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/8936-46A7
        Copying kernel 5.15.102-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.102-1-pve
Found initrd image: /boot/initrd.img-5.15.102-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
root@pve:/dev/disk/by-id#

 

 

7. Re-sync the boot configuration and check the EFI partitions' status

root@pve:/dev/disk/by-id# proxmox-boot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/5162-40DC
        Copying kernel 5.15.102-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.102-1-pve
Found initrd image: /boot/initrd.img-5.15.102-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
WARN: /dev/disk/by-uuid/5162-CE50 does not exist - clean '/etc/kernel/proxmox-boot-uuids'! - skipping
Copying and configuring kernels on /dev/disk/by-uuid/8936-46A7
        Copying kernel 5.15.102-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.102-1-pve
Found initrd image: /boot/initrd.img-5.15.102-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
root@pve:/dev/disk/by-id#
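Both runs above print `WARN: /dev/disk/by-uuid/5162-CE50 does not exist`: that UUID is the old, removed disk's ESP, still recorded in /etc/kernel/proxmox-boot-uuids. Running `proxmox-boot-tool clean` drops entries whose device node no longer exists. The check it performs can be sketched against captured data (the two UUID lists below are stand-ins for the real file and for /dev/disk/by-uuid):

```shell
# known: sample contents of /etc/kernel/proxmox-boot-uuids
# present: sample of what actually exists under /dev/disk/by-uuid
known='5162-40DC
5162-CE50
8936-46A7'
present='5162-40DC
8936-46A7'
stale=$(printf '%s\n' "$known" | while read -r u; do
    case "$present" in
        *"$u"*) ;;            # device still exists, keep the entry
        *) echo "$u" ;;       # device gone: stale entry
    esac
done)
echo "stale ESP entry: $stale"
```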

 

 

 

Additional failure symptom

1. After inserting the new disk, the system fails to boot and reports that rpool is missing.

This can happen when the newly inserted disk carries a similar existing system (another ZFS pool with the same name).

Run `zpool import` with no arguments to list the importable pools and identify the one you need,
then run `zpool import 6931352008822882077` (6931352008822882077 being the pool/RAID ID). Once it completes, type `exit` to leave the rescue shell and continue booting into the system.

 

 
