I used to run an Xpenology box ("black" Synology) that stopped booting after a hardware failure. It was configured with four 4 TB drives in RAID6.
I moved the drives to a new machine, but it would not boot into the system, so I had no choice but to install the OS on a new drive and attach just two of the original drives (one of the new machine's SATA ports was faulty, so it could only take three drives in total).
I then copied all the files off the array in degraded mode (RAID6 running on just two drives), roughly as sketched below.
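For context, the degraded copy went roughly like this. This is a reconstruction from memory rather than the exact session: the /dev/sdb3 and /dev/sdc3 member names and the /mnt/recovery mount point are placeholders, while vg2/volume_2 matches the LVM layout that also shows up in the support log further down.

# assemble the RAID6 from the two surviving members and let it start degraded
mdadm --assemble --run --force /dev/md3 /dev/sdb3 /dev/sdc3
# the data lives on an LVM volume on top of the md device, so activate that
vgchange -ay vg2
# mount the logical volume read-only and copy everything off
mount -o ro /dev/vg2/volume_2 /mnt/recovery
cp -a /mnt/recovery/. /backup/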
However, some of the copied files turned out to be corrupted (a number of RAR archives reported errors).
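(For reference, this is the kind of check that flagged them; the archive path is just a placeholder:)

# test archive integrity; damaged files fail the CRC check
unrar t /backup/some-archive.rar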
To fix this, I bought a DS1522 and put all four original drives into it. It only recognized a degraded two-drive RAID6, and the other two drives could not be added back into the array. I opened a ticket with Synology, but even after their technician's round of operations the RAID6 still could not be rebuilt.
Their session log is as follows:
login as: test1
[email protected]'s password:
Access denied
[email protected]'s password:
Using terminal commands to modify system configs, execute external binary
files, add files, or install unauthorized third-party apps may lead to system
damages or unexpected behavior, or cause data loss. Make sure you are aware of
the consequences of each command and proceed at your own risk.
Warning: Data should only be stored in shared folders. Data stored elsewhere
may be deleted when the system is updated/restarted.
Could not chdir to home directory /var/services/homes/test1: No such file or directory
test1@DS1522:/$ sudo -i
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
Password:
root@DS1522:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.9G 1.4G 6.4G 17% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 240K 3.9G 1% /dev/shm
tmpfs 3.9G 17M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 712K 3.9G 1% /tmp
/dev/mapper/cachedev_1 437G 81M 437G 1% /volume1
/dev/mapper/cachedev_0 7.0T 6.1T 907G 88% /volume2
root@DS1522:~# touch /tmp/volume_skip_check
root@DS1522:~# ll /usr/syno/etc | grep preference
lrwxrwxrwx 1 root root 24 May 28 10:53 preference -> /volume1/@userpreference
root@DS1522:~# synosetkeyvalue /etc/synoinfo.conf disable_volumes volume2
root@DS1522:~#
login as: test1
[email protected]'s password:
Using terminal commands to modify system configs, execute external binary
files, add files, or install unauthorized third-party apps may lead to system
damages or unexpected behavior, or cause data loss. Make sure you are aware of
the consequences of each command and proceed at your own risk.
Warning: Data should only be stored in shared folders. Data stored elsewhere
may be deleted when the system is updated/restarted.
Could not chdir to home directory /var/services/homes/test1: No such file or directory
test1@DS1522:/$ sudo -i
Password:
root@DS1522:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.9G 1.4G 6.4G 18% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 240K 3.9G 1% /dev/shm
tmpfs 3.9G 16M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 692K 3.9G 1% /tmp
/dev/mapper/cachedev_1 437G 80M 437G 1% /volume1
root@DS1522:~# sfdisk -l
/dev/md0p1 0 16777087 16777088 0
/dev/md1p1 0 4194175 4194176 0
Error: /dev/md2: unrecognised disk label
get disk fail
Error: /dev/md3: unrecognised disk label
get disk fail
/dev/sata1p1 2048 4982527 4980480 fd
/dev/sata1p2 4982528 9176831 4194304 fd
/dev/sata1p3 9437184 7814032064 7804594881 fd
/dev/sata2p1 2048 4982527 4980480 fd
/dev/sata2p2 4982528 9176831 4194304 fd
/dev/sata2p3 9437184 7814032064 7804594881 fd
/dev/sata3p1 8192 16785407 16777216 fd
/dev/sata3p2 16785408 20979711 4194304 fd
/dev/sata3p3 21241856 976568351 955326496 fd
/dev/sata4p1 2048 4982527 4980480 fd
/dev/sata4p2 4982528 9176831 4194304 fd
/dev/sata4p3 9437184 7814032064 7804594881 fd
/dev/sata5p1 2048 4982527 4980480 fd
/dev/sata5p2 4982528 9176831 4194304 fd
/dev/sata5p3 9437184 7814032064 7804594881 fd
/dev/zram0p1 0 4863999 4864000 0
/dev/zram1p1 0 4863999 4864000 0
/dev/synobootp1 2048 67583 65536 ef
/dev/synobootp2 67584 239615 172032 83
root@DS1522:~# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] [linear]
md2 : active linear sata3p3[0]
477662208 blocks super 1.2 64k rounding [1/1] [U]
md3 : active raid6 sata2p3[6] sata1p3[7]
7804592768 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/2] [__UU]
md1 : active raid1 sata3p2[0]
2097088 blocks [5/1] [U____]
md0 : active raid1 sata3p1[0]
8388544 blocks [5/1] [U____]
unused devices: <none>
root@DS1522:~# mdadm -S /dev/md3
mdadm: Cannot get exclusive access to /dev/md3:Perhaps a running process, mounted filesystem or active volume group?
root@DS1522:~# pvs
PV VG Fmt Attr PSize PFree
/dev/md2 vg1 lvm2 a-- 455.53g 532.00m
/dev/md3 vg2 lvm2 a-- 7.27t 28.00m
root@DS1522:~# vgs
VG #PV #LV #SN Attr VSize VFree
vg1 1 2 0 wz--n- 455.53g 532.00m
vg2 1 2 0 wz--n- 7.27t 28.00m
root@DS1522:~# lvm pvscan
PV /dev/md3 VG vg2 lvm2 [7.27 TiB / 28.00 MiB free]
PV /dev/md2 VG vg1 lvm2 [455.53 GiB / 532.00 MiB free]
Total: 2 [7.71 TiB] / in use: 2 [7.71 TiB] / in no VG: 0 [0 ]
root@DS1522:~# vgchange -an vg2
Logical volume vg2/volume_2 is used by another device.
Can't deactivate volume group "vg2" with 1 open logical volume(s)
root@DS1522:~# lvdisplay
--- Logical volume ---
LV Path /dev/vg2/syno_vg_reserved_area
LV Name syno_vg_reserved_area
VG Name vg2
LV UUID 2PZQzQ-eUOs-mypA-kdxj-CYu9-NN3O-616plp
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 12.00 MiB
Current LE 3
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 512
Block device 248:0
--- Logical volume ---
LV Path /dev/vg2/volume_2
LV Name volume_2
VG Name vg2
LV UUID lnH9RB-qmXI-PqoP-WL0e-HjoS-3Llm-SzVvbn
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 7.27 TiB
Current LE 1905408
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 512
Block device 248:1
--- Logical volume ---
LV Path /dev/vg1/syno_vg_reserved_area
LV Name syno_vg_reserved_area
VG Name vg1
LV UUID b3UDkk-uf9K-bpkP-Ofm0-vmI0-F2aU-gN05z9
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 0
LV Size 12.00 MiB
Current LE 3
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 384
Block device 248:2
--- Logical volume ---
LV Path /dev/vg1/volume_1
LV Name volume_1
VG Name vg1
LV UUID HPq9Fi-D34E-7Ovp-LCa4-S8Td-Y7lp-zpI2Cc
LV Write Access read/write
LV Creation host, time ,
LV Status available
# open 1
LV Size 455.00 GiB
Current LE 116480
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 384
Block device 248:3
root@DS1522:~# lvchange -an /dev/vg
vg1/ vg2/ vga_arbiter
root@DS1522:~# lvchange -an /dev/vg2/volume_2
-ash: lvchange: command not found
root@DS1522:~# synospace --stop-all-spaces
sucess to unmount all volume, start to disassemble space
success to disassemble all space
root@DS1522:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 7.9G 1.4G 6.4G 18% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 16M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 696K 3.9G 1% /tmp
root@DS1522:~# vgchange -an
root@DS1522:~# mdadm -S /dev/md
md0 md1
root@DS1522:~# mdadm -Af /dev/md
md0 md1
root@DS1522:~# mdadm -Af /dev/md3 /dev/sata[1245]p3
mdadm: /dev/md3 has been started with 2 drives (out of 4).
root@DS1522:~# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1] [linear]
md3 : active raid6 sata2p3[6] sata1p3[7]
7804592768 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/2] [__UU]
md1 : active raid1 sata3p2[0]
2097088 blocks [5/1] [U____]
md0 : active raid1 sata3p1[0]
8388544 blocks [5/1] [U____]
unused devices: <none>
root@DS1522:~# mdadm -D /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Apr 7 01:14:53 2019
Raid Level : raid6
Array Size : 7804592768 (7443.04 GiB 7991.90 GB)
Used Dev Size : 3902296384 (3721.52 GiB 3995.95 GB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu May 30 11:42:58 2024
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : DS918:2
UUID : 10eec84e:da357c19:2c40875f:92e5302c
Events : 19092
Number Major Minor RaidDevice State
- 0 0 0 removed
- 0 0 1 removed
6 8 19 2 active sync /dev/sata2p3
7 8 3 3 active sync /dev/sata1p3
root@DS1522:~# mdadm -E /dev/sata4p3
/dev/sata4p3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 10eec84e:da357c19:2c40875f:92e5302c
Name : DS918:2
Creation Time : Sun Apr 7 01:14:53 2019
Raid Level : raid6
Raid Devices : 4
Avail Dev Size : 7804592833 (3721.52 GiB 3995.95 GB)
Array Size : 7804592768 (7443.04 GiB 7991.90 GB)
Used Dev Size : 7804592768 (3721.52 GiB 3995.95 GB)
Data Offset : 2048 sectors
Super Offset : 8 sectors
Unused Space : before=1968 sectors, after=65 sectors
State : clean
Device UUID : 4d54a772:f9b0a587:d8093250:abeb8e36
Update Time : Thu Feb 23 11:18:37 2023
Checksum : b2665d87 - correct
Events : 13539
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : .A.. ('A' == active, '.' == missing, 'R' == replacing)
root@DS1522:~# mdadm -Af /dev/md3 /dev/sata[1245]p3 --run
mdadm: /dev/sata1p3 is busy - skipping
mdadm: /dev/sata2p3 is busy - skipping
mdadm: Found some drive for an array that is already active: /dev/md3
mdadm: giving up.
root@DS1522:~# mdadm -S /dev/md3
mdadm: stopped /dev/md3
root@DS1522:~# mdadm -Af /dev/md3 /dev/sata[1245]p3 --run
mdadm: /dev/md3 has been started with 2 drives (out of 4).
root@DS1522:~#
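(One thing that stands out to me in the output above: md3 is at Events 19092, while the examine of sata4p3 stopped at Events 13539 with an Update Time back in Feb 2023. For completeness, the remaining members could be compared the same way; a sketch I have not run as-is, using the partition names from the log:)

# print each member's event counter, last update time and recorded role
for p in /dev/sata[1245]p3
do
    echo "== $p =="
    mdadm -E "$p" | grep -E 'Events|Update Time|Device Role|Array State'
done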
So my question for everyone: in this situation, can this RAID6 still be saved?