QNAP RAID DEGRADED -- 2 - I think it's solved - I replaced one hard disk, because it was an emergency case.

Cancelled Posted Mar 26, 2016 Paid on delivery

[/] # mdadm -D /dev/md0
/dev/md0:
        Version : [url removed, login to view]
  Creation Time : Mon Aug 26 22:01:04 2013
     Raid Level : raid5
  Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Mar 25 14:16:39 2016
          State : active, degraded, Not Started
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 0
           UUID : 43fe1543:32c62b10:36a60d77:9acf4b1b
         Events : 1168796

    Number   Major   Minor   RaidDevice   State
       0       8        3        0        active sync   /dev/sda3
       1       8       19        1        active sync   /dev/sdb3
       8       8       35        2        active sync   /dev/sdc3
       3       8       51        3        active sync   /dev/sdd3
       4       8       67        4        spare rebuilding   /dev/sde3
       5       8       83        5        active sync   /dev/sdf3
       6       8       99        6        active sync   /dev/sdg3
       7       8      115        7        active sync   /dev/sdh3
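The State line above is the key signal: "active, degraded, Not Started" together with a member marked "spare rebuilding" means the array is running on 7 of 8 disks while the replacement resyncs. A minimal sketch of a scriptable check on that line (the sample string below is copied from the output above; on a live NAS you would capture it with `mdadm -D /dev/md0 | grep 'State :'`):

```shell
# Scriptable health check on the mdadm "State" line.
# The sample string mirrors the listing above; on a live system,
# feed it from: state_line=$(mdadm -D /dev/md0 | grep 'State :')
state_line='State : active, degraded, Not Started'

status=healthy
case "$state_line" in
  *degraded*) status=degraded ;;   # at least one member missing or rebuilding
esac
echo "md0 status: $status"
```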

I replaced hard disk no. 4 with a new one:

[/] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sde3[9] sda3[0] sdh3[7] sdg3[6] sdf3[5] sdd3[3] sdc3[8] sdb3[1]
      20500882752 blocks super 1.0 level 5, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]
      [=>...................]  recovery =  9.8% (287171712/2928697536) finish=[url removed, login to view] speed=41385K/sec

md8 : active raid1 sdh2[8](S) sdg2[7](S) sdf2[6](S) sdd2[5](S) sdc2[4](S) sdb2[3](S) sda2[2] sde2[0]
      530128 blocks super 1.0 [2/2] [UU]

md13 : active raid1 sde4[4] sda4[0] sdh4[7] sdg4[6] sdf4[5] sdd4[3] sdc4[2] sdb4[1]
      458880 blocks [8/8] [UUUUUUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sde1[4] sda1[0] sdh1[7] sdg1[6] sdf1[5] sdd1[3] sdc1[2] sdb1[1]
      530048 blocks [8/8] [UUUUUUUU]
      bitmap: 1/65 pages [4KB], 4KB chunk

unused devices: <none>
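The recovery line contains everything needed to estimate the remaining rebuild time: roughly (total - done) blocks divided by the reported speed. A back-of-the-envelope sketch using the numbers shown above (mdstat reports both counters in 1 KiB blocks, so the K/sec speed divides directly):

```shell
# Rough ETA for the resync, from the recovery line in /proc/mdstat:
#   recovery = 9.8% (287171712/2928697536) ... speed=41385K/sec
done_blocks=287171712
total_blocks=2928697536
speed_k=41385                      # KiB per second, as reported

remaining_sec=$(( (total_blocks - done_blocks) / speed_k ))
echo "about $(( remaining_sec / 3600 ))h $(( remaining_sec % 3600 / 60 ))m left"
```

With these numbers that works out to roughly 17-18 hours, which is plausible for a ~3 TB member rebuilding at about 40 MB/s.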

Skills: Linux

Project ID: #10052638
Remote project, Active Mar 26, 2016