Disk failure is a common fault of storage devices. To keep stored data reliable, a faulty disk must be located accurately and promptly. This case describes the fault symptoms and the process and precautions for locating and replacing a faulty disk in the S5100.
Step 1 Accurately locate the physical slot of the faulty disk. Use the following methods:
1. Determine the physical position of the faulty disk by observing its status indicator. (In a controller enclosure or disk enclosure of the S5100, slot IDs are numbered 0 to 23 from left to right and from top to bottom.)
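The slot numbering above (left to right, top to bottom, starting at 0) can be sketched as a small mapping function. The 4-row by 6-column layout used here is an assumption for illustration; the document only states that slot IDs run 0 to 23.

```python
def slot_id(row: int, col: int, cols_per_row: int = 6) -> int:
    """Map a (row, column) position to a slot ID, numbering slots
    left to right and top to bottom starting at 0.
    The 6-columns-per-row layout is an assumption, not taken from
    the S5100 documentation."""
    return row * cols_per_row + col

# First slot of the second row (row index 1, column index 0)
print(slot_id(1, 0))  # 6
# Last slot of an assumed 4 x 6 layout
print(slot_id(3, 5))  # 23
```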
2. Log in to the OSM and check the slot of the faulty disk in the Device view and the All Resources view. Click the faulty disk to open its information page, and then click the disk location button in the lower right corner.
3. View the fault information on the OSM. The slot of the faulty disk is specified in the alarm description.
Precautions: To locate the faulty disk accurately, use all of the preceding methods in actual troubleshooting and confirm that their results are the same.
Step 2 Remove the faulty disk. Open the handle of the faulty disk and pull the disk out slowly in the horizontal direction. Wait 30 seconds so that the OSM 3.0 can detect that the faulty disk has been removed. When the OSM 3.0 detects the removal, the status indicators of all disks on the S5100 turn off for three to five minutes and then turn on again. This behavior is normal.
Step 3 Insert a new disk. Open the handle of the new disk, insert it into the slot slowly in the horizontal direction, and close the handle when the disk is fully seated. When the OSM 3.0 detects that the new disk is inserted, the status indicators of all disks on the S5100 turn off for three to five minutes and then turn on again. This behavior is normal.
Step 4 Confirm that the new disk runs properly. If the fault indicator of the new disk is off and its running indicator is blinking, data is being restored to the new disk and the replacement has succeeded. You can confirm this on the OSM 3.0. If the new disk still does not run properly, remove it and insert it again.
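The confirmation in Step 4 amounts to polling the disk state until it reports rebuilding or online. A minimal sketch, assuming a hypothetical `get_disk_state` callback standing in for whatever status query the management software actually exposes:

```python
import time

def wait_for_rebuild(get_disk_state, poll_interval=1.0, timeout=10.0):
    """Poll a disk-state callback until the new disk reports
    'rebuilding' or 'online'; return the final state, or 'fault'
    on timeout. get_disk_state is a hypothetical stand-in for a
    real management-software status query."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_disk_state()
        if state in ("rebuilding", "online"):
            return state
        time.sleep(poll_interval)
    return "fault"

# Simulated status source: reports 'rebuilding' on the third poll.
states = iter(["unknown", "unknown", "rebuilding"])
print(wait_for_rebuild(lambda: next(states), poll_interval=0.01))  # rebuilding
```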
When the S5100 detects device insertion or removal, it scans the entire device. During this scan, links become temporarily unstable and the status indicators of all disks turn off for three to five minutes. This is a normal phenomenon. When the system detects that a disk is faulty, the faulty disk is reconstructed: the other member disks in the RAID group restore the data of the faulty disk to the hot spare disk by the XOR operation. The faulty disk is replaced during this process. Before the replacement, the status indicator is steady green; after the faulty disk is removed and the new disk is inserted, the status indicators of all disks turn off for three to five minutes. After that, the status indicators of the hot spare disk and the disks in the RAID group turn steady green first, followed by the status indicators of the other disks.
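The XOR reconstruction described above can be sketched as follows. The RAID layout and block sizes are illustrative; the point is that XORing the corresponding blocks of all surviving members (data and parity) yields the lost disk's data, as in RAID 5.

```python
def xor_rebuild(surviving_blocks):
    """Recover the data of a lost member disk by XORing the
    corresponding blocks of the surviving members (including
    parity), as in RAID 5 reconstruction."""
    rebuilt = bytearray(len(surviving_blocks[0]))
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

# Three data disks and one parity disk; parity = XOR of the data.
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
# Disk d1 fails: rebuild its contents from the survivors.
print(xor_rebuild([d0, d2, parity]) == d1)  # True
```

This also shows why every surviving member must be read during reconstruction, which is why the rebuild keeps all member disks busy until the hot spare is fully populated.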