RAID 10, also known as RAID 1+0, is a common RAID configuration that combines mirroring and striping to provide fault tolerance and improved performance. In RAID 10, data is mirrored and striped across multiple drives simultaneously. This results in fast read/write speeds and the ability to withstand multiple drive failures, making RAID 10 a popular choice for mission-critical storage needs.
RAID 10 Overview
In RAID 10, data is organized into mirrored pairs. Each mirrored pair holds two identical copies of the data, and data is striped across those pairs. For example, in a 4-drive RAID 10 array, the drives form two mirrored pairs, and each stripe of data is split across the two pairs, with every piece written to both drives in its pair.
The mirroring provides redundancy – if one drive fails, the other drive in the mirrored pair still contains a complete copy of the data. The striping provides improved performance, since reads and writes can be distributed across multiple drives simultaneously.
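To make the layout concrete, here is a minimal Python sketch of how a 4-drive RAID 10 might map logical stripe units onto mirrored pairs. The drive numbering, pairing scheme, and round-robin striping are illustrative assumptions, not any specific controller's algorithm.

```python
# Illustrative sketch: mapping logical stripe units to drives in a 4-drive RAID 10.
# The pairing and ordering here are assumptions for illustration only.

DRIVES = 4                  # total drives in the array
PAIRS = DRIVES // 2         # number of mirrored pairs (stripe width)

def locate(stripe_unit: int):
    """Return the two physical drives holding copies of a logical stripe unit."""
    pair = stripe_unit % PAIRS      # striping: round-robin across mirrored pairs
    primary = pair * 2              # first drive of the pair
    mirror = pair * 2 + 1           # its mirror partner
    return primary, mirror

for unit in range(6):
    print(f"stripe unit {unit} -> drives {locate(unit)}")
```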
Advantages of RAID 10
- Very high read and write performance – approaches the speed of RAID 0
- Ability to withstand multiple drive failures – as long as no more than 1 drive fails per mirrored set
- Ideal for applications requiring both speed and redundancy
Disadvantages of RAID 10
- Higher cost – requires at least 4 drives
- Usable capacity is only half of the total raw capacity – due to mirroring
Drive Failure Tolerance in 12 Drive RAID 10
In a RAID 10 array with 12 drives, the drives would be mirrored and striped as follows:
- The 12 drives form 6 mirrored pairs (Drives 1 & 2, Drives 3 & 4, and so on)
- Data is striped across the 6 mirrored pairs
This setup allows the RAID 10 array to withstand up to 6 drive failures, as long as no more than 1 drive fails per mirrored set. For example:
| Drive | Status |
|-------|--------|
| 1     | Online |
| 2     | Failed |
| 3     | Online |
| 4     | Online |
| 5     | Failed |
| 6     | Online |
| 7     | Online |
| 8     | Online |
| 9     | Online |
| 10    | Failed |
| 11    | Online |
| 12    | Online |
In this scenario, Drives 2, 5, and 10 have failed, yet no data is lost because each mirrored pair still has one operational drive. Up to three more drives could fail (one from each of the remaining pairs) before data would be at risk.
However, if both drives in the same mirrored pair failed, such as Drives 1 and 2, the array would fail and data would be lost. Therefore, the maximum number of drives that can fail in a 12-drive RAID 10 without data loss is 6, provided no two failures occur in the same mirrored pair.
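This failure-tolerance rule is easy to check programmatically. The sketch below assumes the adjacent pairing described earlier (Drives 1 & 2, 3 & 4, ..., 11 & 12) and simply verifies that every mirrored pair still has at least one working drive; it illustrates the rule, not a controller's actual health-check logic.

```python
# Check whether a set of failed drives is survivable in a 12-drive RAID 10,
# assuming adjacent drives form mirrored pairs (1&2, 3&4, ..., 11&12).

def survivable(failed_drives, total_drives=12):
    pairs = [(d, d + 1) for d in range(1, total_drives, 2)]
    # Data survives only if every mirrored pair still has at least one working drive.
    return all(not (a in failed_drives and b in failed_drives) for a, b in pairs)

print(survivable({2, 5, 10}))            # True  – one failure per affected pair
print(survivable({1, 3, 5, 7, 9, 11}))   # True  – 6 failures, one per pair
print(survivable({1, 2}))                # False – both halves of one mirror lost
```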
Rebuild Process in Degraded RAID 10
When a single drive fails in a RAID 10 array, the rebuild process works as follows:
- The RAID controller detects the drive failure, for example through regular patrol reads
- The controller switches any I/O operations from the failed drive to its mirrored partner
- A spare drive is activated to replace the failed drive
- Data is rebuilt from the surviving mirror drive onto the spare
- When rebuild completes, the spare is now a fully integrated member of the RAID 10 array
As long as no more than one drive fails per mirrored set, this rebuild process allows the RAID 10 to restore full redundancy and protection against additional drive failures.
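Conceptually, the rebuild is just a block-for-block copy from the surviving half of the mirror onto the spare. The sketch below illustrates this with toy data structures; it is not how any particular controller implements rebuilds.

```python
# Sketch of a RAID 10 rebuild: the spare is repopulated by copying every block
# from the surviving half of the mirror. Data structures are illustrative only.

def rebuild(surviving_mirror: list, spare: list) -> None:
    """Copy all blocks from the surviving mirror drive onto the spare."""
    for block_index, block in enumerate(surviving_mirror):
        spare[block_index] = block   # sequential copy; only one source drive is read

surviving = [f"block-{i}" for i in range(8)]   # toy data on the surviving drive
spare = [None] * len(surviving)                # freshly activated spare
rebuild(surviving, spare)
assert spare == surviving                      # spare now holds a full copy
```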
Advantages of RAID 10 Rebuild
- Faster than rebuilding large RAID 5 or 6 arrays
- Rebuild is a straight block copy with no parity calculation, so it places little load on the controller and has less impact on array performance
- Only one drive needs to be read to rebuild mirrored data
Drive Replacement Process
When replacing a failed drive in RAID 10, the steps are:
- Remove the failed drive from the array
- Insert a new replacement drive that is the same capacity or larger
- The RAID controller automatically starts a rebuild onto the replacement drive
- When the rebuild finishes, the drive becomes a full member of the RAID 10 array
Hot spares can also be used to automate the drive replacement process. A hot spare is an unused standby drive. If a drive fails, the hot spare is immediately activated to replace it. This eliminates the need to manually replace failed drives.
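The hot-spare behaviour can be pictured as a simple state change: on failure, an idle standby drive takes the failed drive's slot in its mirrored pair and a rebuild begins. The drive names and data structures below are made up for illustration and do not reflect any vendor's firmware.

```python
# Illustrative hot-spare handling: when a drive fails, an unused standby drive
# is assigned in its place and a rebuild is started.

array = {                        # mirrored pairs, keyed by pair number
    0: ["drive1", "drive2"],
    1: ["drive3", "drive4"],
}
hot_spares = ["spare1"]          # standby drives, unused until a failure occurs

def handle_failure(failed_drive: str) -> None:
    for pair, members in array.items():
        if failed_drive in members and hot_spares:
            spare = hot_spares.pop(0)                     # activate a standby drive
            members[members.index(failed_drive)] = spare  # spare takes the failed slot
            print(f"rebuilding {spare} from surviving drive in pair {pair}")
            return
    print("no hot spare available – manual replacement required")

handle_failure("drive3")
```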
Expanding RAID 10 Arrays
To expand the total capacity of a RAID 10 array, drives must be added in mirrored pairs. For example, to expand a 6-drive RAID 10 to 8 drives, two new drives would be added as an additional mirrored pair (see the capacity sketch after the steps below).
The process involves:
- Add two new drives to open drive bays
- Use the RAID controller's expansion feature to add the new drives to the array
- The controller creates an additional mirrored pair from the two new drives
- The data is rebalanced across the larger set of drives
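Because every block is mirrored, usable capacity is always half of raw capacity, so each added pair contributes the capacity of one drive. The quick calculation below uses an assumed 4 TB drive size to show the effect of expanding from 6 to 8 drives.

```python
# Usable capacity in RAID 10 is half the raw capacity, since every block is
# mirrored. The 4 TB drive size is just an example figure.

def raid10_usable_tb(drive_count: int, drive_size_tb: float) -> float:
    assert drive_count % 2 == 0, "RAID 10 requires drives in mirrored pairs"
    return (drive_count // 2) * drive_size_tb

print(raid10_usable_tb(6, 4))   # 12 TB usable before expansion
print(raid10_usable_tb(8, 4))   # 16 TB usable after adding one mirrored pair
```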
Many RAID controllers also support online capacity expansion, allowing the RAID 10 to be expanded without downtime.
RAID 10 Performance
RAID 10 delivers excellent performance for both read and write operations. Reads can be distributed across many drives for near RAID 0 speeds. Writes also perform well since each write is mirrored to two drives simultaneously.
RAID 10 Read Performance
For read operations, I/O can be distributed across all drives, because either member of a mirrored pair can service a read. With more drives to share the load, this provides parallelism and bandwidth comparable to RAID 0: in practice, read throughput can approach the per-drive throughput multiplied by the total number of drives in the array.
RAID 10 Write Performance
Writes must go to both drives in the mirrored set, reducing maximum throughput compared to RAID 0. However, performance is still very good due to the striping of data across drives. Writes to sequential blocks can utilize multiple drives in parallel.
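A rough first-order model captures both effects: reads can be served by any copy, so they scale with the total drive count, while writes scale with the number of mirrored pairs. The per-drive throughput figure below is an assumption, and the model ignores controller overhead, caching, and queueing, so treat the numbers as back-of-the-envelope estimates.

```python
# Back-of-the-envelope RAID 10 throughput model. Per-drive figures and the
# assumption of perfectly parallel I/O are simplifications.

def raid10_throughput(drive_count: int, per_drive_mb_s: float):
    reads = drive_count * per_drive_mb_s          # any copy in a mirror can serve reads
    writes = (drive_count // 2) * per_drive_mb_s  # each write goes to both mirror halves
    return reads, writes

r, w = raid10_throughput(12, 200)   # 12 drives at an assumed 200 MB/s each
print(f"approx. peak reads:  {r} MB/s")   # ~2400 MB/s
print(f"approx. peak writes: {w} MB/s")   # ~1200 MB/s
```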
Mixed Workload Performance
In mixed read/write workloads, RAID 10 still provides excellent performance. The fast reads can effectively offset some of the write penalty. In highly parallel environments, the mirrored writes have minimal impact on throughput.
Ideal Uses for RAID 10
Due to its combination of speed, redundancy, and robustness, RAID 10 is well suited for many critical storage scenarios:
- Database servers
- Business-critical applications
- High-performance virtualization
- Transactional databases
- High-volume transactional websites
- Disk imaging and backups
The performance capabilities and multiple drive fault tolerance make RAID 10 suitable for these demanding production environments.
Alternatives to RAID 10
While RAID 10 is an excellent well-rounded RAID type, there are also some alternatives that may better match specific use cases:
- RAID 6 – Preferred for large arrays where usable capacity and protection during long rebuilds matter more than write performance. Can tolerate the failure of any two drives.
- RAID 50 – Stripes data across multiple RAID 5 groups. Allows for more drives with better read performance scaling.
- RAID 60 – Similar to RAID 50, but uses RAID 6 parity groups instead of RAID 5.
- RAID 0+1 – The reverse nesting of RAID 10: data is striped first, then the two stripe sets are mirrored. Offers similar capacity and performance, but tolerates fewer combinations of multiple drive failures.
Conclusion
To summarize, in a 12-drive RAID 10 array the maximum number of drives that can fail without data loss is 6. This level of drive failure tolerance provides excellent redundancy for critical data. RAID 10 also delivers fast performance for both read and write workloads, making it ideal for applications that demand speed, robustness, and high availability.
The combination of mirroring and striping in RAID 10 provides a balance of features not found in other RAID types. For systems that require fast throughput and the ability to survive multiple drive failures, RAID 10 is an excellent choice.