MD RAID6 Performance

Jean De Gyns Jean.DeGyns at memnon.com
Mon Feb 4 08:34:37 UTC 2019


Hi Everyone,

I have a twelve-drive md RAID6 array and am a bit puzzled by the raw performance I'm getting out of it.
Each drive can sustain about 200 MiB/s at the start of the disk, so I expected something around 2.4 GiB/s read (all 12 spindles) and 2 GiB/s write (10 data drives per stripe) when writing full stripes at the beginning of the array.
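
For reference, the back-of-the-envelope math behind those numbers (my assumptions: full-stripe writes carry 10 data chunks out of the 12 chunks written per stripe, and sequential reads can keep all 12 spindles busy):

    # 12-drive RAID6: 2 parity chunks per stripe, 10 data chunks
    echo "$(( 10 * 200 )) MiB/s"   # hoped-for full-stripe write ceiling -> 2000
    echo "$(( 12 * 200 )) MiB/s"   # hoped-for sequential read ceiling   -> 2400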

Unfortunately, I am nowhere near those values when performing a stripe-aligned dd write.
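
The bs=2560k is meant to be exactly one full stripe; the arithmetic, plus a sysfs sanity check (sysfs reports the chunk size in bytes):

    # 256 KiB chunk x 10 data chunks per stripe = one full stripe
    echo "$(( 256 * 10 ))k"            # -> 2560k, hence bs=2560k
    cat /sys/block/md1/md/chunk_size   # should read 262144 for the 256K chunk

The write run, with a snapshot of iostat taken while it was going: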

>> root@test:~# dd if=/dev/zero of=/dev/md1 status=progress bs=2560k oflag=direct
>> 12200181760 bytes (12 GB, 11 GiB) copied, 126 s, 96.8 MB/s
>> 4656+0 records in
>> 4656+0 records out
>> 12205424640 bytes (12 GB, 11 GiB) copied, 126.078 s, 96.8 MB/s

>> Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
>> dm-1             0.00   13.50      0.00      3.13     0.00   756.00   0.00  98.25    0.00   20.30   0.29     0.00   237.20  21.33  28.80
>> dm-12            0.50   13.50      0.00      3.13     0.00   756.00   0.00  98.25    8.00   12.89   0.20     4.00   237.20  14.29  20.00
>> dm-60            0.50   13.00      0.12      3.00     0.00   756.00   0.00  98.31   12.00   13.08   0.20   256.00   236.48  15.11  20.40
>> dm-82            0.00   13.00      0.00      3.00     0.00   756.00   0.00  98.31    0.00   12.77   0.18     0.00   236.48  13.54  17.60
>> dm-108           1.50   13.00      0.00      3.00     0.00   756.00   0.00  98.31    6.67    7.08   0.12     2.83   236.48   8.55  12.40
>> dm-128           3.50   13.00      0.01      3.00     0.00   756.00   0.00  98.31    4.00    5.08   0.10     4.00   236.48   5.94   9.80
>> dm-180           6.00   13.00      0.13      3.00     0.00   756.00   0.00  98.31    0.33    4.00   0.08    22.33   236.48   4.00   7.60
>> dm-193           0.00   13.00      0.00      3.00     0.00   756.00   0.00  98.31    0.00    7.23   0.11     0.00   236.48   8.77  11.40
>> dm-240           8.00   13.00      0.13      3.00     0.00   756.00   0.00  98.31    4.25    5.69   0.14    16.25   236.48   6.38  13.40
>> dm-254           0.00   13.00      0.00      3.00     0.00   756.00   0.00  98.31    0.00    6.92   0.11     0.00   236.48   8.77  11.40
>> dm-306           0.00   13.00      0.00      3.00     0.00   756.00   0.00  98.31    0.00    2.00   0.06     0.00   236.48   4.77   6.20
>> dm-316           0.00   13.00      0.00      3.00     0.00   756.00   0.00  98.31    0.00    4.62   0.07     0.00   236.48   5.38   7.00

The md1_raid6 kernel thread's CPU usage is around 11% during this test.
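
If anyone wants to double-check that figure, the kernel thread can be watched in top's thread view, e.g.:

    top -H -p "$(pgrep md1_raid6)"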

Increasing the block size to three full stripes (bs=7680k) helps a little:

>> root@test:~# dd if=/dev/zero of=/dev/md1 status=progress bs=7680k oflag=direct
>> 6668943360 bytes (6.7 GB, 6.2 GiB) copied, 30 s, 222 MB/s^C
>> 856+0 records in
>> 856+0 records out
>> 6731857920 bytes (6.7 GB, 6.3 GiB) copied, 30.3263 s, 222 MB/s

>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>            0.01    0.00    1.56    0.20    0.00   98.24

>> Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
>> dm-1             0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00   16.79   1.13     0.00   380.86   9.75  59.00
>> dm-12            0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00   14.58   1.02     0.00   380.86   8.89  53.80
>> dm-60            0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00   14.31   1.00     0.00   380.86   8.66  52.40
>> dm-82            0.00   61.50      0.00     22.88     0.00  5700.00   0.00  98.93    0.00   15.12   1.04     0.00   380.91   8.98  55.20
>> dm-108           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.92   0.46     0.00   380.86   4.36  26.40
>> dm-128           0.00   61.50      0.00     22.88     0.00  5700.00   0.00  98.93    0.00    5.50   0.47     0.00   380.91   4.29  26.40
>> dm-180           0.00   61.50      0.00     22.88     0.00  5700.00   0.00  98.93    0.00    5.46   0.43     0.00   380.91   3.97  24.40
>> dm-193           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.29   0.44     0.00   380.86   4.13  25.00
>> dm-240           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.39   0.46     0.00   380.86   4.23  25.60
>> dm-254           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.42   0.45     0.00   380.86   4.26  25.80
>> dm-306           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.45   0.45     0.00   380.86   4.13  25.00
>> dm-316           0.00   60.50      0.00     22.50     0.00  5700.00   0.00  98.95    0.00    5.09   0.43     0.00   380.86   3.97  24.00
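
As a rough cross-check, the iostat numbers are consistent with dd: 222 MB/s of data plus two parity chunks for every ten data chunks is about 266 MB/s of physical writes, and iostat shows 12 x ~22.5 = ~270 MB/s, so the individual drives are nowhere near their ~200 MiB/s limit:

    awk 'BEGIN { printf "%.0f MB/s written including parity\n", 222 * 12 / 10 }'   # -> 266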

A single dd read is much better, but still missing roughly 400 MiB/s somewhere:

>> root@test:~# dd if=/dev/md1 of=/dev/null status=progress bs=2560k iflag=direct
>> 52292485120 bytes (52 GB, 49 GiB) copied, 25 s, 2.1 GB/s
>> 20733+0 records in
>> 20732+0 records out
>> 54347694080 bytes (54 GB, 51 GiB) copied, 25.9668 s, 2.1 GB/s
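
Converting units, dd's 2.1 GB/s is only about 1.95 GiB/s, hence the gap of roughly 400-450 MiB/s against the 2.4 GiB/s I was hoping for:

    awk 'BEGIN { printf "%.2f GiB/s\n", 54347694080 / 25.9668 / 2^30 }'   # -> 1.95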

>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>            0.01    0.00    2.43    0.73    0.00   96.83

>> Device            r/s     w/s     rMB/s     wMB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
>> dm-1           658.00    0.00    164.50      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.50   256.00     0.00   0.76  49.80
>> dm-12          657.50    0.00    164.38      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.39   256.00     0.00   0.59  38.80
>> dm-60          659.00    0.00    164.75      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.45   256.00     0.00   0.68  44.60
>> dm-82          659.00    0.00    164.75      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.39   256.00     0.00   0.59  39.00
>> dm-108         659.50    0.00    164.88      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.44   256.00     0.00   0.67  44.00
>> dm-128         660.50    0.00    165.12      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.38   256.00     0.00   0.58  38.00
>> dm-180         660.00    0.00    165.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.39   256.00     0.00   0.59  38.80
>> dm-193         660.00    0.00    165.00      0.00     0.00     0.00   0.00   0.00    0.03    0.00   0.41   256.00     0.00   0.62  40.60
>> dm-240         659.50    0.00    164.88      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.40   256.00     0.00   0.61  40.20
>> dm-254         660.00    0.00    165.00      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.39   256.00     0.00   0.60  39.40
>> dm-306         660.00    0.00    165.00      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.64   256.00     0.00   0.96  63.40
>> dm-316         660.50    0.00    165.12      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.40   256.00     0.00   0.61  40.40
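
For comparison with the ~200 MiB/s per-drive figure, this is how I'd re-check a single member at the same 256 KiB request size the array is issuing (dm-108 picked purely as an example member):

    # ~1 GiB direct read from the start of one member device, 256 KiB requests
    dd if=/dev/dm-108 of=/dev/null bs=256k count=4096 iflag=direct status=progress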

The array configuration:

>>root@test:~# mdadm --detail /dev/md1
>>/dev/md1:
>>           Version : 1.2
>>     Creation Time : Fri Feb  1 17:42:02 2019
>>        Raid Level : raid6
>>        Array Size : 78138944000 (74519.10 GiB 80014.28 GB)
>>     Used Dev Size : 7813894400 (7451.91 GiB 8001.43 GB)
>>      Raid Devices : 12
>>     Total Devices : 12
>>       Persistence : Superblock is persistent
>>
>>     Intent Bitmap : Internal
>>
>>       Update Time : Mon Feb  4 09:23:23 2019
>>             State : clean
>>    Active Devices : 12
>>   Working Devices : 12
>>    Failed Devices : 0
>>     Spare Devices : 0
>>
>>            Layout : left-symmetric
>>        Chunk Size : 256K
>>
>>Consistency Policy : bitmap
>>
>>              Name : test:RAID6-01  (local to host test)
>>              UUID : 42430496:1c541a25:554cffdf:aa21ea2e
>>            Events : 20583
>>
>>    Number   Major   Minor   RaidDevice State
>>       0     253      108        0      active sync   /dev/dm-108
>>       1     253      180        1      active sync   /dev/dm-180
>>       2     253       60        2      active sync   /dev/dm-60
>>       3     253        1        3      active sync   /dev/dm-1
>>       4     253      316        4      active sync   /dev/dm-316
>>       5     253      240        5      active sync   /dev/dm-240
>>       6     253      128        6      active sync   /dev/dm-128
>>       7     253      193        7      active sync   /dev/dm-193
>>       8     253       82        8      active sync   /dev/dm-82
>>       9     253       12        9      active sync   /dev/dm-12
>>      10     253      306       10      active sync   /dev/dm-306
>>      11     253      254       11      active sync   /dev/dm-254


What am I missing?
Many thanks.

JDG



