request for feedback - I have a load balancing patch for dm-raid1.c

Stefan Bader stefan.bader at
Tue Jul 5 13:44:46 UTC 2011

On 05.07.2011 13:59, Robert Collins wrote:
> Hi, I recently realised that the lack of read load balancing in
> dm-raid1 was affecting me - I have my desktop machine set up with an
> onboard BIOS-supported RAID setup which ends up running with dm-raid1
> targets.
> Anyhow, doing a load-balancing heuristic seemed pretty approachable,
> so I put one together; I looked at reusing the md raid1.c logic but
> the two implementations are pretty far apart. I did borrow what seemed
> to be the most significant heuristic - reading from the same target if
> the prior read was adjacent to it.
> I didn't do the markup-on-read-complete, because with ahci tagging and
> delayed reads it was more complex than I wanted to reason about :).
> Anyhow, I'm sure that this is improvable further, but it seems like an
> unqualified improvement over the status quo: it load balances reads,
> sequential IO is still plenty fast, and it makes it a little clearer
> for folks wanting to hack on this what's going on.
> I would love a review for correctness and concurrency safety, if
> someone has the time.
> my patch:
> -Rob

Hi Robert,

I had a quick look at your patch, so here is what comes to mind. Generally,
the printk's will probably not be liked much. Even downgraded to debug level
they would be emitted quite often, which uses more CPU, slows down processing
and could cause the logs to grow.
The idea of sector distance is not bad, but perhaps a combination of simply
not switching paths on every request and using a merge function would be
preferred (dm-crypt and dm-stripe both do this). There is also dm-mpath,
which was changed from using bios to using requests and may offer a similar
benefit. In the end, if possible, any read sent to one path should be as
large as possible. Writes would benefit there as well, since the drives
could optimize.

But as a minimally intrusive, low-effort approach, it might help to switch
the mirror path only every X requests, to get a sort of grouping of them...


More information about the kernel-team mailing list