[Precise][CVE-2013-2140 1/1] xen/blkback: Check device permissions before allowing OP_DISCARD
Luis Henriques
luis.henriques at canonical.com
Tue Oct 22 17:42:41 UTC 2013
On Tue, Oct 22, 2013 at 10:31:12AM -0500, Seth Forshee wrote:
> On Tue, Oct 22, 2013 at 03:22:51PM +0100, Luis Henriques wrote:
> > From: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
> >
> > BugLink: http://bugs.launchpad.net/bugs/1091187
> >
> > CVE-2013-2140
> >
> > We need to make sure that the device is not RO or that
> > the request is not past the number of sectors we want to
> > issue the DISCARD operation for.
> >
> > This fixes CVE-2013-2140.
> >
> > Cc: stable at vger.kernel.org
> > Acked-by: Jan Beulich <JBeulich at suse.com>
> > Acked-by: Ian Campbell <Ian.Campbell at citrix.com>
> > [v1: Made it pr_warn instead of pr_debug]
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
> > (back ported from commit 604c499cbbcc3d5fe5fb8d53306aa0fae1990109)
> > Signed-off-by: Luis Henriques <luis.henriques at canonical.com>
> > ---
> > drivers/block/xen-blkback/blkback.c | 13 +++++++++++++
> > 1 file changed, 13 insertions(+)
> >
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 2232b85..70cd614 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -426,6 +426,18 @@ static void xen_blk_discard(struct xen_blkif *blkif, struct blkif_request *req)
> >  	int err = 0;
> >  	int status = BLKIF_RSP_OKAY;
> >  	struct block_device *bdev = blkif->vbd.bdev;
> > +	struct phys_req preq;
> > +
> > +	preq.sector_number = req->u.discard.sector_number;
> > +	preq.nr_sects      = req->u.discard.nr_sectors;
> > +
> > +	err = xen_vbd_translate(&preq, blkif, WRITE);
> > +	if (err) {
> > +		pr_warn(DRV_PFX "access denied: DISCARD [%llu->%llu] on dev=%04x\n",
> > +			preq.sector_number,
> > +			preq.sector_number + preq.nr_sects, blkif->vbd.pdevice);
> > +		goto fail_response;
> > +	}
>
> There's a similar check in dispatch_rw_block_io(), but it's assuming a
> read/write request rather than discard and only guarantees that the
> start of the request is okay. But you must pass through that check to
> get to xen_blk_discard(), so you end up running the check twice. It
> doesn't look that expensive, but you could avoid doing it twice by
> setting preq.{sector_number,nr_sects} in dispatch_rw_block_io() based on
> the request type to ensure proper validation on discard.
Stefan, Seth,
Thank you for your review. After reading your comments, I spent some
more time looking at the code, and my eyes already hurt!
So, if I understand correctly, Seth's suggestion would require a patch
similar to this:
static int dispatch_rw_block_io()
{
	...
	preq.dev           = req->handle;
	preq.sector_number = req->u.rw.sector_number;
	preq.nr_sects      = 0;

	pending_req->blkif     = blkif;
	pending_req->id        = req->id;
	pending_req->operation = req->operation;
	pending_req->status    = BLKIF_RSP_OKAY;
	pending_req->nr_pages  = nseg;

-	for (i = 0; i < nseg; i++) {
-		seg[i].nsec = req->u.rw.seg[i].last_sect -
-			req->u.rw.seg[i].first_sect + 1;
-		if ((req->u.rw.seg[i].last_sect >= (PAGE_SIZE >> 9)) ||
-		    (req->u.rw.seg[i].last_sect < req->u.rw.seg[i].first_sect))
-			goto fail_response;
-		preq.nr_sects += seg[i].nsec;
+	if (operation != REQ_DISCARD) {
+		for (i = 0; i < nseg; i++) {
+			seg[i].nsec = req->u.rw.seg[i].last_sect -
+				req->u.rw.seg[i].first_sect + 1;
+			if ((req->u.rw.seg[i].last_sect >= (PAGE_SIZE >> 9)) ||
+			    (req->u.rw.seg[i].last_sect < req->u.rw.seg[i].first_sect))
+				goto fail_response;
+			preq.nr_sects += seg[i].nsec;
+		}
+	} else {
+		preq.sector_number = req->u.discard.sector_number;
+		preq.nr_sects = req->u.discard.nr_sectors;
	}

	if (xen_vbd_translate(&preq, blkif, operation) != 0) {
I guess preq.sector_number and preq.nr_sects would also need to be
restored to their original values after the call to
xen_vbd_translate(), since the rest of the read/write path still
expects them.
Obviously, this patch would be very different from the original one
and I'm not sure if it shouldn't be a SAUCE patch, instead of a
backport.
Any other thoughts?
I'll spend some more time looking at this code tomorrow, just to make
sure I'm not missing something.
Cheers,
--
Luis