[Raring] Pre-upstream fix for xen netfront
Stefan Bader
stefan.bader at canonical.com
Tue Nov 20 16:37:04 UTC 2012
This is a brand-new, not-yet-settled change that came up today. Upstream
is still discussing better error/safety handling, but at least in
testing this version already improves usability quite
a lot.
The issue was revealed by generic networking changes that now try to
create bigger fragments. Unfortunately, netfront only handles 4k.
So in practice this causes transfers out of the PVM guest to just
stop when that happens.
I hope that this will land upstream soon, but using it now at least
gives us working cloud images in the meantime.
-Stefan
From 99324e49c8440d611fa9572b7f397fe7a270caa2 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell at citrix.com>
Date: Tue, 20 Nov 2012 12:40:00 +0100
Subject: [PATCH] UBUNTU: SAUCE: (no-up) xen/netfront: handle compound page fragments on transmit
An SKB paged fragment can consist of a compound page with order > 0.
However the netchannel protocol deals only in PAGE_SIZE frames.
Handle this in xennet_make_frags by iterating over the frames which
make up the page.
This is the netfront equivalent to 6a8ed462f16b for netback.
Signed-off-by: Ian Campbell <ian.campbell at citrix.com>
Cc: netdev at vger.kernel.org
Cc: xen-devel at lists.xen.org
Cc: Eric Dumazet <edumazet at google.com>
Cc: Konrad Rzeszutek Wilk <konrad at kernel.org>
Cc: ANNIE LI <annie.li at oracle.com>
Cc: Sander Eikelenboom <linux at eikelenboom.it>
Cc: Stefan Bader <stefan.bader at canonical.com>
(picked from mailing list)
BugLink: http://bugs.launchpad.net/bugs/1078926
Signed-off-by: Stefan Bader <stefan.bader at canonical.com>
---
drivers/net/xen-netfront.c | 58 +++++++++++++++++++++++++++++++++-----------
1 file changed, 44 insertions(+), 14 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index caa0110..a12b99a 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -452,24 +452,54 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
/* Grant backend access to each skb fragment page. */
for (i = 0; i < frags; i++) {
skb_frag_t *frag = skb_shinfo(skb)->frags + i;
+ struct page *page = skb_frag_page(frag);
+ unsigned long size = skb_frag_size(frag);
+ unsigned long offset = frag->page_offset;
- tx->flags |= XEN_NETTXF_more_data;
+ /* Data must not cross a page boundary. */
+ BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));
- id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
- np->tx_skbs[id].skb = skb_get(skb);
- tx = RING_GET_REQUEST(&np->tx, prod++);
- tx->id = id;
- ref = gnttab_claim_grant_reference(&np->gref_tx_head);
- BUG_ON((signed short)ref < 0);
+ /* Skip unused frames from start of page */
+ page += offset >> PAGE_SHIFT;
+ offset &= ~PAGE_MASK;
- mfn = pfn_to_mfn(page_to_pfn(skb_frag_page(frag)));
- gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
- mfn, GNTMAP_readonly);
+ while (size > 0) {
+ unsigned long bytes;
- tx->gref = np->grant_tx_ref[id] = ref;
- tx->offset = frag->page_offset;
- tx->size = skb_frag_size(frag);
- tx->flags = 0;
+ BUG_ON(offset >= PAGE_SIZE);
+
+ bytes = PAGE_SIZE - offset;
+ if (bytes > size)
+ bytes = size;
+
+ tx->flags |= XEN_NETTXF_more_data;
+
+ id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
+ np->tx_skbs[id].skb = skb_get(skb);
+ tx = RING_GET_REQUEST(&np->tx, prod++);
+ tx->id = id;
+ ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+ BUG_ON((signed short)ref < 0);
+
+ mfn = pfn_to_mfn(page_to_pfn(page));
+ gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+ mfn, GNTMAP_readonly);
+
+ tx->gref = np->grant_tx_ref[id] = ref;
+ tx->offset = offset;
+ tx->size = bytes;
+ tx->flags = 0;
+
+ offset += bytes;
+ size -= bytes;
+
+ /* Next frame */
+ if (offset == PAGE_SIZE && size) {
+ BUG_ON(!PageCompound(page));
+ page++;
+ offset = 0;
+ }
+ }
}
np->tx.req_prod_pvt = prod;
--
1.7.9.5