[SRU][F:linux-bluefield][PATCH 03/10] netfilter: conntrack: remove unneeded nf_ct_put
Bodong Wang
bodong at nvidia.com
Thu Oct 27 21:26:52 UTC 2022
From: Florian Westphal <fw at strlen.de>
BugLink: https://bugs.launchpad.net/bugs/1995004
We can delay the refcount increment until we reassign the existing entry to
the current skb.

A zero refcount can't happen while the nf_conn object is still in the
hash table, and parallel mutations are impossible because we hold the
bucket lock.
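
To illustrate the argument, here is a minimal userspace sketch (plain C11
atomics plus a pthread mutex standing in for the conntrack bucket lock; the
types and helpers below are invented for the example and are not the kernel
code): while the lock is held, the table's own reference keeps the entry
alive, so a plain refcount increment is enough and an inc-not-zero style
check adds nothing.

/* Userspace sketch only -- not kernel code.  A one-slot "hash bucket"
 * protected by a mutex; the table owns one reference to the entry.
 * Because removal also takes the lock, an entry found under the lock
 * cannot reach a zero refcount, so the lookup may use a plain
 * increment instead of an inc-not-zero check.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
	atomic_int use;		/* refcount; the table owns one reference */
	int key;
};

static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *bucket;	/* the single "hash bucket" of the sketch */

static void entry_put(struct entry *e)
{
	/* free when the last reference is dropped */
	if (atomic_fetch_sub(&e->use, 1) == 1)
		free(e);
}

/* Look the key up and, only if we decide to keep the entry, take our
 * own reference -- the deferred increment this patch is about.
 */
static struct entry *lookup_and_get(int key)
{
	struct entry *e = NULL;

	pthread_mutex_lock(&bucket_lock);
	if (bucket && bucket->key == key) {
		e = bucket;
		atomic_fetch_add(&e->use, 1);	/* safe: lock keeps e alive */
	}
	pthread_mutex_unlock(&bucket_lock);
	return e;
}

int main(void)
{
	struct entry *e, *old;

	bucket = calloc(1, sizeof(*bucket));
	bucket->key = 42;
	atomic_store(&bucket->use, 1);	/* the table's reference */

	e = lookup_and_get(42);
	if (e) {
		printf("found key %d, refcount %d\n", e->key,
		       atomic_load(&e->use));
		entry_put(e);		/* drop our reference */
	}

	/* unlink from the table, then drop the table's reference */
	pthread_mutex_lock(&bucket_lock);
	old = bucket;
	bucket = NULL;
	pthread_mutex_unlock(&bucket_lock);
	entry_put(old);

	return 0;
}

In the kernel, the conntrack bucket lock plays the role of the mutex above
and the hash table's own reference is the one taken at insert time, which is
why the atomic_inc_not_zero() check removed here was not needed; the
nf_conntrack_get() inside the NF_ACCEPT branch is sufficient.
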
Signed-off-by: Florian Westphal <fw at strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo at netfilter.org>
(cherry picked from commit ff73e7479b8eea594a985ca29f4b45d604dbcb2c)
Signed-off-by: Bodong Wang <bodong at nvidia.com>
---
net/netfilter/nf_conntrack_core.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 6909c50..f8213dc 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -909,6 +909,7 @@ static void __nf_conntrack_insert_prepare(struct nf_conn *ct)
 		tstamp->start = ktime_get_real_ns();
 }
 
+/* caller must hold locks to prevent concurrent changes */
 static int __nf_ct_resolve_clash(struct sk_buff *skb,
 				 struct nf_conntrack_tuple_hash *h)
 {
@@ -922,13 +923,12 @@ static int __nf_ct_resolve_clash(struct sk_buff *skb,
 	if (nf_ct_is_dying(ct))
 		return NF_DROP;
 
-	if (!atomic_inc_not_zero(&ct->ct_general.use))
-		return NF_DROP;
-
 	if (((ct->status & IPS_NAT_DONE_MASK) == 0) ||
 	    nf_ct_match(ct, loser_ct)) {
 		struct net *net = nf_ct_net(ct);
 
+		nf_conntrack_get(&ct->ct_general);
+
 		nf_ct_acct_merge(ct, ctinfo, loser_ct);
 		nf_ct_add_to_dying_list(loser_ct);
 		nf_conntrack_put(&loser_ct->ct_general);
@@ -938,7 +938,6 @@ static int __nf_ct_resolve_clash(struct sk_buff *skb,
 		return NF_ACCEPT;
 	}
 
-	nf_ct_put(ct);
 	return NF_DROP;
 }
 
--
1.8.3.1