net: use synchronize_rcu_expedited in cleanup_net()
author    Eric Dumazet <edumazet@google.com>    Fri, 9 Feb 2024 15:31:00 +0000 (15:31 +0000)
committer David S. Miller <davem@davemloft.net> Mon, 12 Feb 2024 12:17:03 +0000 (12:17 +0000)
cleanup_net() calls synchronize_rcu() right before
acquiring RTNL.

synchronize_rcu() is much slower than synchronize_rcu_expedited(),
and cleanup_net() is currently single-threaded. In many workloads
we want cleanup_net() to be fast, so that memory and the various
sysfs and procfs entries are freed as quickly as possible.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/core/net_namespace.c

index 233ec0cdd0111d5ca21c6f8a66f4c1f3fbc4657b..f0540c5575157135b1dc5dece2220f81a408fb7e 100644
@@ -622,7 +622,7 @@ static void cleanup_net(struct work_struct *work)
         * the rcu_barrier() below isn't sufficient alone.
         * Also the pre_exit() and exit() methods need this barrier.
         */
-       synchronize_rcu();
+       synchronize_rcu_expedited();
 
        rtnl_lock();
        list_for_each_entry_reverse(ops, &pernet_list, list) {
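
For context (not part of the patch): the call being changed sits in the standard RCU teardown
sequence, i.e. unpublish an object so new readers cannot find it, wait for a grace period so
all pre-existing readers are done, then free it. Below is a minimal sketch of that pattern;
the names (my_obj, my_list, my_lock, my_cleanup) are illustrative only and do not come from
the kernel source.

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_obj {
	struct list_head list;
	int id;
};

static LIST_HEAD(my_list);
static DEFINE_SPINLOCK(my_lock);

static void my_cleanup(struct my_obj *obj)
{
	/* Unpublish the object so new RCU readers cannot find it. */
	spin_lock(&my_lock);
	list_del_rcu(&obj->list);
	spin_unlock(&my_lock);

	/*
	 * Wait for all pre-existing RCU read-side critical sections to
	 * finish. The expedited variant forces a fast grace period at the
	 * cost of extra system-wide disturbance, which is the trade-off
	 * the commit makes for cleanup_net().
	 */
	synchronize_rcu_expedited();

	/* No reader can still see the object; it is now safe to free. */
	kfree(obj);
}

The expedited variant makes the grace period complete quickly (by actively prodding other CPUs
via IPIs) instead of waiting for them to reach quiescent states on their own. That is more
disruptive to the rest of the system, but acceptable here because cleanup_net() runs from a
single work item, so every pending network-namespace teardown is serialized behind this wait.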