Avoid piling too many producers on the busylock
by updating sk_rmem_alloc before busylock acquisition.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20250916160951.541279-7-edumazet@google.com
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
 	if (rmem > (rcvbuf >> 1)) {
 		skb_condense(skb);
 		size = skb->truesize;
+		rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
+		if (rmem > rcvbuf)
+			goto uncharge_drop;
 		busy = busylock_acquire(sk);
+	} else {
+		atomic_add(size, &sk->sk_rmem_alloc);
 	}
 	udp_set_dev_scratch(skb);
-	atomic_add(size, &sk->sk_rmem_alloc);
-
 	spin_lock(&list->lock);
 	err = udp_rmem_schedule(sk, size);
 	if (err) {