Kuniyuki Iwashima says:
====================
af_unix: Remove spin_lock_nested() and convert to lock_cmp_fn.
This series removes spin_lock_nested() in AF_UNIX and instead
defines the locking orders as comparison functions tied to each
lock with lockdep_set_lock_cmp_fn().
When the defined function returns a negative value, lockdep
considers the locking order valid and does not report a deadlock.
(See ->cmp_fn() in check_deadlock() and check_prev_add().)
When we cannot define a total ordering, we return -1 for the
allowed orderings and otherwise 0, leaving the order undefined. [0]
[0]: https://lore.kernel.org/netdev/thzkgbuwuo3knevpipu4rzsh5qgmwhklihypdgziiruabvh46f@uwdkpcfxgloo/
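As a rough sketch of the convention above (not the exact code added
by this series; the names cmp_ptr and example_lock_cmp_fn are
illustrative only), a comparison function for locks with a total
order by address could look like:

    #include <linux/lockdep.h>

    /* (((l) > (r)) - ((l) < (r))) evaluates to -1, 0 or 1. */
    #define cmp_ptr(l, r)   (((l) > (r)) - ((l) < (r)))

    /*
     * Tied to the lock via lockdep_set_lock_cmp_fn().  A negative
     * return value tells lockdep that taking 'b' while holding 'a'
     * is a valid order; 0 leaves the order undefined.  Here the
     * order is total: the lock with the lower address must be taken
     * first.  For locks with only a partial order, the function
     * instead returns -1 for the explicitly allowed orderings and
     * 0 otherwise.
     */
    static int example_lock_cmp_fn(const struct lockdep_map *a,
                                   const struct lockdep_map *b)
    {
            return cmp_ptr(a, b);
    }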
Changes:
v4:
* Patch 4
* Make unix_state_lock_cmp_fn() symmetric.
v3: https://lore.kernel.org/netdev/20240614200715.93150-1-kuniyu@amazon.com/
* Patch 3
* Cache sk->sk_state
* s/unix_state_lock()/unix_state_unlock()/
* Patch 8
* Add embryo -> listener locking order
v2: https://lore.kernel.org/netdev/20240611222905.34695-1-kuniyu@amazon.com/
* Patch 1 & 2
* Use (((l) > (r)) - ((l) < (r))) for comparison
v1: https://lore.kernel.org/netdev/20240610223501.73191-1-kuniyu@amazon.com/
====================
Link: https://lore.kernel.org/r/20240620205623.60139-1-kuniyu@amazon.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>