Updates to 'bfqg->ref' should be protected by 'bfqd->lock'; however,
during code review, we found that bfq_pd_free() updates 'bfqg->ref'
without holding the lock, which is problematic:

1) bfq_pd_free(), triggered by removing a cgroup, is called asynchronously;
2) a bfqq grabs a bfqg reference, and exiting the bfqq drops that
   reference, which can run concurrently with 1).

Unfortunately, 'bfqd->lock' can't be held here because 'bfqd' might
already have been freed by the time bfq_pd_free() runs. Fix the problem
by using the atomic refcount API.
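
To see why a plain integer is unsafe here, below is a minimal userspace
analogue of the get/put pair, using C11 atomics in place of the kernel's
refcount_t (the names obj, obj_get() and obj_put() are hypothetical,
for illustration only). With a nonatomic 'int ref', two concurrent put
operations can both load the same value and both store the same
decremented result, leaking the object or freeing it twice. An atomic
decrement-and-test makes the decrement and the zero check one
indivisible step, which is what refcount_dec_and_test() provides:

  /* Userspace sketch, not kernel code: C11 atomics stand in for the
   * kernel's refcount_t; obj, obj_get() and obj_put() are hypothetical.
   */
  #include <stdatomic.h>
  #include <stdlib.h>

  struct obj {
  	atomic_int ref;
  };

  static struct obj *obj_alloc(void)
  {
  	struct obj *o = malloc(sizeof(*o));

  	if (o)
  		atomic_init(&o->ref, 1);   /* like refcount_set(&ref, 1) */
  	return o;
  }

  static void obj_get(struct obj *o)
  {
  	atomic_fetch_add(&o->ref, 1);      /* like refcount_inc() */
  }

  static void obj_put(struct obj *o)
  {
  	/* Atomic decrement-and-test: only the caller that drops the
  	 * last reference sees the old value 1, so exactly one path
  	 * frees the object, like refcount_dec_and_test().
  	 */
  	if (atomic_fetch_sub(&o->ref, 1) == 1)
  		free(o);
  }

  int main(void)
  {
  	struct obj *o = obj_alloc();

  	if (!o)
  		return 1;
  	obj_get(o);  /* second reference, e.g. taken by a bfqq */
  	obj_put(o);  /* drop it, possibly from another thread */
  	obj_put(o);  /* last reference: frees the object */
  	return 0;
  }

Unlike this sketch, the kernel's refcount_t also saturates on overflow
and warns on underflow, so refcounting bugs fail safely rather than
silently corrupting memory.
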
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230103084755.1256479-1-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
 
 static void bfqg_get(struct bfq_group *bfqg)
 {
-       bfqg->ref++;
+       refcount_inc(&bfqg->ref);
 }
 
 static void bfqg_put(struct bfq_group *bfqg)
 {
-       bfqg->ref--;
-
-       if (bfqg->ref == 0)
+       if (refcount_dec_and_test(&bfqg->ref))
                kfree(bfqg);
 }
 
        }
 
        /* see comments in bfq_bic_update_cgroup for why refcounting */
-       bfqg_get(bfqg);
+       refcount_set(&bfqg->ref, 1);
        return &bfqg->pd;
 }
 
 
        char blkg_path[128];
 
        /* reference counter (see comments in bfq_bic_update_cgroup) */
-       int ref;
+       refcount_t ref;
        /* Is bfq_group still online? */
        bool online;