====================
Create shadow types for struct_ops maps in skeletons
This patchset allows skeleton users to change the values of the fields
in struct_ops maps at runtime. It will create a shadow type pointer in
a skeleton for each struct_ops map, allowing users to access the
values of fields through these pointers. For instance, if there is an
integer field named "FOO" in a struct_ops map called "testmap", you
can access the value of "FOO" in this way:
skel->struct_ops.testmap->FOO = 13;
With this feature, users can pass flags or other data along with
the map from user space to the kernel without creating a separate
struct_ops map for each set of values in BPF.
== Shadow Type ==
The shadow type of a struct_ops map is a variant of the original
struct type of the map. The code generator translates each field in
the original struct type to a field in the shadow type. The type of a
field in the shadow type may differ from that of the corresponding
field in the original struct type. For example, modifiers like
volatile, const, etc., are removed from the fields of a shadow
type, and function pointers are translated to pointers to struct
bpf_program.
Currently, only scalar types and function pointers are
supported. Fields of struct, union, non-function pointer, array, or
other types are not supported; such fields are converted to arrays of
characters to preserve their space within the original struct type.
The padding between consecutive fields is handled by generated
padding fields (__padding_*), which keep the memory layout consistent
with the original struct type.
Here is an example of a shadow type.
The original struct type of a struct_ops map is:
struct bpf_testmod_ops {
    int (*test_1)(void);
    void (*test_2)(int a, int b);
    /* Used to test nullable arguments. */
    int (*test_maybe_null)(int dummy, struct task_struct *task);

    /* The following fields are used to test shadow copies. */
    char onebyte;
    struct {
        int a;
        int b;
    } unsupported;
    int data;
};
The struct_ops map of this type, named testmod_1, will be exposed in
the skeleton as a pointer to the corresponding shadow type.
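For illustration, a hedged sketch of the skeleton member that the code
generator might emit for this map (field names, numeric suffixes, and
padding sizes are illustrative, not the exact generator output):

struct {
    const struct bpf_program *test_1;
    const struct bpf_program *test_2;
    const struct bpf_program *test_maybe_null;
    char onebyte;
    char __padding_4[3];      /* keeps the next field at its original offset */
    char __unsupported_5[8];  /* the nested struct is not supported */
    int data;
} *testmod_1;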
libbpf converts st_ops->data to the layout of the shadow type for each
struct_ops map. This means that the bytes where function pointers are
located now hold pointers to struct bpf_program; fields of other types
are kept as they are. Libbpf synchronizes these struct bpf_program
pointers with st_ops->progs[] so that users can change function
pointers (bpf_program) before loading the map.
---
Changes from v5:
- Generate names for shadow types.
- Check btf and the number of struct_ops maps in gen_st_ops_shadow()
and gen_st_ops_shadow_init() instead of do_skeleton() and
do_subskeleton().
- Name unsupported fields in the pattern __unsupported_*.
- Have a padding field for unsupported fields as well, if necessary.
- Implement resolve_func_ptr() in gen.c instead of reusing the one in
libbpf. (Remove the part 1 in v4.)
- Fix stylistic issues.
Changes from v4:
- Convert function pointers to the pointers to struct bpf_program in
bpf_object__collect_st_ops_relos().
Changes from v3:
- Add a comment to discourage people from removing resolve_func_ptr() from
libbpf_internal.h.
- Fix commit logs and comments.
- Add an example about using the pointers of shadow types
for struct_ops maps to bpftool-gen.8.
Kui-Feng Lee (5):
libbpf: set btf_value_type_id of struct bpf_map for struct_ops.
libbpf: Convert st_ops->data to shadow type.
bpftool: generated shadow variables for struct_ops maps.
bpftool: Add an example for struct_ops map and shadow type.
selftests/bpf: Test if shadow types work correctly.
Kui-Feng Lee [Thu, 29 Feb 2024 06:45:23 +0000 (22:45 -0800)]
selftests/bpf: Test if shadow types work correctly.
Change the values of fields, including scalar types and function pointers,
and check if the struct_ops map works as expected.
The test changes the field "test_2" of "testmod_1" from a pointer to
test_2() to a pointer to test_3(), and the field "data" to 13. The functions
test_2() and test_3() both compute a new value for "test_2_result", but in
different ways. By checking the value of "test_2_result", the test ensures the
struct_ops map works as expected with changes made through shadow types.
Kui-Feng Lee [Thu, 29 Feb 2024 06:45:21 +0000 (22:45 -0800)]
bpftool: Generated shadow variables for struct_ops maps.
Declares and defines a pointer of the shadow type for each struct_ops map.
The code generator will create an anonymous struct type as the shadow type
for each struct_ops map. The shadow type is translated from the original
struct type of the map. Users of the skeleton use these pointers to
access the values of struct_ops maps.
However, shadow types only support certain kinds of fields, namely
scalar types and function pointers. Fields of unsupported types are
translated into arrays of characters to occupy the space of the original
fields. Function pointers are translated into pointers to struct
bpf_program. Additionally, padding fields are generated to occupy the space
between consecutive fields.
The pointers to the shadow types of struct_ops maps are initialized when
*__open_opts() in a skeleton is called. For a map called FOO, the user can
access it through the pointer at skel->struct_ops.FOO.
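For illustration only, a hedged usage sketch (the skeleton and program
names are assumptions following the testmod example in this series):

struct struct_ops_module *skel;

skel = struct_ops_module__open();
if (!skel)
    return -1;

/* tweak the struct_ops map through its shadow pointer before load */
skel->struct_ops.testmod_1->data = 13;
skel->struct_ops.testmod_1->test_2 = skel->progs.test_3;

if (struct_ops_module__load(skel))
    return -1;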
Kui-Feng Lee [Thu, 29 Feb 2024 06:45:20 +0000 (22:45 -0800)]
libbpf: Convert st_ops->data to shadow type.
Convert st_ops->data to the shadow type of the struct_ops map. The shadow
type of a struct_ops type is a variant of the original struct type
providing a way to access/change the values in the maps of the struct_ops
type.
bpf_map__initial_value() will return st_ops->data for struct_ops types. The
skeleton is going to use it as the pointer to the shadow type of the
original struct type.
One of the main differences between the original struct type and the shadow
type is that all function pointers of the shadow type are converted to
pointers of struct bpf_program. Users can replace these bpf_program
pointers with other BPF programs. The st_ops->progs[] will be updated
before updating the value of a map to reflect the changes made by users.
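A hedged sketch of reaching the same shadow data directly through libbpf
(the object and map names follow the example above and are assumptions):

size_t sz;
void *shadow;

/* for a struct_ops map this now returns st_ops->data laid out as the shadow type */
shadow = bpf_map__initial_value(bpf_object__find_map_by_name(obj, "testmod_1"), &sz);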
Kui-Feng Lee [Thu, 29 Feb 2024 06:45:19 +0000 (22:45 -0800)]
libbpf: Set btf_value_type_id of struct bpf_map for struct_ops.
For a struct_ops map, btf_value_type_id is the type ID of its struct
type. This value is required by bpftool to generate skeleton including
pointers of shadow types. The code generator gets the type ID from
bpf_map__btf_value_type_id() in order to get the type information of the
struct type of a map.
While direct references to the "data" member haven't been found, there
are static initializers that include the final member. For example,
the "{}" here:
To avoid the build time and run time warnings seen with a 0-sized
trailing array for struct bpf_lpm_trie_key, introduce a new struct
that correctly uses a flexible array for the trailing bytes,
struct bpf_lpm_trie_key_u8. As part of this, include the "header"
portion (which is just the "prefixlen" member), named
struct bpf_lpm_trie_key_hdr, so it can be used by anything building a
bpf_lpm_trie_key that has trailing members that aren't a u8 flexible
array (like the self-test[1]).
Unfortunately, C++ refuses to parse the __struct_group() helper, so
it is not possible to define struct bpf_lpm_trie_key_hdr directly in
struct bpf_lpm_trie_key_u8, so we must open-code the union directly.
Adjust the kernel code to use struct bpf_lpm_trie_key_u8 through-out,
and for the selftest to use struct bpf_lpm_trie_key_hdr. Add a comment
to the UAPI header directing folks to the two new options.
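For reference, a hedged sketch of the shape of the two new structs
described above (the authoritative definitions live in the UAPI header):

struct bpf_lpm_trie_key_hdr {
    __u32 prefixlen;
};

struct bpf_lpm_trie_key_u8 {
    union {
        struct bpf_lpm_trie_key_hdr hdr;
        __u32 prefixlen;
    };
    __u8 data[];
};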
====================
bpf, arm64: use BPF prog pack allocator in BPF JIT
Changes in V8 => V9:
V8: https://lore.kernel.org/bpf/20240221145106.105995-1-puranjay12@gmail.com/
1. Rebased on bpf-next/master
2. Added Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Changes in V7 => V8:
V7: https://lore.kernel.org/bpf/20240125133159.85086-1-puranjay12@gmail.com/
1. Rebase on bpf-next/master
2. Fix __text_poke() by removing usage of 'ret' that was never set.
Changes in V6 => V7:
V6: https://lore.kernel.org/all/20240124164917.119997-1-puranjay12@gmail.com/
1. Rebase on bpf-next/master.
Changes in V5 => V6:
V5: https://lore.kernel.org/all/20230908144320.2474-1-puranjay12@gmail.com/
1. Implement a text poke api to reduce code repetition.
2. Use flush_icache_range() in place of caches_clean_inval_pou() in the
functions that modify code.
3. Optimize bpf_jit_free() by not copying all the instructions from
the rw image to the ro_image.
Changes in V4 => v5:
1. Remove the patch for making prog pack allocator portable as it will come
through the RISCV tree[1].
2. Add a new function aarch64_insn_set() to be used in
bpf_arch_text_invalidate() for putting illegal instructions after a
program is removed. The earlier implementation of bpf_arch_text_invalidate()
called aarch64_insn_patch_text_nosync() in a loop, which made it slow
because each call invalidated the cache.
Here is test_tag now:
[root@ip-172-31-6-176 bpf]# time ./test_tag
test_tag: OK (40945 tests)
real 0m19.695s
user 0m1.514s
sys 0m17.841s
test_tag without these patches:
[root@ip-172-31-6-176 bpf]# time ./test_tag
test_tag: OK (40945 tests)
real 0m21.487s
user 0m1.647s
sys 0m19.106s
test_tag with the previous version was really slow (> 2 minutes); see [2].
3. Add cache invalidation in aarch64_insn_copy() so other users can call the
function without worrying about the cache. Currently only bpf_arch_text_copy()
is using it, but there might be more users in the future.
Changes in V3 => V4: Changes only in 3rd patch
1. Fix the I-cache maintenance: Clean the data cache and invalidate the I-cache
only *after* the instructions have been copied to the ROX region.
Changes in V2 => V3: Changes only in 3rd patch
1. Set prog = orig_prog; in the failure path of bpf_jit_binary_pack_finalize()
call.
2. Add comments explaining the usage of the offsets in the exception table.
Changes in v1 => v2:
1. Make the naming consistent in the 3rd patch:
ro_image and image
ro_header and header
ro_image_ptr and image_ptr
2. Use names dst/src in place of addr/opcode in second patch.
3. Add Acked-by: Song Liu <song@kernel.org> in 1st and 2nd patch.
BPF programs currently consume a page each on ARM64. For systems with many BPF
programs, this adds significant pressure to the instruction TLB. High iTLB
pressure usually causes a slowdown for the whole system.
Song Liu introduced the BPF prog pack allocator[3] to mitigate the above issue.
It packs multiple BPF programs into a single huge page. It is currently only
enabled for the x86_64 BPF JIT.
This patch series enables the BPF prog pack allocator for the ARM64 BPF JIT.
====================================================
Performance Analysis of prog pack allocator on ARM64
====================================================
To test the performance of the BPF prog pack allocator on ARM64, a stresser
tool[4] was built. This tool loads 8 BPF programs on the system and triggers
5 of them in an infinite loop by doing system calls.
The runner script starts 20 instances of the above, which loads 8*20=160 BPF
programs on the system, 5*20=100 of which are being constantly triggered.
In this environment we build Python-3.8.4 and collect different
iTLB metrics for the compilation done by gcc-12.2.0.
The source code[5] is configured with the following command:
./configure --enable-optimizations --with-ensurepip=install
Then the runner script is executed with the following command:
./run.sh "perf stat -e ITLB_WALK,L1I_TLB,INST_RETIRED,iTLB-load-misses -a make -j32"
This builds Python while 160 BPF programs are loaded and 100 are being constantly
triggered, and measures iTLB-related metrics.
The output of the above command is discussed below before and after enabling the
BPF prog pack allocator.
The tests were run on qemu-system-aarch64 with 32 cpus, 4G memory, -machine virt,
-cpu host, and -enable-kvm.
Results
-------
Before enabling prog pack allocator:
------------------------------------
Puranjay Mohan [Wed, 28 Feb 2024 14:18:24 +0000 (14:18 +0000)]
bpf, arm64: use bpf_prog_pack for memory management
Use bpf_jit_binary_pack_alloc for memory management of JIT binaries in
ARM64 BPF JIT. The bpf_jit_binary_pack_alloc creates a pair of RW and RX
buffers. The JIT writes the program into the RW buffer. When the JIT is
done, the program is copied to the final RX buffer
with bpf_jit_binary_pack_finalize.
Implement bpf_arch_text_copy() and bpf_arch_text_invalidate() for ARM64
JIT as these functions are required by bpf_jit_binary_pack allocator.
Puranjay Mohan [Wed, 28 Feb 2024 14:18:23 +0000 (14:18 +0000)]
arm64: patching: implement text_poke API
The text_poke API is used to implement functions like memcpy() and
memset() for instruction memory (RO+X). The implementation is similar to
the x86 version.
This will be used by the BPF JIT to write and modify BPF programs. There
could be more users of this in the future.
Alexei Starovoitov [Tue, 27 Feb 2024 21:54:18 +0000 (13:54 -0800)]
Merge branch 'bpf-arm64-support-exceptions'
Puranjay Mohan says:
====================
bpf, arm64: Support Exceptions
Changes in V2->V3:
V2: https://lore.kernel.org/all/20230917000045.56377-1-puranjay12@gmail.com/
- Use unwinder from stacktrace.c rather than open coding the unwind logic.
- Fix a bug in the prologue related to BPF_FP (Xu Kuohai)
Changes in V1->V2:
V1: https://lore.kernel.org/all/20230912233942.6734-1-puranjay12@gmail.com/
- Remove exceptions from DENYLIST.aarch64 as they are supported now.
The base support for exceptions was merged with [1] and it was enabled for
x86-64.
This patch set enables the support on ARM64; all selftests are passing:
Puranjay Mohan [Thu, 1 Feb 2024 12:52:25 +0000 (12:52 +0000)]
bpf, arm64: support exceptions
The prologue generation code has been modified to make the callback
program use the stack of the program marked as exception boundary where
callee-saved registers are already pushed.
As the bpf_throw function never returns, if it clobbers any callee-saved
registers, they would remain clobbered. So, the prologue of the
exception-boundary program is modified to push R23 and R24 as well,
which the callback will then recover in its epilogue.
The Procedure Call Standard for the Arm 64-bit Architecture[1] states
that registers r19 to r28 should be saved by the callee. BPF programs on
ARM64 already save all callee-saved registers except r23 and r24. This
patch adds an instruction in prologue of the program to save these
two registers and another instruction in the epilogue to recover them.
These extra instructions are only added if bpf_throw() is used. Otherwise
the emitted prologue/epilogue remains unchanged.
Benjamin Tissoires [Wed, 21 Feb 2024 16:25:17 +0000 (17:25 +0100)]
bpf: allow more maps in sleepable bpf programs
These 2 map types are required for HID-BPF when a user wants to do
IO with a device from a sleepable tracing point.
Allowing BPF_MAP_TYPE_QUEUE (and therefore BPF_MAP_TYPE_STACK) lets
a BPF program prepare, from an IRQ, the list of HID commands to send
back to the device; these commands can then be retrieved from the
sleepable trace point.
Martin KaFai Lau [Thu, 22 Feb 2024 19:19:40 +0000 (11:19 -0800)]
Merge branch 'Check cfi_stubs before registering a struct_ops type.'
Kui-Feng Lee says:
====================
Recently, st_ops->cfi_stubs was introduced. However, the upcoming new
struct_ops support (e.g. sched_ext) is not aware of this and does not
provide its own cfi_stubs. The kernel ends up NULL dereferencing the
st_ops->cfi_stubs.
Considering that struct_ops now supports kernel modules, this NULL check
is necessary. This patch set is to reject struct_ops registration
that does not provide cfi_stubs.
Changes from v4:
- Remove changes of check_member.
- Remove checks of the pointers in cfi_stubs[].
Changes from v3:
- Remove CFI stub function for get_info.
- Allow passing NULL prog arg to check_member of struct
bpf_struct_ops type.
- Call check_member to determines if a CFI stub function should be
defined for an operator.
Changes from v2:
- Add a stub function for get_info of struct tcp_congestion_ops.
Changes from v1:
- Check *(void **)(cfi_stubs + moff) to make sure stub functions are
provided for every operator.
- Add a test case to ensure that struct_ops rejects incomplete
cfi_stub.
Kui-Feng Lee [Thu, 22 Feb 2024 02:11:05 +0000 (18:11 -0800)]
selftests/bpf: Test case for lacking CFI stub functions.
Ensure struct_ops rejects the registration of struct_ops types without
proper CFI stub functions.
bpf_test_no_cfi.ko is a module that attempts to register a struct_ops type
called "bpf_test_no_cfi_ops" with a NULL and a non-NULL cfi_stubs value.
The NULL one should fail, and the non-NULL one should succeed. The module
can only be loaded successfully if these registrations yield the expected
results.
Kui-Feng Lee [Thu, 22 Feb 2024 02:11:04 +0000 (18:11 -0800)]
bpf: Check cfi_stubs before registering a struct_ops type.
Recently, st_ops->cfi_stubs was introduced. However, the upcoming new
struct_ops support (e.g. sched_ext) is not aware of this and does not
provide its own cfi_stubs. The kernel ends up NULL dereferencing the
st_ops->cfi_stubs.
Considering that struct_ops now supports kernel modules, this NULL check
is necessary. This patch rejects struct_ops registration
that does not provide cfi_stubs.
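A minimal sketch of the kind of registration-time check described here;
the exact placement and message are illustrative, not the in-tree code:

if (!st_ops->cfi_stubs) {
    pr_warn("struct_ops %s: no cfi_stubs provided\n", st_ops->name);
    return -EINVAL;
}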
The batch lookup and lookup_and_delete APIs have two parameters,
in_batch and out_batch, to facilitate iterative
lookup/lookup_and_delete operations for supported maps. Except for
passing NULL as in_batch at the start of these two batch operations,
both parameters need to point to memory at least as large as the
respective map key size, except for the various hashmaps (hash,
percpu_hash, lru_hash, lru_percpu_hash), where the in_batch/out_batch
memory size needs to be at least 4 bytes.
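A hedged sketch of the iteration pattern these rules describe, assuming a
hash map with __u32 keys/values; the map fd and batch size are placeholders:

#include <errno.h>
#include <bpf/bpf.h>

static int dump_map(int map_fd)
{
    __u32 out_batch = 0;            /* >= 4 bytes for hash-type maps */
    __u32 keys[64], vals[64], count;
    void *in_batch = NULL;          /* NULL in_batch starts the iteration */
    int err;

    do {
        count = 64;
        err = bpf_map_lookup_batch(map_fd, in_batch, &out_batch,
                                   keys, vals, &count, NULL);
        if (err && err != -ENOENT && errno != ENOENT)
            return err;             /* ENOENT just means the walk is done */
        /* ... process 'count' key/value pairs here ... */
        in_batch = &out_batch;      /* resume from the returned cursor */
    } while (!err);

    return 0;
}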
Dave Thaler [Wed, 21 Feb 2024 17:54:19 +0000 (09:54 -0800)]
bpf, docs: specify which BPF_ABS and BPF_IND fields were zero
Specifying which fields were unused allows IANA to only list as deprecated
instructions that were actually used, leaving the rest as unassigned and
possibly available for future use for something else.
Dave Thaler [Wed, 21 Feb 2024 17:35:35 +0000 (09:35 -0800)]
bpf, docs: Fix typos in instruction-set.rst
* "BPF ADD" should be "BPF_ADD".
* "src" should be "src_reg" in several places. The latter is the field name
in the instruction. The former refers to the value of the register, or the
immediate.
* Add '' around field names in one sentence, for consistency with the rest
of the document.
Thread [0] discusses a fix for a bpf_loop() handling bug.
That change makes tcp_custom_syncookie test too complex to verify.
The fix discussed in [0] will be sent via the 'bpf' tree, but the
tcp_custom_syncookie test is not in the 'bpf' tree yet.
As agreed in [0], I'm sending the syncookie test update separately.
Eduard Zingerman [Thu, 22 Feb 2024 15:03:00 +0000 (17:03 +0200)]
selftests/bpf: update tcp_custom_syncookie to use scalar packet offset
This commit updates tcp_custom_syncookie.c:tcp_parse_option() to use
an explicit packet offset (ctx->off) for packet access instead of an
ever-moving pointer (ctx->ptr); this reduces verification complexity:
- the tcp_parse_option() is passed as a callback to bpf_loop();
- suppose a checkpoint is created each time at function entry;
- the ctx->ptr is tracked by verifier as PTR_TO_PACKET;
- the ctx->ptr is incremented in tcp_parse_option(),
thus umax_value field tracked for it is incremented as well;
- on each next iteration of tcp_parse_option()
checkpoint from a previous iteration can't be reused
for state pruning, because PTR_TO_PACKET registers are
considered equivalent only if old->umax_value >= cur->umax_value;
- on the other hand, the ctx->off is a SCALAR,
subject to widen_imprecise_scalars();
- its exact bounds are eventually forgotten and it is tracked as an
unknown scalar at entry to tcp_parse_option();
- hence checkpoints created at the start of the function eventually
converge.
The change is similar to one applied in [0] to xdp_synproxy_kern.c.
Comparing before and after with veristat yields the following results:
Alexei Starovoitov [Tue, 20 Feb 2024 23:50:01 +0000 (15:50 -0800)]
bpf: Shrink size of struct bpf_map/bpf_array.
Back in 2018 the commit be95a845cc44 ("bpf: avoid false sharing of map refcount
with max_entries") added ____cacheline_aligned to "struct bpf_map" to make sure
that fields like refcnt don't share a cache line with max_entries that is used
to bounds check map access. That was done to make spectre style attacks harder.
The main mitigation is done via code similar to array_index_nospec(), of course.
This was an additional precaution.
It increased the size of "struct bpf_map" a little, but its effect on all
other maps (like array) is significant, since "struct bpf_map" is typically
the first member in other map types.
Undo this ____cacheline_aligned tag. Instead move freeze_mutex field around, so
that refcnt and max_entries are still in different cache lines.
The main effect is seen in sizeof(struct bpf_array) that reduces from 320
to 248 bytes.
Marcos Paulo de Souza [Fri, 16 Feb 2024 12:42:45 +0000 (09:42 -0300)]
selftests/bpf: Remove empty TEST_CUSTOM_PROGS
Commit f04a32b2c5b5 ("selftests/bpf: Do not use sign-file as testcase")
removed the TEST_CUSTOM_PROGS assignment, and removed it from being used
on TEST_GEN_FILES. Remove two leftovers from that cleanup. Found by
inspection.
Yonghong Song [Wed, 14 Feb 2024 23:29:51 +0000 (15:29 -0800)]
bpf: Fix test verif_scale_strobemeta_subprogs failure due to llvm19
With latest llvm19, I hit the following selftest failures with
$ ./test_progs -j
libbpf: prog 'on_event': BPF program load failed: Permission denied
libbpf: prog 'on_event': -- BEGIN PROG LOAD LOG --
combined stack size of 4 calls is 544. Too large
verification time 1344153 usec
stack depth 24+440+0+32
processed 51008 insns (limit 1000000) max_states_per_insn 19 total_states 1467 peak_states 303 mark_read 146
-- END PROG LOAD LOG --
libbpf: prog 'on_event': failed to load: -13
libbpf: failed to load object 'strobemeta_subprogs.bpf.o'
scale_test:FAIL:expect_success unexpected error: -13 (errno 13)
#498 verif_scale_strobemeta_subprogs:FAIL
The verifier complains that the combined stack size (544 bytes) is too big,
exceeding the maximum stack limit of 512. This is a regression from llvm19 ([1]).
In the above error log, the original stack depth is 24+440+0+32.
To satisfy the interpreter's needs, the verifier adjusts the stack depth to
32+448+32+32=544, which exceeds 512, hence the error. The same adjusted
stack size is also used for the jit case.
But the jitted code could use a smaller stack size.
For example, STACK_ALIGN in riscv/net/bpf_jit_comp32.c is defined as 16.
So the stack is aligned to either 8 or 16 bytes: x86/s390 have 8-byte stack
alignment and the rest have 16-byte alignment.
This patch calculates the total stack depth based on 16-byte alignment if jit is requested.
For the above failing case, the new stack size will be 32+448+0+32=512 and there is no
verification failure. The llvm19 regression will be discussed separately in llvm upstream.
The verifier change caused three test failures, as these tests compared messages
containing the stack size. More specifically,
- test_global_funcs/global_func1: fail with interpreter mode and success with jit mode.
Adjusted stack sizes so both jit and interpreter modes will fail.
- async_stack_depth/{pseudo_call_check, async_call_root_check}: since jit and interpreter
will calculate different stack sizes, the failure msg is adjusted to omit those
specific stack size numbers.
Andrii Nakryiko [Wed, 14 Feb 2024 17:41:00 +0000 (09:41 -0800)]
bpf: improve duplicate source code line detection
Verifier log avoids printing the same source code line multiple times
when a consecutive block of BPF assembly instructions are covered by the
same original (C) source code line. This greatly improves verifier log
legibility.
Unfortunately, this check is imperfect, and in production applications it
quite often happens that the verifier log has multiple duplicated
source lines emitted for no apparently good reason. E.g., this is an
excerpt from a real-world BPF application (with register states omitted
for clarity):
As can be seen, line 394 is emitted thrice, 396 is emitted twice, and
line 400 is duplicated as well. Note that there are no other source code
lines intermingled between these duplicates, so the issue is not the
compiler reordering assembly instructions such that multiple original
source code lines are interleaved.
It becomes more obvious what's going on if we look at *full* original line info
information (using btfdump for this, [0]):
#2764: line: insn #5363 --> 394:3 @ ./././strobemeta_probe.bpf.c
for (int i = 0; i < STROBE_MAX_MAP_ENTRIES; ++i) {
#2765: line: insn #5373 --> 394:21 @ ./././strobemeta_probe.bpf.c
for (int i = 0; i < STROBE_MAX_MAP_ENTRIES; ++i) {
#2766: line: insn #5375 --> 394:47 @ ./././strobemeta_probe.bpf.c
for (int i = 0; i < STROBE_MAX_MAP_ENTRIES; ++i) {
#2767: line: insn #5377 --> 394:3 @ ./././strobemeta_probe.bpf.c
for (int i = 0; i < STROBE_MAX_MAP_ENTRIES; ++i) {
#2768: line: insn #5378 --> 414:10 @ ./././strobemeta_probe.bpf.c
return off;
We can see that there are four line info records covering
instructions #5363 through #5377 (instruction indices are shifted due to
subprog instruction being appended to main program), all of them are
pointing to the same C source code line #394. But each of them points to
a different part of that line, which is denoted by differing column
numbers (3, 21, 47, 3).
But the verifier log doesn't distinguish between parts of the same source code
line and doesn't emit this column number information, so for the end user it's
just repetitive visual noise. So let's improve the detection of repeated source
code lines and avoid this.
With the changes in this patch, we get this output for the same piece of BPF
program log:
Andrii Nakryiko [Wed, 14 Feb 2024 00:23:11 +0000 (16:23 -0800)]
bpf: Use O(log(N)) binary search to find line info record
Real-world BPF applications keep growing in size. Medium-sized production
application can easily have 50K+ verified instructions, and its line
info section in .BTF.ext has more than 3K entries.
When verifier emits log with log_level>=1, it annotates assembly code
with matched original C source code. Currently it uses linear search
over line info records to find a match. As complexity of BPF
applications grows, this O(K * N) approach scales poorly.
So, instead of a linear O(N) search for the line info record, let's use the
equivalent but faster O(log(N)) binary search algorithm. It's not a plain binary
search, as we don't look for an exact match. It's an upper bound search
variant, looking for the rightmost line info record that starts at or before a
given insn_off.
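A hedged sketch of such an upper-bound binary search (names and types are
illustrative, not the kernel's exact implementation):

static const struct bpf_line_info *
find_linfo(const struct bpf_line_info *linfo, u32 nr_linfo, u32 insn_off)
{
    u32 lo = 0, hi = nr_linfo;      /* candidate range is [lo, hi) */

    if (!nr_linfo || insn_off < linfo[0].insn_off)
        return NULL;

    /* rightmost record with linfo[i].insn_off <= insn_off */
    while (hi - lo > 1) {
        u32 mid = lo + (hi - lo) / 2;

        if (linfo[mid].insn_off <= insn_off)
            lo = mid;
        else
            hi = mid;
    }
    return &linfo[lo];
}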
Some unscientific measurements were done before and after this change.
They were done in VM and fluctuate a bit, but overall the speed up is
undeniable.
While Katran's speed up is pretty modest (about 105ms, or 6%), for
production pyperf BPF program (on_py_event) it's much greater already,
going from 387ms down to 295ms (23% improvement).
Looking at BPF selftests's biggest pyperf example, we can see even more
dramatic improvement, shaving more than 50% of time, going from 12.3s
down to 5.6s.
The different amounts of improvement are a function of the overall number of BPF
assembly instructions in the .bpf.o files (which determines how many
line info records there are and thus, on average, how much time the linear
search takes), among other things:
Granted, this only applies to debugging cases (e.g., using veristat, or
failing verification in production), but seems worth doing to improve
overall developer experience anyways.
Matt Bobrowski [Wed, 14 Feb 2024 09:14:23 +0000 (09:14 +0000)]
libbpf: Make remark about zero-initializing bpf_*_info structs
In some situations, if you fail to zero-initialize the
bpf_{prog,map,btf,link}_info structs supplied to the set of LIBBPF
helpers bpf_{prog,map,btf,link}_get_info_by_fd(), you can expect the
helper to return an error. This can possibly leave people in a
situation where they're scratching their heads for an unnnecessary
amount of time. Make an explicit remark about the requirement of
zero-initializing the supplied bpf_{prog,map,btf,link}_info structs
for the respective LIBBPF helpers.
Internally, LIBBPF helpers bpf_{prog,map,btf,link}_get_info_by_fd()
call into bpf_obj_get_info_by_fd() where the bpf(2)
BPF_OBJ_GET_INFO_BY_FD command is used. This specific command is
effectively backed by restrictions enforced by the
bpf_check_uarg_tail_zero() helper. This function ensures that if the
size of the supplied bpf_{prog,map,btf,link}_info structs is larger
than what the kernel can handle, the trailing bits are zeroed. This can be
a problem when compiling against UAPI headers that don't necessarily
match the sizes of the same underlying types known to the kernel.
Andrii Nakryiko [Mon, 12 Feb 2024 23:59:44 +0000 (15:59 -0800)]
bpf: emit source code file name and line number in verifier log
As BPF applications grow in size and complexity and are separated into
multiple .bpf.c files that are statically linked together, it becomes
harder and harder to match verifier's BPF assembly level output to
original C code. While often annotated C source code is unique enough to
be able to identify the file it belongs to, quite often this is actually
problematic as parts of source code can be quite generic.
Long story short, it is very useful to see source code file name and
line number information along with the original C code. Verifier already
knows this information, we just need to output it.
This patch extends verifier log with file name and line number
information, emitted next to original (presumably C) source code,
annotating BPF assembly output, like so:
; <original C code> @ <filename>.bpf.c:<line>
If file name has directory names in it, they are stripped away. This
should be fine in practice as file names tend to be pretty unique with
C code anyways, and keeping log size smaller is always good.
In practice this might look something like below, where some code is
coming from application files, while others are from libbpf's usdt.bpf.h
header file:
====================
Fix global subprog PTR_TO_CTX arg handling
Fix confusing and incorrect inference of PTR_TO_CTX argument type in BPF
global subprogs. For some program types (iterators, tracepoints, any program type
that doesn't have a fixed, named "canonical" context type), when the user uses (in
a correct and valid way) a pointer argument to a user-defined anonymous struct
type, the verifier will incorrectly assume that it has to be a PTR_TO_CTX argument,
while it should be just a PTR_TO_MEM argument with the allowed size calculated
from the user-provided (even if anonymous) struct.
This did come up in practice and was very confusing to users, so let's prevent
this going forward. We had to do a slight refactoring of
btf_get_prog_ctx_type() to make it easy to support a special s390x KPROBE use
case. See details in respective patches.
v1->v2:
- special-case typedef bpf_user_pt_regs_t handling for KPROBE programs,
fixing s390x after changes in patch #2.
====================
Andrii Nakryiko [Mon, 12 Feb 2024 23:32:20 +0000 (15:32 -0800)]
bpf: don't infer PTR_TO_CTX for programs with unnamed context type
For program types that don't have a named context type (e.g., BPF
iterator programs or tracepoint programs), ctx_tname will be a non-NULL
empty string. For such programs it shouldn't be possible to have
PTR_TO_CTX argument for global subprogs based on type name alone.
arg:ctx tag is the only way to have PTR_TO_CTX passed into global
subprog for such program types.
Fix this loophole, which currently would assume PTR_TO_CTX whenever the
user uses a pointer to an anonymous struct as an argument to their global
subprogs. This happens in practice with the following (quite common)
approach:
typedef struct { /* anonymous */
    int x;
} my_type_t;

int my_subprog(my_type_t *arg) { ... }
User's intent is to have PTR_TO_MEM argument for `arg`, but verifier
will complain about expecting PTR_TO_CTX.
This fix also closes unintended s390x-specific KPROBE handling of
PTR_TO_CTX case. Selftest change is necessary to accommodate this.
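For such program types, a hedged sketch of the explicit annotation route;
__arg_ctx is libbpf's helper macro for the "arg:ctx" decl tag, and the
subprog name here is illustrative:

#include <bpf/bpf_helpers.h>

/* non-static => global subprog; __arg_ctx marks 'ctx' as the program's
 * context even though the declared type is just void *               */
__weak int my_ctx_subprog(void *ctx __arg_ctx)
{
    return ctx != NULL;
}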
Andrii Nakryiko [Mon, 12 Feb 2024 23:32:19 +0000 (15:32 -0800)]
bpf: handle bpf_user_pt_regs_t typedef explicitly for PTR_TO_CTX global arg
Expected canonical argument type for global function arguments
representing PTR_TO_CTX is `bpf_user_pt_regs_t *ctx`. This currently
works on s390x by accident because kernel resolves such typedef to
underlying struct (which is anonymous on s390x), and erroneously
accepts it as the expected context type. We are fixing this problem next,
which would break s390x arch, so we need to handle `bpf_user_pt_regs_t`
case explicitly for KPROBE programs.
Andrii Nakryiko [Mon, 12 Feb 2024 23:32:18 +0000 (15:32 -0800)]
bpf: simplify btf_get_prog_ctx_type() into btf_is_prog_ctx_type()
The return value of btf_get_prog_ctx_type() is never used, and callers only
check the NULL vs non-NULL case to determine if a given type matches the expected
PTR_TO_CTX type. So rename function to `btf_is_prog_ctx_type()` and
return a simple true/false. We'll use this simpler interface to handle
kprobe program type's special typedef case in the next patch.
Oliver Crumrine [Fri, 9 Feb 2024 19:41:22 +0000 (14:41 -0500)]
bpf: remove check in __cgroup_bpf_run_filter_skb
Originally, this patch removed a redundant check in
BPF_CGROUP_RUN_PROG_INET_EGRESS, as the check was already being done in
the function it called, __cgroup_bpf_run_filter_skb. For v2, it was
recommended that I remove the check from __cgroup_bpf_run_filter_skb,
and add the checks to the other macro that calls that function,
BPF_CGROUP_RUN_PROG_INET_INGRESS.
To sum it up, checking that the socket exists and that it is a full
socket is now part of both macros BPF_CGROUP_RUN_PROG_INET_EGRESS and
BPF_CGROUP_RUN_PROG_INET_INGRESS, and it is no longer part of the
function they call, __cgroup_bpf_run_filter_skb.
v3->v4: Fixed weird merge conflict.
v2->v3: Sent to bpf-next instead of generic patch
v1->v2: Addressed feedback about where check should be removed.
Martin KaFai Lau [Tue, 13 Feb 2024 21:28:12 +0000 (13:28 -0800)]
Merge branch 'Support PTR_MAYBE_NULL for struct_ops arguments.'
Kui-Feng Lee says:
====================
Allow passing null pointers to the operators provided by a struct_ops
object. This is an RFC to collect feedback/opinions.
Until now, the pointer arguments passed to struct_ops operators (which
are function pointers) have always been considered reliable; they cannot be
null. However, in certain scenarios, it should be possible to pass null
pointers to these operators. For instance, sched_ext may pass a null
pointer in the struct task type to an operator that is provided by its
struct_ops objects.
The proposed solution here is to add PTR_MAYBE_NULL annotations to
arguments and create instances of struct bpf_ctx_arg_aux (arg_info) for
these arguments. These arg_infos will be installed at
prog->aux->ctx_arg_info and will be checked by the BPF verifier when
loading the programs. When a struct_ops program accesses arguments in the
ctx, the verifier will call btf_ctx_access() (through
bpf_verifier_ops->is_valid_access) to verify the access. btf_ctx_access()
will check arg_info and use the information of the matched arg_info to
properly set reg_type.
For nullable arguments, this patch sets an arg_info to label them with
PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL. This makes the verifier
check programs and ensure that they properly check the pointer. The
programs should check if the pointer is null before reading/writing the
pointed memory.
The implementer of a struct_ops type should annotate the arguments that can
be null. The implementer should define a stub function (empty) as a
placeholder for each defined operator. The name of a stub function
should follow the pattern "<st_op_type>__<operator name>". For example,
for test_maybe_null of struct bpf_testmod_ops, its stub function should
be named "bpf_testmod_ops__test_maybe_null". You mark an argument
nullable by suffixing the argument name with "__nullable" in the stub
function. Here is the example in bpf_testmod.c.
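Following the naming convention above, a hedged sketch of such a stub
(the exact bpf_testmod.c code may differ):

/* stub: never called, only its BTF is used to annotate the operator */
static int bpf_testmod_ops__test_maybe_null(int dummy,
                                            struct task_struct *task__nullable)
{
    return 0;
}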
This means that argument 1 (the 2nd) of bpf_testmod_ops->test_maybe_null,
a function pointer member, can be null. With this annotation, the
verifier will understand how to check programs using this argument. A BPF
program that implements test_maybe_null should check the pointer to make
sure it is not null before using it. For example,
if (task__nullable)
    save_tgid = task__nullable->tgid;
Without the check, the verifier will reject the program.
Since we already have stub functions for kCFI, we just reuse these stub
functions with the naming convention mentioned earlier. Stub functions
following this naming convention are only required if there are nullable
arguments to annotate; for functions without nullable arguments, stub
functions are not necessary for the purpose of this patch.
---
Major changes from v7:
- Update a comment that is out of date.
Major changes from v6:
- Remove "len" from bpf_struct_ops_desc_release().
- Rename arg_info(s) to info, and rename all_arg_info to arg_info in
prepare_arg_info().
- Rename arg_info to info in struct bpf_struct_ops_arg_info.
Major changes from v5:
- Rename all member_arg_info variables.
- Refactor to bpf_struct_ops_desc_release() to share code
between btf_free_struct_ops_tab() and bpf_struct_ops_desc_init().
- Refactor to btf_param_match_suffix(). (Add a new patch as the part 2.)
- Clean up the commit log and remaining code in the patch of test cases.
- Update a comment in struct_ops_maybe_null.c.
Major changes from v4:
- Remove the support of pointers to types other than struct
types. That would be a separate patchset.
- Remove the patch about extending PTR_TO_BTF_ID.
- Remove the test against various pointer types from selftests.
- Remove the patch "bpf: Remove an unnecessary check" and send that
patch separately.
- Remove member_arg_info_cnt from struct bpf_struct_ops_desc.
- Use btf_id from FUNC_PROTO of a function pointer instead of a stub
function.
Major changes from v3:
- Move the code collecting argument information to prepare_arg_info()
called in the loop in bpf_struct_ops_desc_init().
- Simplify the memory allocation by having separated arg_info for
each member of a struct_ops type.
- Extend PTR_TO_BTF_ID to pointers to scalar types and array types,
not only to struct types.
Major changes from v2:
- Remove dead code.
- Add comments to explain the code itself.
Major changes from v1:
- Annotate arguments by suffixing argument names with "__nullable" at
stub functions.
Kui-Feng Lee [Fri, 9 Feb 2024 02:37:50 +0000 (18:37 -0800)]
selftests/bpf: Test PTR_MAYBE_NULL arguments of struct_ops operators.
Test if the verifier verifies nullable pointer arguments correctly for BPF
struct_ops programs.
"test_maybe_null" in struct bpf_testmod_ops is the operator defined for the
test cases here.
A BPF program should check a pointer for NULL before accessing the
value pointed to by a nullable pointer argument, or the verifier should
reject the program. The test here includes two parts: the programs
checking pointers properly and the programs not checking pointers
beforehand. The test checks that the verifier accepts the programs checking
properly and rejects the programs not checking at all.
Kui-Feng Lee [Fri, 9 Feb 2024 02:37:49 +0000 (18:37 -0800)]
bpf: Create argument information for nullable arguments.
Collect argument information from the type information of stub functions to
mark arguments of BPF struct_ops programs with PTR_MAYBE_NULL if they are
nullable. A nullable argument is annotated by suffixing "__nullable" to
the argument name in the stub function.
For nullable arguments, this patch sets a struct bpf_ctx_arg_aux to label
their reg_type with PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL. This
makes the verifier check programs and ensure that they properly check
the pointer. The programs should check if the pointer is null before
accessing the pointed memory.
The implementer of a struct_ops type should annotate the arguments that can
be null. The implementer should define a stub function (empty) as a
placeholder for each defined operator. The name of a stub function should
follow the pattern "<st_op_type>__<operator name>". For example, for
test_maybe_null of struct bpf_testmod_ops, its stub function should be
named "bpf_testmod_ops__test_maybe_null". You mark an argument nullable by
suffixing the argument name with "__nullable" in the stub function.
Since we already have stub functions for kCFI, we just reuse these stub
functions with the naming convention mentioned earlier. Stub functions
following this naming convention are only required if there are nullable
arguments to annotate; for functions without nullable arguments, stub
functions are not necessary for the purpose of this patch.
This patch will prepare a list of struct bpf_ctx_arg_aux, aka arg_info, for
each member field of a struct_ops type. "arg_info" will be assigned to
"prog->aux->ctx_arg_info" of BPF struct_ops programs in
check_struct_ops_btf_id() so that it can be used by btf_ctx_access() later
to set reg_type properly for the verifier.
Cupertino Miranda [Tue, 13 Feb 2024 17:35:43 +0000 (17:35 +0000)]
libbpf: Add support to GCC in CORE macro definitions
Due to internal differences between LLVM and GCC, the current
implementation of the CO-RE macros does not fit the GCC parser, as it will
optimize those expressions even before they would be accessible to the
BPF backend.
As examples, the following would be optimized out with the original
definitions:
- As enums are converted to their integer representation during
parsing, the IR would not know how to distinguish an integer
constant from an actual enum value.
- Types need to be kept as temporary variables, as the existing type
casts of the 0 address (as expanded for LLVM), are optimized away by
the GCC C parser, never really reaching GCCs IR.
Although the macros appear to add extra complexity, the expanded code
is removed from the compilation flow very early in the compilation
process, not really affecting the quality of the generated assembly.
Jose E. Marchesi [Thu, 8 Feb 2024 20:36:12 +0000 (21:36 +0100)]
bpf: Abstract loop unrolling pragmas in BPF selftests
[Changes from V1:
- Avoid conflict by rebasing with latest master.]
Some BPF tests use loop unrolling compiler pragmas that are clang
specific and not supported by GCC. These pragmas, along with their
GCC equivalences are:
#pragma clang loop unroll_count(N)
#pragma GCC unroll N
#pragma unroll [aka #pragma clang loop unroll(enable)]
There is no GCC equivalent to this pragma. It enables unrolling on
loops that the compiler would not ordinarily unroll even with
-O2|-funroll-loops, but it is not equivalent to full unrolling
either.
This patch adds a new header progs/bpf_compiler.h that defines the
following macros, which correspond to each pair of compiler-specific
pragmas above:
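A hedged sketch of what such wrappers can look like; the actual macro
names in progs/bpf_compiler.h may differ:

#define __DO_PRAGMA_(X) _Pragma(#X)
#define __DO_PRAGMA(X)  __DO_PRAGMA_(X)

#if defined(__clang__)
#define __pragma_loop_unroll_count(N) __DO_PRAGMA(clang loop unroll_count(N))
#define __pragma_loop_unroll          __DO_PRAGMA(clang loop unroll(enable))
#else
#define __pragma_loop_unroll_count(N) __DO_PRAGMA(GCC unroll N)
#define __pragma_loop_unroll          /* no direct GCC equivalent */
#endif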
Yonghong Song [Wed, 7 Feb 2024 07:01:07 +0000 (23:01 -0800)]
selftests/bpf: Ensure fentry prog cannot attach to bpf_spin_{lock,unlock}()
Add two tests to ensure fentry programs cannot attach to
bpf_spin_{lock,unlock}() helpers. The tracing_failure.c files
can be used in the future for other tracing failure cases.
Yonghong Song [Wed, 7 Feb 2024 07:01:02 +0000 (23:01 -0800)]
bpf: Mark bpf_spin_{lock,unlock}() helpers with notrace correctly
Currently, tracing programs are not supposed to be allowed to attach to the
bpf_spin_{lock,unlock}() helpers. This is to prevent deadlock for the following cases:
- there is a prog (prog-A) calling bpf_spin_{lock,unlock}().
- there is a tracing program (prog-B), e.g., fentry, attached
to bpf_spin_lock() and/or bpf_spin_unlock().
- prog-B calls bpf_spin_{lock,unlock}().
For such a case, when prog-A calls bpf_spin_{lock,unlock}(),
a deadlock will happen.
The related source codes are below in kernel/bpf/helpers.c:
notrace BPF_CALL_1(bpf_spin_lock, struct bpf_spin_lock *, lock)
notrace BPF_CALL_1(bpf_spin_unlock, struct bpf_spin_lock *, lock)
notrace is supposed to prevent fentry prog from attaching to
bpf_spin_{lock,unlock}().
But actually this is not the case, and an fentry prog can successfully
attach to bpf_spin_lock(). Siddharth Chintamaneni reported
the issue in [1]. The following is the macro definition for
above BPF_CALL_1:
#define BPF_CALL_x(x, name, ...) \
static __always_inline \
u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)); \
u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)) \
{ \
return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
} \
static __always_inline \
u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))
The notrace attribute is actually applied to the static always_inline function
____bpf_spin_{lock,unlock}(). The actual callback function
bpf_spin_{lock,unlock}() is not marked with notrace, hence
allowing fentry progs to attach to the two helpers, and this
may cause the above-mentioned deadlock. Siddharth Chintamaneni
actually has a reproducer in [2].
To fix the issue, a new macro NOTRACE_BPF_CALL_1 is introduced, which
adds the notrace attribute to the actual callback function instead of
the hidden always_inline function, and this fixes the problem.
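A hedged sketch of the shape of the new macro: it mirrors BPF_CALL_x
above, but marks the externally visible callback itself as notrace (the
in-tree definition may be factored differently):

#define NOTRACE_BPF_CALL_x(x, name, ...) \
    static __always_inline \
    u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
    typedef u64 (*btf_##name)(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__)); \
    notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)); \
    notrace u64 name(__BPF_REG(x, __BPF_DECL_REGS, __BPF_N, __VA_ARGS__)) \
    { \
        return ((btf_##name)____##name)(__BPF_MAP(x,__BPF_CAST,__BPF_N,__VA_ARGS__));\
    } \
    static __always_inline \
    u64 ____##name(__BPF_MAP(x, __BPF_DECL_ARGS, __BPF_V, __VA_ARGS__))

#define NOTRACE_BPF_CALL_1(name, ...) NOTRACE_BPF_CALL_x(1, name, __VA_ARGS__)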
Daniel Xu [Sun, 4 Feb 2024 21:06:34 +0000 (14:06 -0700)]
bpf: Have bpf_rdonly_cast() take a const pointer
Since 20d59ee55172 ("libbpf: add bpf_core_cast() macro"), libbpf is now
exporting a const arg version of bpf_rdonly_cast(). This causes the
following conflicting type error when generating kfunc prototypes from
BTF:
In file included from skeleton/pid_iter.bpf.c:5:
/home/dxu/dev/linux/tools/bpf/bpftool/bootstrap/libbpf/include/bpf/bpf_core_read.h:297:14: error: conflicting types for 'bpf_rdonly_cast'
extern void *bpf_rdonly_cast(const void *obj__ign, __u32 btf_id__k) __ksym __weak;
^
./vmlinux.h:135625:14: note: previous declaration is here
extern void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k) __weak __ksym;
This is because the kernel defines bpf_rdonly_cast() with a non-const arg.
Since const arg is more permissive and thus backwards compatible, we
change the kernel definition as well to avoid conflicting type errors.
Marco Elver [Wed, 7 Feb 2024 12:26:17 +0000 (13:26 +0100)]
bpf: Allow compiler to inline most of bpf_local_storage_lookup()
In various performance profiles of kernels with BPF programs attached,
bpf_local_storage_lookup() appears as a significant portion of CPU
cycles spent. To enable the compiler to generate more optimal code, turn
bpf_local_storage_lookup() into a static inline function, where only the
cache insertion code path is outlined.
Notably, outlining cache insertion helps avoid bloating callers by
duplicating setting up calls to raw_spin_{lock,unlock}_irqsave() (on
architectures which do not inline spin_lock/unlock, such as x86), which
would cause the compiler to produce worse code by deciding to outline
otherwise inlinable functions. The call overhead is neutral, because we
make 2 calls either way: either calling raw_spin_lock_irqsave() and
raw_spin_unlock_irqsave(); or call __bpf_local_storage_insert_cache(),
which calls raw_spin_lock_irqsave(), followed by a tail-call to
raw_spin_unlock_irqsave() where the compiler can perform TCO and (in
optimized uninstrumented builds) turns it into a plain jump. The call to
__bpf_local_storage_insert_cache() can be elided entirely if
cacheit_lockit is a false constant expression.
Based on results from './benchs/run_bench_local_storage.sh' (21 trials,
reboot between each trial; x86 defconfig + BPF, clang 16) this produces
improvements in throughput and latency in the majority of cases, with an
average (geomean) improvement of 8%:
The on_lookup test in {cgrp,task}_ls_recursion.c is removed
because bpf_local_storage_lookup() is no longer traceable
and adding a tracepoint would make the compiler generate worse
code: https://lore.kernel.org/bpf/ZcJmok64Xqv6l4ZS@elver.google.com/
Signed-off-by: Marco Elver <elver@google.com>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240207122626.3508658-1-elver@google.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Martin KaFai Lau [Thu, 8 Feb 2024 19:05:09 +0000 (11:05 -0800)]
Merge branch 'bpf, btf: Add DEBUG_INFO_BTF checks for __register_bpf_struct_ops'
Geliang Tang says:
====================
bpf: Add DEBUG_INFO_BTF checks for __register_bpf_struct_ops
This patch set avoids module loading failure when a module
tries to register a struct_ops and the module has its btf section
stripped. This then works similarly to module kfunc registration in
commit 3de4d22cc9ac ("bpf, btf: Warn but return no error for NULL btf from __register_btf_kfunc_id_set()")
v5:
- drop CONFIG_MODULE_ALLOW_BTF_MISMATCH check as Martin suggested.
v4:
- add a new patch to fix error checks for btf_get_module_btf.
- rename the helper to check_btf_kconfigs.
v3:
- fix this build error:
kernel/bpf/btf.c:7750:11: error: incomplete definition of type 'struct module'
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202402040934.Fph0XeEo-lkp@intel.com/
v2:
- add register_check_missing_btf helper as Jiri suggested.
====================
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Geliang Tang [Thu, 8 Feb 2024 06:24:23 +0000 (14:24 +0800)]
bpf, btf: Check btf for register_bpf_struct_ops
Similar to the handling in the functions __register_btf_kfunc_id_set()
and register_btf_id_dtor_kfuncs(), this patch uses the newly added
helper check_btf_kconfigs() to handle modules with their btf section
stripped.
While at it, the patch also adds the missing IS_ERR() check to fix
commit f6be98d19985 ("bpf, net: switch to dynamic registration").
Geliang Tang [Thu, 8 Feb 2024 06:24:22 +0000 (14:24 +0800)]
bpf, btf: Add check_btf_kconfigs helper
This patch extracts duplicate code on error path when btf_get_module_btf()
returns NULL from the functions __register_btf_kfunc_id_set() and
register_btf_id_dtor_kfuncs() into a new helper named check_btf_kconfigs()
to check CONFIG_DEBUG_INFO_BTF and CONFIG_DEBUG_INFO_BTF_MODULES in it.
Geliang Tang [Thu, 8 Feb 2024 06:24:21 +0000 (14:24 +0800)]
bpf, btf: Fix return value of register_btf_id_dtor_kfuncs
The same as for __register_btf_kfunc_id_set(): to let modules with a
stripped btf section be loaded, this patch also changes the return value of
register_btf_id_dtor_kfuncs() from -ENOENT to 0 when btf is NULL.
Masahiro Yamada [Sun, 4 Feb 2024 07:56:34 +0000 (16:56 +0900)]
bpf: Merge two CONFIG_BPF entries
'config BPF' exists in both init/Kconfig and kernel/bpf/Kconfig.
Commit b24abcff918a ("bpf, kconfig: Add consolidated menu entry for bpf
with core options") added the second one to kernel/bpf/Kconfig instead
of moving the existing one.
Yafang Shao [Tue, 6 Feb 2024 08:14:15 +0000 (16:14 +0800)]
selftests/bpf: Mark cpumask kfunc declarations as __weak
After the series "Annotate kfuncs in .BTF_ids section"[0], kfunc prototypes can
be generated by bpftool. Let's mark the existing cpumask kfunc declarations
__weak so they don't conflict with definitions that will eventually come
from vmlinux.h.
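A hedged sketch of such a declaration, using one of the cpumask kfuncs as
an example (struct bpf_cpumask itself comes from vmlinux.h):

extern struct bpf_cpumask *bpf_cpumask_create(void) __weak __ksym;
extern void bpf_cpumask_release(struct bpf_cpumask *cpumask) __weak __ksym;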
====================
tools/resolve_btfids: fix cross-compilation to non-host endianness
The .BTF_ids section is pre-filled with zeroed BTF ID entries during the
build and afterwards patched by resolve_btfids with correct values.
Since resolve_btfids always writes in host-native endianness, it relies
on libelf to do the translation when the target ELF is cross-compiled to
a different endianness (this was introduced in commit 61e8aeda9398
("bpf: Fix libelf endian handling in resolv_btfids")).
Unfortunately, the translation will corrupt the flags fields of SET8
entries because these were written during vmlinux compilation and are in
the correct endianness already. This will lead to numerous selftest
failures such as:
Since it's not possible to instruct libelf to translate just certain
values, let's manually bswap the flags (both global and entry flags) in
resolve_btfids when needed, so that libelf then translates everything
correctly.
The first patch of the series refactors resolve_btfids by using types
from btf_ids.h instead of accessing the BTF ID data using magic offsets.

Acked-by: Jiri Olsa <jolsa@kernel.org>
---
Changes in v4:
- remove unnecessary vars and pointer casts (suggested by Daniel Xu)
Changes in v3:
- add byte swap of global 'flags' field in btf_id_set8 (suggested by
Jiri Olsa)
- cleaner refactoring of sets_patch (suggested by Jiri Olsa)
- add compile-time assertion that IDs are at the beginning of pairs
struct in btf_id_set8 (suggested by Daniel Borkmann)
Changes in v2:
- use type defs from btf_ids.h (suggested by Andrii Nakryiko)
====================
Viktor Malik [Tue, 6 Feb 2024 12:46:10 +0000 (13:46 +0100)]
tools/resolve_btfids: Fix cross-compilation to non-host endianness
The .BTF_ids section is pre-filled with zeroed BTF ID entries during the
build and afterwards patched by resolve_btfids with correct values.
Since resolve_btfids always writes in host-native endianness, it relies
on libelf to do the translation when the target ELF is cross-compiled to
a different endianness (this was introduced in commit 61e8aeda9398
("bpf: Fix libelf endian handling in resolv_btfids")).
Unfortunately, the translation will corrupt the flags fields of SET8
entries because these were written during vmlinux compilation and are in
the correct endianness already. This will lead to numerous selftest
failures such as:
Since it's not possible to instruct libelf to translate just certain
values, let's manually bswap the flags (both global and entry flags) in
resolve_btfids when needed, so that libelf then translates everything
correctly.
Viktor Malik [Tue, 6 Feb 2024 12:46:09 +0000 (13:46 +0100)]
tools/resolve_btfids: Refactor set sorting with types from btf_ids.h
Instead of using magic offsets to access BTF ID set data, leverage types
from btf_ids.h (btf_id_set and btf_id_set8) which define the actual
layout of the data. Thanks to this change, set sorting should also
continue working if the layout changes.
This requires syncing the definition of 'struct btf_id_set8' from
include/linux/btf_ids.h to tools/include/linux/btf_ids.h. We don't sync
the rest of the file at the moment, because that would require also syncing
multiple dependent headers and we don't need any other defs from
btf_ids.h.
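For reference, a hedged sketch of the layout that btf_id_set8 describes
(the authoritative definition lives in include/linux/btf_ids.h):

struct btf_id_set8 {
    u32 cnt;
    u32 flags;
    struct {
        u32 id;
        u32 flags;
    } pairs[];
};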
Toke Høiland-Jørgensen [Tue, 6 Feb 2024 12:59:22 +0000 (13:59 +0100)]
libbpf: Use OPTS_SET() macro in bpf_xdp_query()
When the feature_flags and xdp_zc_max_segs fields were added to the libbpf
bpf_xdp_query_opts, the code writing them did not use the OPTS_SET() macro.
This causes libbpf to write to those fields unconditionally, which means
that programs compiled against an older version of libbpf (with a smaller
size of the bpf_xdp_query_opts struct) will have their stack corrupted by
libbpf writing out of bounds.
The patch adding the feature_flags field has an early bail out if the
feature_flags field is not part of the opts struct (via the OPTS_HAS()
macro), but the patch adding xdp_zc_max_segs does not. For consistency, this
fix just changes the assignments to both fields to use the OPTS_SET()
macro.
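A hedged sketch of the resulting pattern inside bpf_xdp_query(); OPTS_SET()
only writes a field if the caller's opts struct is large enough to contain it:

/* only write opts fields that exist in the caller's struct */
OPTS_SET(opts, feature_flags, feature_flags);
OPTS_SET(opts, xdp_zc_max_segs, xdp_zc_max_segs);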
Jose E. Marchesi [Tue, 6 Feb 2024 10:23:30 +0000 (11:23 +0100)]
bpf: Use -Wno-address-of-packed-member in some selftests
[Differences from V2:
- Remove conditionals in the source files pragmas, as the
pragma is supported by both GCC and clang.]
Both GCC and clang implement the -Waddress-of-packed-member
warning (disabled with -Wno-address-of-packed-member), which is enabled by
-Wall and warns about taking the address of a packed struct field when it
can lead to an "unaligned" address.
This triggers the following errors (-Werror) when building three
particular BPF selftests with GCC:
Andrii Nakryiko [Tue, 6 Feb 2024 00:40:08 +0000 (16:40 -0800)]
selftests/bpf: mark dynptr kfuncs __weak to make them optional on old kernels
Mark dynptr kfuncs as __weak to allow
verifier_global_subprogs/arg_ctx_{perf,kprobe,raw_tp} subtests to be
loadable on old kernels. Because bpf_dynptr_from_xdp() kfunc is used
from arg_tag_dynptr BPF program in progs/verifier_global_subprogs.c
*and* is not marked as __weak, loading any subtest from
verifier_global_subprogs fails on old kernels that don't have
bpf_dynptr_from_xdp() kfunc defined. Even if arg_tag_dynptr program
itself is not loaded, libbpf bails out on the non-weak reference to
bpf_dynptr_from_xdp() (which can't be resolved), which is shared across all
programs in progs/verifier_global_subprogs.c.
So mark all dynptr-related kfuncs as __weak to unblock libbpf CI ([0]).
In the upcoming "kfunc in vmlinux.h" work we should make sure that
kfuncs are always declared __weak as well.
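In BPF C, that amounts to declaring the kfunc along these lines (the
parameter list shown here is illustrative):

/* A __weak kfunc declaration lets libbpf leave the reference unresolved
 * on kernels that don't provide it, instead of failing the whole object
 * load.
 */
extern int bpf_dynptr_from_xdp(struct xdp_md *xdp, __u64 flags,
			       struct bpf_dynptr *ptr) __weak __ksym;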
Andrii Nakryiko [Tue, 6 Feb 2024 00:22:43 +0000 (16:22 -0800)]
libbpf: fix return value for PERF_EVENT __arg_ctx type fix up check
If a PERF_EVENT program has an __arg_ctx argument with a matching
architecture-specific pt_regs/user_pt_regs/user_regs_struct pointer
type, libbpf should still perform the type rewrite for old kernels, but
not emit the warning. Fix a copy/paste from kernel code where 0 is meant
to signify the "no error" condition. For libbpf we need to return "true"
to proceed with the type rewrite (which for a PERF_EVENT program will be
the canonical `struct bpf_perf_event_data *` type).
====================
xsk: support redirect to any socket bound to the same umem
This patch set adds support for directing a packet to any socket bound
to the same umem. This makes it possible to use the XDP program to
select what socket the packet should be received on. The user can
populate the XSKMAP with various sockets and as long as they share the
same umem, the XDP program can pick any one of them.
The implementation is straightforward. Instead of testing that the
incoming packet is targeting the same device and queue id that the
socket is bound to, just check that the umem the packet was received on
is the same umem the target socket is bound to. This guarantees that
the redirect is legal, as the packet is already in the correct umem.
Patch #1 implements the feature and patch #2 adds documentation.
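A minimal sketch of the user-visible model (map layout and selection
logic are illustrative; any socket in the XSKMAP that shares the umem
is now a valid redirect target):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Sockets sharing the same umem, populated from user space. */
struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} xsks SEC(".maps");

SEC("xdp")
int pick_socket(struct xdp_md *ctx)
{
	/* Illustrative pick: spread packets by rx queue index. */
	__u32 idx = ctx->rx_queue_index % 64;

	return bpf_redirect_map(&xsks, idx, XDP_PASS);
}

char _license[] SEC("license") = "GPL";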
Magnus Karlsson [Mon, 5 Feb 2024 12:35:50 +0000 (13:35 +0100)]
xsk: support redirect to any socket bound to the same umem
Add support for directing a packet to any socket bound to the same
umem. This makes it possible to use the XDP program to select what
socket the packet should be received on. The user can populate the
XSKMAP with various sockets and as long as they share the same umem,
the XDP program can pick any one of them.
====================
Transfer RCU lock state across subprog calls
David suggested during the discussion in [0] that we should handle RCU
locks in a similar fashion to spin locks where the verifier understands
when a lock held in a caller is released in a callee, a lock taken in a
callee is released in the caller, or the callee is called within a lock
critical section. This set extends the same semantics to RCU read locks
and adds a few selftests to verify correct behavior. This issue has also
come up for sched-ext programs.
This now allows static subprog calls to be made without errors within
RCU read sections, subprogs to release RCU locks taken by their callers
and return to them, and subprogs to take an RCU lock that is later
released in the caller.
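A minimal sketch of one now-accepted pattern, a static subprog
releasing the RCU read lock taken by its caller (kfunc declarations and
the program section are illustrative):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern void bpf_rcu_read_lock(void) __ksym;
extern void bpf_rcu_read_unlock(void) __ksym;

/* Static subprog that releases the caller's RCU read lock. */
static __noinline int finish_under_rcu(void)
{
	bpf_rcu_read_unlock();
	return 0;
}

SEC("tc")
int caller(struct __sk_buff *skb)
{
	bpf_rcu_read_lock();
	/* ... access RCU-protected pointers here ... */
	return finish_under_rcu();
}

char _license[] SEC("license") = "GPL";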
selftests/bpf: Add tests for RCU lock transfer between subprogs
Add selftests covering the following cases:
- A static or global subprog called from within an RCU read section works
- A static subprog taking an RCU read lock which is released in caller works
- A static subprog releasing the caller's RCU read lock works
Global subprogs that leave the lock in an imbalanced state will not
work, as they are verified separately, so ensure those cases fail as
well.
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240205055646.1112186-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Transfer RCU lock state between subprog calls
Allow transferring an imbalanced RCU lock state between subprog calls
during verification. This allows patterns where a subprog call returns
with an RCU lock held, or a subprog call releases an RCU lock held by
the caller. Currently, the verifier would end up complaining if the RCU
lock is not released when processing an exit from a subprog, which is
non-ideal if its execution is supposed to be enclosed in an RCU read
section of the caller.
Instead, only check whether we are processing an exit for frame#0
and do not complain on an active RCU lock otherwise. We only need to
update the check when processing BPF_EXIT insn, as copy_verifier_state
is already set up to do the right thing.
Suggested-by: David Vernet <void@manifault.com>
Tested-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240205055646.1112186-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
====================
Enable static subprog calls in spin lock critical sections
This set allows a BPF program to make a call to a static subprog within
a bpf_spin_lock critical section. This problem has been hit in sched-ext
and ghOSt [0] as well, and is mostly an annoyance which is worked around
by inlining the static subprog into the critical section.
In the case of sched-ext, there are a lot of other helper/kfunc calls
that need to be allowlisted for the support to be complete, but a
separate follow-up will deal with that.
Unlike static subprogs, global subprogs cannot be allowed yet as the
verifier will not explore their body when encountering a call
instruction for them. Therefore, we would need an alternative approach
(some sort of function summarization to ensure a lock is never taken
from a global subprog and all its callees).
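A minimal sketch of what is now accepted (map layout, program section,
and names are illustrative):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct elem {
	struct bpf_spin_lock lock;
	int counter;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} data SEC(".maps");

/* Static subprog: the verifier explores its body, so calling it inside
 * the critical section is equivalent to having the code inlined there.
 */
static __noinline void bump(struct elem *e)
{
	e->counter++;
}

SEC("tc")
int prog(struct __sk_buff *skb)
{
	int key = 0;
	struct elem *e = bpf_map_lookup_elem(&data, &key);

	if (!e)
		return 0;
	bpf_spin_lock(&e->lock);
	bump(e);	/* static subprog call while holding the lock */
	bpf_spin_unlock(&e->lock);
	return 0;
}

char _license[] SEC("license") = "GPL";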
selftests/bpf: Add test for static subprog call in lock cs
Add selftests for static subprog calls within bpf_spin_lock critical
section, and ensure we still reject global subprog calls. Also test the
case where a subprog call will unlock the caller's held lock, or the
caller will unlock a lock taken by a subprog call, ensuring correct
transfer of lock state across frames on exit.
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240204222349.938118-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Allow calling static subprogs while holding a bpf_spin_lock
Currently, calling any helpers, kfuncs, or subprogs except the graph
data structure (lists, rbtrees) API kfuncs while holding a bpf_spin_lock
is not allowed. One of the original motivations of this decision was to
force the BPF programmer's hand into keeping the bpf_spin_lock critical
section small, and to ensure the execution time of the program does not
increase due to lock waiting times. In addition to this, some of the
helpers and kfuncs may be unsafe to call while holding a bpf_spin_lock.
However, when it comes to subprog calls, at least for static subprogs,
the verifier is able to explore their instructions during verification.
Therefore, it is similar in effect to having the same code inlined into
the critical section. Hence, not allowing static subprog calls in the
bpf_spin_lock critical section is mostly an annoyance that needs to be
worked around, without providing any tangible benefit.
Unlike static subprog calls, global subprog calls are not safe to permit
within the critical section, as the verifier does not explore them
during verification; therefore, whether the same lock will be taken
again, or unlocked, cannot be ascertained.
Therefore, allow calling static subprogs within a bpf_spin_lock critical
section, and only reject it in case the subprog linkage is global.
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: David Vernet <void@manifault.com>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240204222349.938118-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Dave Thaler [Fri, 2 Feb 2024 22:11:10 +0000 (14:11 -0800)]
bpf, docs: Expand set of initial conformance groups
This patch attempts to update the ISA specification according
to the latest mailing list discussion about conformance groups,
in a way that is intended to be consistent with IANA registry
processes and IETF 118 WG meeting discussion.
It does the following:
* Split basic into base32 and base64 for 32-bit vs 64-bit base
instructions
* Split division/multiplication/modulo instructions out of base groups
* Split atomic instructions out of base groups
There may be additional changes as discussion continues,
but there seems to be consensus on the principles above.
v1->v2: fixed typo pointed out by David Vernet
v2->v3: Moved multiplication to same groups as division/modulo
One possibility is that both lwt_redirect and lwt_reroute create a
netns with the same name, "ns_lwt", which may cause a conflict. I tried
the following example:
$ sudo ip netns add abc
$ echo $?
0
$ sudo ip netns add abc
Cannot create namespace file "/var/run/netns/abc": File exists
$ echo $?
1
$
The return code for the above netns_create() is 256, which likely is
the raw wait status for an exit code of 1. An internet search suggests
that the return value for 'ip netns add ns_lwt' is 1, which matches the
above 'sudo ip netns add abc' example.
This patch uses different netns names for the two tests to avoid the
'ip netns add <name>' failure.
I ran './test_progs -j' 10 times and all succeeded with
lwt_redirect/lwt_reroute tests.
Kui-Feng Lee [Sun, 4 Feb 2024 06:12:04 +0000 (22:12 -0800)]
selftests/bpf: Suppress warning message of an unused variable.
"r" is used to receive the return value of test_2 in bpf_testmod.c, but it
is not actually used. So, we remove "r" and change the return type to
"void".
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202401300557.z5vzn8FM-lkp@intel.com/
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240204061204.1864529-1-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Yonghong Song [Sun, 4 Feb 2024 19:44:52 +0000 (11:44 -0800)]
selftests/bpf: Fix flaky test ptr_untrusted
Recently I somehow started frequently hitting the following test
failure with either ./test_progs or ./test_progs-cpuv4:
serial_test_ptr_untrusted:PASS:skel_open 0 nsec
serial_test_ptr_untrusted:PASS:lsm_attach 0 nsec
serial_test_ptr_untrusted:PASS:raw_tp_attach 0 nsec
serial_test_ptr_untrusted:FAIL:cmp_tp_name unexpected cmp_tp_name: actual -115 != expected 0
#182 ptr_untrusted:FAIL
Further investigation found that the failure is due to
bpf_probe_read_user_str(), where reading the user-level string
attr->raw_tracepoint.name is not successful, most likely because the
string itself is still on disk and not yet populated into memory.
One solution is to do a printf() call on the string before doing the
bpf syscall, which would force raw_tracepoint.name into memory. But a
more robust solution is to use bpf_copy_from_user(), which can be used
in sleepable programs and can tolerate page faults; the fix here uses
the latter approach.
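Roughly, the change on the BPF program side amounts to the following
(the buffer size and the exact field access are illustrative):

char tp_name[128];

/* Before: bpf_probe_read_user_str() can fail when the user string's
 * page is not resident:
 *   bpf_probe_read_user_str(tp_name, sizeof(tp_name),
 *                           (void *)attr->raw_tracepoint.name);
 * After: in a sleepable program, bpf_copy_from_user() can tolerate the
 * page fault and pull the string in:
 */
bpf_copy_from_user(tp_name, sizeof(tp_name),
		   (const void *)attr->raw_tracepoint.name);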
====================
Two small fixes for global subprog tagging
Fix a bug with passing a trusted PTR_TO_BTF_ID_OR_NULL register into a
global subprog that expects `__arg_trusted __arg_nullable` arguments,
which was discovered when adopting this in a production BPF application.
Also fix annoying warnings that are irrelevant for static subprogs, which are
just an artifact of using btf_prepare_func_args() for both static and global
subprogs.
====================
Andrii Nakryiko [Fri, 2 Feb 2024 19:05:29 +0000 (11:05 -0800)]
bpf: don't emit warnings intended for global subprogs for static subprogs
When btf_prepare_func_args() was generalized to handle both static and
global subprogs, a few warnings/errors that are meant only for global
subprog cases started to be emitted for static subprogs, where they are
sort of expected and irrelevant.
Stop polluting verifier logs with irrelevant scary-looking messages.
Andrii Nakryiko [Fri, 2 Feb 2024 19:05:28 +0000 (11:05 -0800)]
selftests/bpf: add more cases for __arg_trusted __arg_nullable args
Add an extra layer of global functions to ensure that passing around
(trusted) PTR_TO_BTF_ID_OR_NULL registers works as expected. We also
extend the trusted_task_arg_nullable subtest to check three possible
valid arguments: known NULL, known non-NULL, and maybe-NULL cases.
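For context, such a global subprog looks roughly like this (the __arg_*
macros are assumed to expand to the corresponding "arg:..." BTF decl
tags, as in the selftests' bpf_misc.h; the task->pid access is
illustrative):

#define __arg_trusted  __attribute__((btf_decl_tag("arg:trusted")))
#define __arg_nullable __attribute__((btf_decl_tag("arg:nullable")))

/* Global subprog accepting a trusted, possibly-NULL task pointer. */
__noinline int handle_task(struct task_struct *task __arg_trusted __arg_nullable)
{
	if (!task)
		return 0;
	return task->pid;
}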
Andrii Nakryiko [Fri, 2 Feb 2024 19:05:27 +0000 (11:05 -0800)]
bpf: handle trusted PTR_TO_BTF_ID_OR_NULL in argument check logic
Add PTR_TRUSTED | PTR_MAYBE_NULL modifiers for PTR_TO_BTF_ID to
check_reg_type() to support passing trusted nullable PTR_TO_BTF_ID
registers into global functions accepting `__arg_trusted __arg_nullable`
arguments. This hasn't been caught earlier because tests were either
passing known non-NULL PTR_TO_BTF_ID registers or known NULL (SCALAR)
registers.
When utilizing this functionality in a complicated real-world BPF
application that passes around PTR_TO_BTF_ID_OR_NULL, it became apparent
that the verifier rejects a valid case because check_reg_type() doesn't
handle this case explicitly. The existing check_reg_type() logic already
anticipates this combination, so we just need to explicitly list this
combo in the switch statement.
Shung-Hsi Yu [Fri, 2 Feb 2024 09:55:58 +0000 (17:55 +0800)]
selftests/bpf: trace_helpers.c: do not use poisoned type
After commit c698eaebdf47 ("selftests/bpf: trace_helpers.c: Optimize
kallsyms cache") trace_helpers.c now includes libbpf_internal.h, and
thus can no longer use the u32 type (among others), since such types are poisoned
in libbpf_internal.h. Replace u32 with __u32 to fix the following error
when building trace_helpers.c on powerpc:
error: attempt to use poisoned "u32"
Fixes: c698eaebdf47 ("selftests/bpf: trace_helpers.c: Optimize kallsyms cache")
Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20240202095559.12900-1-shung-hsi.yu@suse.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
====================
Improvements for tracking scalars in the BPF verifier
From: Maxim Mikityanskiy <maxim@isovalent.com>
The goal of this series is to extend the verifier's capabilities of
tracking scalars when they are spilled to the stack, especially when the
spill or fill is narrowing. It also contains a fix by Eduard for
infinite loop detection and a state pruning optimization by Eduard that
compensates for a verification complexity regression introduced by
tracking unbounded scalars. These improvements reduce the surface of
false rejections that I saw while working on the Cilium codebase.
Patches 1-9 of the original series were previously applied in v2.
Patches 1-2 (Maxim): Support the case when boundary checks are first
performed after the register was spilled to the stack.
Patches 3-4 (Maxim): Support narrowing fills.
Patches 5-6 (Eduard): Optimization for state pruning in stacksafe() to
mitigate the verification complexity regression.
Eduard: Patch #5 (#14 in v2) changed significantly:
- Logical changes:
- Handling of a STACK_{MISC,ZERO} mix turned out to be incorrect:
a mix of MISC and ZERO in the old state is not equivalent to e.g.
just MISC in the current state, because the verifier could have deduced
zero scalars from ZERO slots in the old state for some loads.
- There is no reason to limit the change only to cases when the
old or current stack is a spill of an unbounded scalar;
it is valid to compare any 64-bit scalar spill with a fake
register impersonating MISC.
- The STACK_ZERO vs spilled zero case was dropped;
after recent changes for zero handling by Andrii and Yonghong
it is hard (impossible?) to conjure all ZERO slots for an spi.
=> the case does not make any difference in veristat results.
- Use global static variable for unbound_reg (Andrii)
- Code shuffling to remove duplication in stacksafe() (Andrii)
====================
Eduard Zingerman [Sat, 27 Jan 2024 17:52:37 +0000 (19:52 +0200)]
selftests/bpf: States pruning checks for scalar vs STACK_MISC
Check that stacksafe() compares spilled scalars with STACK_MISC.
The following combinations are explored:
- old spill of imprecise scalar is equivalent to cur STACK_{MISC,INVALID}
(plus error in unpriv mode);
- old spill of precise scalar is not equivalent to cur STACK_MISC;
- old STACK_MISC is equivalent to cur scalar;
- old STACK_MISC is not equivalent to cur non-scalar.
Eduard Zingerman [Sat, 27 Jan 2024 17:52:36 +0000 (19:52 +0200)]
bpf: Handle scalar spill vs all MISC in stacksafe()
When check_stack_read_fixed_off() reads a value from an spi
whose stack slots are all set to STACK_{MISC,INVALID},
the destination register is set to an unbound SCALAR_VALUE.
Exploit this fact by allowing stacksafe() to use a fake
unbound scalar register to compare an 'mmmm mmmm' stack value
in the old state vs a spilled 64-bit scalar in the current state,
and vice versa.
Veristat results after this patch show some gains.
Maxim Mikityanskiy [Sat, 27 Jan 2024 17:52:34 +0000 (19:52 +0200)]
bpf: Preserve boundaries and track scalars on narrowing fill
When the width of a fill is smaller than the width of the preceding
spill, the information about scalar boundaries can still be preserved,
as long as it's coerced to the right width (done by coerce_reg_to_size).
Even further, if the actual value fits into the fill width, the ID can
be preserved as well for further tracking of equal scalars.
Implement the above improvements, which makes narrowing fills behave the
same as narrowing spills and MOVs between registers.
Two tests are adjusted to accommodate endianness differences and to
take into account that it's now allowed to do a narrowing fill from the
least significant bits.
reg_bounds_sync is added to coerce_reg_to_size to correctly adjust
umin/umax boundaries after the var_off truncation. For example, a 64-bit
value 0xXXXXXXXX00000000, when read as 32-bit, gets umin = 0, umax =
0xFFFFFFFF, var_off = (0x0; 0xffffffff00000000), which needs to be
synced down to umax = 0; otherwise reg_bounds_sanity_check doesn't pass.
Maxim Mikityanskiy [Sat, 27 Jan 2024 17:52:32 +0000 (19:52 +0200)]
bpf: Track spilled unbounded scalars
Support the pattern where an unbounded scalar is spilled to the stack,
then boundary checks are performed on the src register, after which the
stack frame slot is refilled into a register.
Before this commit, the verifier didn't treat the src register and the
stack slot as related if the src register was an unbounded scalar. The
register state wasn't copied, the id wasn't preserved, and the stack
slot was marked as STACK_MISC. Subsequent boundary checks on the src
register wouldn't result in updating the boundaries of the spilled
variable on the stack.
After this commit, the verifier will preserve the bond between src and
dst even if src is unbounded, which permits doing boundary checks on src
and refilling dst later, still remembering its boundaries. Such a pattern
is sometimes generated by clang when compiling complex long functions.
One test is adjusted to reflect that now unbounded scalars are tracked.
Andrii Nakryiko [Thu, 1 Feb 2024 17:20:27 +0000 (09:20 -0800)]
selftests/bpf: Fix bench runner SIGSEGV
Some benchmarks don't have either "consumer" or "producer" sides. For
example, trig-tp and other BPF triggering benchmarks don't have
consumers, as they only do "producing" by calling into a syscall or
predefined uprobes. As such, it's valid for some benchmarks to have zero
consumers or producers, so allow specifying `-c0` explicitly.
This exposes another problem: if a benchmark doesn't support the
consumer or producer side, its consumer_thread/producer_thread callback
will be NULL, but the benchmark runner will attempt to use those NULL
callbacks to create threads anyway. So instead of crashing with SIGSEGV
in case of a misconfigured benchmark, detect the condition and report an
error.
Andrii Nakryiko [Thu, 1 Feb 2024 17:20:26 +0000 (09:20 -0800)]
libbpf: Add missed btf_ext__raw_data() API
Another API that was declared in libbpf.map but whose actual
implementation was missing. btf_ext__get_raw_data() was intended as a
discouraged alias to the consistently named btf_ext__raw_data(), so make
this an actuality.
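Hypothetical usage of the now-implemented API (the dumping logic is
just an example):

#include <stdio.h>
#include <bpf/btf.h>

/* Write out the raw .BTF.ext bytes behind a struct btf_ext. */
static void dump_btf_ext(const struct btf_ext *btf_ext)
{
	__u32 size;
	const void *data = btf_ext__raw_data(btf_ext, &size);

	if (data)
		fwrite(data, 1, size, stdout);
}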
Fixes: 20eccf29e297 ("libbpf: hide and discourage inconsistently named getters")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20240201172027.604869-5-andrii@kernel.org
Andrii Nakryiko [Thu, 1 Feb 2024 17:20:25 +0000 (09:20 -0800)]
libbpf: Add btf__new_split() API that was declared but not implemented
It seems the original commit adding split BTF support intended to add
the btf__new_split() API, and even declared it in libbpf.map, but never
added the (trivial) implementation. Fix this.
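A sketch of how the API is meant to be used (btf__load_vmlinux_btf() is
an existing libbpf helper; the source of the raw split BTF bytes is
illustrative):

#include <bpf/btf.h>

/* Build split BTF (e.g. a module's .BTF section) on top of vmlinux BTF. */
static struct btf *load_split_btf(const void *raw_data, __u32 raw_size)
{
	struct btf *base = btf__load_vmlinux_btf();

	if (!base)
		return NULL;
	return btf__new_split(raw_data, raw_size, base);
}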
Andrii Nakryiko [Thu, 1 Feb 2024 17:20:24 +0000 (09:20 -0800)]
libbpf: Add missing LIBBPF_API annotation to libbpf_set_memlock_rlim API
The LIBBPF_API annotation seems to be missing on the
libbpf_set_memlock_rlim API, so add it to make this API callable from
libbpf's shared library version.
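The annotated declaration in libbpf's public header then reads roughly:

LIBBPF_API int libbpf_set_memlock_rlim(size_t memlock_bytes);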
Fixes: e542f2c4cd16 ("libbpf: Auto-bump RLIMIT_MEMLOCK if kernel needs it for BPF")
Fixes: ab9a5a05dc48 ("libbpf: fix up few libbpf.map problems")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/bpf/20240201172027.604869-3-andrii@kernel.org
Andrii Nakryiko [Thu, 1 Feb 2024 17:20:23 +0000 (09:20 -0800)]
libbpf: Call memfd_create() syscall directly
Some versions of Android do not implement the memfd_create() wrapper in
their libc, leading to build failures ([0]). On the other hand,
memfd_create() is available as a syscall on quite old kernels (3.17+,
while the bpf() syscall itself is available since 3.18+), so it is ok to
assume syscall availability and call into it with the syscall() helper
to avoid Android-specific workarounds.
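A sketch of the direct-syscall approach (the helper name is
illustrative):

#include <unistd.h>
#include <sys/syscall.h>

/* Invoke memfd_create(2) directly, bypassing the libc wrapper that some
 * Android libc versions lack.
 */
static int sys_memfd_create(const char *name, unsigned int flags)
{
	return syscall(__NR_memfd_create, name, flags);
}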
Matt Bobrowski [Thu, 1 Feb 2024 10:43:40 +0000 (10:43 +0000)]
bpf: Minor clean-up to sleepable_lsm_hooks BTF set
There's already one main CONFIG_SECURITY_NETWORK ifdef block within
the sleepable_lsm_hooks BTF set. Consolidate this duplicated ifdef
block as there's no need for it and all things guarded by it should
remain in one place in this specific context.
Pu Lehui [Tue, 30 Jan 2024 12:46:58 +0000 (12:46 +0000)]
riscv, bpf: Enable inline bpf_kptr_xchg() for RV64
The RV64 JIT supports 64-bit BPF_XCHG atomic instructions. At the same
time, the underlying implementation of xchg() and atomic64_xchg() on
RV64 is raw_xchg(), which supports 64-bit operands. Therefore, an
inlined bpf_kptr_xchg() will have equivalent semantics. Let's inline it
for better performance.
Dave Thaler [Wed, 31 Jan 2024 03:37:59 +0000 (19:37 -0800)]
bpf, docs: Clarify which legacy packet instructions existed
As discussed on the BPF IETF mailing list (see link), this patch updates
the "Legacy BPF Packet access instructions" section to clarify which
instructions are deprecated (vs which were never defined and so are not
deprecated).