| author | Jason Xing <kernelxing@tencent.com> | 2025-11-18 15:06:43 +0800 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-11-19 20:29:24 -0800 |
| commit | 3505730d9042a8d26e89288ecab04e5d32794e4a | |
| tree | 086d2ff78f854246e39baa7557daffd13777ba47 | |
| parent | 7c9dd386020db9c46b7faf5c6fa928df2831b6fb | |
net: increase default NAPI_SKB_CACHE_SIZE to 128
Commit b61785852ed0 ("net: increase skb_defer_max default to 128") raised
sysctl_skb_defer_max to avoid frequent calls to kick_defer_list_purge().
The same reasoning applies to NAPI_SKB_CACHE_SIZE, whose value was proposed
back in 2016. It is a trade-off between holding more pre-allocated memory
in the per-CPU skb cache and avoiding comparatively heavy function calls
in softirq context.
With this patch applied, each CPU caches more skbs, accelerating the
sending path, which needs to acquire new skbs.
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://patch.msgid.link/20251118070646.61344-2-kerneljasonxing@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/core')
| -rw-r--r-- | net/core/skbuff.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9feea830a4db..e4abf0e56776 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -223,7 +223,7 @@ static void skb_under_panic(struct sk_buff *skb, unsigned int sz, void *addr)
 	skb_panic(skb, sz, addr, __func__);
 }
 
-#define NAPI_SKB_CACHE_SIZE	64
+#define NAPI_SKB_CACHE_SIZE	128
 #define NAPI_SKB_CACHE_BULK	16
 #define NAPI_SKB_CACHE_HALF	(NAPI_SKB_CACHE_SIZE / 2)
```
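To illustrate how the three constants in the hunk above interact, here is a minimal userspace sketch of the refill/flush policy behind a fixed-size per-CPU object cache. This is a simplified model, not the kernel implementation: the names `obj_cache`, `cache_get`, and `cache_put` are hypothetical, `malloc`/`free` stand in for the bulk slab allocator, and locking/per-CPU placement are omitted.

```c
#include <assert.h>
#include <stdlib.h>

/* Constants mirrored from net/core/skbuff.c after this patch. */
#define NAPI_SKB_CACHE_SIZE 128
#define NAPI_SKB_CACHE_BULK 16
#define NAPI_SKB_CACHE_HALF (NAPI_SKB_CACHE_SIZE / 2)

/*
 * Simplified stand-in for a per-CPU skb cache: a fixed-size stack
 * of object pointers plus a fill count.
 */
struct obj_cache {
	void *slots[NAPI_SKB_CACHE_SIZE];
	unsigned int count;
};

/*
 * Fast-path alloc: pop from the cache; on empty, bulk-refill
 * NAPI_SKB_CACHE_BULK objects from the slow allocator so that
 * subsequent allocations stay in the fast path.
 */
static void *cache_get(struct obj_cache *c)
{
	if (c->count == 0) {
		for (unsigned int i = 0; i < NAPI_SKB_CACHE_BULK; i++)
			c->slots[c->count++] = malloc(16);
	}
	return c->slots[--c->count];
}

/*
 * Fast-path free: push back into the cache; once it reaches
 * NAPI_SKB_CACHE_SIZE, flush the upper half back to the allocator
 * so the cache never grows past its fixed capacity.
 */
static void cache_put(struct obj_cache *c, void *obj)
{
	c->slots[c->count++] = obj;
	if (c->count == NAPI_SKB_CACHE_SIZE) {
		for (unsigned int i = NAPI_SKB_CACHE_HALF;
		     i < NAPI_SKB_CACHE_SIZE; i++)
			free(c->slots[i]);
		c->count = NAPI_SKB_CACHE_HALF;
	}
}
```

Under this policy, doubling NAPI_SKB_CACHE_SIZE from 64 to 128 widens the headroom between refill and flush events, so allocation bursts fall back to the slow allocator less often, at the cost of holding up to 128 cached objects per CPU.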
