Filtered by vendor Linux
Total: 12249 CVEs
| CVE | Vendors | Products | Updated | CVSS v2 | CVSS v3 |
|---|---|---|---|---|---|
| CVE-2023-52828 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: bpf: Detect IP == ksym.end as part of BPF program Now that bpf_throw kfunc is the first such call instruction that has noreturn semantics within the verifier, this also kicks in dead code elimination in unprecedented ways. For one, any instruction following a bpf_throw call will never be marked as seen. Moreover, if a callchain ends up throwing, any instructions after the call instruction to the eventually throwing subprog in callers will also never be marked as seen. The tempting way to fix this would be to emit extra 'int3' instructions which bump the jited_len of a program, and ensure that during runtime when a program throws, we can discover its boundaries even if the call instruction to bpf_throw (or to subprogs that always throw) is emitted as the final instruction in the program. An example of such a program would be this: do_something(): ... r0 = 0 exit foo(): r1 = 0 call bpf_throw r0 = 0 exit bar(cond): if r1 != 0 goto pc+2 call do_something exit call foo r0 = 0 // Never seen by verifier exit // main(ctx): r1 = ... call bar r0 = 0 exit Here, if we do end up throwing, the stacktrace would be the following: bpf_throw foo bar main In bar, the final instruction emitted will be the call to foo, as such, the return address will be the subsequent instruction (which the JIT emits as int3 on x86). This will end up lying outside the jited_len of the program, thus, when unwinding, we will fail to discover the return address as belonging to any program and end up in a panic due to the unreliable stack unwinding of BPF programs that we never expect. To remedy this case, make bpf_prog_ksym_find treat IP == ksym.end as part of the BPF program, so that is_bpf_text_address returns true when such a case occurs, and we are able to unwind reliably when the final instruction ends up being a call instruction. | |||||
| CVE-2023-52834 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: atl1c: Work around the DMA RX overflow issue This is based on alx driver commit 881d0327db37 ("net: alx: Work around the DMA RX overflow issue"). The alx and atl1c drivers had an RX overflow error, which is why a custom allocator was created to avoid certain addresses. A simpler workaround was later created for the alx driver, but not for atl1c, due to the lack of a tester. Instead of using a custom allocator, check the allocated skb address and use skb_reserve() to move away from the problematic 0x...fc0 address. Tested on AR8131 on an Acer 4540. (An illustrative sketch of this pattern follows the table.) | |||||
| CVE-2023-52839 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 3.3 LOW |
| In the Linux kernel, the following vulnerability has been resolved: drivers: perf: Do not broadcast to other cpus when starting a counter This command: $ perf record -e cycles:k -e instructions:k -c 10000 -m 64M dd if=/dev/zero of=/dev/null count=1000 gives rise to this kernel warning: [ 444.364395] WARNING: CPU: 0 PID: 104 at kernel/smp.c:775 smp_call_function_many_cond+0x42c/0x436 [ 444.364515] Modules linked in: [ 444.364657] CPU: 0 PID: 104 Comm: perf-exec Not tainted 6.6.0-rc6-00051-g391df82e8ec3-dirty #73 [ 444.364771] Hardware name: riscv-virtio,qemu (DT) [ 444.364868] epc : smp_call_function_many_cond+0x42c/0x436 [ 444.364917] ra : on_each_cpu_cond_mask+0x20/0x32 [ 444.364948] epc : ffffffff8009f9e0 ra : ffffffff8009fa5a sp : ff20000000003800 [ 444.364966] gp : ffffffff81500aa0 tp : ff60000002b83000 t0 : ff200000000038c0 [ 444.364982] t1 : ffffffff815021f0 t2 : 000000000000001f s0 : ff200000000038b0 [ 444.364998] s1 : ff60000002c54d98 a0 : ff60000002a73940 a1 : 0000000000000000 [ 444.365013] a2 : 0000000000000000 a3 : 0000000000000003 a4 : 0000000000000100 [ 444.365029] a5 : 0000000000010100 a6 : 0000000000f00000 a7 : 0000000000000000 [ 444.365044] s2 : 0000000000000000 s3 : ffffffffffffffff s4 : ff60000002c54d98 [ 444.365060] s5 : ffffffff81539610 s6 : ffffffff80c20c48 s7 : 0000000000000000 [ 444.365075] s8 : 0000000000000000 s9 : 0000000000000001 s10: 0000000000000001 [ 444.365090] s11: ffffffff80099394 t3 : 0000000000000003 t4 : 00000000eac0c6e6 [ 444.365104] t5 : 0000000400000000 t6 : ff60000002e010d0 [ 444.365120] status: 0000000200000100 badaddr: 0000000000000000 cause: 0000000000000003 [ 444.365226] [<ffffffff8009f9e0>] smp_call_function_many_cond+0x42c/0x436 [ 444.365295] [<ffffffff8009fa5a>] on_each_cpu_cond_mask+0x20/0x32 [ 444.365311] [<ffffffff806e90dc>] pmu_sbi_ctr_start+0x7a/0xaa [ 444.365327] [<ffffffff806e880c>] riscv_pmu_start+0x48/0x66 [ 444.365339] [<ffffffff8012111a>] perf_adjust_freq_unthr_context+0x196/0x1ac [ 444.365356] [<ffffffff801237aa>] perf_event_task_tick+0x78/0x8c [ 444.365368] [<ffffffff8003faf4>] scheduler_tick+0xe6/0x25e [ 444.365383] [<ffffffff8008a042>] update_process_times+0x80/0x96 [ 444.365398] [<ffffffff800991ec>] tick_sched_handle+0x26/0x52 [ 444.365410] [<ffffffff800993e4>] tick_sched_timer+0x50/0x98 [ 444.365422] [<ffffffff8008a6aa>] __hrtimer_run_queues+0x126/0x18a [ 444.365433] [<ffffffff8008b350>] hrtimer_interrupt+0xce/0x1da [ 444.365444] [<ffffffff806cdc60>] riscv_timer_interrupt+0x30/0x3a [ 444.365457] [<ffffffff8006afa6>] handle_percpu_devid_irq+0x80/0x114 [ 444.365470] [<ffffffff80065b82>] generic_handle_domain_irq+0x1c/0x2a [ 444.365483] [<ffffffff8045faec>] riscv_intc_irq+0x2e/0x46 [ 444.365497] [<ffffffff808a9c62>] handle_riscv_irq+0x4a/0x74 [ 444.365521] [<ffffffff808aa760>] do_irq+0x7c/0x7e [ 444.365796] ---[ end trace 0000000000000000 ]--- That's because the fix in commit 3fec323339a4 ("drivers: perf: Fix panic in riscv SBI mmap support") was wrong since there is no need to broadcast to other cpus when starting a counter, that's only needed in mmap when the counters could have already been started on other cpus, so simply remove this broadcast. | |||||
| CVE-2023-52787 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: blk-mq: make sure active queue usage is held for bio_integrity_prep() blk_integrity_unregister() can run if the queue usage counter isn't held for a bio with integrity prepared, so the request may be completed by calling profile->complete_fn, leading to a kernel panic. Another constraint is that bio_integrity_prep() needs to be called before bio merge. Fix the issue by: - calling bio_integrity_prep() with a queue usage counter reliably grabbed - calling bio_integrity_prep() before bio merge | |||||
| CVE-2024-35824 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: misc: lis3lv02d_i2c: Fix regulators getting en-/dis-abled twice on suspend/resume When not configured for wakeup, lis3lv02d_i2c_suspend() will call lis3lv02d_poweroff() even if the device has already been turned off by the runtime-suspend handler; and if configured for wakeup and the device is runtime-suspended at this point, it is not turned back on to serve as a wakeup source. Before commit b1b9f7a49440 ("misc: lis3lv02d_i2c: Add missing setting of the reg_ctrl callback"), lis3lv02d_poweroff() failed to disable the regulators, which as a side effect made calling poweroff() twice OK. Now that poweroff() correctly disables the regulators, doing this twice triggers a WARN() in the regulator core: unbalanced disables for regulator-dummy WARNING: CPU: 1 PID: 92 at drivers/regulator/core.c:2999 _regulator_disable ... Fix lis3lv02d_i2c_suspend() to not call poweroff() a second time if already runtime-suspended, and add a poweron() call when necessary to make wakeup work. lis3lv02d_i2c_resume() has similar issues, with the added weirdness that it always powers on the device if it is runtime-suspended, after which the first runtime-resume will call poweron() again, causing the enable count for the regulator to increase by 1 on every suspend/resume. These unbalanced regulator_enable() calls cause the regulator to never be turned off and trigger the following WARN() on driver unbind: WARNING: CPU: 1 PID: 1724 at drivers/regulator/core.c:2396 _regulator_put Fix this by making lis3lv02d_i2c_resume() mirror the new suspend(). | |||||
| CVE-2024-27418 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: net: mctp: take ownership of skb in mctp_local_output Currently, mctp_local_output only takes ownership of skb on success, and we may leak an skb if mctp_local_output fails in specific states; the skb ownership isn't transferred until the actual output routing occurs. Instead, make mctp_local_output free the skb on all error paths up to the route action, so it always consumes the passed skb. | |||||
| CVE-2024-27432 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: net: ethernet: mtk_eth_soc: fix PPE hanging issue A patch to resolve an issue was found in MediaTek's GPL-licensed SDK: In the mtk_ppe_stop() function, the PPE scan mode is not disabled before disabling the PPE. This can potentially lead to a hang during the process of disabling the PPE. Without this patch, the PPE may experience a hang during the reboot test. | |||||
| CVE-2024-27434 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: wifi: iwlwifi: mvm: don't set the MFP flag for the GTK The firmware doesn't need the MFP flag for the GTK; it can even make the firmware crash in case the AP is configured with group cipher TKIP and MFPC. We would send the GTK with cipher = TKIP and MFP, which is of course not possible. | |||||
| CVE-2025-21656 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: hwmon: (drivetemp) Fix driver producing garbage data when SCSI errors occur The scsi_execute_cmd() function can return both negative (Linux error codes) and positive (scsi_cmnd result field) error codes. Currently the driver just passes error codes of scsi_execute_cmd() to the hwmon core, which is incorrect because hwmon only checks for negative error codes. This leads to hwmon reporting uninitialized data to userspace in case of SCSI errors (for example if the disk drive was disconnected). This patch checks the scsi_execute_cmd() output and returns -EIO if its error code is positive. [groeck: Avoid inline variable declaration for portability] (An illustrative sketch of this pattern follows the table.) | |||||
| CVE-2024-35787 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: md/md-bitmap: fix incorrect usage for sb_index Commit d7038f951828 ("md-bitmap: don't use ->index for pages backing the bitmap file") removed page->index from the bitmap code, but left wrong logic behind for clustered md: the current code never sets the slot offset for cluster nodes, which will sometimes cause a crash in a clustered environment. Call trace (partial): md_bitmap_file_set_bit+0x110/0x1d8 [md_mod] md_bitmap_startwrite+0x13c/0x240 [md_mod] raid1_make_request+0x6b0/0x1c08 [raid1] md_handle_request+0x1dc/0x368 [md_mod] md_submit_bio+0x80/0xf8 [md_mod] __submit_bio+0x178/0x300 submit_bio_noacct_nocheck+0x11c/0x338 submit_bio_noacct+0x134/0x614 submit_bio+0x28/0xdc submit_bh_wbc+0x130/0x1cc submit_bh+0x1c/0x28 | |||||
| CVE-2024-35793 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: debugfs: fix wait/cancellation handling during remove Ben Greear further reports deadlocks during concurrent debugfs remove while files are being accessed, even though the code in question now uses debugfs cancellations. Turns out that despite all the review on the locking, we missed completely that the logic is wrong: if the refcount hits zero we can finish (and need not wait for the completion), but if it doesn't we have to trigger all the cancellations. As written, we can _never_ get into the loop triggering the cancellations. Fix this, and explain it better while at it. | |||||
| CVE-2024-35794 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: dm-raid: really frozen sync_thread during suspend 1) Commit f52f5c71f3d4 ("md: fix stopping sync thread") removed MD_RECOVERY_FROZEN from __md_stop_writes() without realizing that dm-raid relies on __md_stop_writes() to indirectly freeze the sync_thread. Fix this problem by adding MD_RECOVERY_FROZEN in md_stop_writes(), and since stop_sync_thread() is only used for dm-raid in this case, also move stop_sync_thread() to md_stop_writes(). 2) The flag MD_RECOVERY_FROZEN doesn't mean that the sync thread is frozen; it only prevents a new sync_thread from starting and can't stop a running sync thread. In order to freeze the sync_thread, stop_sync_thread() should be used after setting the flag. 3) The flag MD_RECOVERY_FROZEN doesn't mean that writes are stopped, so using it as a condition for md_stop_writes() in raid_postsuspend() doesn't look correct. Given that a reentrant stop_sync_thread() does nothing, always call md_stop_writes() in raid_postsuspend(). 4) raid_message() can set/clear the flag MD_RECOVERY_FROZEN at any time, and if MD_RECOVERY_FROZEN is cleared while the array is suspended, a new sync_thread can start unexpectedly. Fix this by disallowing raid_message() from changing the sync_thread status during suspend. Note that after commit f52f5c71f3d4 ("md: fix stopping sync thread"), the test shell/lvconvert-raid-reshape.sh starts to hang in stop_sync_thread(); with the previous fixes, the test won't hang there anymore, but it will still fail and complain that ext4 is corrupted. With this patch, the test won't hang due to stop_sync_thread() or fail due to ext4 corruption anymore. However, there is still a deadlock related to dm-raid456 that will be fixed in the following patches. | |||||
| CVE-2023-52853 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: hid: cp2112: Fix duplicate workqueue initialization Previously the cp2112 driver called INIT_DELAYED_WORK within cp2112_gpio_irq_startup, resulting in duplicate initializations of the workqueue on subsequent IRQ startups following an initial request. This resulted in a warning in set_work_data in workqueue.c, as well as a rare NULL dereference within process_one_work in workqueue.c. Initialize the workqueue within _probe instead. | |||||
| CVE-2023-52868 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 7.8 HIGH |
| In the Linux kernel, the following vulnerability has been resolved: thermal: core: prevent potential string overflow The dev->id value comes from ida_alloc(), so it is a number between zero and INT_MAX. If it is too high, these sprintf() calls will overflow. (An illustrative sketch of the sizing issue follows the table.) | |||||
| CVE-2024-35826 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: block: Fix page refcounts for unaligned buffers in __bio_release_pages() Fix an incorrect number of pages being released for buffers that do not start at the beginning of a page. (An illustrative sketch of the page-count arithmetic follows the table.) | |||||
| CVE-2024-35831 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: io_uring: Fix release of pinned pages when __io_uaddr_map fails Looking at the error path of __io_uaddr_map, if we fail after pinning the pages for any reason, ret will be set to -EINVAL and the error handler won't properly release the pinned pages. I didn't manage to trigger it without forcing a failure, but it can happen in real life when memory is heavily fragmented. (An illustrative sketch of the cleanup pattern follows the table.) | |||||
| CVE-2024-56624 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: iommufd: Fix out_fput in iommufd_fault_alloc() As fput() calls the file->f_op->release op, where fault obj and ictx are getting released, there is no need to release these two after fput() one more time, which would result in imbalanced refcounts: refcount_t: decrement hit 0; leaking memory. WARNING: CPU: 48 PID: 2369 at lib/refcount.c:31 refcount_warn_saturate+0x60/0x230 Call trace: refcount_warn_saturate+0x60/0x230 (P) refcount_warn_saturate+0x60/0x230 (L) iommufd_fault_fops_release+0x9c/0xe0 [iommufd] ... VFS: Close: file count is 0 (f_op=iommufd_fops [iommufd]) WARNING: CPU: 48 PID: 2369 at fs/open.c:1507 filp_flush+0x3c/0xf0 Call trace: filp_flush+0x3c/0xf0 (P) filp_flush+0x3c/0xf0 (L) __arm64_sys_close+0x34/0x98 ... imbalanced put on file reference count WARNING: CPU: 48 PID: 2369 at fs/file.c:74 __file_ref_put+0x100/0x138 Call trace: __file_ref_put+0x100/0x138 (P) __file_ref_put+0x100/0x138 (L) __fput_sync+0x4c/0xd0 Drop those two lines to fix the warnings above. | |||||
| CVE-2024-35841 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: net: tls, fix WARNING in __sk_msg_free A splice with MSG_SPLICE_PAGES will cause the tls code to use the tls_sw_sendmsg_splice path in the TLS sendmsg code to move the user-provided pages from the msg into the msg_pl. This will loop over the msg until msg_pl is full, checked by sk_msg_full(msg_pl). The user can also set the MORE flag to hint the stack to delay sending until receiving more pages and ideally a full buffer. If the user adds more pages to the msg than can fit in the msg_pl scatterlist (MAX_MSG_FRAGS), we should ignore the MORE flag and send the buffer anyway. What actually happens, though, is that we abort the msg to msg_pl scatterlist setup and then, because we forget to set 'full record' indicating we can no longer consume data without a send, we fall through to the 'continue' path, which will check if msg_data_left(msg) has more bytes to send and then attempt to fit them in the already full msg_pl. The next iteration of the sender doing a send will then encounter a full msg_pl and throw the warning in the syzbot report. To fix, simply check whether we have a full_record in the splice code path, and if not, send the msg regardless of the MORE flag. | |||||
| CVE-2024-50220 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 4.7 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: fork: do not invoke uffd on fork if error occurs Patch series "fork: do not expose incomplete mm on fork". During fork we may place the virtual memory address space into an inconsistent state before the fork operation is complete. In addition, we may encounter an error during the fork operation that indicates that the virtual memory address space is invalidated. As a result, we should not be exposing it in any way to external machinery that might interact with the mm or VMAs, machinery that is not designed to deal with incomplete state. We specifically update the fork logic to defer khugepaged and ksm to the end of the operation and only to be invoked if no error arose, and disallow uffd from observing fork events should an error have occurred. This patch (of 2): Currently on fork we expose the virtual address space of a process to userland unconditionally if uffd is registered in VMAs, regardless of whether an error arose in the fork. This is performed in dup_userfaultfd_complete() which is invoked unconditionally, and performs two duties - invoking registered handlers for the UFFD_EVENT_FORK event via dup_fctx(), and clearing down userfaultfd_fork_ctx objects established in dup_userfaultfd(). This is problematic, because the virtual address space may not yet be correctly initialised if an error arose. The change in commit d24062914837 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()") makes this more pertinent as we may be in a state where entries in the maple tree are not yet consistent. We address this by, on fork error, ensuring that we roll back state that we would otherwise expect to clean up through the event being handled by userland and perform the memory freeing duty otherwise performed by dup_userfaultfd_complete(). We do this by implementing a new function, dup_userfaultfd_fail(), which performs the same loop, only decrementing reference counts. Note that we perform mmgrab() on the parent and child mm's, however userfaultfd_ctx_put() will mmdrop() this once the reference count drops to zero, so we will avoid memory leaks correctly here. | |||||
| CVE-2024-35844 | 1 Linux | 1 Linux Kernel | 2025-09-26 | N/A | 5.5 MEDIUM |
| In the Linux kernel, the following vulnerability has been resolved: f2fs: compress: fix reserve_cblocks counting error when out of space When a file only needs one direct_node, performing the following operations will cause the file to be unrepairable: unisoc # ./f2fs_io compress test.apk unisoc #df -h | grep dm-48 /dev/block/dm-48 112G 112G 1.2M 100% /data unisoc # ./f2fs_io release_cblocks test.apk 924 unisoc # df -h | grep dm-48 /dev/block/dm-48 112G 112G 4.8M 100% /data unisoc # dd if=/dev/random of=file4 bs=1M count=3 3145728 bytes (3.0 M) copied, 0.025 s, 120 M/s unisoc # df -h | grep dm-48 /dev/block/dm-48 112G 112G 1.8M 100% /data unisoc # ./f2fs_io reserve_cblocks test.apk F2FS_IOC_RESERVE_COMPRESS_BLOCKS failed: No space left on device adb reboot unisoc # df -h | grep dm-48 /dev/block/dm-48 112G 112G 11M 100% /data unisoc # ./f2fs_io reserve_cblocks test.apk 0 This is because the file has only one direct_node. After returning to -ENOSPC, reserved_blocks += ret will not be executed. As a result, the reserved_blocks at this time is still 0, which is not the real number of reserved blocks. Therefore, fsck cannot be set to repair the file. After this patch, the fsck flag will be set to fix this problem. unisoc # df -h | grep dm-48 /dev/block/dm-48 112G 112G 1.8M 100% /data unisoc # ./f2fs_io reserve_cblocks test.apk F2FS_IOC_RESERVE_COMPRESS_BLOCKS failed: No space left on device adb reboot then fsck will be executed unisoc # df -h | grep dm-48 /dev/block/dm-48 112G 112G 11M 100% /data unisoc # ./f2fs_io reserve_cblocks test.apk 924 | |||||
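
The sketches below are illustrative user-space models of a few of the fixes described in the table above; they are not the kernel patches themselves. This first one models the workaround for CVE-2023-52834 (atl1c): after allocating a receive buffer, check whether its address falls on the problematic 0x...fc0 offset and, if so, shift the data pointer forward, which is what the driver does with skb_reserve(). The 64-byte shift and the low-12-bit mask used here are assumptions for illustration, not values taken from the driver.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Skip forward if the buffer start lands on the DMA-overflow-prone
 * ...fc0 offset within a 4 KiB page (assumed mask/shift for illustration). */
static unsigned char *avoid_bad_rx_offset(unsigned char *data)
{
    if (((uintptr_t)data & 0xfff) == 0xfc0)
        data += 64;                 /* kernel equivalent: skb_reserve(skb, 64) */
    return data;
}

int main(void)
{
    unsigned char *raw = malloc(8192);
    if (!raw)
        return 1;

    unsigned char *rx = avoid_bad_rx_offset(raw);
    printf("rx buffer adjusted by %td bytes\n", rx - raw);

    free(raw);
    return 0;
}
```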
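
For CVE-2025-21656 (drivetemp), the issue is a sign-convention mismatch: scsi_execute_cmd() can return a negative errno or a positive SCSI status, while the hwmon core treats only negative values as errors. This sketch models the normalization with a hypothetical backend_cmd() helper; only the sign handling mirrors the fix.

```c
#include <errno.h>
#include <stdio.h>

/* Stand-in for scsi_execute_cmd(): <0 is a negative errno, 0 is success,
 * >0 is a device-reported status that the caller must not treat as success. */
static int backend_cmd(int simulated_result)
{
    return simulated_result;
}

static int read_temp(int simulated_result, long *millideg)
{
    int ret = backend_cmd(simulated_result);

    if (ret > 0)            /* positive SCSI status: report a plain I/O error */
        return -EIO;
    if (ret < 0)            /* negative errno: pass it through unchanged */
        return ret;

    *millideg = 42000;      /* pretend valid sensor data was parsed */
    return 0;
}

int main(void)
{
    long t = 0;

    printf("success:     %d\n", read_temp(0, &t));
    printf("scsi status: %d\n", read_temp(2, &t));   /* now -EIO, not garbage data */
    printf("errno path:  %d\n", read_temp(-ENODEV, &t));
    return 0;
}
```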
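
For CVE-2023-52868 (thermal core), the point is buffer sizing: an id allocated by ida_alloc() can be as large as INT_MAX, which needs 10 decimal digits, so a name buffer sized for short ids overflows under sprintf(). This sketch sizes the buffer for the worst case and uses snprintf(); the "thermal_zone" prefix is only an assumed example.

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int id = INT_MAX;   /* worst case from an IDA-style allocator: 2147483647 */

    /* prefix + up to 10 decimal digits + terminating NUL */
    char name[sizeof("thermal_zone") + 10];

    snprintf(name, sizeof(name), "thermal_zone%d", id);
    puts(name);
    return 0;
}
```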
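
For CVE-2024-35826 (__bio_release_pages), the arithmetic matters: a buffer that starts part-way into a page spans one more page than its length alone suggests, so releasing pages based only on len / PAGE_SIZE drops the wrong number of references. The sketch below rounds up over the full span; PAGE_SIZE is assumed to be 4096.

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Number of pages touched by a buffer of 'len' bytes starting at 'addr'. */
static unsigned long pages_spanned(unsigned long addr, unsigned long len)
{
    unsigned long offset = addr & (PAGE_SIZE - 1);

    return (offset + len + PAGE_SIZE - 1) / PAGE_SIZE;
}

int main(void)
{
    /* 8 KiB starting 512 bytes into a page touches 3 pages, not 2. */
    printf("aligned:   %lu pages\n", pages_spanned(0x10000, 8192));
    printf("unaligned: %lu pages\n", pages_spanned(0x10200, 8192));
    return 0;
}
```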
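
For CVE-2024-35831 (__io_uaddr_map), the lesson is the shape of the error path: once a resource has been acquired, every later failure must take a cleanup path that releases it, and the returned code should reflect the real cause rather than a blanket -EINVAL. This is a generic goto-unwind sketch with hypothetical names, not io_uring code.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Acquire a stand-in for pinned pages, then run a step that may fail;
 * on any failure after the acquisition, release before returning. */
static int map_region(size_t nr_pages, int simulate_late_failure)
{
    void *pages = calloc(nr_pages, 64);   /* stands in for pinning user pages */
    int ret;

    if (!pages)
        return -ENOMEM;

    if (simulate_late_failure) {
        ret = -EFAULT;                    /* report the real cause of failure */
        goto err_release;
    }

    /* ... region would be handed off to its user here ... */
    free(pages);
    return 0;

err_release:
    free(pages);                          /* the step the buggy path skipped */
    return ret;
}

int main(void)
{
    printf("ok:      %d\n", map_region(4, 0));
    printf("failure: %d\n", map_region(4, 1));
    return 0;
}
```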
