
Commit 0c4762e

Fuad Tabba authored and Marc Zyngier committed
KVM: arm64: nv: Avoid NV stage-2 code when NV is not supported
The NV stage-2 manipulation functions kvm_nested_s2_unmap(), kvm_nested_s2_wp(), and others are called for any stage-2 manipulation, regardless of whether nested virtualization is supported or enabled for the VM. For protected KVM (pKVM), struct kvm_pgtable uses the pkvm_mappings member of the union. This member aliases ia_bits, which is used by the non-protected NV code paths. Reading pgt->ia_bits in these functions therefore treats protected mapping pointers or state values as bit-shift amounts, which triggers a UBSAN shift-out-of-bounds error:

  UBSAN: shift-out-of-bounds in arch/arm64/kvm/nested.c:1127:34
  shift exponent 174565952 is too large for 64-bit type 'unsigned long'
  Call trace:
   __ubsan_handle_shift_out_of_bounds+0x28c/0x2c0
   kvm_nested_s2_unmap+0x228/0x248
   kvm_arch_flush_shadow_memslot+0x98/0xc0
   kvm_set_memslot+0x248/0xce0

Since pKVM and NV are mutually exclusive, prevent entry into these NV handling functions if the VM has not allocated any nested MMUs (i.e., kvm->arch.nested_mmus_size is 0).

Fixes: 7270cc9 ("KVM: arm64: nv: Handle VNCR_EL2 invalidation from MMU notifiers")
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
Link: https://patch.msgid.link/20260202152310.113467-1-tabba@google.com
Signed-off-by: Marc Zyngier <maz@kernel.org>
1 parent 82a32ea commit 0c4762e

File tree

1 file changed: +12 −0

arch/arm64/kvm/nested.c

Lines changed: 12 additions & 0 deletions
@@ -1101,6 +1101,9 @@ void kvm_nested_s2_wp(struct kvm *kvm)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1117,6 +1120,9 @@ void kvm_nested_s2_unmap(struct kvm *kvm, bool may_block)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1133,6 +1139,9 @@ void kvm_nested_s2_flush(struct kvm *kvm)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
 
@@ -1145,6 +1154,9 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
 	int i;
 
+	if (!kvm->arch.nested_mmus_size)
+		return;
+
 	for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
 		struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];