APPLIED: [Artful][PATCH 1/1] CVE-2017-17741 KVM: Fix stack-out-of-bounds read in write_mmio

Kleber Souza kleber.souza at canonical.com
Mon Feb 5 11:23:38 UTC 2018


Hi Khaled,

On 02/03/18 02:36, Khaled Elmously wrote:
> Applied to Artful
> 
> On 2018-01-04 01:57:52 , Khalid Elmously wrote:
>> From: Wanpeng Li <wanpeng.li at hotmail.com>
>>
>> Reported by syzkaller:
>>
>>   BUG: KASAN: stack-out-of-bounds in write_mmio+0x11e/0x270 [kvm]
>>   Read of size 8 at addr ffff8803259df7f8 by task syz-executor/32298
>>
>>   CPU: 6 PID: 32298 Comm: syz-executor Tainted: G           OE    4.15.0-rc2+ #18
>>   Hardware name: LENOVO ThinkCentre M8500t-N000/SHARKBAY, BIOS FBKTC1AUS 02/16/2016
>>   Call Trace:
>>    dump_stack+0xab/0xe1
>>    print_address_description+0x6b/0x290
>>    kasan_report+0x28a/0x370
>>    write_mmio+0x11e/0x270 [kvm]
>>    emulator_read_write_onepage+0x311/0x600 [kvm]
>>    emulator_read_write+0xef/0x240 [kvm]
>>    emulator_fix_hypercall+0x105/0x150 [kvm]
>>    em_hypercall+0x2b/0x80 [kvm]
>>    x86_emulate_insn+0x2b1/0x1640 [kvm]
>>    x86_emulate_instruction+0x39a/0xb90 [kvm]
>>    handle_exception+0x1b4/0x4d0 [kvm_intel]
>>    vcpu_enter_guest+0x15a0/0x2640 [kvm]
>>    kvm_arch_vcpu_ioctl_run+0x549/0x7d0 [kvm]
>>    kvm_vcpu_ioctl+0x479/0x880 [kvm]
>>    do_vfs_ioctl+0x142/0x9a0
>>    SyS_ioctl+0x74/0x80
>>    entry_SYSCALL_64_fastpath+0x23/0x9a
>>
>> The vmmcall patching path writes the 3-byte opcode 0F 01 C1 (vmcall)
>> to guest memory; however, the write_mmio tracepoint always prints 8
>> bytes through *(u64 *)val, since KVM splits the MMIO access into
>> chunks of up to 8 bytes. This leaks 5 bytes from the kernel stack
>> (CVE-2017-17741). This patch fixes it by accessing only the bytes we
>> actually operate on.
>>
>> Before patch:
>>
>> syz-executor-5567  [007] .... 51370.561696: kvm_mmio: mmio write len 3 gpa 0x10 val 0x1ffff10077c1010f
>>
>> After patch:
>>
>> syz-executor-13416 [002] .... 51302.299573: kvm_mmio: mmio write len 3 gpa 0x10 val 0xc1010f
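A minimal userspace sketch of the problem and of the fix (illustration
only, not the kernel code; the variable names are made up, the 3-byte
buffer is the vmcall opcode from the report above):

/* Standalone sketch: a 3-byte instruction buffer traced as a full u64
 * pulls in 5 extra bytes of stack, which is what the KASAN report
 * above flags; copying at most 'len' bytes, as the TRACE_EVENT hunk
 * below does, avoids that.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char insn[3] = { 0x0f, 0x01, 0xc1 };	/* vmcall opcode */
	size_t len = sizeof(insn);
	uint64_t old_val, new_val = 0;

	/* Old behaviour, equivalent to *(u64 *)val: always 8 bytes. */
	memcpy(&old_val, insn, sizeof(old_val));	/* over-reads 5 bytes */

	/* Fixed behaviour: copy min(len, sizeof(val)) bytes only. */
	memcpy(&new_val, insn, len < sizeof(new_val) ? len : sizeof(new_val));

	printf("old 0x%llx  new 0x%llx\n",
	       (unsigned long long)old_val, (unsigned long long)new_val);
	return 0;
}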
>>
>>
>> (backported from e39d200fa5bf5b94a0948db0dae44c1b73b84a56)
>>
>>
>> CVE-2017-17741
>>
>> References: CVE-2017-17741
>> Reported-by: Dmitry Vyukov <dvyukov at google.com>
>> Reviewed-by: Darren Kenny <darren.kenny at oracle.com>
>> Reviewed-by: Marc Zyngier <marc.zyngier at arm.com>
>> Tested-by: Marc Zyngier <marc.zyngier at arm.com>
>> Cc: Paolo Bonzini <pbonzini at redhat.com>
>> Cc: Radim Krčmář <rkrcmar at redhat.com>
>> Cc: Marc Zyngier <marc.zyngier at arm.com> 
>> Cc: Christoffer Dall <christoffer.dall at linaro.org> 
>> Signed-off-by: Wanpeng Li <wanpeng.li at hotmail.com> 
>> Signed-off-by: Paolo Bonzini <pbonzini at redhat.com>
>> Signed-off-by: Khalid Elmously <khalid.elmously at canonical.com>

We treat the SOB block as a log of how the patch got into our trees.
I noticed this patch was applied to the artful tree with the following
Canonical SOB block:

(backported from e39d200fa5bf5b94a0948db0dae44c1b73b84a56)
Acked-by: Stefan Bader <stefan.bader at canonical.com>
Acked-by: Khalid Elmously <khalid.elmously at canonical.com>
Signed-off-by: Khalid Elmously <khalid.elmously at canonical.com>

This block is missing your SOB right below the "(backported from ...)"
line. That SOB is important because it lets us quickly spot who was
responsible for backporting/cherry-picking the patch (and, usually, who
sent it to the mailing list). In this case it is fine to have two SOBs
from the same person: one for backporting the patch and one for
applying it.

We also tend not to self-ACK a patch, since we are generally not the
most trustworthy reviewers of our own code :-). The general rule is to
wait for ACKs from at least two other independent people before
applying a patch.
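
Just to illustrate both points, the block we would expect for this
patch looks something like this (the second ACK being whoever else
reviewed it):

(backported from e39d200fa5bf5b94a0948db0dae44c1b73b84a56)
Signed-off-by: Khalid Elmously <khalid.elmously at canonical.com>
Acked-by: Stefan Bader <stefan.bader at canonical.com>
Acked-by: <second independent reviewer>
Signed-off-by: Khalid Elmously <khalid.elmously at canonical.com>

with the first SOB recording who did the backport and the last one
recording who applied it to the tree.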


Thanks,
Kleber


>>
>>
>> ---
>>  arch/x86/kvm/x86.c         | 8 ++++----
>>  include/trace/events/kvm.h | 7 +++++--
>>  virt/kvm/arm/mmio.c        | 6 +++---
>>  3 files changed, 12 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 7351cdc46cc7..b51c939c32af 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -4334,7 +4334,7 @@ static int vcpu_mmio_read(struct kvm_vcpu *vcpu, gpa_t addr, int len, void *v)
>>  					 addr, n, v))
>>  		    && kvm_io_bus_read(vcpu, KVM_MMIO_BUS, addr, n, v))
>>  			break;
>> -		trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, *(u64 *)v);
>> +		trace_kvm_mmio(KVM_TRACE_MMIO_READ, n, addr, v);
>>  		handled += n;
>>  		addr += n;
>>  		len -= n;
>> @@ -4593,7 +4593,7 @@ static int read_prepare(struct kvm_vcpu *vcpu, void *val, int bytes)
>>  {
>>  	if (vcpu->mmio_read_completed) {
>>  		trace_kvm_mmio(KVM_TRACE_MMIO_READ, bytes,
>> -			       vcpu->mmio_fragments[0].gpa, *(u64 *)val);
>> +			       vcpu->mmio_fragments[0].gpa, val);
>>  		vcpu->mmio_read_completed = 0;
>>  		return 1;
>>  	}
>> @@ -4615,14 +4615,14 @@ static int write_emulate(struct kvm_vcpu *vcpu, gpa_t gpa,
>>  
>>  static int write_mmio(struct kvm_vcpu *vcpu, gpa_t gpa, int bytes, void *val)
>>  {
>> -	trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, *(u64 *)val);
>> +	trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, bytes, gpa, val);
>>  	return vcpu_mmio_write(vcpu, gpa, bytes, val);
>>  }
>>  
>>  static int read_exit_mmio(struct kvm_vcpu *vcpu, gpa_t gpa,
>>  			  void *val, int bytes)
>>  {
>> -	trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, 0);
>> +	trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, bytes, gpa, NULL);
>>  	return X86EMUL_IO_NEEDED;
>>  }
>>  
>> diff --git a/include/trace/events/kvm.h b/include/trace/events/kvm.h
>> index 8ade3eb6c640..90fce4d6956a 100644
>> --- a/include/trace/events/kvm.h
>> +++ b/include/trace/events/kvm.h
>> @@ -208,7 +208,7 @@ TRACE_EVENT(kvm_ack_irq,
>>  	{ KVM_TRACE_MMIO_WRITE, "write" }
>>  
>>  TRACE_EVENT(kvm_mmio,
>> -	TP_PROTO(int type, int len, u64 gpa, u64 val),
>> +	TP_PROTO(int type, int len, u64 gpa, void *val),
>>  	TP_ARGS(type, len, gpa, val),
>>  
>>  	TP_STRUCT__entry(
>> @@ -222,7 +222,10 @@ TRACE_EVENT(kvm_mmio,
>>  		__entry->type		= type;
>>  		__entry->len		= len;
>>  		__entry->gpa		= gpa;
>> -		__entry->val		= val;
>> +		__entry->val		= 0;
>> +		if (val)
>> +			memcpy(&__entry->val, val,
>> +			       min_t(u32, sizeof(__entry->val), len));
>>  	),
>>  
>>  	TP_printk("mmio %s len %u gpa 0x%llx val 0x%llx",
>> diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
>> index b6e715fd3c90..dac7ceb1a677 100644
>> --- a/virt/kvm/arm/mmio.c
>> +++ b/virt/kvm/arm/mmio.c
>> @@ -112,7 +112,7 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>  		}
>>  
>>  		trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
>> -			       data);
>> +			       &data);
>>  		data = vcpu_data_host_to_guest(vcpu, data, len);
>>  		vcpu_set_reg(vcpu, vcpu->arch.mmio_decode.rt, data);
>>  	}
>> @@ -182,14 +182,14 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>>  		data = vcpu_data_guest_to_host(vcpu, vcpu_get_reg(vcpu, rt),
>>  					       len);
>>  
>> -		trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, data);
>> +		trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, &data);
>>  		kvm_mmio_write_buf(data_buf, len, data);
>>  
>>  		ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_ipa, len,
>>  				       data_buf);
>>  	} else {
>>  		trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, len,
>> -			       fault_ipa, 0);
>> +			       fault_ipa, NULL);
>>  
>>  		ret = kvm_io_bus_read(vcpu, KVM_MMIO_BUS, fault_ipa, len,
>>  				      data_buf);
>> -- 
>> 2.14.1
>>
> 



