QEMU/KVM internals

cianfa72

QEMU/KVM internals

Post by cianfa72 »

Hi,

I'd like to ask about some details of the QEMU/KVM implementation. On my Linux server (dual-socket, 12 cores / 24 hyper-threads per socket) I run a VM via qemu-system-x86_64:

Code:

root@eve-ng03:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 24 25 26 27 28 29 30 31 32 33 34 35
node 0 size: 64180 MB
node 0 free: 22819 MB
node 1 cpus: 12 13 14 15 16 17 18 19 20 21 22 23 36 37 38 39 40 41 42 43 44 45 46 47
node 1 size: 64491 MB
node 1 free: 51078 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 
root@eve-ng03:~#
root@eve-ng03:~# ps -wwf -p 45802
UID        PID  PPID  C STIME TTY          TIME CMD
root     45802 45786 99 Jun18 ?        153-02:15:51 /opt/qemu/bin/qemu-system-x86_64 -nographic -device virtio-net-pci,addr=3.0,multifunction=on,netdev=net0,mac=50:00:00:07:00:00 -netdev tap,id=net0,ifname=vunl0_7_0,script=no -device virtio-net-pci,addr=3.1,multifunction=on,netdev=net1,mac=50:00:00:07:00:01 -netdev tap,id=net1,ifname=vunl0_7_1,script=no -device virtio-net-pci,addr=3.2,multifunction=on,netdev=net2,mac=50:00:00:07:00:02 -netdev tap,id=net2,ifname=vunl0_7_2,script=no -device virtio-net-pci,addr=3.3,multifunction=on,netdev=net3,mac=50:00:00:07:00:03 -netdev tap,id=net3,ifname=vunl0_7_3,script=no -device virtio-net-pci,addr=3.4,multifunction=on,netdev=net4,mac=50:00:00:07:00:04 -netdev tap,id=net4,ifname=vunl0_7_4,script=no -device virtio-net-pci,addr=3.5,multifunction=on,netdev=net5,mac=50:00:00:07:00:05 -netdev tap,id=net5,ifname=vunl0_7_5,script=no -device virtio-net-pci,addr=3.6,multifunction=on,netdev=net6,mac=50:00:00:07:00:06 -netdev tap,id=net6,ifname=vunl0_7_6,script=no -device virtio-net-pci,addr=3.7,multifunction=on,netdev=net7,mac=50:00:00:07:00:07 -netdev tap,id=net7,ifname=vunl0_7_7,script=no -device virtio-net-pci,addr=4.0,multifunction=on,netdev=net8,mac=50:00:00:07:00:08 -netdev tap,id=net8,ifname=vunl0_7_8,script=no -device virtio-net-pci,addr=4.1,multifunction=on,netdev=net9,mac=50:00:00:07:00:09 -netdev tap,id=net9,ifname=vunl0_7_9,script=no -device virtio-net-pci,addr=4.2,multifunction=on,netdev=net10,mac=50:00:00:07:00:0a -netdev tap,id=net10,ifname=vunl0_7_10,script=no -smp 4 -m 8192 -name R_02 -uuid a368f2e7-eebf-418b-9ffd-5c190134a670 -drive file=virtioa.qcow2,if=virtio,bus=0,unit=0,cache=none -enable-kvm -smbios type=1,manufacturer=cisco,product=Cisco IOS XRv 9000,uuid=97fc351b-431d-4cf2-9c01-43c283faf2a3 -machine type=pc,accel=kvm,usb=off -serial mon:stdio -nographic -nodefconfig -nodefaults -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -realtime mlock=off -no-shutdown -boot order=c -serial null -serial null -serial null -cpu host
root@eve-ng03:~#
root@eve-ng03:~# ps -T -p 45802
  PID  SPID TTY          TIME CMD
45802 45802 ?        14-05:23:18 qemu-system-x86
45802   812 ?        00:00:00 qemu-system-x86
45802   824 ?        7-07:15:51 qemu-system-x86
45802   825 ?        7-06:48:23 qemu-system-x86
45802   826 ?        8-04:20:31 qemu-system-x86
45802   827 ?        116-01:07:17 qemu-system-x86
45802   658 ?        00:00:00 qemu-system-x86
45802   678 ?        00:00:00 qemu-system-x86
45802   681 ?        00:00:00 qemu-system-x86
45802   733 ?        00:00:00 qemu-system-x86
45802   734 ?        00:00:00 qemu-system-x86
45802   761 ?        00:00:00 qemu-system-x86
root@eve-ng03:~#
As you can see, there are 12 threads running inside the QEMU process (PID 45802). Digging into the QEMU monitor (Ctrl+A, then C):

Code:

QEMU 2.4.0 monitor - type 'help' for more information
(qemu) info cpus
* CPU #0: pc=0xffffffff8d0454e6 (halted) thread_id=824
  CPU #1: pc=0xffffffff8d0454e6 (halted) thread_id=825
  CPU #2: pc=0xffffffff8d0454e6 (halted) thread_id=826
  CPU #3: pc=0x00007f7b8a64f3ff thread_id=827
(qemu)
I can see the four vCPU threads (as expected, since the VM was started with 4 vCPUs) and the main thread, which should be the one whose thread ID (SPID) equals the process ID (45802).
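One way to put names on those SPIDs, by the way: a minimal sketch, assuming a Linux /proc filesystem, and noting that QEMU only assigns descriptive per-thread names when it is started with -name <guest>,debug-threads=on (otherwise every thread just shows up as qemu-system-x86).

Code:

# Show each thread's ID and name. With debug-threads=on, vCPU threads
# are named "CPU n/KVM", which makes them easy to tell apart from the
# worker/helper threads.
ps -L -p 45802 -o lwp,comm

# The same information read straight from procfs:
for t in /proc/45802/task/*; do
    printf '%s\t%s\n' "${t##*/}" "$(cat "$t/comm")"
done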

My question is: what tasks are all the other (non-main, non-vCPU) threads inside the QEMU process actually in charge of? Thank you.
cianfa72

Re: QEMU/KVM internals

Post by cianfa72 »

Any feedback? Thank you.
reapersms

Re: QEMU/KVM internals

Post by reapersms »

A random, mostly uneducated guess is that QEMU spawns a thread for every physical CPU no matter what, and allocates those to the VM as needed. The other 8 all having a time of 00:00:00 suggests they went idle immediately.

If you spawned another 4-vCPU VM, you'd probably see another 4 wake up.
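One rough way to test that guess, as a sketch with flags you would adapt (the VMs are started paused with -S, so they do no work): boot two throwaway VMs with different -smp values and compare thread counts. If the guess were right, both totals would track the host CPU count rather than -smp.

Code:

# Start two paused throwaway VMs with different vCPU counts.
qemu-system-x86_64 -enable-kvm -smp 2 -m 512 \
    -display none -monitor none -serial none -S &
pid_a=$!
qemu-system-x86_64 -enable-kvm -smp 8 -m 512 \
    -display none -monitor none -serial none -S &
pid_b=$!
sleep 2

# Count each process's threads via procfs.
echo "-smp 2: $(ls /proc/$pid_a/task | wc -l) threads"
echo "-smp 8: $(ls /proc/$pid_b/task | wc -l) threads"

kill "$pid_a" "$pid_b"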
cianfa72

Re: QEMU/KVM internals

Post by cianfa72 »

reapersms wrote: A random, mostly uneducated guess is that QEMU spawns a thread for every physical CPU no matter what, and allocates those to the VM as needed.
I take it that QEMU spawns 12 threads because there are 12 cores per socket on my system, and then 8 of them immediately go idle.

Does that make sense?
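In the meantime, here is one way to see what the zero-time threads are actually doing: a sketch assuming gdb is installed (and, ideally, QEMU debug symbols are available); note that attaching briefly stops the VM.

Code:

# Attach to the running QEMU, print a three-frame backtrace for every
# thread, then detach. The function names on each stack usually give
# away the thread's job.
gdb -p 45802 -batch -ex 'thread apply all bt 3'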
reapersms

Re: QEMU/KVM internals

Post by reapersms »

Yes.