On my already-virtualized host, trying to pass the options -enable-kvm -m 1024 will fail:
qemu-system-x86_64 -vga std -enable-kvm -m 1024 -monitor telnet:localhost:9313,server,nowait -drive file=my_img.img,cache=none
# Could not access KVM kernel module: No such file or directory
# failed to initialize KVM: No such file or directory
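For context, the error just means /dev/kvm is absent. That, plus the lack of hardware virtualization flags on the virtual CPU, can be checked from inside the guest (expected output shown as comments):
ls -l /dev/kvm
# ls: cannot access /dev/kvm: No such file or directory
grep -c -E 'vmx|svm' /proc/cpuinfo
# 0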
If I remove those options -enable-kvm -m 1024, qemu will load (but it takes forever, because it falls back to software emulation):
qemu-system-x86_64 -vga std -monitor telnet:localhost:9313,server,nowait -drive file=my_img.img,cache=none
# qemu runs OK, but the image takes forever to load.
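The same software-emulated run can also be requested explicitly; assuming a reasonably recent QEMU, the following should be equivalent to the command above:
qemu-system-x86_64 -vga std -machine accel=tcg -monitor telnet:localhost:9313,server,nowait -drive file=my_img.img,cache=none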
Surely this virtualized host of mine is capable of nesting its own virtualization. Every source I find [like here: https://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html ] tells me to check the file /sys/module/kvm_intel/parameters/nested, which is simply not available, because kvm-intel isn't loaded and can't be loaded from inside the image:
sudo modprobe kvm-intel
# modprobe: ERROR: could not insert 'kvm_intel': Operation not supported
Probably that method of debugging nested virtualization only works on bare metal. So, how do I enable (forward the support of) KVM from inside a KVM guest?
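For comparison, this is roughly what the checks from the linked guide look like when run on the bare-metal (L0) host with an Intel CPU; the file and module options are the standard kvm_intel ones, not something specific to my setup:
cat /sys/module/kvm_intel/parameters/nested
# Y (or 1) when nesting is enabled, N (or 0) otherwise
sudo modprobe -r kvm_intel                # unload the module first
sudo modprobe kvm_intel nested=1          # reload it with nesting enabled
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf   # persist across reboots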
Additional info:
lscpu # from inside the virtualized host
# Architecture: x86_64
# ...
# Vendor ID: GenuineIntel
# CPU family: 6
# Model: 13
# Model name: QEMU Virtual CPU version (cpu64-rhel6)
# Stepping: 3
# ...
# Hypervisor vendor: KVM
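The model name above (cpu64-rhel6) is a generic QEMU CPU model, which suggests the outer hypervisor is not passing the vmx flag through to this guest. That can be verified from inside:
grep -o -E 'vmx|svm' /proc/cpuinfo | sort -u
# empty output = no hardware virtualization exposed to this guest
If that is the case, whoever controls the outer host would need to start this guest with a CPU model that forwards the flag, e.g. -cpu host (or -cpu qemu64,+vmx) on its qemu command line.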
ltrace of qemu:
# open64("/dev/kvm", 524290, 00) = -1
# __errno_location() = 0x7f958673c730
# __fprintf_chk(0x7f957fd81060, 1, 0x7f9586474ce0, 0Could not access KVM kernel module: No such file or directory
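The flag value in that open64 call decodes to O_RDWR|O_CLOEXEC, so the -1 really is just a failed open of /dev/kvm; a quick decode:
printf '0x%x\n' 524290
# 0x80002 = O_CLOEXEC (0x80000) | O_RDWR (0x2)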