This is occurring because the hardware on which you are installing Kubernetes does not have enough resources. The Kubernetes community has agreed that running Kubernetes with fewer than 2 CPU cores is not advisable.
This is because running Kubernetes carries a certain amount of overhead, and on a system with very little compute power you will not be able to run your own applications properly alongside it.
@Arghya is correct. You can circumvent this by ignoring the pre-flight check that evaluates your hardware before installing the software. However, this is not advisable for the reasons explained above.
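If you still want to proceed on under-resourced hardware, here is a sketch of how that check is skipped, assuming you are installing with kubeadm (`NumCPU` is the name kubeadm reports for this particular pre-flight error):

```shell
# Not recommended: skip the CPU-count pre-flight check at install time.
# The cluster will still be starved for CPU afterwards.
kubeadm init --ignore-preflight-errors=NumCPU
```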
If you're curious about how CPU cores relate to Kubernetes and Linux containers, here is some really good documentation. In a nutshell, a Linux container is effectively a process that is partitioned off from the rest of the operating system by what are known as kernel namespaces. Furthermore, limits and requirements on the amount of memory and CPU this process can consume are enforced using control groups (cgroups).
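To make that concrete, here is a minimal sketch of the same mechanism outside Kubernetes, assuming a host with cgroup v2 mounted at `/sys/fs/cgroup` and root privileges (the group name `demo` is just an example):

```shell
# Create a cgroup and cap everything in it at 0.2 CPU (200 millicores):
# 20000us of CPU time per 100000us period -- the same knob the kubelet
# configures for a container with a 200m CPU limit.
mkdir /sys/fs/cgroup/demo
echo "20000 100000" > /sys/fs/cgroup/demo/cpu.max
echo $$ > /sys/fs/cgroup/demo/cgroup.procs   # move the current shell into the group
```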
When running a Linux container in Kubernetes, the scheduler places Pods on worker nodes based on their available resources. If a Pod requests 200m of CPU, for example, then on a single-core machine you would already have allocated 20% of your hardware to one process. See how much this can impact the overhead required to run the software? Kubernetes itself provisions half a dozen Pods just to run, all of which have CPU requests and limits specified.
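For illustration, this is how such requests and limits look on a Pod you deploy yourself (the names, image, and values here are just examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical example
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: 200m         # scheduler only places this Pod on a node with 200m free
        memory: 128Mi
      limits:
        cpu: 500m         # CPU cap enforced via cgroups on the node
        memory: 256Mi
```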
Here is a good doc if you want to learn more about how CPU resources are applied to containerized processes with Linux cgroups.