How to find out the number of CPUs using Python
Asked Answered
F

15

825

I want to know the number of CPUs on the local machine using Python. The result should be user/real as output by time(1) when called with an optimally scaling userspace-only program.

Fervid answered 17/6, 2009 at 10:41 Comment(2)
You should keep cpusets (in Linux) in mind. If you're in a cpuset, the solutions below will still give the number of real CPUs in the system, not the number available to your process. /proc/<PID>/status has some lines that tell you the number of CPUs in the current cpuset: look for Cpus_allowed_list.Freudberg
if you are using torch you can do import torch.multiprocessing as mp; mp.cpu_count()Titillate
F
236

If you're interested in the number of processors available to your current process, you have to check cpuset first. Otherwise (or if cpuset is not in use), multiprocessing.cpu_count() is the way to go in Python 2.6 and newer. The following method falls back to a couple of alternative methods in older versions of Python:

import os
import re
import subprocess


def available_cpu_count():
    """ Number of available virtual or physical CPUs on this system, i.e.
    user/real as output by time(1) when called with an optimally scaling
    userspace-only program"""

    # cpuset
    # cpuset may restrict the number of *available* processors
    try:
        m = re.search(r'(?m)^Cpus_allowed:\s*(.*)$',
                      open('/proc/self/status').read())
        if m:
            res = bin(int(m.group(1).replace(',', ''), 16)).count('1')
            if res > 0:
                return res
    except IOError:
        pass

    # Python 2.6+
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except (ImportError, NotImplementedError):
        pass

    # https://github.com/giampaolo/psutil
    try:
        import psutil
        return psutil.cpu_count()   # psutil.NUM_CPUS on old versions
    except (ImportError, AttributeError):
        pass

    # POSIX
    try:
        res = int(os.sysconf('SC_NPROCESSORS_ONLN'))

        if res > 0:
            return res
    except (AttributeError, ValueError):
        pass

    # Windows
    try:
        res = int(os.environ['NUMBER_OF_PROCESSORS'])

        if res > 0:
            return res
    except (KeyError, ValueError):
        pass

    # jython
    try:
        from java.lang import Runtime
        runtime = Runtime.getRuntime()
        res = runtime.availableProcessors()
        if res > 0:
            return res
    except ImportError:
        pass

    # BSD
    try:
        sysctl = subprocess.Popen(['sysctl', '-n', 'hw.ncpu'],
                                  stdout=subprocess.PIPE)
        scStdout = sysctl.communicate()[0]
        res = int(scStdout)

        if res > 0:
            return res
    except (OSError, ValueError):
        pass

    # Linux
    try:
        res = open('/proc/cpuinfo').read().count('processor\t:')

        if res > 0:
            return res
    except IOError:
        pass

    # Solaris
    try:
        pseudoDevices = os.listdir('/devices/pseudo/')
        res = 0
        for pd in pseudoDevices:
            if re.match(r'^cpuid@[0-9]+$', pd):
                res += 1

        if res > 0:
            return res
    except OSError:
        pass

    # Other UNIXes (heuristic)
    try:
        try:
            dmesg = open('/var/run/dmesg.boot').read()
        except IOError:
            dmesgProcess = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
            # decode: communicate() returns bytes on Python 3
            dmesg = dmesgProcess.communicate()[0].decode()

        res = 0
        while '\ncpu' + str(res) + ':' in dmesg:
            res += 1

        if res > 0:
            return res
    except OSError:
        pass

    raise Exception('Cannot determine number of CPUs on this system')
Fervid answered 17/6, 2009 at 10:41 Comment(12)
On a MacPro 1,0 running the latest Ubuntu, on an HP Laptop running a recent Debian, and on an old eMachine running an old Ubuntu, the cpus_allowed results of /proc/self/status are respectively ff, f and f--- corresponding to 8, 4 and 4 by your (correct) math. However the actual numbers of CPUs are respectively 4, 2 and 1. I find that counting the number of occurrences of the word "processor" in /proc/cpuinfo may be the better way to go. (Or do I have the question wrong?)Idealistic
With some further research--- if that can be said of "Googling"--- I find from the use of /proc/cpuinfo that if for any one of the listings for each "processor" you multiply the "siblings" by the "cpu cores" you get your "Cpus_allowed" number. And I gather that the siblings refer to hyper-threading, hence your reference to "virtual". But the fact remains that your "Cpus_allowed" number is 8 on my MacPro whereas your multiprocessing.cpu_count() answer is 4. My own open('/proc/cpuinfo').read().count('processor') also produces 4, the number of physical cores (two dual-core processors).Idealistic
open('/proc/self/status').read() forgets to close the file. Use with open('/proc/self/status') as f: f.read() insteadOmaomaha
os.cpu_count()Incorporator
@timdiels Not entirely correct; when the file handle's refcount reaches zero (immediately after the read), the GC will clean it up by calling (in latest versions) io.BufferedReader.__del__, which closes first: github.com/python/cpython/blob/3.7/Lib/_pyio.py#L375-L385Gustafson
@Gustafson That's pretty cool and unless you have a circular reference you don't even have to wait for gc.collect()Omaomaha
@Gustafson Actually, reference counting is an implementation detail of CPython and thus should not be relied on (when easily avoidable as in this case) according to https://mcmap.net/q/15393/-is-explicitly-closing-files-importantOmaomaha
@timdiels Certainly is an artifact of the interpreter implementation, though I have not seen evidence in the question as to which interpreter is being used. Under Pypy, cleanup (call to __del__) will happen "at some point in the future after the last reference is freed, or not if the interpreter is shut down beforehand". The "or not" situation is also acceptable; when the process is shut down, file handles are closed anyway.Gustafson
@Gustafson In this case it's acceptable, agreed, just file handles being left open which I guess is ok if you're not writing a long running daemon/process; which I fear might end up hitting a max open file handles of the OS. It's worse when writing to a file that needs to get read again before the process ends, but that's not the case here so that's a moot point. Still a good idea to have a habit of using with for when you do encounter a case where you need it.Omaomaha
This seems to have been added to Python 3 as os.sched_getaffinity: stackoverflow.com/questions/1006289/…Topaz
It should be noted os.sched_getaffinity does not seem to be available on Windows.Zwiebel
The reply should clarify that the count is the logical count has already mentioned in other comments. As of now (python 3.7) psutil.cpu_count(logical=False) is the only way I know to get the number of physical cores as suggested by @davoud-taghawi-nejad.Eulogy
N
1270

If you have Python >= 2.6 you can simply use

import multiprocessing

multiprocessing.cpu_count()

http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count

Nasty answered 17/6, 2009 at 10:53 Comment(12)
multiprocessing is also supported in 3.xAllwein
I want to add that this doesn't work in IronPython which raises a NotImplementedError.Multiplicity
This gives the number of CPUs available... not the number in use by the program!Hennessey
On Python 3.6.2 I could only use os.cpu_count()Bookman
is there a way to know only the number allocated (for example using slurm)Highspirited
python -c 'import multiprocessing as m; print(m.cpu_count())'Boettcher
Also, as noted below, this count can include "virtual" hyperthreaded cpus, which may not be what you want if you are scheduling cpu-intensive tasks.Ariella
Fun fact, both these calls report "4" on my apple computer that definitely has only 2 cores. (1 dual-core cpu)Underwear
@SamyBencherif Can you share more information on your CPU?Sole
@SamyBencherif this is a result of what was mentioned above. "Hyperthreading" results in the OS seeing double the number of physical processor coresMalocclusion
Besides the hyperthreaded cpus, multiprocessing.cpu_count() does not account for limits in cpu usage through docker run --cpus=2.Vista
This answer isn't correct. It will not work with Docker. For example docker run -it --rm --cpuset-cpus=0,1 python:3 python3 -c "import multiprocessing; print(multiprocessing.cpu_count())".Farrell
T
160

len(os.sched_getaffinity(0)) is what you usually want

https://docs.python.org/3/library/os.html#os.sched_getaffinity

os.sched_getaffinity(0) (added in Python 3) returns the set of CPUs available considering the sched_setaffinity Linux system call, which limits which CPUs a process and its children can run on.

0 means to get the value for the current process. The function returns a set() of allowed CPUs, thus the need for len().

multiprocessing.cpu_count() and os.cpu_count(), on the other hand, just return the total number of logical CPUs, i.e. the count including hyperthread siblings.

The difference is especially important because certain cluster management systems such as Platform LSF limit job CPU usage with sched_getaffinity.

Therefore, if you use multiprocessing.cpu_count(), your script might try to use way more cores than it has available, which may lead to overload and timeouts.
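For instance, a worker pool sized from the affinity mask instead of the raw CPU count (a minimal sketch; os.sched_getaffinity is Unix-only):

```python
import os
from multiprocessing import Pool

# Use the CPUs actually available to this process (respects taskset,
# cpusets and cluster schedulers), not the machine-wide total.
workers = len(os.sched_getaffinity(0))

if __name__ == '__main__':
    with Pool(workers) as pool:
        print(pool.map(abs, [-1, -2, -3]))  # → [1, 2, 3]
```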

We can see the difference concretely by restricting the affinity with the taskset utility, which allows us to control the affinity of a process.

Minimal taskset example

For example, if I restrict Python to just 1 core (core 0) in my 16 core system:

taskset -c 0 ./main.py

with the test script:

main.py

#!/usr/bin/env python3

import multiprocessing
import os

print(multiprocessing.cpu_count())
print(os.cpu_count())
print(len(os.sched_getaffinity(0)))

then the output is:

16
16
1

Vs nproc

nproc does respect the affinity by default and:

taskset -c 0 nproc

outputs:

1

and man nproc makes that quite explicit:

print the number of processing units available

Therefore, len(os.sched_getaffinity(0)) behaves like nproc by default.

nproc has the --all flag for the less common case that you want to get the physical CPU count without considering taskset:

taskset -c 0 nproc --all

os.cpu_count documentation

The documentation of os.cpu_count also briefly mentions this https://docs.python.org/3.8/library/os.html#os.cpu_count

This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0))

The same comment is also copied on the documentation of multiprocessing.cpu_count: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count

From the 3.8 source under Lib/multiprocessing/context.py we also see that multiprocessing.cpu_count just forwards to os.cpu_count, except that the multiprocessing one throws an exception instead of returning None if os.cpu_count fails:

    def cpu_count(self):
        '''Returns the number of CPUs in the system'''
        num = os.cpu_count()
        if num is None:
            raise NotImplementedError('cannot determine number of cpus')
        else:
            return num

3.8 availability: systems with a native sched_getaffinity function

The only downside of os.sched_getaffinity is that it appears to be UNIX-only as of Python 3.8.

CPython 3.8 seems to just try to compile a small C hello world with a sched_setaffinity function call at configure time; if it is not present, HAVE_SCHED_SETAFFINITY is not set and the function will likely be missing.

psutil.Process().cpu_affinity(): third-party version with a Windows port

The third-party psutil package (pip install psutil) had been mentioned at: https://mcmap.net/q/15245/-how-to-find-out-the-number-of-cpus-using-python but not the cpu_affinity function: https://psutil.readthedocs.io/en/latest/#psutil.Process.cpu_affinity

Usage:

import psutil
print(len(psutil.Process().cpu_affinity()))

This function does the same as the standard library os.sched_getaffinity on Linux, but they have also implemented it for Windows by making a call to the GetProcessAffinityMask Windows API function.

So in other words: those Windows users have to stop being lazy and send a patch to the upstream stdlib :-)

Tested in Ubuntu 16.04, Python 3.5.2.

os.process_cpu_count() (Python 3.13)

https://docs.python.org/3.13/library/os.html#os.process_cpu_count

This seems to be the same as len(os.sched_getaffinity(0)), except that:

  • it can be overridden by the -X cpu_count Python CLI argument or the PYTHON_CPU_COUNT environment variable
  • TODO confirm: it might fall back nicely on platforms where devs were too lazy to implement sched_getaffinity rather than raising an exception

Mentioned by jfs in the comments.

Topaz answered 29/3, 2019 at 18:0 Comment(14)
Only available on Unix.Ariella
Works on jupyter notebook!Analogy
Just to clarify, does this support CPU limits in Docker-ized containers, as well?Jahdal
@DanNissenbaum from a quick read Docker seems to be able to fake nproc to match --cpuset-cpus, but I haven't tested: stackoverflow.com/questions/37871540/… Docker can also limit how much time the kernel should allocate to each instance as wel it seems e.g. with --cpu-quota, docs.docker.com/engine/reference/run/#cpuset-constraint so even nproc might not be fully meaningful in that case however.Topaz
This solution is also necessary for HPC situations where a compute node might have 48 cores but you only requested 10 of them. os.cpu_count() will tell you there's 48 cores but as @CiroSantilli郝海东冠状病六四事件法轮功 says you run into all sorts of problems if you try to use that. os.sched_getaffinity(0) will report what was actually requested from the scheduler eg. PBSAnalogy
on macOS: AttributeError: 'Process' object has no attribute 'cpu_affinity'Isadoraisadore
@Isadoraisadore thanks for this info, what a shame. mac users gotta look into psutil and upstream to stdlib then as well it seems then :-)Topaz
"multiprocessing.cpu_count() and os.cpu_count() on the other hand just returns the total number of physical CPUs." <- both these functions return the number of logical cores: in the case of hyperthreading this will be double the number of physical cores.Megasporangium
This is not a good method to use inside Jupyter notebooks, which will return a value of only 2, when an ordinary python process might return 96. when running Jupyter notebooks then, I will typically copy the sched affinity output from a python REPL in a shell, and set it in Jupyter via os.sched_setaffinity(0, {...paste...})Indium
@Indium I don't reproduce on Ubuntu 22.10, notebook==6.5.3 on a minimal example, the value for os.sched_getaffinity(0) inside Jupyter and outside were the same. I wonder why this happens to you.Topaz
Using the psutil method on macos also does not work: AttributeError: 'Process' object has no attribute 'cpu_affinity'Unisexual
On WSL, all three functions return the same value for me, corresponding to the number of logical processors.Tannenwald
On Python 3.13, there is os.process_cpu_count(). It is like len(os.sched_getaffinity(0)) but can be also overridden by -X cpu_count=<n>, $PYTHON_CPU_COUNT (both os.cpu_count() & os.process_cpu_count() are affected).Phototherapy
@Phototherapy thanks for this. Weird function, feels like stdlib clutter, why not just let users limit CPUs via regular cli parameters.Topaz
R
136

Another option is to use the psutil library, which always turns out to be useful in these situations:

>>> import psutil
>>> psutil.cpu_count()
2

This should work on any platform supported by psutil (Unix and Windows).

Note that on some occasions multiprocessing.cpu_count may raise a NotImplementedError while psutil will be able to obtain the number of CPUs. This is simply because psutil first tries to use the same techniques used by multiprocessing and, if those fail, it also uses other techniques.
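The same fall-back idea can be sketched with the standard library alone (the helper name robust_cpu_count is mine, purely illustrative):

```python
import multiprocessing


def robust_cpu_count(default=1):
    """CPU count, or a default where it cannot be determined
    (multiprocessing.cpu_count() raises NotImplementedError on
    some platforms, e.g. IronPython)."""
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        return default


print(robust_cpu_count())
```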

Renteria answered 12/2, 2013 at 19:19 Comment(6)
This one is really good, considering that the method used allows you to find out whether the CPU cores are logical or physical ones: psutil.cpu_count(logical = True)Karly
Hi @Bakuriu, Is there any way to get the number of cpu cores being used by a specific process using psutil?Anabas
@Karly On Windows on my Intel i7-8700 psutil.cpu_count() gives 12 (it's a 6-core CPU with hyperthreading). This is because the default argument of logical is True, so you explicitly need to write psutil.cpu_count(logical = False) to get the number of physical Cores.Chaisson
psutil.Process().cpu_affinity() is what most users will want I believe as explained at: https://mcmap.net/q/15245/-how-to-find-out-the-number-of-cpus-using-python BTW.Topaz
answer needs updatingDisintegrate
psutil.cpu_count(logical=False)Disintegrate
P
69

In Python 3.4+: os.cpu_count().

multiprocessing.cpu_count() is implemented in terms of this function but raises NotImplementedError if os.cpu_count() returns None ("can't determine number of CPUs").
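A quick sketch of that difference in failure behavior (on virtually any system os.cpu_count() will simply return an int):

```python
import os

n = os.cpu_count()  # an int, or None if the count cannot be determined
if n is None:
    # multiprocessing.cpu_count() would raise NotImplementedError here
    n = 1  # conservative fallback
print(n)
```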

Phototherapy answered 3/9, 2014 at 4:16 Comment(5)
See also the documentation of cpu_count. len(os.sched_getaffinity(0)) might be better, depending on the purpose.Apophasis
@Apophasis yes, the number of CPUs in the system (os.cpu_count()—what OP asks) may differ from the number of CPUs that are available to the current process (os.sched_getaffinity(0)).Phototherapy
I know. I just wanted to add that for other readers, who might miss this difference, to get a more complete picture from them.Apophasis
Also: the os.sched_getaffinity(0) is not available on BSD, so the use of os.cpu_count() is required (without other external library, that is).Marni
It should be noted os.sched_getaffinity does not seem to be available on Windows.Zwiebel
A
63

If you want to know the number of physical cores (not virtual hyperthreaded cores), here is a platform-independent solution:

psutil.cpu_count(logical=False)

https://github.com/giampaolo/psutil/blob/master/INSTALL.rst

Note that the default value for logical is True, so if you do want to include hyperthreaded cores you can use:

psutil.cpu_count()

This will give the same number as os.cpu_count() and multiprocessing.cpu_count(), neither of which have the logical keyword argument.
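If psutil is unavailable, on Linux you can approximate the physical-core count by hand from /proc/cpuinfo, counting unique (physical id, core id) pairs — a hedged sketch, not psutil's actual implementation:

```python
def physical_core_count(path='/proc/cpuinfo'):
    """Count unique (physical id, core id) pairs; returns None when the
    fields are absent (e.g. some VMs and non-x86 machines)."""
    cores = set()
    physical_id = None
    with open(path) as f:
        for line in f:
            if line.startswith('physical id'):
                physical_id = line.split(':')[1].strip()
            elif line.startswith('core id'):
                cores.add((physical_id, line.split(':')[1].strip()))
    return len(cores) or None


print(physical_core_count())
```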

Amaliaamalie answered 11/4, 2016 at 5:42 Comment(3)
What is difference between a logical CPU and not not a logical one? on my laptop: psutil.cpu_count(logical=False) #4 psutil.cpu_count(logical=True) #8 and multiprocessing.cpu_count() #8Tremaine
@Tremaine assuming you have a x86 CPU, you have hyperthreading on this machine, i.e. each physical core corresponds to two hyperthreads ('logical' cores). Hyperthreading allows the physical core to be used to execute instructions from thread B when parts of it are idle for thread A (e.g. waiting for data being fetched from the cache or memory). Depending on your code one can get one or a few tens of percents of additional core utilization but it is far below the performance of a real physical core.Zebrawood
By far the best answer, but it's very difficult to find among all the others.Sandy
S
32

These give you the hyperthreaded CPU count

  1. multiprocessing.cpu_count()
  2. os.cpu_count()

These give you the virtual machine CPU count

  1. psutil.cpu_count()
  2. numexpr.detect_number_of_cores()

This only matters if you work on VMs.

Surakarta answered 14/3, 2019 at 13:16 Comment(2)
Not really. As noted, os.cpu_count() and multiprocessing.cpu_count() will return hyperthreaded cpu counts, not the actual physical cpu count.Ariella
Yes. I reworded. It's typically # of cores x 2. What I mean is that if you are on a virtual machine, that carved out 8 cores, but your host machine is 20 core physically, first set of command give you 20, second set of command give you 8.Surakarta
R
27

For Python 3.4 and above you can use

import os
os.cpu_count()

If you are looking for an equivalent of the Linux command nproc, you have this option:

len(os.sched_getaffinity(0))
Romilly answered 7/7, 2021 at 11:21 Comment(0)
N
22

multiprocessing.cpu_count() will return the number of logical CPUs, so if you have a quad-core CPU with hyperthreading, it will return 8. If you want the number of physical CPUs, use the Python bindings to hwloc:

#!/usr/bin/env python
import hwloc
topology = hwloc.Topology()
topology.load()
print(topology.get_nbobjs_by_type(hwloc.OBJ_CORE))

hwloc is designed to be portable across OSes and architectures.

Newhall answered 17/7, 2014 at 13:32 Comment(2)
In this case, I want the number of logical CPUs (i.e. how many threads should I start if this program scales really well), but the answer may be helpful nonetheless.Fervid
or psutil.cpu_count(logical=False)Sullage
R
13

This may work for those of us who use different OSes/systems but want the best of all worlds:

import os
workers = os.cpu_count()
if 'sched_getaffinity' in dir(os):
    workers = len(os.sched_getaffinity(0))
Ratchford answered 7/11, 2019 at 11:47 Comment(0)
A
11

You can also use "joblib" for this purpose.

import joblib
print(joblib.cpu_count())

This method will give you the number of CPUs in the system. joblib needs to be installed though. More information on joblib can be found here: https://pythonhosted.org/joblib/parallel.html

Alternatively you can use the numexpr package. It has a lot of simple functions helpful for getting information about the system CPU.

import numexpr as ne
print(ne.detect_number_of_cores())
Another answered 21/4, 2015 at 11:14 Comment(1)
joblib uses the underlying multiprocessing module. It's probably best to call into multiprocessing directly for this.Coulson
G
10

Can't figure out how to add to the code or reply to the message but here's support for jython that you can tack in before you give up:

# jython
try:
    from java.lang import Runtime
    runtime = Runtime.getRuntime()
    res = runtime.availableProcessors()
    if res > 0:
        return res
except ImportError:
    pass
Glomerate answered 2/10, 2010 at 12:16 Comment(0)
T
2

If you are using torch you can do:

import torch.multiprocessing as mp

mp.cpu_count()

The mp module in torch has the same interface as the standard library's multiprocessing, so you can also do this, as the commenter mentioned:

python -c "import multiprocessing; print(multiprocessing.cpu_count())"

hope this helps! ;) it's always nice to have more than 1 option.

Titillate answered 18/2, 2021 at 19:2 Comment(2)
This answer should NOT be the first answer. Why to use torch, a deep learning framework, for a such easy task? Just run: python -c "import multiprocessing; print(multiprocessing.cpu_count())"Lelia
@EliSimhayev oh I forgot that actually the torch mp module has the same interface so they are the same but will add details ;) and no it's not the first answer, there are some before me :)Titillate
P
1

Another option if you don't have Python 2.6 (Python 2 only; the commands module was removed in Python 3):

import commands
n = commands.getoutput("grep -c processor /proc/cpuinfo")
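On Python 3 the commands module is gone; the same grep-based approach (again Linux-only) can be written with subprocess — a sketch:

```python
import subprocess

# Count the 'processor' entries in /proc/cpuinfo, as grep -c does.
n = int(subprocess.check_output(['grep', '-c', '^processor', '/proc/cpuinfo']))
print(n)
```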
Psi answered 29/8, 2014 at 14:5 Comment(1)
Thanks! This is only available on Linux though, and already included in my answer.Fervid
P
-3

If you are looking to print the number of cores in your system, try this:

import os 
no_of_cores = os.cpu_count()
print(no_of_cores)

This should help.

Pitching answered 21/9, 2021 at 17:7 Comment(2)
This solution is already provided in this existing answer. When answering old questions, please ensure that your answer provides a distinct and valuable contribution to the Q&A.User
ok thanks will provide suitable information.Pitching

© 2022 - 2024 — McMap. All rights reserved.