I was trying to figure out how to do just that for myself. I tried several solutions on this page and other pages, then did some searching and came across https://ipython-books.github.io/44-profiling-the-memory-usage-of-your-code-with-memory_profiler/, which seems to give an alternative solution. The gist of it: use `%mprun` in IPython.
- First, install memory_profiler: `pip install memory_profiler`
- Start IPython and load memory_profiler: `%load_ext memory_profiler`
- Create a function in a physical file, say `myfunc.py` (important: `%mprun` can only be used on functions defined in physical files), and create the object in question inside that function, e.g.:
```python
# myfunc.py
def myfunc():
    # create the object, e.g.
    a = [*range(10000)]
```
- Run

```python
from myfunc import myfunc
%mprun -T mprof -f myfunc myfunc()
```

which generates the file `mprof`. The content is also displayed:
```
Line #    Mem usage    Increment   Line Contents
================================================
     1     49.1 MiB     49.1 MiB   def myfunc():
     2                                 # create the object, e.g.
     3     49.4 MiB      0.3 MiB       a = [*range(10000)]
```
From the increment in line 3, we know the memory used by `a` is about 0.3 MiB.
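As a rough cross-check (my own addition, not part of the linked article), you can compare this figure with the Python-level size of the object reported by `sys.getsizeof`. Note that `%mprun` measures the change in the process's resident memory, so the two numbers will be close but not identical:

```python
import sys

a = [*range(10000)]

# Size of the list object itself (its internal pointer array)
# plus the size of each int element. CPython caches small ints,
# so this slightly overcounts the truly new objects.
total_bytes = sys.getsizeof(a) + sum(sys.getsizeof(x) for x in a)

print(f"{total_bytes / 2**20:.2f} MiB")  # roughly 0.3-0.4 MiB on CPython
```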
Let's try `a = [*range(100000)]`:
```python
# myfunc1.py
def myfunc1():
    # create the object, e.g.
    a = [*range(100000)]
```
Run

```python
from myfunc1 import myfunc1
%mprun -T mprof1 -f myfunc1 myfunc1()
```
```
Line #    Mem usage    Increment   Line Contents
================================================
     1     49.2 MiB     49.2 MiB   def myfunc1():
     2                                 # create the object, e.g.
     3     52.3 MiB      3.0 MiB       a = [*range(100000)]
```
The 3.0 MiB increment is roughly ten times that of the 10,000-element list, which is in line with our expectation.
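For completeness, memory_profiler also works outside IPython: decorate the function with `@profile` and run the script directly. A minimal sketch along the lines of the example above (the file layout is my own assumption, not from the linked article):

```python
# myfunc1.py
from memory_profiler import profile

@profile
def myfunc1():
    # create the object, e.g.
    a = [*range(100000)]

if __name__ == "__main__":
    myfunc1()
```

Running `python myfunc1.py` then prints a line-by-line report similar to the `%mprun` output above.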