What does 'killed' mean when processing a huge CSV with Python, which suddenly stops?
I have a Python script that imports a large CSV file and then counts the number of occurrences of each word in the file, then exports the counts to another CSV file.

But what is happening is that once the counting part is finished and the exporting begins, it says Killed in the terminal.

I don't think this is a memory problem (if it were, I assume I would get a memory error rather than Killed).

Could it be that the process is taking too long? If so, is there a way to extend the time-out period so I can avoid this?

Here is the code:

import csv
import sys

csv.field_size_limit(sys.maxsize)
counter = {}
with open("/home/alex/Documents/version2/cooccur_list.csv", 'rb') as file_name:
    reader = csv.reader(file_name)
    for row in reader:
        if len(row) > 1:
            pair = row[0] + ' ' + row[1]
            if pair in counter:
                counter[pair] += 1
            else:
                counter[pair] = 1
print 'finished counting'
writer = csv.writer(open('/home/alex/Documents/version2/dict.csv', 'wb'))
for key, value in counter.items():
    writer.writerow([key, value])

And the Killed happens after finished counting has printed, and the full message is:

killed (program exited with code: 137)
Regeniaregensburg answered 4/10, 2013 at 19:44 Comment(3)
Post the exact wording of the error message you are getting.Eyelid
"killed" generally means that the process received some signal that caused it to exit. In this case, since it happens right as the script finishes counting, there is a good chance that it is a broken pipe: the process is trying to read from or write to a file handle that has been closed on the other end.Ultra
It's not an answer about where the killed message comes from, but if it is due to going over some kind of system memory limit, you might be able to fix that by using counter.iteritems() instead of counter.items() in your final loop. In Python 2, items returns a list of the keys and values in the dictionary, which might require a lot of memory if it is very large. In contrast, iteritems is a generator that only requires a small amount of memory at any given time.Mullet
Exit code 137 (128+9) indicates that your program exited due to receiving signal 9, which is SIGKILL. This also explains the killed message. The question is, why did you receive that signal?
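The 128 + 9 arithmetic can be checked directly from Python's standard signal module; a minimal sketch (POSIX only):

```python
import signal

# Shells and process supervisors report death-by-signal as 128 + signal number.
# SIGKILL is signal 9 on Linux, so a SIGKILLed process shows exit code 137.
exit_code = 128 + int(signal.SIGKILL)
print(exit_code)  # 137
```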

The most likely reason is that your process crossed some limit on the amount of system resources you are allowed to use. Depending on your OS and configuration, this could mean you had too many open files, used too much filesystem space, or something else. Most likely, though, your program was using too much memory. Rather than risk things breaking when memory allocations started failing, the system sent a kill signal to the process that was using too much memory.
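One such limit can be inspected from within Python itself; a sketch using the standard resource module (POSIX only), which reports the process's address-space cap:

```python
import resource

# RLIMIT_AS caps the total virtual memory (address space) the process may use.
# A value of RLIM_INFINITY means no per-process cap is set; note the kernel's
# OOM killer can still act even when no explicit limit is configured.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print(soft, hard)
```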

As I commented earlier, one reason you might hit a memory limit after printing finished counting is that your call to counter.items() in your final loop allocates a list that contains all the keys and values from your dictionary. If your dictionary had a lot of data, this might be a very big list. A possible solution would be to use counter.iteritems() which is a generator. Rather than returning all the items in a list, it lets you iterate over them with much less memory usage.

So, I'd suggest trying this, as your final loop:

for key, value in counter.iteritems():
    writer.writerow([key, value])

Note that in Python 3, items returns a "dictionary view" object which does not have the same overhead as Python 2's version. It replaces iteritems, so if you later upgrade Python versions, you'll end up changing the loop back to the way it was.
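That difference is easy to see in Python 3, where items() returns a lightweight view object rather than a copied list:

```python
counter = {'a b': 2, 'c d': 1}

# Python 3: items() is a view over the dict, not a materialized list.
view = counter.items()
print(type(view))  # <class 'dict_items'>

# The view reflects later changes to the dict, which a Python 2 list would not.
counter['e f'] = 3
print(len(view))  # 3
```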

Mullet answered 5/10, 2013 at 0:2 Comment(1)
Correct, but the dictionary itself will also take up a lot of memory. OP should consider reading and processing the file incrementally instead of all at once.Natashianatassia
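For what it's worth, the whole job can be streamed with collections.Counter so that only the counts, and no intermediate lists, live in memory; a Python 3 sketch with hypothetical paths:

```python
import csv
from collections import Counter

def count_pairs(in_path, out_path):
    """Stream rows from in_path, count first-two-column pairs, write counts out."""
    counter = Counter()
    with open(in_path, newline='') as f:      # Python 3: text mode, not 'rb'
        for row in csv.reader(f):
            if len(row) > 1:
                counter[row[0] + ' ' + row[1]] += 1
    with open(out_path, 'w', newline='') as f:
        writer = csv.writer(f)
        for key, value in counter.items():    # a view in Python 3, no big list
            writer.writerow([key, value])
    return counter
```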
There are two storage areas involved: the stack and the heap. The stack is where the current state of a method call is kept (i.e., local variables and references), and the heap is where objects are stored.

I guess there are too many keys in the counter dict, which consumes too much memory in the heap region, so the process exhausts memory and gets killed.

To avoid that, don't create one giant object such as the counter dict.
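The size difference between a fully materialized list and a lazy generator is easy to measure:

```python
import sys

# A list stores a pointer per element; a generator stores only its frame state,
# so its size stays constant no matter how many items it will eventually yield.
eager = [i * i for i in range(100_000)]
lazy = (i * i for i in range(100_000))

print(sys.getsizeof(eager))  # hundreds of kilobytes
print(sys.getsizeof(lazy))   # a few hundred bytes
```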

1. Stack overflow

a program that creates too many local variables:

Python 2.7.9 (default, Mar  1 2015, 12:57:24) 
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> f = open('stack_overflow.py','w')
>>> f.write('def foo():\n')
>>> for x in xrange(10000000):
...   f.write('\tx%d = %d\n' % (x, x))
... 
>>> f.write('foo()')
>>> f.close()
>>> execfile('stack_overflow.py')
Killed

2. Out of memory

a program that creates a giant dict with too many keys:

>>> f = open('out_of_memory.py','w')
>>> f.write('def foo():\n')
>>> f.write('\tcounter = {}\n')
>>> for x in xrange(10000000):
...   f.write('\tcounter[%d] = %d\n' % (x, x))
... 
>>> f.write('foo()\n')
>>> f.close()
>>> execfile('out_of_memory.py')
Killed

Adina answered 2/4, 2016 at 6:14 Comment(0)
Most likely, you ran out of memory, so the kernel killed your process.

Have you heard about OOM Killer?

Here's a log from a script that I developed for processing a huge set of data from CSV files:

Mar 12 18:20:38 server.com kernel: [63802.396693] Out of memory: Kill process 12216 (python3) score 915 or sacrifice child
Mar 12 18:20:38 server.com kernel: [63802.402542] Killed process 12216 (python3) total-vm:9695784kB, anon-rss:7623168kB, file-rss:4kB, shmem-rss:0kB
Mar 12 18:20:38 server.com kernel: [63803.002121] oom_reaper: reaped process 12216 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

It was taken from /var/log/syslog.

Basically:

PID 12216 was elected as a victim (due to its use of over 9 GB of total-vm), so oom_killer reaped it.

Here's an article about OOM killer behavior.

Gazette answered 12/3, 2020 at 20:20 Comment(2)
+1, just to clarify: to understand how much RAM my program is trying to use, should I add up the values total-vm, anon-rss, and file-rss? Also, total-vm gives how much my program is using, not the actual available memory, right? Sorry, limited knowledge.Radiolucent
My knowledge is limited as well in this context, @momo. I'm a little bit out of time for further investigation, but I found this post that might help: #18846357 . What I can tell you is that, indeed, total-vm is the amount of memory used by the process.Gazette
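To answer that question from inside the program itself, the standard resource module reports the peak resident set size; a sketch (note the units differ by platform):

```python
import resource

# ru_maxrss is the peak resident set size of this process:
# kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak)
```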
I doubt anything is killing the process just because it takes a long time. Killed generally means something from the outside terminated the process, but probably not Ctrl-C in this case, since that would cause Python to exit with a KeyboardInterrupt exception. Also, in Python you would get a MemoryError exception if that were the problem. What might be happening is that you're hitting a bug in Python or standard library code that causes the process to crash.
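The "something from the outside" case is easy to reproduce: send SIGKILL to a child process and observe that Python reports the signal rather than a normal exit code; a minimal sketch (POSIX only):

```python
import signal
import subprocess
import sys

# Start a child that would sleep for a minute, then kill it with SIGKILL.
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])
proc.send_signal(signal.SIGKILL)

# Popen reports death-by-signal as a negative return code: -9 for SIGKILL.
print(proc.wait())  # -9
```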

Nyberg answered 4/10, 2013 at 19:52 Comment(2)
A crashing bug would be much more likely to result in a segfault than getting SIGKILL, unless Python has a raise(SIGKILL) somewhere in its code for some reason.Natashianatassia
A bug in python would not send SIGKILL.Beforehand
I just had the same thing happen to me when I tried to run a Python script from a shared folder in VirtualBox within the new Ubuntu 20.04 LTS. Python bailed with Killed while loading my own personal library. When I moved the folder to a local directory, the issue went away. It appears the Killed stop happened during the initial imports of my library, as I got messages about missing libraries once I moved the folder over.

The issue went away after I restarted my computer.

Therefore, people may want to try moving the program to a local directory if it's on a share of some kind, or it could be a transient problem that just requires a reboot of the OS.

Macgregor answered 27/4, 2020 at 1:52 Comment(2)
Wait, you had to reboot your host or the VM?Braunschweig
I rebooted the VM. The issue for me happened when I was building a new VM and I had just installed Python. After rebooting the VM, the issue went away. I hate rebooting as a way of fixing things so I spent a bunch of time trying to debug and after an hour of digging I gave up and did the bounce.Macgregor
I had a similar issue: the process was exiting with code 137, but the script I was running wasn't consuming nearly enough memory to be OOM-killed. It turned out the Python interpreter I was using was corrupted, so I had to remove that installation altogether and work off a new one.

Amalia answered 25/10, 2023 at 6:45 Comment(0)

© 2022 - 2025 — McMap. All rights reserved.