I am trying to run a few million rows of a dataset through a function.
- I read the data from CSV into a DataFrame.
- I use a drop list to drop columns I don't need.
- I pass each row through an NLTK-based function in a for loop.
code:

import string
from nltk.corpus import stopwords

def nlkt(val):
    val = repr(val)
    # Drop English stopwords (note: stopwords.words() is re-read on every call)
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    # str(clean_txt) renders the list as one string, then punctuation is stripped char by char
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    # Drop digits
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string
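For illustration, here is a call on a made-up sample string (both the input and the annotated result are my own example, not from the actual data):

sample = "The order #1234 was submitted by ACME Corp."
print(nlkt(sample))
# 'was' and 'by' are dropped as stopwords, '#1234' loses its punctuation
# and digits, giving roughly: "The order  submitted ACME Corp"
# (the leading quote that repr() adds even keeps 'The' from matching the stopword list)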
Now I am calling the above function in a for loop to run it over a few million records. Even though I am on a heavyweight server with a 24-core CPU and 88 GB of RAM, the loop takes too much time and does not use the computational power that is there.

I am calling the above function like this:
import pandas as pd

data = pd.read_excel(scrPath + "UserData_Full.xlsx", encoding='utf-8')
droplist = ['Submitter', 'Environment']
data.drop(droplist, axis=1, inplace=True)

# Merge the columns Company and Detailed_Description
data['Anylize_Text'] = data['Company'].astype(str) + ' ' + data['Detailed_Description'].astype(str)

finallist = []
for eachlist in data['Anylize_Text']:
    z = nlkt(eachlist)
    finallist.append(z)
The above code works perfectly OK, it is just too slow when we have a few million records. This is only a sample of records in Excel; the actual data will be in a DB and will run to a few hundred million rows. Is there any way I can speed up passing the data through the function, i.e. use more of the computational power?
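One direction I am considering (a sketch, not something the code above already does) is to spread the loop over worker processes with multiprocessing, since a plain Python for loop only ever runs on one core. Assuming nlkt is defined at module top level so the workers can pickle it:

from multiprocessing import Pool

if __name__ == '__main__':
    # Pool() defaults to os.cpu_count() workers, so all 24 cores are used;
    # chunksize batches the rows to cut inter-process overhead.
    with Pool() as pool:
        finallist = pool.map(nlkt, data['Anylize_Text'], chunksize=1000)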
Comments:

- Does finallist really need to contain all the sentences, or could you process one at a time? The nlkt function contains a large number of temporary variables, so it will consume something like 10x the memory while it's processing one call, though it will be freed when it's done. – Balas
- Use set() in cases where word order is not important. – Announcer
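Following up on the set() hint in the comments: the function above rebuilds stopwords.words('english') on every call and scans that list once per word. A minimal sketch of the same cleaning with the stopword list built once as a set (nlkt_fast and DROP_CHARS are names I made up):

import string
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words('english'))   # built once, O(1) membership tests
DROP_CHARS = set(string.punctuation) | set(string.digits)

def nlkt_fast(val):
    # Keep non-stopword tokens, then strip punctuation and digits.
    kept = [w for w in str(val).split() if w.lower() not in STOPWORDS]
    return ''.join(ch for ch in ' '.join(kept) if ch not in DROP_CHARS)

This also skips the repr()/str(list) round-trip, so the output differs slightly from nlkt (no stray list brackets and quotes to strip out), but the cleaning intent is the same.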