Vectorize Sequences explanation

Studying Deep Learning with Python, I can't comprehend the following simple block of code, which encodes the integer sequences into a binary matrix.

import numpy as np
from keras.datasets import imdb

def vectorize_sequences(sequences, dimension=10000):
    # Create an all-zero matrix of shape (len(sequences), dimension)
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.  # set specific indices of results[i] to 1s
    return results

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

x_train = vectorize_sequences(train_data)

And the output of x_train[0] is something like

x_train[0]
array([0., 1., 1., ..., 0., 0., 0.])

Can someone shed some light on why there are 0.'s in the x_train array, when only 1.'s are being assigned on each iteration i? I mean, shouldn't they all be 1's?

Ephrem answered 7/5, 2018 at 11:34

A related question: #68422910 – Vacuous

The for loop here is not processing the whole matrix. As you can see, it enumerates the elements of the sequence, so it loops over only one dimension. Let's take a simple example:

import numpy as np

t = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
r = np.zeros((len(t), 10))

Output

array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

Then we modify the elements in the same way you do:

for i, s in enumerate(t):
    r[i, s] = 1.

array([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])

You can see that the for loop modified only len(t) elements, namely those with index [i, s] (in this case (0, 1), (1, 2), (2, 3), and so on).
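
Note that in the question's vectorize_sequences the index is not a single scalar s but a whole list of word ids, so NumPy's fancy indexing fills several columns of row i in one assignment. A minimal sketch with made-up toy sequences:

import numpy as np

# Toy input: each "review" is a short list of word ids
# (hypothetical data standing in for rows of train_data).
sequences = [[1, 3, 5], [0, 2, 2, 7]]

results = np.zeros((len(sequences), 10))
for i, sequence in enumerate(sequences):
    # `sequence` is a list, so this one assignment sets every listed
    # column of row i to 1. (repeated ids are simply written twice)
    results[i, sequence] = 1.

print(results)
# [[0. 1. 0. 1. 0. 1. 0. 0. 0. 0.]
#  [1. 0. 1. 0. 0. 0. 0. 1. 0. 0.]]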

Laylalayman answered 7/5, 2018 at 12:9

The script transforms your dataset into a binary vector space model. Let's dissect things one by one.

First, if we examine the contents of train_data, we see that each review is represented as a sequence of word ids, where each word id corresponds to one specific word:

print(train_data[0])  # print the first review
[1, 14, 22, 16, 43, 530, 973, ..., 5345, 19, 178, 32]

Now, this would be very difficult to feed to the network: the lengths of the reviews vary, and fractional values between the word ids have no meaning (e.g., what would it mean if the output were 43.5?).

So what we can do is create a single long vector the size of the entire dictionary (dimension=10000 in your example). We then associate each element/index of this vector with one word/word_id. So the word represented by word id 14 will now be represented by the 14th element of this vector.

Each element will be either 0 (the word is not present in the review) or 1 (the word is present in the review). We can also treat this as a probability, so even values between 0 and 1 have a meaning. Furthermore, every review is now represented by this very long (sparse) vector, which has a constant length for every review.

So, on a smaller scale, if:

word      word_id
I      -> 0
you    -> 1
he     -> 2
be     -> 3
eat    -> 4
happy  -> 5
sad    -> 6
banana -> 7
a      -> 8

the sentences would then be processed in the following way:

I be happy      -> [0,3,5]   -> [1,0,0,1,0,1,0,0,0]
I eat a banana. -> [0,4,8,7] -> [1,0,0,0,1,0,0,1,1]
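
Here is a minimal sketch of that toy example in code, assuming the hypothetical 9-word vocabulary from the table above:

import numpy as np

# Hypothetical toy vocabulary (word -> word_id) from the table above
vocab = {"I": 0, "you": 1, "he": 2, "be": 3, "eat": 4,
         "happy": 5, "sad": 6, "banana": 7, "a": 8}

sentences = [["I", "be", "happy"], ["I", "eat", "a", "banana"]]

# First map each sentence to its list of word ids...
sequences = [[vocab[w] for w in s] for s in sentences]
print(sequences)  # [[0, 3, 5], [0, 4, 8, 7]]

# ...then to a fixed-length multi-hot vector over the whole vocabulary
vectors = np.zeros((len(sequences), len(vocab)))
for i, sequence in enumerate(sequences):
    vectors[i, sequence] = 1.

print(vectors)
# [[1. 0. 0. 1. 0. 1. 0. 0. 0.]
#  [1. 0. 0. 0. 1. 0. 0. 1. 1.]]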

Note the word sparse above. It means there will be A LOT MORE zeros than ones, and we can take advantage of that: instead of checking every word to see whether it is contained in a review or not, we check the substantially smaller list of only those words that DO appear in the review.

Therefore, we can make things easy for ourselves and create the reviews × vocabulary matrix of zeros right away with np.zeros((len(sequences), dimension)), and then just go through the words in each review and flip the indicator to 1.0 at the position corresponding to each word:

result[review_id][word_id] = 1.0

So instead of doing 25000 × 10000 = 250,000,000 operations, we only do as many operations as there are words in the training reviews: 5,967,841. That's just ~2.4% of the original amount of operations.
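
If you want to check a figure like that yourself, here is a quick sketch (the exact word count depends on the dataset version, so treat 5,967,841 as approximate):

from keras.datasets import imdb

(train_data, train_labels), _ = imdb.load_data(num_words=10000)

dense_ops = len(train_data) * 10000               # touch every cell: 250,000,000
sparse_ops = sum(len(seq) for seq in train_data)  # only the words that actually occur

print(dense_ops, sparse_ops, sparse_ops / dense_ops)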

Potation answered 7/5, 2018 at 12:57
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results
Stanwood answered 18/2, 2023 at 12:52

Welcome to Stack Overflow! While this code may answer the question, providing additional context regarding why and/or how this code answers the question improves its long-term value. – Edmundedmunda
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

This is for vectorizing the data.
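
For context, a sketch of how this is used on the IMDB data from the question (re-using the vectorize_sequences defined above; the shape assumes the standard Keras split of 25,000 training reviews):

from keras.datasets import imdb

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

print(x_train.shape)  # (25000, 10000): one multi-hot row per review
print(x_train[0])     # mostly 0s, with 1s at the word ids that occur in review 0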

Marcomarconi answered 9/5, 2023 at 18:12

As it's currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. – Shareeshareholder
