How to code a sequence-to-sequence RNN in Keras?

I am trying to write a sequence-to-sequence RNN in Keras, based on what I understood from the web. I first tokenized the text, converted it into sequences, and padded the sequences to form the feature variable X. The target variable Y was obtained by shifting x one step to the left and then padding it. Lastly, I fed the feature and target variables to my LSTM model.

This is the code I wrote in Keras for that purpose.

from keras.preprocessing.text import Tokenizer,base_filter
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Activation,Dropout,Embedding
from keras.layers import LSTM


def shift(seq, n):
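    # rotate the list left by n positions, so each element is paired with its successor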
    n = n % len(seq)
    return seq[n:] + seq[:n]

txt="abcdefghijklmn"*100

tk = Tokenizer(nb_words=2000, filters=base_filter(), lower=True, split=" ")
tk.fit_on_texts(txt)
x = tk.texts_to_sequences(txt)
#shifting one step to the left to build the targets
y = shift(x,1)

#padding sequence
max_len = 100
max_features=len(tk.word_counts)
X = pad_sequences(x, maxlen=max_len)
Y = pad_sequences(y, maxlen=max_len)

#lstm model
model = Sequential()
model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(max_len))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')

model.fit(X, Y, batch_size=200, nb_epoch=10)

The problem is that it throws this error:

Epoch 1/10
IndexError: index 14 is out of bounds for size 14
Apply node that caused the error: AdvancedSubtensor1(if{inplace}.0, Reshape{1}.0)
Toposort index: 80
Lafountain answered 30/1/2017 at 10:47

Comments:
Could you provide the max_features value? – Asinine
It's the same as the length of "abcdefghijklmn", that is, 14. – Lafountain
I suspect what you really mean is a many-to-many LSTM. Sequence to sequence means something else (it is used in machine translation). See here for an example similar to yours: github.com/sachinruk/deepschool.io/blob/master/… – Guzzle
Answer:

The problem lies in:

model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))

The Embedding documentation says that the first argument should be set to the size of the vocabulary + 1, because index 0 must always be reserved for the null (padding) word. You therefore need to change this line to:

model.add(Embedding(max_features + 1, 128, input_length=max_len, dropout=0.2))
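
To see why the original line fails: Tokenizer assigns word indices starting at 1, so with 14 distinct characters the largest index produced is 14, while Embedding(14, ...) only accepts indices 0 through 13. Below is a minimal sketch of the off-by-one, assuming the current Keras API (where Embedding no longer takes a dropout argument); the index values are illustrative.

import numpy as np
from tensorflow.keras.layers import Embedding

max_features = 14              # len(tk.word_counts); tokenizer indices run 1..14
ids = np.array([[1, 7, 14]])   # plausible tokenizer output, including top index 14

emb_ok = Embedding(max_features + 1, 8)   # accepts indices 0..14; 0 is the pad slot
print(emb_ok(ids).shape)                  # (1, 3, 8)

# Embedding(max_features, 8) would only accept indices 0..13, so looking up
# index 14 fails with exactly "index 14 is out of bounds for size 14".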
Earleneearley answered 30/1/2017 at 22:08

Comments:
Hey, thanks, it worked. Can I ask another question: how should I predict a new sequence from this model? – Lafountain
In the newest Keras version some parameters have been renamed: nb_words to num_words, and nb_epoch to epochs (sketch below). – Aerolite
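
For reference, a sketch of the question's calls under the renamed Keras 2 API, assuming the model, X, and Y from the question (base_filter() was also removed; the filters argument now takes a plain string of characters to strip):

from tensorflow.keras.preprocessing.text import Tokenizer

tk = Tokenizer(num_words=2000, lower=True, split=" ")  # nb_words -> num_words
model.fit(X, Y, batch_size=200, epochs=10)             # nb_epoch -> epochs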
