Understanding Keras prediction output of a rnn model in R

I'm trying out the Keras package in R by following this tutorial on temperature forecasting. However, the tutorial gives no explanation of how to predict with the trained RNN model, and I wonder how to do this. To train the model I used the following code, copied from the tutorial:

dir.create("~/Downloads/jena_climate", recursive = TRUE)
download.file(
    "https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip",
      "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip"
    )
unzip(
  "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip",
  exdir = "~/Downloads/jena_climate"
)

library(readr)
data_dir <- "~/Downloads/jena_climate"
fname <- file.path(data_dir, "jena_climate_2009_2016.csv")
data <- read_csv(fname)

data <- data.matrix(data[,-1])

train_data <- data[1:200000,]
mean <- apply(train_data, 2, mean)
std <- apply(train_data, 2, sd)
data <- scale(data, center = mean, scale = std)

generator <- function(data, lookback, delay, min_index, max_index,
                      shuffle = FALSE, batch_size = 128, step = 6) {
  if (is.null(max_index))
    max_index <- nrow(data) - delay - 1
  i <- min_index + lookback
  function() {
    if (shuffle) {
      rows <- sample(c((min_index+lookback):max_index), size = batch_size)
    } else {
      if (i + batch_size >= max_index)
        i <<- min_index + lookback
      rows <- c(i:min(i+batch_size, max_index))
      i <<- i + length(rows)
    }

    samples <- array(0, dim = c(length(rows), 
                                lookback / step,
                                dim(data)[[-1]]))
    targets <- array(0, dim = c(length(rows)))

    for (j in 1:length(rows)) {
      indices <- seq(rows[[j]] - lookback, rows[[j]], 
                     length.out = dim(samples)[[2]])
      samples[j,,] <- data[indices,]
      targets[[j]] <- data[rows[[j]] + delay,2]
    }            

    list(samples, targets)
  }
}

lookback <- 1440
step <- 6
delay <- 144
batch_size <- 128

train_gen <- generator(
  data,
  lookback = lookback,
  delay = delay,
  min_index = 1,
  max_index = 200000,
  shuffle = TRUE,
  step = step, 
  batch_size = batch_size
)

val_gen <- generator(
  data,
  lookback = lookback,
  delay = delay,
  min_index = 200001,
  max_index = 300000,
  step = step,
  batch_size = batch_size
)

test_gen <- generator(
  data,
  lookback = lookback,
  delay = delay,
  min_index = 300001,
  max_index = NULL,
  step = step,
  batch_size = batch_size
)

# How many steps to draw from val_gen in order to see the entire validation set
val_steps <- (300000 - 200001 - lookback) / batch_size

# How many steps to draw from test_gen in order to see the entire test set
test_steps <- (nrow(data) - 300001 - lookback) / batch_size

library(keras)

model <- keras_model_sequential() %>% 
  layer_flatten(input_shape = c(lookback / step, dim(data)[-1])) %>% 
  layer_dense(units = 32, activation = "relu") %>% 
  layer_dense(units = 1)

model %>% compile(
  optimizer = optimizer_rmsprop(),
  loss = "mae"
)

history <- model %>% fit_generator(
  train_gen,
  steps_per_epoch = 500,
  epochs = 20,
  validation_data = val_gen,
  validation_steps = val_steps
)

I tried to predict the temperature with the code below. If I am correct, this should give me the normalized predicted temperature for every batch. So when I denormalize the values and average them, I get the predicted temperature. Is this correct, and if so, for which time is the prediction made (latest observation time + delay)?

prediction.set <- test_gen()[[1]]
prediction <- predict(model, prediction.set)

Also, what is the correct way to use keras::predict_generator() and the test_gen() function? If I use the following code:

model %>% predict_generator(generator = test_gen,
                            steps = test_steps)

it gives this error:

error in py_call_impl(callable, dots$args, dots$keywords) : 
 ValueError: Error when checking model input: the list of Numpy
 arrays that you are passing to your model is not the size the model expected. 
 Expected to see 1 array(s), but instead got the following list of 2 arrays: 
 [array([[[ 0.50394005,  0.6441838 ,  0.5990761 , ...,  0.22060473,
          0.2018686 , -1.7336458 ],
        [ 0.5475698 ,  0.63853574,  0.5890239 , ..., -0.45618412,
         -0.45030192, -1.724062...
Recoverable answered 28/2, 2018 at 14:36 Comment(0)

Note: my familiarity with R syntax is limited, so unfortunately I can't give you an answer in R. Instead, I am using Python in my answer; I hope you can easily translate my words, at least, back to R.


... If I am correct, this should give me the normalized predicted temperature for every batch.

Yes, that's right. The predictions would be normalized since you have trained it with normalized labels:

data <- scale(data, center = mean, scale = std)

Therefore, you would need to denormalize the values using the computed mean and std to find the real predictions:

pred = model.predict(test_data)
denorm_pred = pred * std + mean
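One caveat: mean and std above were computed per column (apply(train_data, 2, ...)), so for temperature predictions only the temperature column's statistics should be used. A minimal NumPy sketch with made-up statistics (assuming, as in the question's generator target, that the temperature is the second column):

```python
import numpy as np

# Hypothetical per-column statistics, standing in for the mean/std
# computed with apply(train_data, 2, ...) in the question's R code.
mean = np.array([989.2, 9.45, 283.5])  # made-up column means
std = np.array([8.3, 8.42, 8.5])       # made-up column standard deviations

pred = np.array([0.5, -0.25, 1.0])     # illustrative normalized predictions

# Denormalize with the temperature column's statistics only (index 1 here)
denorm_pred = pred * std[1] + mean[1]
print(denorm_pred)
```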

... for which time is then predicted (latest observation time + delay?)

That's right. Concretely, since in this particular dataset a new observation is recorded every ten minutes and you have set delay = 144, the predicted value is the temperature 24 hours ahead of the last given observation (i.e. 144 * 10 = 1440 minutes = 24 hours).
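That arithmetic can be spelled out in a few lines (the 10-minute sampling interval is a property of the Jena dataset; delay = 144 comes from the question):

```python
delay = 144            # prediction offset in observations, as set in the question
minutes_per_obs = 10   # the Jena climate data is recorded every 10 minutes

lead_minutes = delay * minutes_per_obs
lead_hours = lead_minutes / 60
print(lead_hours)  # 24.0
```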

Also, what is the correct way to use keras::predict_generator() and the test_gen() function?

predict_generator takes a generator that outputs only test samples, not labels (labels are not needed when performing prediction; they are needed when training, i.e. fit_generator(), and when evaluating the model, i.e. evaluate_generator()). That's why the error says the model expected one array but got a list of two. So you need to define a generator that yields only test samples. Alternatively, in Python, you can wrap your existing generator inside another function that yields only the input samples (I don't know whether you can do this in R or not):

def pred_generator(gen):
    for data, labels in gen:
        yield data  # discards labels

preds = model.predict_generator(pred_generator(test_generator), number_of_steps)
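To see what the wrapper does without involving Keras at all, here is a toy stand-in for the data generator (the generator and its values are made up for illustration):

```python
def toy_gen():
    # Stand-in for a Keras data generator: yields (samples, labels) pairs
    for i in range(3):
        yield [i, i + 1], i + 10

def pred_generator(gen):
    for data, labels in gen:
        yield data  # discard the labels, keep only the samples

print(list(pred_generator(toy_gen())))  # [[0, 1], [1, 2], [2, 3]]
```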

You also need to provide the number of generator steps required to cover all the samples in the test data: num_steps = total_number_of_samples / batch_size. For example, if you have 1000 samples and the generator yields 10 samples at a time, you need to run it for 1000 / 10 = 100 steps.
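When the sample count is not an exact multiple of the batch size, rounding up ensures the final partial batch is still drawn; a small sketch:

```python
import math

def num_steps(total_samples, batch_size):
    # Round up so a trailing partial batch still gets one step
    return math.ceil(total_samples / batch_size)

print(num_steps(1000, 10))  # 100
print(num_steps(1005, 10))  # 101
```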

Bonus: To see how good your model performs you can use evaluate_generator using the existing test generator (i.e. test_gen):

loss = model.evaluate_generator(test_gen, number_of_steps)

The given loss is also normalized; to denormalize it (to get a better sense of the prediction error) you just need to multiply it by std (specifically the temperature column's std, since that is the target). You don't need to add mean, since you are using mae, i.e. mean absolute error, as the loss function:

denorm_loss = loss * std

This would tell you how far off your predictions are on average. For example, if you are predicting the temperature, a denorm_loss of 5 means that the predictions are on average 5 degrees off (either above or below the actual value).
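The reason multiplying by std alone works for MAE: subtracting the same mean from both predictions and targets cancels in their difference, so only the scale factor remains. A quick numeric check with made-up temperatures:

```python
import numpy as np

std_t, mean_t = 8.42, 9.45               # hypothetical temperature statistics
true_t = np.array([12.0, 15.0, 8.0])     # made-up true temperatures (degC)
pred_t = np.array([13.0, 14.5, 9.0])     # made-up predictions (degC)

# Normalize both with the same mean/std, as done during training
true_n = (true_t - mean_t) / std_t
pred_n = (pred_t - mean_t) / std_t

mae_norm = np.mean(np.abs(true_n - pred_n))
mae_degc = np.mean(np.abs(true_t - pred_t))

# The mean shift cancels in the difference, so only std rescales the error
assert np.isclose(mae_norm * std_t, mae_degc)
```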


Update: For prediction, you can define a new generator using an existing generator in R like this:

pred_generator <- function(gen) {
  function() { # wrap it in a function to make it callable
    gen()[1]  # call the given generator and get the first element (i.e. samples)
  }
}

preds <- model %>% 
  predict_generator(
    generator = pred_generator(test_gen), # pass test_gen directly to pred_generator without calling it
    steps = test_steps
  )

evaluate_generator(model, test_gen, test_steps)
Minyan answered 27/9, 2018 at 19:26 Comment(8)
Thanks for taking the time to answer this question. Following your suggestions (in R), which I found very helpful, I get errors for both the predict_generator function and evaluate_generator that seem Python-related. For predict_generator, the error reads "ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()". evaluate_generator(model, test_gen, test_steps) gives "error in py_call_impl(callable, dots$args, dots$keywords) : AttributeError: 'str' object has no attribute 'ndim'". Any ideas? Best – Dottie
@Dottie The second error you mentioned has been reported before. It seems that upgrading the Keras package resolved the problem. Try upgrading Keras to the latest version and see if that fixes it. If not, let me know and I will investigate further. – Minyan
After I updated keras to version 2.2.0.9000 I get the error below when I try to fit the model: "AttributeError: 'str' object has no attribute 'shape'". Seems strange to me. The error persists after downgrading to version 2.2.0 again. – Dottie
@Dottie Pretty strange indeed! At least the error should not have changed if there was anything wrong with the code, so I guess it has to do with the package. Maybe an uninstall and re-install of the package helps. Also, would you please put your code in a GitHub gist and share the link here so that I can take a look and maybe test it on my machine? – Minyan
Have created a gist. You find it here: gist.github.com. Thanks. – Dottie
@Dottie Well, I learned some R today :) You don't need to use a named list; just use an ordinary index-based list as before. As for the definition of pred_generator, I have updated my answer to include the correct way. After the modifications, I tested the code and it works fine on my machine. BTW, my Keras package version is 2.2.0, the TF version is 1.9, and the R version is 3.4.4. – Minyan
Thanks a lot! Will test it once I get predict_generator working. The named list is not needed, you are right; it just makes extractions a little safer. – Dottie
Working here as well now. Appreciate your help. – Dottie
