Train an LSTM model with keras on the given dataset, using the GloVe embeddings available at https://nlp.stanford.edu/projects/glove/.

Arguments

data

The sentiment140 training dataset, with a text column containing the tweet text and a polarity column containing the sentiment polarity.

max_words

Maximum number of words to keep in the vocabulary, ranked by word frequency.

maxlen

Maximum length of a sequence.

embedding_dim

Output dimension of the embedding layer.

epochs

Number of epochs to train for.

batch_size

Batch size for model fitting.

validation_split

Fraction of the training data to hold out for validation.

lstm_units

Number of units, i.e. the output dimension of the LSTM layer.

seed

Seed for shuffling training data.

glove_file_path

File path of the pre-trained GloVe embeddings file (see the sketch in Details below for how it is used).

model_save_path

File path where the trained model will be saved.
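
Details

A pipeline along the following lines ties the arguments above together. This is a minimal sketch rather than the package's actual implementation: it assumes the keras R interface, a data frame with text and polarity columns (with positive tweets coded as 4, as in sentiment140), and illustrative names such as embeddings_index and embedding_matrix.

  library(keras)

  # Tokenize the tweets and pad every sequence to a fixed length
  tokenizer <- text_tokenizer(num_words = max_words) %>%
    fit_text_tokenizer(data$text)
  sequences <- texts_to_sequences(tokenizer, data$text)
  x <- pad_sequences(sequences, maxlen = maxlen)
  y <- as.integer(data$polarity == 4)  # assumes sentiment140 coding: 4 = positive

  # Shuffle the training data with the supplied seed
  set.seed(seed)
  indices <- sample(nrow(x))
  x <- x[indices, ]
  y <- y[indices]

  # Parse the GloVe file into a word -> vector lookup
  glove_lines <- readLines(glove_file_path)
  embeddings_index <- new.env(hash = TRUE, parent = emptyenv())
  for (line in glove_lines) {
    values <- strsplit(line, " ")[[1]]
    embeddings_index[[values[[1]]]] <- as.numeric(values[-1])
  }

  # Build an embedding matrix covering the max_words most frequent words
  embedding_matrix <- matrix(0, nrow = max_words, ncol = embedding_dim)
  for (word in names(tokenizer$word_index)) {
    index <- tokenizer$word_index[[word]]
    if (index < max_words) {
      vec <- embeddings_index[[word]]
      if (!is.null(vec)) embedding_matrix[index + 1, ] <- vec
    }
  }

  # Embedding layer initialised with frozen GloVe weights, followed by one LSTM layer
  model <- keras_model_sequential() %>%
    layer_embedding(input_dim = max_words, output_dim = embedding_dim,
                    input_length = maxlen) %>%
    layer_lstm(units = lstm_units) %>%
    layer_dense(units = 1, activation = "sigmoid")

  get_layer(model, index = 1) %>%
    set_weights(list(embedding_matrix)) %>%
    freeze_weights()

  model %>% compile(
    optimizer = "rmsprop",
    loss = "binary_crossentropy",
    metrics = c("accuracy")
  )

  history <- model %>% fit(
    x, y,
    epochs = epochs,
    batch_size = batch_size,
    validation_split = validation_split
  )

  save_model_hdf5(model, model_save_path)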

Value

A plot of the training run showing training vs. validation loss and accuracy.
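
With the keras R package, such a plot is typically produced by plotting the history object returned by fit(); a minimal illustration, assuming a history object as in the sketch above:

  # Draws training vs. validation loss and accuracy curves across epochs
  plot(history)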

Examples

# NOT RUN {
  data(sentiment140_train)
  train_lstm_with_glove(data = sentiment140_train,
                        glove_file_path = "./glove.6B.100d.txt",
                        model_save_path = "./train_glove_lstm.h5")
# }
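
If the model is written out with keras's standard HDF5 helper (an assumption suggested by the .h5 extension of model_save_path), the saved file can later be reloaded for prediction:

  library(keras)
  model <- load_model_hdf5("./train_glove_lstm.h5")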