cadl.deprecated package

Submodules

cadl.deprecated.seq2seq module

Sequence to Sequence models w/ Attention and BiDirectional Dynamic RNNs.

Parag K. Mital

cadl.deprecated.seq2seq.batch_generator(Xs, Ys, source_lengths, target_lengths, batch_size=50)[source]
cadl.deprecated.seq2seq.create_model(source_vocab_size=20000, target_vocab_size=20000, input_embed_size=1024, target_embed_size=1024, share_input_and_target_embedding=True, n_neurons=512, n_layers=3, use_lstm=True, use_attention=True, max_sequence_size=50)[source]
cadl.deprecated.seq2seq.id2word(ids, vocab)[source]
cadl.deprecated.seq2seq.preprocess(text, min_count=10, max_length=50)[source]
cadl.deprecated.seq2seq.test_cornell()[source]
cadl.deprecated.seq2seq.word2id(words, vocab)[source]
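
A minimal usage sketch of this module. The return shape of preprocess and the form of vocab are assumptions (they are not documented above), so treat this as illustrative only:

    from cadl.deprecated import seq2seq

    # Hypothetical toy corpus. preprocess() drops words rarer than min_count
    # and clips sentences to max_length; the (sequences, vocab) return shape
    # is an assumption, not documented above.
    text = ["hello how are you", "i am fine thanks"]
    sequences, vocab = seq2seq.preprocess(text, min_count=1, max_length=50)

    # word2id/id2word are assumed to be inverse lookups over vocab.
    ids = seq2seq.word2id("hello how are you".split(), vocab)
    words = seq2seq.id2word(ids, vocab)

    # Build the default attention model described by create_model() above.
    net = seq2seq.create_model(source_vocab_size=len(vocab),
                               target_vocab_size=len(vocab))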

cadl.deprecated.seq2seq_model module

Sequence-to-sequence model with an attention mechanism.

class cadl.deprecated.seq2seq_model.Seq2SeqModel(source_vocab_size, target_vocab_size, buckets, size, num_layers, max_gradient_norm, batch_size, learning_rate, learning_rate_decay_factor, use_lstm=False, num_samples=512, forward_only=False, dtype=tf.float32)[source]

Bases: object

Sequence-to-sequence model with attention and support for multiple buckets.

This class implements a multi-layer recurrent neural network as the encoder and an attention-based decoder. This is the same as the model described in this paper: http://arxiv.org/abs/1412.7449 - please look there for details, or into the seq2seq library for the complete model implementation. This class also allows the use of GRU cells in addition to LSTM cells, and of sampled softmax to handle large output vocabulary sizes. A single-layer version of this model, but with a bi-directional encoder, was presented in http://arxiv.org/abs/1409.0473, and sampled softmax is described in Section 3 of http://arxiv.org/abs/1412.2007.
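
A construction sketch using the signature above; the bucket list and hyperparameters are illustrative, not recommended values:

    import tensorflow as tf
    from cadl.deprecated.seq2seq_model import Seq2SeqModel

    # Each bucket pairs (max encoder length, max decoder length); a sentence
    # pair is padded to the smallest bucket that fits it.
    buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]

    model = Seq2SeqModel(source_vocab_size=40000,
                         target_vocab_size=40000,
                         buckets=buckets,
                         size=512,                  # units per layer
                         num_layers=3,
                         max_gradient_norm=5.0,
                         batch_size=64,
                         learning_rate=0.5,
                         learning_rate_decay_factor=0.99,
                         use_lstm=False,            # GRU cells by default
                         num_samples=512,           # sampled softmax size
                         forward_only=False)        # build the backward pass too
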
get_batch(data, bucket_id)[source]

Get a random batch of data from the specified bucket and prepare it for step(...).

Data fed to step(...) must be a list of batch-major vectors, while data here contains individual length-major cases. So the main logic of this function is to re-index the data cases into the proper format for feeding, as sketched below.

Parameters:
  • data – a tuple of size len(self.buckets) in which each element contains lists of pairs of input and output data that we use to create a batch.
  • bucket_id – integer, which bucket to get the batch for.
Returns:

The triple (encoder_inputs, decoder_inputs, target_weights) for the constructed batch that has the proper format to call step(...) later.
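
The re-indexing described above amounts to transposing padded id sequences from length-major (one list per case) to batch-major (one vector per time step). A generic sketch of that transformation, independent of this class:

    import numpy as np

    def to_batch_major(cases, pad_id=0):
        """Turn a list of variable-length id sequences into one
        batch-sized vector per time step."""
        length = max(len(c) for c in cases)
        padded = [list(c) + [pad_id] * (length - len(c)) for c in cases]
        # Column t of the padded matrix is the batch input for time step t.
        return [np.array([p[t] for p in padded], dtype=np.int32)
                for t in range(length)]

    # Three length-major cases -> three batch-major vectors of size 3.
    steps = to_batch_major([[4, 7], [5], [6, 8, 9]])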

step(session, encoder_inputs, decoder_inputs, target_weights, bucket_id, forward_only)[source]

Run a step of the model feeding the given inputs.

Parameters:
  • session – tensorflow session to use.
  • encoder_inputs – list of numpy int vectors to feed as encoder inputs.
  • decoder_inputs – list of numpy int vectors to feed as decoder inputs.
  • target_weights – list of numpy float vectors to feed as target weights.
  • bucket_id – which bucket of the model to use.
  • forward_only – whether to run only the forward pass, or the backward (training) step as well.
Returns:

A triple consisting of gradient norm (or None if we did not do backward), average perplexity, and the outputs.

Raises:

ValueError – if length of encoder_inputs, decoder_inputs, or target_weights disagrees with bucket size for the specified bucket_id.
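
Putting get_batch and step together, one training iteration might look like the following sketch (model, buckets, and train_set are assumed to come from earlier setup, as in the construction example above):

    import random
    import tensorflow as tf

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        bucket_id = random.randrange(len(buckets))   # pick a bucket at random
        # train_set: a tuple of per-bucket (input, output) pair lists,
        # as described for get_batch above.
        encoder_inputs, decoder_inputs, target_weights = model.get_batch(
            train_set, bucket_id)
        grad_norm, perplexity, _ = model.step(
            sess, encoder_inputs, decoder_inputs, target_weights,
            bucket_id, forward_only=False)           # include the backward pass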

cadl.deprecated.seq2seq_utils module

Utilities for downloading data from WMT, tokenizing, vocabularies.

cadl.deprecated.seq2seq_utils.basic_tokenizer(sentence)[source]

Very basic tokenizer: split the sentence into a list of tokens.
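
A sketch of what such a tokenizer typically does: split on whitespace, then split punctuation off each fragment. The exact pattern used by this module may differ, and note that sentences here are handled as bytes:

    import re

    _WORD_SPLIT = re.compile(rb"([.,!?\"':;)(])")

    def basic_tokenize(sentence):
        # sentence is bytes; split on spaces, then peel punctuation off
        # each space-separated fragment.
        words = []
        for fragment in sentence.strip().split():
            words.extend(_WORD_SPLIT.split(fragment))
        return [w for w in words if w]

    basic_tokenize(b"I have a dog.")  # [b'I', b'have', b'a', b'dog', b'.']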

cadl.deprecated.seq2seq_utils.create_vocabulary(vocabulary_path, data_path, max_vocabulary_size, tokenizer=None, normalize_digits=True)[source]

Create vocabulary file (if it does not exist yet) from data file.

Data file is assumed to contain one sentence per line. Each sentence is tokenized and digits are normalized (if normalize_digits is set). The vocabulary contains the most frequent tokens, up to max_vocabulary_size. We write it to vocabulary_path in a one-token-per-line format, so that the token on the first line gets id=0, the token on the second line gets id=1, and so on. A simplified sketch of this routine follows the parameter list below.

Parameters:
  • vocabulary_path – path where the vocabulary will be created.
  • data_path – data file that will be used to create vocabulary.
  • max_vocabulary_size – limit on the size of the created vocabulary.
  • tokenizer – a function to use to tokenize each data sentence; if None, basic_tokenizer will be used.
  • normalize_digits – Boolean; if true, all digits are replaced by 0s.
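
The core of the routine is a frequency count capped at max_vocabulary_size. A simplified sketch, not this module's exact implementation (special tokens such as padding and unknown-word markers are omitted):

    from collections import Counter

    def build_vocab_file(vocabulary_path, data_path, max_vocabulary_size,
                         tokenizer, normalize_digits=True):
        counts = Counter()
        with open(data_path) as f:
            for line in f:
                for token in tokenizer(line):
                    if normalize_digits:
                        token = "".join("0" if c.isdigit() else c
                                        for c in token)
                    counts[token] += 1
        with open(vocabulary_path, "w") as f:
            # Most frequent first; the line number (from 0) becomes the id.
            for token, _ in counts.most_common(max_vocabulary_size):
                f.write(token + "\n")
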
cadl.deprecated.seq2seq_utils.data_to_token_ids(data_path, target_path, vocabulary_path, tokenizer=None, normalize_digits=True)[source]

Tokenize data file and turn into token-ids using given vocabulary file.

This function loads data line-by-line from data_path, calls sentence_to_token_ids on each line, and saves the result to target_path. See sentence_to_token_ids for details of the token-id format. A usage sketch follows the parameter list below.

Parameters:
  • data_path – path to the data file in one-sentence-per-line format.
  • target_path – path where the file with token-ids will be created.
  • vocabulary_path – path to the vocabulary file.
  • tokenizer – a function to use to tokenize each sentence; if None, basic_tokenizer will be used.
  • normalize_digits – Boolean; if true, all digits are replaced by 0s.
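
A typical call sequence over a raw corpus file, using the documented signatures (file names are illustrative):

    from cadl.deprecated import seq2seq_utils as utils

    utils.create_vocabulary("vocab40000.en", "train.en", 40000)
    utils.data_to_token_ids("train.en", "train.ids40000.en", "vocab40000.en")
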
cadl.deprecated.seq2seq_utils.get_wmt_enfr_dev_set(directory)[source]

Download the WMT en-fr dev set to directory unless it’s there.

cadl.deprecated.seq2seq_utils.get_wmt_enfr_train_set(directory)[source]

Download the WMT en-fr training corpus to directory unless it’s there.

cadl.deprecated.seq2seq_utils.gunzip_file(gz_path, new_path)[source]

Unzip gz_path into new_path.

cadl.deprecated.seq2seq_utils.initialize_vocabulary(vocabulary_path)[source]

Initialize vocabulary from file.

We assume the vocabulary is stored one-item-per-line, so a file:

  dog
  cat

will result in a vocabulary {“dog”: 0, “cat”: 1}, and this function will also return the reversed vocabulary [“dog”, “cat”].

Parameters: vocabulary_path – path to the file containing the vocabulary.
Returns: the vocabulary (a dictionary mapping strings to integers), and the reversed vocabulary (a list, which inverts the vocabulary mapping).
Return type: a pair
Raises: ValueError – if the provided vocabulary_path does not exist.
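
The documented behavior corresponds to a few lines of file parsing; a sketch of an equivalent loader:

    import os

    def load_vocabulary(vocabulary_path):
        if not os.path.exists(vocabulary_path):
            raise ValueError("Vocabulary file %s not found." % vocabulary_path)
        with open(vocabulary_path) as f:
            rev_vocab = [line.strip() for line in f]
        # Line order gives the ids: first line -> 0, second -> 1, ...
        vocab = {token: idx for idx, token in enumerate(rev_vocab)}
        return vocab, rev_vocab
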
cadl.deprecated.seq2seq_utils.maybe_download(directory, filename, url)[source]

Download filename from url unless it’s already in directory.
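
An equivalent download-if-missing helper, sketched with urllib (this module may use a different mechanism internally):

    import os
    import urllib.request

    def fetch_if_missing(directory, filename, url):
        os.makedirs(directory, exist_ok=True)
        filepath = os.path.join(directory, filename)
        if not os.path.exists(filepath):
            urllib.request.urlretrieve(url, filepath)
        return filepath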

cadl.deprecated.seq2seq_utils.prepare_data(data_dir, from_train_path, to_train_path, from_dev_path, to_dev_path, from_vocabulary_size, to_vocabulary_size, tokenizer=None)[source]

Prepare all necessary files required for training.

Parameters:
  • data_dir – directory in which the data sets will be stored.
  • from_train_path – path to the file that includes “from” training samples.
  • to_train_path – path to the file that includes “to” training samples.
  • from_dev_path – path to the file that includes “from” dev samples.
  • to_dev_path – path to the file that includes “to” dev samples.
  • from_vocabulary_size – size of the “from language” vocabulary to create and use.
  • to_vocabulary_size – size of the “to language” vocabulary to create and use.
  • tokenizer – a function to use to tokenize each data sentence; if None, basic_tokenizer will be used.
Returns:

  1. path to the token-ids for “from language” training data-set,
  2. path to the token-ids for “to language” training data-set,
  3. path to the token-ids for “from language” development data-set,
  4. path to the token-ids for “to language” development data-set,
  5. path to the “from language” vocabulary file,
  6. path to the “to language” vocabulary file.

Return type:

A tuple of 6 elements
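
Unpacking the documented 6-tuple (paths are illustrative):

    from cadl.deprecated import seq2seq_utils

    (from_train_ids, to_train_ids,
     from_dev_ids, to_dev_ids,
     from_vocab, to_vocab) = seq2seq_utils.prepare_data(
        data_dir="data",
        from_train_path="train.en", to_train_path="train.fr",
        from_dev_path="dev.en", to_dev_path="dev.fr",
        from_vocabulary_size=40000, to_vocabulary_size=40000)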

cadl.deprecated.seq2seq_utils.prepare_wmt_data(data_dir, en_vocabulary_size, fr_vocabulary_size, tokenizer=None)[source]

Get WMT data into data_dir, create vocabularies and tokenize data.

Parameters:
  • data_dir – directory in which the data sets will be stored.
  • en_vocabulary_size – size of the English vocabulary to create and use.
  • fr_vocabulary_size – size of the French vocabulary to create and use.
  • tokenizer – a function to use to tokenize each data sentence; if None, basic_tokenizer will be used.
Returns:

  1. path to the token-ids for English training data-set,
  2. path to the token-ids for French training data-set,
  3. path to the token-ids for English development data-set,
  4. path to the token-ids for French development data-set,
  5. path to the English vocabulary file,
  6. path to the French vocabulary file.

Return type:

A tuple of 6 elements

cadl.deprecated.seq2seq_utils.sentence_to_token_ids(sentence, vocabulary, tokenizer=None, normalize_digits=True)[source]

Convert a string to a list of integers representing token-ids.

For example, the sentence “I have a dog” may be tokenized into [“I”, “have”, “a”, “dog”]; with vocabulary {“I”: 1, “have”: 2, “a”: 4, “dog”: 7}, this function will return [1, 2, 4, 7].

Parameters:
  • sentence – the sentence in bytes format to convert to token-ids.
  • vocabulary – a dictionary mapping tokens to integers.
  • tokenizer – a function to use to tokenize each sentence; if None, basic_tokenizer will be used.
  • normalize_digits – Boolean; if true, all digits are replaced by 0s.
Returns:

a list of integers, the token-ids for the sentence.
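
The worked example above as a call, assuming the vocabulary keys are bytes to match the bytes-format sentence:

    from cadl.deprecated import seq2seq_utils

    vocab = {b"I": 1, b"have": 2, b"a": 4, b"dog": 7}
    ids = seq2seq_utils.sentence_to_token_ids(b"I have a dog", vocab)
    # ids == [1, 2, 4, 7]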

Module contents