textattack.constraints.grammaticality.language_models.learning_to_write package

“Learning To Write”

AdaptiveSoftmax

class textattack.constraints.grammaticality.language_models.learning_to_write.adaptive_softmax.AdaptiveLoss(cutoffs)[source]

Bases: torch.nn.modules.module.Module

forward(inp, target)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

remap_target(target)[source]
reset()[source]
training: bool
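The cutoffs define a two-level vocabulary split: frequent "head" words keep their ids, while rarer words fall into tail clusters, each represented in the head by a single cluster token. A minimal pure-Python sketch of that remapping (the actual remap_target operates on torch tensors; the list-based version here is purely illustrative):

```python
def remap_target(target, cutoffs):
    # cutoffs[0] is the head (shortlist) size; each pair
    # (cutoffs[j], cutoffs[j + 1]) bounds tail cluster j.
    head = []
    n_clusters = len(cutoffs) - 1
    tails = [[] for _ in range(n_clusters)]
    for t in target:
        if t < cutoffs[0]:
            head.append(t)  # shortlist word: id unchanged
        else:
            for j in range(n_clusters):
                if cutoffs[j] <= t < cutoffs[j + 1]:
                    head.append(cutoffs[0] + j)      # cluster token in the head
                    tails[j].append(t - cutoffs[j])  # offset within the cluster
                    break
    return head, tails
```

With cutoffs `[3, 6, 10]`, word id 4 becomes cluster token 3 in the head plus offset 1 in tail cluster 0.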
class textattack.constraints.grammaticality.language_models.learning_to_write.adaptive_softmax.AdaptiveSoftmax(input_size, cutoffs, scale_down=4)[source]

Bases: torch.nn.modules.module.Module

forward(inp)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

log_prob(inp)[source]
reset(init=0.1)[source]
set_target(target)[source]
training: bool
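Under this scheme, the log probability of a tail word factors into the head's log probability of its cluster token plus the within-cluster log probability. A hedged sketch of that composition (hypothetical names; the real log_prob works on batched tensors):

```python
import math

def combined_log_prob(head_probs, tail_probs, cutoffs, word_id):
    # head_probs: distribution over shortlist words plus one token per
    # tail cluster; tail_probs[j]: distribution within tail cluster j.
    if word_id < cutoffs[0]:
        return math.log(head_probs[word_id])
    for j in range(len(cutoffs) - 1):
        if cutoffs[j] <= word_id < cutoffs[j + 1]:
            cluster_token = cutoffs[0] + j
            within = word_id - cutoffs[j]
            return math.log(head_probs[cluster_token]) + math.log(tail_probs[j][within])
    raise ValueError("word_id outside vocabulary")
```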

Language model helpers

class textattack.constraints.grammaticality.language_models.learning_to_write.language_model_helpers.QueryHandler(model, word_to_idx, mapto, device)[source]

Bases: object

static load_model(lm_folder_path, device)[source]
query(sentences, swapped_words, batch_size=32)[source]

Since we don’t filter prefixes for OOV ahead of time, some prefixes may have different lengths, in which case RNN prediction cannot be done in batch.

This method tries batched prediction first and, when that fails, falls back to predicting sequentially and concatenates the results.

try_query(sentences, swapped_words, batch_size=32)[source]
textattack.constraints.grammaticality.language_models.learning_to_write.language_model_helpers.util_reverse(item)[source]
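The batch-with-fallback pattern described in query above can be sketched as follows (batch_fn is a hypothetical stand-in for the model's batched prediction call, not part of the TextAttack API):

```python
def predict_with_fallback(batch_fn, items, batch_size=32):
    # Try batched prediction per chunk; if a chunk fails (e.g. its
    # prefixes have ragged lengths), predict its items one at a time
    # and concatenate the results in order.
    results = []
    for start in range(0, len(items), batch_size):
        chunk = items[start:start + batch_size]
        try:
            results.extend(batch_fn(chunk))
        except Exception:
            for item in chunk:
                results.extend(batch_fn([item]))
    return results
```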

“Learning To Write” Language Model

class textattack.constraints.grammaticality.language_models.learning_to_write.learning_to_write.LearningToWriteLanguageModel(window_size=5, **kwargs)[source]

Bases: textattack.constraints.grammaticality.language_models.language_model_constraint.LanguageModelConstraint

A constraint based on the L2W language model.

The RNN-based language model from “Learning to Write With Cooperative Discriminators” (Holtzman et al., 2018).

https://arxiv.org/pdf/1805.06087.pdf

https://github.com/windweller/l2w

Reused by Jia et al., 2019 as a substitute for the Google One Billion Words language model in a revised version of the attack of Alzantot et al., 2018.

https://worksheets.codalab.org/worksheets/0x79feda5f1998497db75422eca8fcd689

get_log_probs_at_index(text_list, word_index)[source]

Gets the log probability of the word at index word_index in each text of text_list, according to the language model.

CACHE_PATH = 'constraints/grammaticality/language-models/learning-to-write'
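As with other LanguageModelConstraint subclasses, these per-word log probabilities are typically used to reject perturbations whose swapped word becomes too improbable under the language model. A hedged sketch of that filtering step (the threshold name is illustrative, not a documented parameter):

```python
def filter_by_log_prob(orig_log_prob, candidates, max_log_prob_diff=5.0):
    # Keep a perturbed text only if the language model's log-prob of
    # its swapped word is within max_log_prob_diff of the original's.
    return [text for text, log_prob in candidates
            if orig_log_prob - log_prob <= max_log_prob_diff]
```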

RNN Language Model

class textattack.constraints.grammaticality.language_models.learning_to_write.rnn_model.RNNModel(rnn_type, ntoken, ninp, nhid, nlayers, cutoffs, proj=False, dropout=0.5, tie_weights=False, lm1b=False)[source]

Bases: torch.nn.modules.module.Module

Container module with an encoder, a recurrent module, and a decoder.

Based on the official PyTorch examples.

forward(input, hidden)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

init_hidden(bsz)[source]
init_weights()[source]
training: bool
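init_hidden allocates the initial recurrent state for a batch: an LSTM carries both a hidden state and a cell state, while a GRU or vanilla RNN carries only the hidden state. A sketch of the shapes involved (illustrative only; the original allocates zero tensors of these shapes):

```python
def init_hidden_shapes(rnn_type, nlayers, bsz, nhid):
    # LSTM state is an (h, c) pair; GRU and vanilla RNNs carry only h.
    # Each state tensor has shape (num_layers, batch_size, hidden_size).
    if rnn_type == "LSTM":
        return [(nlayers, bsz, nhid), (nlayers, bsz, nhid)]
    return [(nlayers, bsz, nhid)]
```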