This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

Commit 5014d02
remove deprecated function call in hotflip (#3074)
Eric-Wallace authored and matt-gardner committed Jul 17, 2019
1 parent 014fe31 commit 5014d02
Showing 1 changed file with 3 additions and 3 deletions.
allennlp/interpret/attackers/hotflip.py (3 additions, 3 deletions)

@@ -57,9 +57,9 @@ def _construct_embedding_matrix(self):
         tokens = [Token(x) for x in all_tokens]
         max_token_length = max(len(x) for x in all_tokens)
         indexed_tokens = token_indexer.tokens_to_indices(tokens, self.vocab, "token_characters")
-        padded_tokens = token_indexer.pad_token_sequence(indexed_tokens,
-                                                         {"token_characters": len(tokens)},
-                                                         {"num_token_characters": max_token_length})
+        padded_tokens = token_indexer.as_padded_tensor(indexed_tokens,
+                                                       {"token_characters": len(tokens)},
+                                                       {"num_token_characters": max_token_length})
         all_inputs['token_characters'] = torch.LongTensor(padded_tokens['token_characters']).unsqueeze(0)
         # for ELMo models
         if isinstance(token_indexer, ELMoTokenCharactersIndexer):
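Both the removed `pad_token_sequence` call and its `as_padded_tensor` replacement perform the same underlying operation: right-padding each token's character-index list out to the longest token's length so the lists can be stacked into one rectangular tensor. A minimal, self-contained sketch of that padding step (a hypothetical helper in plain Python, not the AllenNLP API):

```python
# Hypothetical illustration of the padding done in the diff above:
# each token's list of character indices is right-padded with 0 to
# max_token_length, giving rows of equal length that can be stacked
# into a single LongTensor.

def pad_character_indices(indexed_tokens, max_token_length, pad_value=0):
    """Right-pad each token's character-index list to max_token_length."""
    return [
        indices + [pad_value] * (max_token_length - len(indices))
        for indices in indexed_tokens
    ]

# Toy character indices for two tokens of lengths 3 and 7.
indexed = [[5, 9, 2], [8, 6, 1, 3, 7, 4, 10]]
max_len = max(len(x) for x in indexed)  # 7
padded = pad_character_indices(indexed, max_len)
# Every row now has length 7; the short token becomes [5, 9, 2, 0, 0, 0, 0].
```

The commit itself only swaps the deprecated method name for its successor; the arguments (desired token count and `num_token_characters` padding length) are unchanged.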
