Checkpoint after batches not epochs (#119)
tejaswini authored and kylegao91 committed Jan 21, 2018
1 parent 38e7e21 commit cbd8d8b
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion seq2seq/trainer/supervised_trainer.py
@@ -23,7 +23,7 @@ class SupervisedTrainer(object):
             by default it makes a folder in the current directory to store the details (default: `experiment`).
         loss (seq2seq.loss.loss.Loss, optional): loss for training, (default: seq2seq.loss.NLLLoss)
         batch_size (int, optional): batch size for experiment, (default: 64)
-        checkpoint_every (int, optional): number of epochs to checkpoint after, (default: 100)
+        checkpoint_every (int, optional): number of batches to checkpoint after, (default: 100)
     """
     def __init__(self, expt_dir='experiment', loss=NLLLoss(), batch_size=64,
                  random_seed=None,
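The docstring fix above reflects that `checkpoint_every` counts batches (optimizer steps), not epochs. A minimal sketch of that pattern, with a running step counter checked inside the inner batch loop (the function and parameter names here are illustrative, not the repository's actual API):

```python
def train(num_epochs, batches_per_epoch, checkpoint_every, save_fn):
    """Toy training loop that calls save_fn every `checkpoint_every` batches."""
    step = 0
    for epoch in range(num_epochs):
        for batch in range(batches_per_epoch):
            step += 1
            # ... forward pass, loss, backward pass, optimizer step would go here ...
            if step % checkpoint_every == 0:
                # Checkpoint on the global batch count, independent of epoch boundaries.
                save_fn(step)

saved = []
train(num_epochs=2, batches_per_epoch=150, checkpoint_every=100, save_fn=saved.append)
# 300 total batches with checkpoint_every=100 -> checkpoints at steps 100, 200, 300
```

Counting batches rather than epochs keeps checkpoint frequency stable regardless of dataset size, which matters when one epoch takes hours.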
