Add Inception_v4 model config in Fluid API #900

Merged
kuke merged 3 commits into PaddlePaddle:develop on May 21, 2018

Conversation

@kuke (Collaborator) commented on May 10, 2018
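For context on what the PR adds: below is a minimal sketch of one Inception-A block written against the Fluid layers API. The conv_bn_layer helper and the branch filter widths follow the Inception-v4 paper (Szegedy et al., 2016) and are assumptions for illustration, not necessarily this PR's exact code.

import paddle.fluid as fluid

def conv_bn_layer(input, num_filters, filter_size, stride=1, padding=0):
    # conv2d without bias, followed by batch norm with ReLU, the usual
    # building block for Inception-style networks
    conv = fluid.layers.conv2d(
        input=input, num_filters=num_filters, filter_size=filter_size,
        stride=stride, padding=padding, act=None, bias_attr=False)
    return fluid.layers.batch_norm(input=conv, act='relu')

def inception_a(input):
    # branch 1: 3x3 average pool -> 1x1 conv (96 filters)
    pool = fluid.layers.pool2d(input=input, pool_size=3, pool_type='avg',
                               pool_stride=1, pool_padding=1)
    b1 = conv_bn_layer(pool, 96, 1)
    # branch 2: 1x1 conv (96 filters)
    b2 = conv_bn_layer(input, 96, 1)
    # branch 3: 1x1 conv (64) -> 3x3 conv (96)
    b3 = conv_bn_layer(conv_bn_layer(input, 64, 1), 96, 3, padding=1)
    # branch 4: 1x1 conv (64) -> 3x3 conv (96) -> 3x3 conv (96)
    b4 = conv_bn_layer(conv_bn_layer(
        conv_bn_layer(input, 64, 1), 96, 3, padding=1), 96, 3, padding=1)
    # concatenate all four branches along the channel axis
    return fluid.layers.concat(input=[b1, b2, b3, b4], axis=1)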

@kuke (Collaborator, Author) commented on May 15, 2018

Executed training on the flowers dataset (4 GPUs); the cosine_decay LR strategy from the config below is sketched after the log:

----------  Configuration Arguments -----------
batch_size: 128
init_model: None
lr_strategy: cosine_decay
model: inception_v4
num_layers: 50
parallel_exe: True
pretrained_model: None
with_mem_opt: True
------------------------------------------------
Pass 0, trainbatch 0, loss 4.6265411377, acc1 0.0078125, acc5 0.0234375, time 1.63 sec
Pass 0, trainbatch 10, loss 4.63355541229, acc1 0.0078125, acc5 0.0234375, time 1.79 sec
Pass 0, trainbatch 20, loss 4.61045503616, acc1 0.03125, acc5 0.0625, time 1.71 sec
Pass 0, trainbatch 30, loss 4.64154863358, acc1 0.0, acc5 0.015625, time 1.73 sec
Pass 0, trainbatch 40, loss 4.64159154892, acc1 0.0, acc5 0.0234375, time 1.66 sec
End pass 0, train_loss 4.61967611313, train_acc1 0.0224808678031, train_acc5 0.0508609712124, test_loss 4.62793588638, test_acc1 0.0137033769861, test_acc5 0.0539314523339
Pass 1, trainbatch 0, loss 4.60240983963, acc1 0.0390625, acc5 0.046875, time 1.74 sec
Pass 1, trainbatch 10, loss 4.61039352417, acc1 0.03125, acc5 0.0546875, time 1.65 sec
Pass 1, trainbatch 20, loss 4.58687925339, acc1 0.0546875, acc5 0.0625, time 1.80 sec
Pass 1, trainbatch 30, loss 4.63385629654, acc1 0.0078125, acc5 0.03125, time 1.72 sec
Pass 1, trainbatch 40, loss 4.61811351776, acc1 0.0234375, acc5 0.0234375, time 1.70 sec
End pass 1, train_loss 4.61435079575, train_acc1 0.0272640306503, train_acc5 0.0497448965907, test_loss 4.63180065155, test_acc1 0.00986013095826, test_acc5 0.0481665804982
Pass 2, trainbatch 0, loss 4.60251426697, acc1 0.0390625, acc5 0.0703125, time 1.74 sec
Pass 2, trainbatch 10, loss 4.58694887161, acc1 0.0546875, acc5 0.09375, time 1.69 sec
Pass 2, trainbatch 20, loss 4.61815309525, acc1 0.0234375, acc5 0.0390625, time 1.69 sec
Pass 2, trainbatch 30, loss 4.62593412399, acc1 0.015625, acc5 0.0234375, time 1.76 sec
Pass 2, trainbatch 40, loss 4.59237337112, acc1 0.046875, acc5 0.1015625, time 1.82 sec
End pass 2, train_loss 4.61462306976, train_acc1 0.0267857145518, train_acc5 0.046875, test_loss 4.63678264618, test_acc1 0.0048828125, test_acc5 0.0441343262792
Pass 3, trainbatch 0, loss 4.61823987961, acc1 0.0234375, acc5 0.03125, time 1.71 sec
Pass 3, trainbatch 10, loss 4.61041259766, acc1 0.03125, acc5 0.046875, time 1.72 sec
Pass 3, trainbatch 20, loss 4.58694934845, acc1 0.0546875, acc5 0.0625, time 1.66 sec
Pass 3, trainbatch 30, loss 4.59472179413, acc1 0.046875, acc5 0.0859375, time 1.69 sec
Pass 3, trainbatch 40, loss 4.62600803375, acc1 0.015625, acc5 0.03125, time 1.82 sec
End pass 3, train_loss 4.611120224, train_acc1 0.0304528065026, train_acc5 0.0554846934974, test_loss 4.63385057449, test_acc1 0.0078125, test_acc5 0.0470640137792
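On the log itself: after four passes the loss still sits near ln(102) ≈ 4.62, i.e. chance level for the 102-class flowers set, which is expected this early in training. The run uses lr_strategy: cosine_decay; a minimal sketch of that schedule follows (the generic half-cosine formula, which may differ from the repo's implementation in per-step vs. per-epoch granularity):

import math

def cosine_decay(base_lr, step, total_steps):
    # Cosine annealing: decays smoothly from base_lr at step 0
    # down to 0 at total_steps.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

# e.g. with a hypothetical base_lr=0.1 over 120 epochs:
#   epoch   0 -> 0.1000
#   epoch  60 -> 0.0500
#   epoch 120 -> 0.0000

The parallel_exe: True and with_mem_opt: True flags presumably map to the 2018 Fluid APIs fluid.ParallelExecutor and fluid.memory_optimize; that mapping is an assumption about the training script's internals rather than something shown in this log.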

@kuke requested a review from @qingqing01 on May 15, 2018
@kuke merged commit e0dfa23 into PaddlePaddle:develop on May 21, 2018