
Add hparams and compliance checks for training and eval samples for all benchmarks #143

Open
emizan76 opened this issue Jun 14, 2021 · 3 comments

@emizan76 (Contributor)

According to issue /~https://github.com/mlcommons/submission_training_1.0/issues/39, the number of training samples is 117266. Many submissions hardcode this value, even though the reference does not. In that specific issue, the submission in question used a different value.

The decision was to add train_samples and eval_samples as hyperparameters, plus the related compliance-checker rules, so that we avoid such issues in the future.

@nv-rborkar (Contributor)

I am assuming we make the check that train_samples and eval_samples match the reference values for all benchmarks, since we noted the same thing for RN50 in /~https://github.com/mlcommons/submission_training_1.0/issues/48.

If you agree, let's edit the title of the issue.

@emizan76 (Contributor, Author) commented Jun 16, 2021

Good point. Marek, since this now covers all the benchmarks, let me know if you need any help.

So, restating the problem: in the 1.0 submission, the training and eval sample counts were found to be off for a couple of submissions. This also happened in 0.7 and went undetected.

Let's add compliance checks to avoid such issues in the future.
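For illustration, here is a minimal sketch of what such a check might look like, assuming the `:::MLLOG` JSON-line format used by MLPerf training logs. This is not the actual compliance checker (which lives in the mlcommons/logging package); the function name, the benchmark table, and the SSD eval_samples count are hypothetical. Only the train_samples value of 117266 for SSD comes from the linked issue.

```python
import json

# Hypothetical reference table: one entry per benchmark, with counts taken
# from the reference implementations. 117266 is from the linked issue;
# the eval_samples value is illustrative only.
REFERENCE = {
    "ssd": {"train_samples": 117266, "eval_samples": 5000},
    # ... one entry per benchmark
}

def check_sample_counts(benchmark, log_lines):
    """Return a list of error strings; an empty list means the check passed."""
    expected = REFERENCE[benchmark]
    seen = {}
    for line in log_lines:
        prefix = ":::MLLOG "
        if not line.startswith(prefix):
            continue
        record = json.loads(line[len(prefix):])
        if record.get("key") in expected:
            seen[record["key"]] = record["value"]
    errors = []
    for key, ref in expected.items():
        if key not in seen:
            errors.append(f"{key} missing from log")
        elif seen[key] != ref:
            errors.append(f"{key}={seen[key]} does not match reference {ref}")
    return errors
```

A submission that logs train_samples=117266 but a wrong eval_samples would then fail with a single, specific error message instead of passing silently.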

@emizan76 emizan76 changed the title Add hparams and compliance checks for SSD Add hparams and compliance checks for training and eval samples for all benchmarks Jun 16, 2021
@xyhuang xyhuang added the v1.1 label Jul 20, 2021
@shangw-nvidia (Contributor)

I'll be addressing this issue; it is scheduled for v2.0.

5 participants