CPU based inference !!! #1274
Comments
Currently it is not feasible, since we have not implemented CPU ops for RoIAlign/RoIPool.

Thanks for the reply. Any plans of implementing it in the near future?

This is in our long-term plan, but it may not be implemented in the near future.

OK, thanks.
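Since the blocker named above is the missing CPU implementation of RoIAlign, here is a minimal NumPy sketch of what that op computes for a single RoI on a single-channel feature map. The function name, arguments, and the simple grid-sampling scheme are illustrative only, not mmdetection's actual API or kernel:

```python
import numpy as np

def roi_align(feat, roi, out_size, sampling=1):
    """Toy CPU RoIAlign: bilinear-sample a fixed grid of points inside one RoI.

    feat: (H, W) feature map; roi: (x1, y1, x2, y2) in feature-map coordinates;
    out_size: (out_h, out_w) of the pooled output.
    """
    x1, y1, x2, y2 = roi
    oh, ow = out_size
    H, W = feat.shape
    bin_h = (y2 - y1) / oh
    bin_w = (x2 - x1) / ow
    out = np.zeros((oh, ow), dtype=feat.dtype)
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            # Average `sampling**2` bilinear samples per output bin.
            for iy in range(sampling):
                for ix in range(sampling):
                    y = y1 + (i + (iy + 0.5) / sampling) * bin_h
                    x = x1 + (j + (ix + 0.5) / sampling) * bin_w
                    # Clamp so the 2x2 bilinear neighborhood stays in bounds.
                    y0 = min(max(int(np.floor(y)), 0), H - 2)
                    x0 = min(max(int(np.floor(x)), 0), W - 2)
                    ly, lx = y - y0, x - x0
                    acc += (feat[y0, x0] * (1 - ly) * (1 - lx)
                            + feat[y0, x0 + 1] * (1 - ly) * lx
                            + feat[y0 + 1, x0] * ly * (1 - lx)
                            + feat[y0 + 1, x0 + 1] * ly * lx)
            out[i, j] = acc / (sampling * sampling)
    return out
```

Because bilinear interpolation of a constant map is that constant, pooling a constant feature map returns the same constant, which makes a quick sanity check easy. A real CPU path would vectorize this and handle batches, channels, and a `spatial_scale`; more recent releases of detection libraries ship such CPU kernels.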
Can models generated using this repository be deployed in a CPU-based environment for inference on new images?
If yes, can you provide a link explaining how it can be done?
Thanks,
Chandan