Model testing after AutoSlim #189
We recommend using English or English & Chinese for issues so that we can have a broader discussion.
Testing process

1. Testing on a domestic chip
# deploy_cfg
onnx_config = dict(
type="onnx",
export_params=True,
keep_initializers_as_inputs=False,
opset_version=10,
save_file="mobilenetv2_mmdeploy",
input_names=["input"],
output_names=["output"],
input_shape=[224, 224],
)
backend_config = dict(type="onnxruntime")
# codebase_config = dict(type="mmcls", task="Classification")
codebase_config = dict(type="mmcls", task="Classification", from_mmrazor=True)
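For context, the fields in `onnx_config` map directly onto the arguments of `torch.onnx.export`. Below is a minimal sketch of the call the exporter ends up making; the tiny stand-in model, the dummy input, and the `.onnx` file name are placeholders, not mmdeploy internals:

```python
import torch

# Placeholder model standing in for the pruned classifier; in practice the
# network is built from the model_cfg below.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 1000),
).eval()

# input_shape=[224, 224] corresponds to a (1, 3, 224, 224) dummy input.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "mobilenetv2_mmdeploy.onnx",
    export_params=True,                  # bake the weights into the file
    keep_initializers_as_inputs=False,
    opset_version=10,
    input_names=["input"],
    output_names=["output"],
)
```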
# model_cfg: the MobileNet config from mmrazor's pruning configs
_base_ = [
"./autoslim_mbv2_supernet_8xb256_in1k.py",
]
model = dict(head=dict(loss=dict(type="LabelSmoothLoss", mode="original", label_smooth_val=0.1, loss_weight=1.0)))
# FIXME: you may replace this with the channel_cfg searched by yourself
channel_cfg = [
"https://download.openmmlab.com/mmrazor/v0.1/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k/autoslim_mbv2_subnet_8xb256_in1k_flops-0.22M_acc-71.39_20211222-43117c7b_channel_cfg.yaml", # noqa: E501
]
algorithm = dict(
architecture=dict(type="MMClsArchitecture", model=model),
distiller=None,
retraining=True,
bn_training_mode=False,
channel_cfg=channel_cfg,
)
runner = dict(type="EpochBasedRunner", max_epochs=300)
find_unused_parameters = True
The torch speed tests were all run with mmcls's image_demo; only the loaded weights differ, and the same cfg is used:
# model settings
model = dict(
type="ImageClassifier",
backbone=dict(type="MobileNetV2", widen_factor=1.5),
neck=dict(type="GlobalAveragePooling"),
head=dict(
type="LinearClsHead",
num_classes=1000,
in_channels=1920,
loss=dict(type="LabelSmoothLoss", mode="original", label_smooth_val=0.1, loss_weight=1.0),
topk=(1, 5),
),
)
# dataset settings
dataset_type = "CDataset"
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type="LoadImageFromFile"),
dict(type="RandomResizedCrop", size=224, backend="pillow"),
dict(type="RandomFlip", flip_prob=0.5, direction="horizontal"),
dict(type="Normalize", **img_norm_cfg),
dict(type="ImageToTensor", keys=["img"]),
dict(type="ToTensor", keys=["gt_label"]),
dict(type="Collect", keys=["img", "gt_label"]),
]
test_pipeline = [
dict(type="LoadImageFromFile"),
dict(type="Resize", size=(256, -1), backend="pillow"),
dict(type="CenterCrop", crop_size=224),
dict(type="Normalize", **img_norm_cfg),
dict(type="ImageToTensor", keys=["img"]),
dict(type="Collect", keys=["img"]),
]
data = dict(
samples_per_gpu=2,
workers_per_gpu=1,
train=dict(
type=dataset_type,
data_prefix="/projects/release/data/train",
ann_file="/projects/release/data/meta/train.txt",
pipeline=train_pipeline,
),
val=dict(
type=dataset_type,
data_prefix="/projects/release/data/val",
ann_file="/projects/release/data/meta/val.txt",
pipeline=test_pipeline,
),
test=dict(
# replace `data/val` with `data/test` for standard test
type=dataset_type,
data_prefix="/projects/release/data/val",
ann_file="/projects/release/data/meta/val.txt",
pipeline=test_pipeline,
),
)
evaluation = dict(interval=1, metric="accuracy")
# optimizer
optimizer = dict(type="SGD", lr=0.045, momentum=0.9, weight_decay=0.00004)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(policy="step", gamma=0.98, step=1)
runner = dict(type="EpochBasedRunner", max_epochs=300)
# checkpoint saving
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=100,
hooks=[
dict(type="TextLoggerHook"),
# dict(type='TensorboardLoggerHook')
],
)
# yapf:enable
dist_params = dict(backend="nccl")
log_level = "INFO"
load_from = None
resume_from = None
workflow = [("train", 1)]
We found that the file sizes of the three different checkpoints are the same, but the number of parameters stored in each checkpoint is different.
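One quick way to verify this is to load each checkpoint and count the stored parameters. A minimal sketch, assuming the usual mmcv checkpoint layout with a `state_dict` key; the file names are placeholders:

```python
import torch

# Placeholder paths; substitute the three subnet checkpoints being compared.
for path in ["subnet_a.pth", "subnet_b.pth", "subnet_c.pth"]:
    ckpt = torch.load(path, map_location="cpu")
    # mmcv checkpoints keep weights under "state_dict"; fall back to the raw dict.
    state_dict = ckpt.get("state_dict", ckpt)
    n_params = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
    print(f"{path}: {n_params} parameters in {len(state_dict)} tensors")
```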
Hi! Sorry for the inconvenience. We split the checkpoint based on …
Thank you for the guidance. I will look at the source code soon and try to fix it.
_base_ = [
"./supernet.py",
]
algorithm = dict(distiller=None, input_shape=(3, 224, 224))
searcher = dict(
type="GreedySearcher",
target_flops=[300000000, 200000000],
max_channel_bins=12,
metrics="accuracy",
metric_options={"topk": (1,)},
)
data = dict(samples_per_gpu=4, workers_per_gpu=1)
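Note that `target_flops` above is given in raw FLOPs, i.e. 300M and 200M. To sanity-check that a subnet built from the searched channel_cfg actually meets the budget, mmcv's complexity counter can be used; a minimal sketch, where the torchvision model is only a stand-in for the real subnet:

```python
from mmcv.cnn import get_model_complexity_info
import torchvision

# Stand-in model; in practice, build the pruned subnet from the
# searched channel_cfg before measuring.
model = torchvision.models.mobilenet_v2()

flops, params = get_model_complexity_info(
    model, (3, 224, 224), print_per_layer_stat=False, as_strings=True
)
print(f"FLOPs: {flops}, Params: {params}")
```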
Oh no, the exported ONNX weight sizes are equal.
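If the exported files are suspiciously close in size, it is worth counting the actual weight elements in each ONNX graph rather than comparing file sizes; a minimal sketch, with placeholder file names:

```python
import numpy as np
import onnx

# Placeholder paths to the exported models being compared.
for path in ["supernet.onnx", "subnet.onnx"]:
    model = onnx.load(path)
    # Each initializer is a weight tensor; sum the element counts.
    n_params = sum(int(np.prod(init.dims)) for init in model.graph.initializer)
    print(f"{path}: {n_params} weight elements, {len(model.graph.initializer)} initializers")
```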
@Hiwyl Hello, is the test working correctly now?
Is the pruning not reflected in the inference speed? Or did I do something wrong? I ran inference 10000 times for each model, with 100 warm-up runs, and took the average.
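For reference, a minimal timing sketch along the lines described (100 warm-up runs, then 10000 timed runs averaged); the torchvision model and CPU input are placeholders, and on GPU one would additionally need `torch.cuda.synchronize()` around the timed region:

```python
import time
import torch
import torchvision

# Stand-in model; in practice load the pruned/retrained checkpoint instead.
model = torchvision.models.mobilenet_v2().eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(100):        # warm-up, not timed
        model(x)
    start = time.perf_counter()
    for _ in range(10000):      # timed runs
        model(x)
    elapsed = time.perf_counter() - start

print(f"average latency: {elapsed / 10000 * 1000:.3f} ms")
```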