3 ops are not supported #336

Closed
wenbbo opened this issue Sep 20, 2021 · 6 comments
@wenbbo commented Sep 20, 2021

  1. I downloaded the parameter file linked from PaddleDetection/configs/yolov3/README.md:
    https://paddledet.bj.bcebos.com/models/yolov3_mobilenet_v1_270e_voc.pdparams
  2. Then I exported the model as described in PaddleDetection/deploy/EXPORT_MODEL.md, using the following command:
    python tools/export_model.py -c configs/yolov3/yolov3_mobilenet_v1_270e_voc.yml --output_dir=./inference_model -o weights=weights/yolov3_mobilenet_v1_270e_voc.pdparams
    Note: I put yolov3_mobilenet_v1_270e_voc.pdparams in the output/yolov3_mobilenet_v1_270e_voc/ directory and renamed it model_final.pdparams.
  3. The export produced four files in the inference_model/yolov3_mobilenet_v1_270e_voc/ directory: infer_cfg.yml, model.pdiparams, model.pdiparams.info, and model.pdmodel.
  4. Then I ran: paddle2onnx --model_dir ./inference_model/yolov3_mobilenet_v1_270e_voc/ --model_filename model.pdmodel --params_filename model.pdiparams --save_file detonnx/model.onnx --opset_version 11
  5. I got the NotImplementedError message:
    There's 3 ops are not supported yet
    =============conditional_block=============
    =============logical_not==================
    =============select_input==================
    What can I do to solve the problem?
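
For anyone who wants to confirm which ops an exported program actually contains before running paddle2onnx, the op types can be listed with Paddle's static API. A minimal sketch, assuming Paddle 2.x and the model path from step 4 above:

import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Path prefix of the exported model (model.pdmodel / model.pdiparams live under this prefix)
prefix = "inference_model/yolov3_mobilenet_v1_270e_voc/model"
program, feed_names, fetch_targets = paddle.static.load_inference_model(prefix, exe)

# Collect every op type in every block of the inference program
op_types = sorted({op.type for block in program.blocks for op in block.ops})
print(op_types)  # conditional_block / logical_not / select_input here explains the paddle2onnx error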
@jiangjiajun (Collaborator)
Hi, @wenbbo

It seems that your exported model contains some control-flow operators which Paddle2ONNX does not support. But this shouldn't happen with a YOLOv3 model, so please make sure you are using PaddleDetection release/2.1.

Try the following commands to export the model again:

pip uninstall paddledet
git clone /~https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection
git checkout release/2.1
python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml \
                             -o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams \
                             TestReader.inputs_def.image_shape=[3,608,608] \
                             --output_dir inference_model

@jiangjiajun (Collaborator) commented Sep 21, 2021

> Many tks for your comments, Jiang.
> Following your instructions, I got the same result, 3 ops unsupported as described above.
> Further help needed, tks again.

There are a couple of things to note:

  1. Uninstall paddledet from your environment: pip uninstall paddledet
  2. It must be the release/2.1 branch

I just tested the commands, and I'm sure they work well.
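
One quick way to confirm that the pip-installed paddledet is really gone and the local release/2.1 checkout is the one being imported is to check where ppdet resolves from (a minimal check; the expected path depends on where you cloned the repo):

import ppdet

# Should print a path inside your PaddleDetection clone, not site-packages;
# if it still points at site-packages, the old pip install is shadowing the checkout.
print(ppdet.__file__)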

Here's my execution log

(py37) λ bjyz-sys-gpu-kongming3 /jiangjiajun/github/09 git clone /~https://github.com/PaddlePaddle/PaddleDetection.git
Cloning into 'PaddleDetection'...
remote: Enumerating objects: 16420, done.
remote: Counting objects: 100% (1647/1647), done.
remote: Compressing objects: 100% (660/660), done.
remote: Total 16420 (delta 1086), reused 1391 (delta 982), pack-reused 14773
Receiving objects: 100% (16420/16420), 141.83 MiB | 9.65 MiB/s, done.
Resolving deltas: 100% (11741/11741), done.
Checking connectivity... done.
(py37) λ bjyz-sys-gpu-kongming3 /jiangjiajun/github/09 cd PaddleDetection/
(py37) λ bjyz-sys-gpu-kongming3 /jiangjiajun/github/09/PaddleDetection {release/2.2} git checkout release/2.1
Branch release/2.1 set up to track remote branch release/2.1 from origin.
Switched to a new branch 'release/2.1'
(py37) λ bjyz-sys-gpu-kongming3 /jiangjiajun/github/09/PaddleDetection {release/2.1} python tools/export_model.py -c configs/yolov3/yolov3_darknet53_270e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/yolov3_darknet53_270e_coco.pdparams TestReader.inputs_def.image_shape=[3,608,608] --output_dir inference_model
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
/jiangjiajun/github/09/PaddleDetection/ppdet/modeling/ops.py:542: DeprecationWarning: invalid escape sequence \_
  """
/jiangjiajun/github/09/PaddleDetection/ppdet/modeling/ops.py:1375: DeprecationWarning: invalid escape sequence \l
  """
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle/tensor/creation.py:125: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  if data.dtype == np.object:
[09/21 06:14:49] ppdet.utils.checkpoint INFO: Finish loading model weights: /root/.cache/paddle/weights/yolov3_darknet53_270e_coco.pdparams
[09/21 06:14:49] ppdet.engine INFO: Export inference config file to inference_model/yolov3_darknet53_270e_coco/infer_cfg.yml
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
W0921 06:14:52.870213 19402 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 10.2, Runtime API Version: 10.2
W0921 06:14:52.870265 19402 device_context.cc:422] device: 0, cuDNN Version: 7.6.
[09/21 06:14:56] ppdet.engine INFO: Export model and saved in inference_model/yolov3_darknet53_270e_coco
(py37) λ bjyz-sys-gpu-kongming3 /jiangjiajun/github/09/PaddleDetection {release/2.1} paddle2onnx --model_dir inference_model/yolov3_darknet53_270e_coco/ --model_filename model.pdmodel --params_filename model.pdiparams --save_file model.onnx --opset_version 11
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle2onnx/onnx_helper/mapping.py:42: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  int(TensorProto.STRING): np.dtype(np.object)
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle2onnx/constant/dtypes.py:43: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  np.bool: core.VarDesc.VarType.BOOL,
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle2onnx/constant/dtypes.py:44: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. To silence this warning, use `float` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.float64` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  core.VarDesc.VarType.FP32: np.float,
/jiangjiajun/anaconda3/envs/py37/lib/python3.7/site-packages/paddle2onnx/constant/dtypes.py:49: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  core.VarDesc.VarType.BOOL: np.bool
2021-09-21 06:15:50 [WARNING]	Operator:multiclass_nms3 only supports input[batch_size] == 1.
2021-09-21 06:15:56 [INFO]	ONNX model saved in model.onnx
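
As a quick sanity check on the saved model.onnx, one can run the ONNX checker and confirm the opset version (a minimal sketch, assuming the onnx Python package is installed):

import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
print(model.opset_import)        # should report the opset 11 requested on the command line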

@wenbbo (Author) commented Sep 21, 2021

I got the same result as you showed in the last comment, many tks @jiangjiajun.
Will the WARNING influence the converted model?

@jiangjiajun (Collaborator)

The warning means the converted model only supports batch_size=1 when you use it for inference, but the results will be the same as with the Paddle model.
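
To illustrate, a minimal onnxruntime smoke test that feeds a single-sample (batch_size=1) dummy tensor for every graph input; input names and shapes are read from the graph rather than assumed, and real inference would of course need properly preprocessed images:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

feeds = {}
for inp in sess.get_inputs():
    # Replace any dynamic dimension (batch, etc.) with 1 -- multiclass_nms3 only supports batch_size == 1
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.ones(shape, dtype=np.float32)  # assuming float32 inputs, as in this YOLOv3 export

outputs = sess.run(None, feeds)
print([o.shape for o in outputs])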

@wenbbo (Author) commented Sep 21, 2021

OK, tks @jiangjiajun again, and let's close the issue, bye

@Zheng-Bicheng (Collaborator)