Replies: 1 comment · 2 replies
Hi @ziw-liu, thank you for your feedback.
Most MONAI transforms that can be used for augmentations expect C(D)HW tensors without the batch dimension. This means they have to be executed in Python loops, which scales poorly on devices that benefit from parallel processing (GPUs, or even just multi-threaded CPUs).
I did some benchmarking and found that MONAI's random affine transform can be more than 10x slower on the GPU than a batched implementation. Is there a way to accelerate MONAI's transforms? Or, in other words, am I doing something wrong in this benchmarking script?