[TOPI] Enhance topi.nn.matmul #16052
Conversation
We have a separate `batch_matmul`.
I agree; in this case, moving to use `batch_matmul` might be easier.
@tqchen Thanks for the comment, but what I need is a matmul where the shape of A is [b, m, k] and B is [n, k], and A and/or B could be transposed. `batch_matmul` only supports 3-dim A and B, and I think a matmul operator supporting arbitrary input dimensions would be helpful. As described in the data APIs, the additional dimensions will be broadcast.
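
For reference, the requested semantics match NumPy's broadcasting matmul; a minimal sketch (the shapes here are illustrative, not taken from the PR):

```python
import numpy as np

# Illustrative shapes: b=2, m=3, k=4, n=5.
A = np.random.rand(2, 3, 4)  # [b, m, k]
B = np.random.rand(5, 4)     # [n, k], i.e. B is stored transposed

# B is transposed back to [k, n] and broadcast across A's batch
# dimension, producing C of shape [b, m, n].
C = np.matmul(A, B.T)
print(C.shape)  # (2, 3, 5)
```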
OK, in this case I think we can allow the extended version, provided that it does not break the original usage and we have test cases covering the extended use case in the form of TIR.
@tvm-bot rerun
```python
if auto_scheduler_rewritten_layout:
    # Infer shape for the rewritten layout
    assert len(tensor_b).shape == 2, "only support 2-dim matmul when using auto-scheduler"
    out_dim, red_dim = auto_scheduler.get_shape_from_rewritten_layout(
```
This should be `len(tensor_b.shape)`. (Nobody probably tested this.)
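
For clarity, a corrected version of the flagged line (applying `len()` to the shape tuple rather than to the tensor) would read:

```python
# len() must apply to the shape tuple, not to the tensor itself.
assert len(tensor_b.shape) == 2, "only support 2-dim matmul when using auto-scheduler"
```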
cc @Ubospica please follow up
This PR enhances `topi.nn.matmul` to support batch matmuls (i.e. inputs with more than 2 dims). This is useful in certain cases where we need to generate a batch matmul kernel.
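
A minimal sketch of how the enhanced operator could be invoked; the shapes and the `transpose_b` flag below are illustrative assumptions, not taken from the PR's test cases:

```python
from tvm import te, topi

# Illustrative shapes: batch=4, m=32, k=64, n=16.
A = te.placeholder((4, 32, 64), name="A", dtype="float32")
B = te.placeholder((16, 64), name="B", dtype="float32")

# With this change, tensor_a may have more than 2 dims; transpose_b=True
# treats B as [n, k]. C should come out with shape [4, 32, 16].
C = topi.nn.matmul(A, B, transpose_b=True)
print(C.shape)
```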