SYCL: SOFTMAX F16 mask support and other fixes #11261

Open · wants to merge 1 commit into master
Conversation

qnixsynapse
Contributor

Implemented F16 src1 (mask) support in ggml_sycl_op_soft_max(), for which a pragma deprecation warning was added in #5021.
To do this, the op had to be decoupled from ggml_sycl_op_flatten(), which always assumes src1 is of FP32 type (many op functions depend on that assumption).
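
For context, here is a minimal, self-contained sketch of the idea (not the PR's actual code): a softmax kernel templated on the mask element type, so an F16 mask can be read directly on the device, written against plain SYCL rather than the ggml backend plumbing. All names and the exact kernel shape are illustrative.

```cpp
// Sketch only: softmax over one row, with the mask type (float or sycl::half)
// chosen via a template parameter instead of hard-coding FP32.
// Assumes a SYCL compiler such as icpx; names here are illustrative.
#include <sycl/sycl.hpp>
#include <cmath>
#include <cstdio>

template <typename TMask>
void soft_max_row(sycl::queue & q, const float * x, const TMask * mask, float * y, int n, float scale) {
    q.submit([&](sycl::handler & h) {
        h.single_task([=]() {
            float max_val = -INFINITY;
            for (int i = 0; i < n; ++i) {
                const float m = mask ? (float) mask[i] : 0.0f;
                max_val = sycl::max(max_val, x[i] * scale + m);   // sycl::max in device code
            }
            float sum = 0.0f;
            for (int i = 0; i < n; ++i) {
                const float m = mask ? (float) mask[i] : 0.0f;
                const float e = sycl::exp(x[i] * scale + m - max_val);
                y[i] = e;
                sum += e;
            }
            for (int i = 0; i < n; ++i) {
                y[i] /= sum;
            }
        });
    }).wait();
}

int main() {
    sycl::queue q;
    const int n = 4;
    float      * x    = sycl::malloc_shared<float>(n, q);
    float      * y    = sycl::malloc_shared<float>(n, q);
    sycl::half * mask = sycl::malloc_shared<sycl::half>(n, q);
    for (int i = 0; i < n; ++i) { x[i] = (float) i; mask[i] = (i == 3) ? -INFINITY : 0.0f; }

    soft_max_row<sycl::half>(q, x, mask, y, n, /*scale=*/1.0f);   // F16 mask instantiation
    for (int i = 0; i < n; ++i) { printf("%f\n", y[i]); }

    sycl::free(x, q); sycl::free(y, q); sycl::free(mask, q);
    return 0;
}
```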

Also replaced std::max with sycl::max in the softmax kernel. test-backend-ops had no test with an F16 mask, so I added one locally and can confirm that it passes on my machine; this PR does not add that test. Reviewers are asked to test the change thoroughly on their machines.
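
As a rough illustration of what such a local test might build (using the public ggml API; this is not the test-backend-ops code, and the shapes and constants are arbitrary), a SOFT_MAX graph with an F16 mask could look like:

```cpp
// Hypothetical sketch of a test case exercising the F16-mask path.
// ggml_soft_max_ext() is the public ggml API; the surrounding setup is illustrative.
#include "ggml.h"

static struct ggml_tensor * build_soft_max_f16_mask(struct ggml_context * ctx) {
    // logits: 64 x 8 in F32, mask: 64 x 8 in F16
    struct ggml_tensor * a    = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 64, 8);
    struct ggml_tensor * mask = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 64, 8);

    // scale and max_bias values are arbitrary for the sketch
    return ggml_soft_max_ext(ctx, a, mask, /*scale=*/0.125f, /*max_bias=*/0.0f);
}
```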

I am not sure why this support was necessary; the models I tested do not use an F16 mask.
Also did a few cleanups.

github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) and SYCL (https://en.wikipedia.org/wiki/SYCL - GPU programming language) labels on Jan 16, 2025