Pinned repositories
- native-sparse-attention Public
  🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"
- flash-linear-attention Public
  🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton
- flash-bidirectional-linear-attention Public
  Triton implementation of bidirectional (non-causal) linear attention (see the sketch after this list)
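For context, below is a minimal PyTorch sketch of the bidirectional (non-causal) linear attention computation that kernels like these accelerate. It uses a generic elu(x) + 1 feature map and is an illustrative reference only, not the repositories' actual API; the fused Triton kernels avoid materializing these intermediate tensors.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Reference non-causal (bidirectional) linear attention.

    q, k, v: (batch, heads, seq_len, head_dim)
    Generic formulation with the elu(x) + 1 feature map; assumptions only,
    not the API of the fla-org repositories.
    """
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # Aggregate key-value outer products over the sequence: sum_n phi(k_n) v_n^T
    kv = torch.einsum('bhnd,bhne->bhde', k, v)
    # Normalizer: phi(q_n) . sum_n phi(k_n)
    z = torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2))
    out = torch.einsum('bhnd,bhde->bhne', q, kv) / (z.unsqueeze(-1) + eps)
    return out

# Example usage with arbitrary shapes
q = torch.randn(2, 4, 128, 64)
k = torch.randn(2, 4, 128, 64)
v = torch.randn(2, 4, 128, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 4, 128, 64])
```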