[Sparse] Support Diag sparse format in C++ #5432

Merged · 3 commits into dmlc:master · Mar 9, 2023

Conversation

@czkkkkkk (Collaborator) commented on Mar 7, 2023

Description

This is one of the PRs for issue #5367.

Checklist

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [$CATEGORY] (such as [NN], [Model], [Doc], [Feature]).
  • I've leveraged the tools to beautify the Python and C++ code.
  • The PR is complete and small; read the Google eng practice (CL equals PR) to understand more about small PRs. In DGL, we consider PRs with fewer than 200 lines of core code changes to be small (examples, tests, and documentation can be exempted).
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
  • The related issue is referred to in this PR.
  • If the PR is for a new model/paper, I've updated the example index here.

Changes

@dgl-bot (Collaborator) commented on Mar 7, 2023

To trigger regression tests:

  • @dgl-bot run [instance-type] [which tests] [compare-with-branch];
    For example: @dgl-bot run g4dn.4xlarge all dmlc/master or @dgl-bot run c5.9xlarge kernel,api dmlc/master

@dgl-bot (Collaborator) commented on Mar 7, 2023

Commit ID: b0d2b55

Build ID: 1

Status: ❌ CI test failed in Stage [Lint Check].

Report path: link

Full logs path: link

@@ -90,6 +95,21 @@ std::shared_ptr<CSR> COOToCSC(const std::shared_ptr<COO>& coo);
/** @brief Convert a CSR format to CSC format. */
std::shared_ptr<CSR> CSRToCSC(const std::shared_ptr<CSR>& csr);

/** @brief Convert a Diag format to COO format. */
std::shared_ptr<COO> DiagToCOO(
Collaborator:

Why do we need these 3 conversions? In which cases will they be used?

Author (czkkkkkk):

They are used for operators that do not have a dedicated implementation for the Diag format, e.g., SpMM.
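For reference, a minimal sketch of what such a Diag-to-COO conversion can look like; it only builds index tensors. The Diag fields (num_rows, num_cols) and the indices_options argument appear in other snippets of this PR, while the helper name DiagToCOOSketch and the exact COO construction are assumptions for illustration, not the PR's actual code.

// Sketch only; assumes the DGL sparse headers defining Diag and COO are included.
std::shared_ptr<COO> DiagToCOOSketch(
    const std::shared_ptr<Diag>& diag,
    const c10::TensorOptions& indices_options) {
  // A diagonal matrix has nnz = min(num_rows, num_cols) nonzeros, whose row
  // and column coordinates are both 0, 1, ..., nnz - 1.
  int64_t nnz = std::min(diag->num_rows, diag->num_cols);
  auto idx = torch::arange(nnz, indices_options);
  auto indices = torch::stack({idx, idx});  // shape (2, nnz): rows == cols
  return std::make_shared<COO>(
      COO{diag->num_rows, diag->num_cols, indices});  // assumed aggregate init
}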

Collaborator:

What's the performance implication here if we convert Diag to COO/CSR/CSC for the operators?
Will it strongly decrease SpMM performance?

This solution looks good to me, but let's make sure we understand the trade-off we are making here.

Author (czkkkkkk):

I simply followed our current implementation to ensure there is no performance regression. Currently, we also convert the DiagMatrix to a SparseMatrix for SpMM on the Python side.

@dgl-bot (Collaborator) commented on Mar 7, 2023

Commit ID: b6d2f92

Build ID: 2

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@dgl-bot (Collaborator) commented on Mar 8, 2023

Commit ID: f49b4ca6b7ac65fdaa0467c414e4024503003224

Build ID: 3

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

if (A->HasDiag() && B->HasDiag()) {
  return SparseMatrix::FromDiagPointer(
      A->DiagPtr(), A->value() + B->value(), A->shape());
}
Member:

nit: I would usually prefer a parallel if-else branch, e.g.,

if (...) {
  ...
} else {
  ...
}

@frozenbugs, what do you think?

Collaborator:

The current implementation is better.
Any function can have multiple early-return checks, so it is highly recommended to do:

if (...) {
  return ...
}
if (...) {
  return ...
}
blablabla
blablabla
blablabla
return ...

    const std::shared_ptr<Diag>& diag,
    const c10::TensorOptions& indices_options) {
  int64_t nnz = std::min(diag->num_rows, diag->num_cols);
  auto indptr = torch::arange(nnz + 1, indices_options);
Member:

Would using the in-place operation torch::arange_out be cleaner? That way you can first create an array of length diag->num_rows + 1 with all values set to nnz, then fill the front part with arange. This also avoids creating multiple intermediate buffers.

Author (czkkkkkk):

Updated.
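For illustration, a minimal sketch of the suggested arange_out approach, under the same Diag/indices_options assumptions as the snippet above; the helper name BuildDiagIndptr is hypothetical and this is not the PR's actual code.

torch::Tensor BuildDiagIndptr(
    const std::shared_ptr<Diag>& diag,
    const c10::TensorOptions& indices_options) {
  int64_t nnz = std::min(diag->num_rows, diag->num_cols);
  // Allocate the full indptr of length num_rows + 1, pre-filled with nnz.
  auto indptr = torch::full({diag->num_rows + 1}, nnz, indices_options);
  // Overwrite the first nnz + 1 entries in place with 0, 1, ..., nnz; the
  // slice is a view, so torch::arange_out writes into the same buffer.
  auto front = indptr.slice(/*dim=*/0, /*start=*/0, /*end=*/nnz + 1);
  torch::arange_out(front, nnz + 1);
  return indptr;
}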

    const c10::intrusive_ptr<SparseMatrix>& lhs_mat,
    const c10::intrusive_ptr<SparseMatrix>& rhs_mat) {
  if (lhs_mat->HasDiag()) {
    if (rhs_mat->HasDiag()) {
Member:

I usually don't like nested ifs, as they are difficult to read. I recommend changing it to:

if (lhs_mat->HasDiag() && rhs_mat->HasDiag()) {
  ...
} else if (lhs_mat->HasDiag() && !rhs_mat->HasDiag()) {
  ...
} else {
  ...
}

Collaborator:

+1

// Diag @ Sparse
auto row = rhs_mat->Indices().index({0});
auto val = lhs_mat->value().index_select(0, row) * rhs_mat->value();
return SparseMatrix::ValLike(rhs_mat, val);
Member:

Is this implementation correct? The index_select should use A.row according to /~https://github.com/dmlc/dgl/blob/master/python/dgl/sparse/matmul.py#L172.

Author (czkkkkkk):

I think so. rhs_mat->Indices().index({0}) returns the row coordinates of the SparseMatrix without a data copy.
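To spell out the arithmetic: for a diagonal left operand D = diag(d) and a sparse right operand A, (D @ A)[i, j] = d[i] * A[i, j], so each stored nonzero of A at coordinate (row[k], col[k]) is scaled by d[row[k]]. Gathering the diagonal values with index_select(0, row), where row holds the row coordinates of rhs_mat, therefore matches the A.row indexing in the Python reference linked above.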

Member:

I see. rhs_mat->Indices() returns the COO indices. How is it different from rhs_mat->COOPointer()->indices? Do we want to have two interfaces?

Author (czkkkkkk):

They are the same, but we need Indices() exposed to Python to avoid a data copy of the COO tensor.

@dgl-bot (Collaborator) commented on Mar 9, 2023

Commit ID: bd0b8d124df34e8042355ce185b688f1775d56f8

Build ID: 4

Status: ✅ CI test succeeded.

Report path: link

Full logs path: link

@jermainewang merged commit a03dec0 into dmlc:master on Mar 9, 2023
czkkkkkk added a commit that referenced this pull request Apr 19, 2023
* [Sparse] Support Diag sparse format in C++

* update

* Update
DominikaJedynak pushed a commit to DominikaJedynak/dgl that referenced this pull request Mar 12, 2024
* [Sparse] Support Diag sparse format in C++

* update

* Update