
[Bug] Can SparseTIR run GAT end-to-end directly? #100

Open
Ed-gong opened this issue May 20, 2024 · 1 comment
Comments


Ed-gong commented May 20, 2024

The paper and project look very interesting to me, but a few points confused me; I have listed my questions below.

Questions

  1. The Python code in the example/spmm folder evaluates the kernel for unweighted SpMM, which is used in GCN (the corresponding DGL kernel is dgl.ops.copy_u_sum(g, x)).
    Is there any code to test the weighted SpMM used in the GAT case? For example, DGL provides a weighted SpMM via update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o')). Does SparseTIR provide a similar kernel, and how can we compare the kernels' performance?

  2. Is there any code in this repo that can run GAT end-to-end directly?

  3. For GCN, the paper says that it was integrated into a framework for end-to-end training. Could you provide more information about this framework? For example, which framework was used for the integration, DGL or PyG?

  4. The paper says that format decomposition is applied to SpMM only. Could we apply it to SDDMM as well and evaluate the resulting kernel running time?

Looking forward to your response. Thank you.


yzh119 commented May 20, 2024

For Q3: the end-to-end evaluations are available at https://github.com/uwsampl/sparsetir-artifact .
Regarding Q1 and Q2: yes, the same technique also applies to weighted SpMM, and SparseTIR can be used for GAT if you use the weighted SpMM and SDDMM kernels it generates. However, I don't have the bandwidth to do this at the moment.
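For readers unfamiliar with the distinction, here is a minimal sketch of what the two kernels compute, using SciPy sparse matrices rather than DGL or SparseTIR (the graph, weights, and feature values are made up for illustration): unweighted SpMM (GCN-style copy_u_sum) sums neighbor features directly, while weighted SpMM (GAT-style u_mul_e followed by sum) scales each neighbor feature by a per-edge weight, such as an attention score.

```python
import numpy as np
import scipy.sparse as sp

# Toy graph: 3 nodes, 4 directed edges (src -> dst), with per-edge weights
# standing in for GAT attention scores.
src = np.array([0, 1, 2, 2])
dst = np.array([1, 2, 0, 1])
w = np.array([0.5, 2.0, 1.0, 0.25])

# Node features, shape (num_nodes, feat_dim).
x = np.arange(6, dtype=np.float64).reshape(3, 2)

# Unweighted SpMM (copy_u_sum semantics): adjacency entries are all 1,
# so each destination node sums its in-neighbors' features.
A1 = sp.coo_matrix((np.ones_like(w), (dst, src)), shape=(3, 3)).tocsr()
y_unweighted = A1 @ x

# Weighted SpMM (u_mul_e + sum semantics): adjacency entries are the edge
# weights, so each in-neighbor's feature is scaled before summation.
Aw = sp.coo_matrix((w, (dst, src)), shape=(3, 3)).tocsr()
y_weighted = Aw @ x
```

The only structural difference is the values stored at the nonzeros, which is why the same composable-format machinery applies to both.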

Regarding Q4: yes, composable formats should also apply to SDDMM.
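For context on what SDDMM computes (a minimal NumPy/SciPy sketch of the operation's semantics, not SparseTIR code; the shapes and sparsity pattern are made up): given dense matrices U and V and a sparse mask, SDDMM evaluates the dense product U @ V.T only at the mask's nonzero positions.

```python
import numpy as np
import scipy.sparse as sp

# Sparsity pattern: the (row, col) positions where output is needed.
rows = np.array([0, 1, 2])
cols = np.array([1, 2, 0])

rng = np.random.default_rng(0)
U = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))

# SDDMM: out[i, j] = dot(U[i], V[j]) only for (i, j) in the pattern,
# i.e. one dot product per nonzero instead of a full dense product.
vals = np.einsum('ik,ik->i', U[rows], V[cols])
out = sp.coo_matrix((vals, (rows, cols)), shape=(3, 3))
```

Because the work is driven by the nonzero positions of the sparse pattern, the same format-decomposition idea used for SpMM (partitioning nonzeros into sub-formats) is applicable here too.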
