
Some question with CQL #6

Open
dbsxdbsx opened this issue Sep 16, 2022 · 0 comments

Comments


dbsxdbsx commented Sep 16, 2022

First, thanks for your implementation of so many CQL variants. Some of the questions below relate to your implementation, and some to CQL itself.

  1. Why does `_compute_policy_values` in CQL-SAC return `qs1 - log_pis.detach(), qs2 - log_pis.detach()` with `log_pis` detached? I think it should not be detached.
  2. What is the meaning of `self.temp` and `self.cql_weight` in CQL-SAC? I think `self.cql_weight` is redundant, since `cql_alpha` has a similar meaning.
  3. Is it essential to use two Q functions in CQL?
  4. In CQL-SAC-Discrete, I think the `q1` inside `cql1_scaled_loss = torch.logsumexp(q1, dim=1).mean() - q1.mean()` should be an expectation over all possible Q(s, a), not just the best one. Am I wrong?
  5. In CQL-SAC, why is `retain_graph=True` used for the Lagrange and critic optimizers?
  6. The most important question: according to p. 29 of the paper, for continuous actions the logsumexp objective uses both Q-values from uniformly sampled actions and Q-values from actions sampled from pi. But why are actions from pi also used here? I asked about this here as well, but I am still at a loss.
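
To make the discrete logsumexp question concrete, here is how I currently read the CQL(H) regularizer. This is my own sketch, not your repo's exact code; the function name and tensor shapes are my assumptions:

```python
import torch

def cql_regularizer_discrete(q_values, dataset_actions):
    """Sketch of the CQL(H) regularizer for discrete actions.

    q_values:        (batch, num_actions) Q(s, a) for every action
    dataset_actions: (batch, 1) long tensor of actions from the dataset
    """
    # logsumexp over ALL actions: a soft maximum that still sends gradient
    # to every Q(s, a), pushing down out-of-distribution action values.
    push_down = torch.logsumexp(q_values, dim=1).mean()
    # Q-values of the dataset actions are pushed back up.
    push_up = q_values.gather(1, dataset_actions).mean()
    return push_down - push_up
```

So as I understand it, the term inside the logsumexp ranges over all actions, while the subtracted term is evaluated only at the dataset actions, not at the best action.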

I know some of these CQL questions should be asked in the original repo, but its author is no longer active.
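
For the last question, here is a sketch of how I understand the importance-sampled logsumexp on p. 29 for continuous actions. Again this is only my reading, with made-up names and shapes, not your implementation:

```python
import math
import torch

def cql_logsumexp_continuous(q_rand, q_pi, log_pi, act_dim):
    """Sketch of the importance-sampled logsumexp for continuous actions.

    q_rand: (batch, n_rand) Q at actions drawn uniformly from [-1, 1]^d
    q_pi:   (batch, n_pi)   Q at actions sampled from the current policy
    log_pi: (batch, n_pi)   log-density of those policy samples
    """
    # Importance correction: subtract each sample's proposal log-density.
    log_uniform = math.log(0.5 ** act_dim)  # density of Uniform([-1, 1]^d)
    corrected = torch.cat(
        [q_rand - log_uniform, q_pi - log_pi.detach()], dim=1
    )
    # Policy samples concentrate where Q is large, so mixing them with the
    # uniform samples tightens this importance-sampled soft-max estimate.
    return torch.logsumexp(corrected, dim=1).mean()
```

If this reading is right, the policy actions are there purely as a second proposal distribution for the importance-sampled estimate, which would answer my own question, but I would like confirmation.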
