
[bug] race condition between fileserver and pod #700

Closed
pepoviola opened this issue Jan 26, 2023 · 2 comments · Fixed by #728
Labels: bug (Something isn't working), good first issue (Good for newcomers)

Comments

pepoviola (Collaborator) commented Jan 26, 2023

For some configurations we have a race condition between the first pod and the fileserver. We should add a wait-until-ready step for the fileserver, to ensure it is ready when the first pod is starting (see the sketch below).

see: https://gitlab.parity.io/parity/mirrors/substrate/-/jobs/2314818

cc: @michalkucharczyk
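
One way to close the race is to gate pod creation on fileserver readiness, as suggested above. A minimal sketch of such a gate, assuming the fileserver exposes a plain HTTP endpoint; the function name, URL, and timeouts here are hypothetical and not zombienet's actual API:

```typescript
// Hypothetical sketch: poll the fileserver's HTTP endpoint until it
// responds, so the first pod never races ahead of it.
async function waitFileServerReady(
  url: string,
  timeoutMs = 60_000,
  intervalMs = 1_000,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url);
      if (res.ok) return; // fileserver is up and serving
    } catch {
      // not reachable yet; fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`fileserver at ${url} not ready after ${timeoutMs}ms`);
}

// e.g. before spawning the first pod:
// await waitFileServerReady("http://fileserver/");
```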

davxy (Member) commented Jan 31, 2023

@pepoviola The comment for an outstanding substrate PR says that tests are not passing due to this.

Indeed I saw a test failing repeatedly there, but now it is green. Is this an issue that manifests only sporadically? Is it OK to merge our PR, or do we have to wait for this issue to be solved?

pepoviola (Collaborator, Author) commented

> @pepoviola The comment for an outstanding substrate PR says that tests are not passing due to this.
>
> Indeed I saw a test failing repeatedly there, but now it is green. Is this an issue that manifests only sporadically? Is it OK to merge our PR, or do we have to wait for this issue to be solved?

Hi @davxy, this race condition happens sporadically (and under heavy load in our cluster). It is safe to merge the PR in substrate :)

Thanks!!
