Overarching tracking issue: #63
#163 allows running the various perf protocol implementations (libp2p/specs#478) on cloud instances.
Ideally we have a CI setup that allows users to trigger the above, removing the need to run it on their local machines.
## Workflow
Starting point: a libp2p implementation maintainer wants to test a new release candidate.
- They open a pull request to `libp2p/test-plans`, adding their release candidate to `perf/impl`.
- They trigger the automation via a GitHub comment.
- CI runs the following steps (see the workflow sketch after this list):
  - Generate an SSH keypair and set the public key at `perf/terraform/user.pub`.
  - Run `cd perf/terraform && terraform apply`.
  - Run `cd perf/runner && npm run start -- --client-public-ip $(terraform output -raw -state ../terraform/terraform.tfstate client_public_ip) --server-public-ip $(terraform output -raw -state ../terraform/terraform.tfstate server_public_ip)`. This writes the benchmark results to `perf/runner/benchmark-results.json`.
  - Run `cd perf/terraform && terraform destroy`.
  - Push `perf/runner/benchmark-results.json` to the pull request.
- Depending on the benchmark results:
  - If the automation catches a regression, the maintainer can cut another patch release, update the `libp2p/test-plans` pull request, and retrigger the automation.
  - If the results are fine, the maintainer can cut the release. Once released, they update the `libp2p/test-plans` pull request, retrigger the automation, and merge. Thus the `libp2p/test-plans` `master` branch always contains the latest benchmarking results.
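For concreteness, here is a minimal sketch of what the comment-triggered automation could look like as a GitHub Actions workflow. The file path, workflow name, `/perf` trigger phrase, and credential secret names are illustrative assumptions, not part of this proposal; the shell steps mirror the list above.

```yaml
# .github/workflows/perf.yml (illustrative path)
name: perf-benchmark

on:
  issue_comment:
    types: [created]

jobs:
  perf:
    # Only react to a "/perf" comment on a pull request (trigger phrase is an assumption).
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/perf')
    runs-on: ubuntu-latest
    env:
      # Assumption: cloud credentials are provided as repository secrets.
      AWS_ACCESS_KEY_ID: ${{ secrets.PERF_AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.PERF_AWS_SECRET_ACCESS_KEY }}
    steps:
      # Note: on issue_comment events this checks out the default branch;
      # checking out the pull request head is an extra step omitted here.
      - uses: actions/checkout@v3

      - name: Generate SSH keypair
        run: ssh-keygen -t ed25519 -N "" -f perf/terraform/user

      - name: Provision client and server instances
        working-directory: perf/terraform
        run: terraform init && terraform apply -auto-approve

      - name: Run benchmarks
        working-directory: perf/runner
        run: |
          npm ci
          npm run start -- \
            --client-public-ip "$(terraform output -raw -state ../terraform/terraform.tfstate client_public_ip)" \
            --server-public-ip "$(terraform output -raw -state ../terraform/terraform.tfstate server_public_ip)"

      - name: Destroy instances
        if: always()  # tear down even if benchmarking failed, so instances don't leak
        working-directory: perf/terraform
        run: terraform destroy -auto-approve

      - name: Push benchmark results to the pull request branch
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add perf/runner/benchmark-results.json
          git commit -m "chore: add benchmark results"
          git push
```

A maintainer would then comment the trigger phrase (here `/perf`) on the pull request to kick off a run. Pushing the results back to the pull request branch requires checking out the PR head rather than the default branch, which is left out of the sketch above.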