[Github][CI] Remove premerge container #123483
Conversation
This patch removes the container from the premerge job. We are moving away from the kubernetes executor back to executing everything in the same container due to reliability issues. This patch updates everything in the premerge job to work.
Mostly just putting this up so I can quickly land it when I update everything in
@llvm/pr-subscribers-github-workflow

Author: Aiden Grossman (boomanaiden154)

Changes

This patch removes the container from the premerge job. We are moving away from the kubernetes executor back to executing everything in the same container due to reliability issues. This patch updates everything in the premerge job to work.

Full diff: /~https://github.com/llvm/llvm-project/pull/123483.diff

1 Files Affected:
diff --git a/.github/workflows/premerge.yaml b/.github/workflows/premerge.yaml
index 261dc8bbb97e0a..ddef6206bcae89 100644
--- a/.github/workflows/premerge.yaml
+++ b/.github/workflows/premerge.yaml
@@ -18,11 +18,6 @@ jobs:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
- container:
- image: ghcr.io/llvm/ci-ubuntu-22.04:latest
- defaults:
- run:
- shell: bash
steps:
- name: Checkout LLVM
uses: actions/checkout@v4
So will this be the container used on all the runners?
All the Linux runners, yes. Originally we used one of the official Github Actions images that would then spawn a new container where all the work was actually performed. We had reliability issues with that, so we're going back to running the runner/jobs in a single container. We will try and switch back to the original technique when we can, because it adds some nice flexibility, but we need to get the reliability problems fixed first.
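The two execution models described above can be sketched in workflow terms as follows. This is an illustrative fragment, not the exact production config; the `runs-on` label is a placeholder assumption, while the image name and the removed `container:` block are taken from the diff in this PR:

```yaml
# Old model: the runner is a stock GitHub Actions image, and the
# job-level `container:` block tells it to spawn a separate job
# container in which every step actually runs.
jobs:
  premerge-checks-linux:
    runs-on: self-hosted-linux   # placeholder label, not the real one
    container:
      image: ghcr.io/llvm/ci-ubuntu-22.04:latest
    defaults:
      run:
        shell: bash
    steps:
      - name: Checkout LLVM
        uses: actions/checkout@v4

# New model (this PR): the runner process itself already lives inside
# the CI container, so the `container:` block is dropped and steps run
# directly on the runner.
#
#  premerge-checks-linux:
#    runs-on: self-hosted-linux   # placeholder label, not the real one
#    steps:
#      - name: Checkout LLVM
#        uses: actions/checkout@v4
```

The trade-off noted in the comment: the job-level `container:` approach decouples the runner image from the build environment (nice flexibility), at the cost of an extra container-spawning layer that was causing the reliability issues.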
Fine with the code change, but the PR description should reference an LLVM-zorg issue which explains the issues and the switch, and that issue should link to the upstream konnectivity issue.
(Otherwise we will forget)
This is part of a temporary fix for llvm/llvm-zorg#362.