Run performance tests on all PR against base branch #1630
base: dev
Conversation
Branch force-pushed from 38fc9eb to 4fef17b
Branch force-pushed from 1f99e84 to b5f4e63
Branch force-pushed from 963e4cc to fa4dc9e
Branch force-pushed from 4e13626 to fdb9406
Branch force-pushed from 171f783 to d330f9d
Branch force-pushed from 830c0d6 to 4896468
Automated performance checks have found a noticeable difference on commit

Performance tests 1st run output: No significant change in performance for tests:

If you want to skip performance checks for later commits, add the
Branch force-pushed from d3c562b to 773e28d
Maybe we can have it displayed as a table so it's more readable:
OK, though making it also readable as a plain text file (the intent is that the output markdown report file is intended both for humans reading it in a text editor and for scripts optionally producing richer formats, like a GitHub comment) is more complex with a table than with a list. But I'll try.
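For illustration, a list-style formatter along those lines might look like the sketch below; the result shape and field names here are assumptions for the example, not the PR's actual code:

```js
// Hypothetical formatter: a markdown bullet list stays readable both in a
// plain text editor and once rendered (e.g. as a GitHub comment).
function formatResultInHumanReadableWay(results) {
  return results
    .map(
      (r) =>
        `- ${r.name}: base ${r.baseMean.toFixed(2)}ms -> ` +
        `current ${r.currentMean.toFixed(2)}ms (${r.differencePct.toFixed(1)}%)`,
    )
    .join("\n");
}

// Example usage with made-up numbers:
console.log(
  formatResultInHumanReadableWay([
    { name: "seeking", baseMean: 12.4, currentMean: 11.9, differencePct: -4.0 },
  ]),
);
```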
Branch force-pushed from 66d9814 to 23edde8
Automated performance checks have been performed on commit

Tests results: ✅ Tests have passed.

Performance tests 1st run output: No significant change in performance for tests:

If you want to skip performance checks for later commits, add the
Branch force-pushed from 628dc3b to 7458f02
Automated performance checks have been performed on commit

Tests results: ✅ Tests have passed.

Performance tests 1st run output: No significant change in performance for tests:

If you want to skip performance checks for later commits, add the
```js
 */
export function declareTestGroup(name, code, timeout) {
  if (areTestsAlreadyRunning) {
    error(`"declareTestGroup" function call not performed at top level.`);
```
Is the error message consistent with what is actually being checked here?
Yes, the idea is that `declareTestGroup` is kind of like testing frameworks' `describe` functions, except that for now I don't think it's possible to nest them in one another.

It is supposed to be called at the top level, as the library relies on them all being called by the time its scheduled test-running task 😉 begins (after a low timeout).

Technically you could have a call that is not at the top level but is still executed on file evaluation (e.g. inside an `if` at the top level) and that would be valid, but an accurate descriptive error message for that case is hard to come up with, so I chose "top level" as a concept that can easily be grasped without needing to have the whole architecture in mind.
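To illustrate the registration pattern described above, here is a minimal sketch assuming a simple array of registered groups and a zero-delay timeout; the PR's actual lib.js may differ:

```js
// Minimal sketch: collect test groups declared at the top level, then run
// them once file evaluation has finished.
const testGroups = [];
let areTestsAlreadyRunning = false;

export function declareTestGroup(name, code, timeout = 5000) {
  if (areTestsAlreadyRunning) {
    // Declarations arriving after the run has started cannot be scheduled.
    throw new Error(
      `"declareTestGroup" function call not performed at top level.`,
    );
  }
  testGroups.push({ name, code, timeout });
}

// Low timeout: by the time it fires, every top-level `declareTestGroup` call
// made during file evaluation has already registered its group.
setTimeout(async () => {
  areTestsAlreadyRunning = true;
  for (const { name, code } of testGroups) {
    console.log(`Running test group: ${name}`);
    await code(); // run the group's test scenario (may return a Promise)
  }
}, 0);
```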
tests/performance/src/lib.js (Outdated)
```js
} else {
  testNumber = Number(location.hash.substring(1));
}
if (testNumber < 100) {
```
I'm not sure what 100 is. Is it the maximum number of tests that can be run?
Just to give some context:

- The Node.js script in `tests/performance/run.mjs` is the test runner executed server-side (similar to vitest's `globalSetup`): it runs one of the test pages multiple times on the local Chrome binary.
- This script is the test library that runs in the browser alongside the written tests (like vitest's JS lib when we do `import { describe } from "vitest"`). It handles the client-side test management logic (including sending HTTP requests to the Node.js script to report results) so test files can just be about the test scenarios.

This `testNumber` is the number of times we will reload the page in the current browser. What we actually do here is switch between the `current.html` and `previous.html` pages each time the tests for the current page are all finished.

Because we're reloading the page, we cannot rely on variables declared in that script to track that count. Here we use the "fragment" part of the URL to store the current iteration (e.g. `/current.html#34`).

Once we've reached `#100`, we tell the Node.js script that we're done. It then kills the browser and either re-runs another one or finishes the tests.

I don't know if that's clear; it's a little complex, as we retry the same tests a LOT of times in different processes at different times, just to be sure we actually have a performance issue and not a false positive due to temporary high CPU usage, some browser optimizations when similar code is repeated, or random noise.
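A minimal sketch of that fragment-based iteration logic is below; the reporting endpoints and port are illustrative assumptions, and the PR's actual lib.js may differ:

```js
// The iteration count survives page reloads because it lives in the URL
// fragment (e.g. /current.html#34), not in a script variable.
const MAX_ITERATIONS = 100;

function getCurrentIteration() {
  const fromHash = Number(location.hash.substring(1));
  return Number.isNaN(fromHash) ? 0 : fromHash;
}

async function onTestsForThisPageDone(results) {
  // Report this page's results to the Node.js runner over HTTP
  // (endpoint and port are assumptions, not the PR's actual API).
  await fetch("http://127.0.0.1:6789/results", {
    method: "POST",
    body: JSON.stringify(results),
  });

  const nextIteration = getCurrentIteration() + 1;
  if (nextIteration >= MAX_ITERATIONS) {
    // Tell the runner we're done: it can kill this browser instance and
    // either start another one or finish the tests.
    await fetch("http://127.0.0.1:6789/done", { method: "POST" });
    return;
  }

  // Alternate between the two builds and keep the counter in the fragment,
  // e.g. /current.html#34 -> /previous.html#35.
  const nextPage = location.pathname.endsWith("current.html")
    ? "previous.html"
    : "current.html";
  location.href = `${nextPage}#${nextIteration}`;
}
```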
How does it interact with `TEST_ITERATIONS = 30` from `performance/run.mjs`?

Does it mean the browser test is started 30 times and each run does 100 reloads? So it's actually 30×100 reloads. Isn't that too many?
Yes, that must be the right count (technically 1500 with the previous build and 1500 with the new one).

Too many for whom? That's to give more confidence in the final result. When I looked at other benchmarking projects, 1500 attempts is in the very low tier.

The real issue I can see is that tests may run for an hour, but that's not dramatic to me.
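To make the arithmetic explicit (constant names taken from the discussion above, values as stated):

```js
// Rough count of page loads per CI run under those constants:
const TEST_ITERATIONS = 30; // browser launches, from performance/run.mjs
const MAX_ITERATIONS = 100; // reloads per browser instance, from lib.js

const totalPageLoads = TEST_ITERATIONS * MAX_ITERATIONS; // 3000
const loadsPerBuild = totalPageLoads / 2; // 1500 for current.html, 1500 for previous.html
console.log({ totalPageLoads, loadsPerBuild });
```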
```js
if (results.better.length > 0) {
  console.error(
    "\nBetter performance at first attempt for tests:\n\n" +
      formatResultInHumanReadableWay(results.better),
  );
}
```
Why log the results of the first attempt again?
When you run the tests in the console, we output a lot of intermediate logs indicating current progress (and current results), because the tests take a lot of time.

At that point the first iteration's results would be lost far up in the terminal's scrollback buffer, so we just repeat them because we're nice guys.
```js
staticServer = launchStaticServer(currentDirectory, {
  httpPort: PERF_TESTS_PORT,
});
resultServer = createResultServer(onFinished, onError);
```
`createResultServer` could also take the port as an argument, and the port could be defined as a constant, similar to `CONTENT_SERVER_PORT` or `PERF_TESTS_PORT`.
Yes, but then we would need to communicate that port to the webpage (we could do it through the URL, e.g. through the fragment or a query string) so the lib would know where to send back the results, and I was too lazy.

At least when it's hardcoded you know you have to check everywhere it is used :p
I'll try something.
I've now added both the port and the retry attempt to the page's fragment.
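One possible shape for that fragment encoding, purely as an illustration (the actual format used in the PR may differ):

```js
// Hypothetical encoding: the runner opens e.g. "/current.html#p6789a12",
// where "p" carries the result-server port and "a" the current attempt.
function buildTestPageUrl(page, resultPort, attempt) {
  return `${page}#p${resultPort}a${attempt}`;
}

function parseTestPageFragment(hash) {
  const match = /^#p(\d+)a(\d+)$/.exec(hash);
  if (match === null) {
    return { resultPort: null, attempt: 0 };
  }
  return { resultPort: Number(match[1]), attempt: Number(match[2]) };
}

// Example: parseTestPageFragment("#p6789a12") -> { resultPort: 6789, attempt: 12 }
```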
Automated performance checks have been performed on commit

Tests results: ✅ Tests have passed.

Performance tests 1st run output: No significant change in performance for tests:

If you want to skip performance checks for later commits, add the
This PR makes multiple updates to our performance tests (which check for performance regressions):

- I simplified the writing of performance tests by adding a `lib.js` file handling test management, and I added some multithread tests.
- I now chose to run performance tests in CI for all PRs.
- Performance tests in CI are now run against the base branch (instead of against the last RxPlayer release).
- A comment is left on the GitHub PR if performance issues are detected (a rough sketch of that comparison step follows below).
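As an illustration of that last point, the comparison step could look roughly like the following; the threshold, result shape, and function name are assumptions for the sketch, not the PR's actual implementation:

```js
// Hypothetical comparison step: group per-test results by whether the PR
// build ("current") is significantly worse, better, or unchanged compared
// to the base-branch build ("previous").
function categorizeResults(tests, significanceThresholdPct = 5) {
  const worse = [];
  const better = [];
  const unchanged = [];
  for (const { name, previousMean, currentMean } of tests) {
    const differencePct = ((currentMean - previousMean) / previousMean) * 100;
    if (differencePct > significanceThresholdPct) {
      worse.push({ name, differencePct });
    } else if (differencePct < -significanceThresholdPct) {
      better.push({ name, differencePct });
    } else {
      unchanged.push({ name, differencePct });
    }
  }
  // A CI step could then post `worse` as a GitHub PR comment when non-empty.
  return { worse, better, unchanged };
}
```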