Re write the result set #1165
Conversation
Note this PR is to a `possible-v4` branch, which is a branch of `master`. (@marcharper @meatballs Genuinely sorry for the diff on `result_set.py`, it's a mess as this kind of guts that file... If we like the idea of what I've done here I'm happy to fix that as best I can.)

At present the result set plays the matches in parallel, followed by a (sometimes computationally expensive) single process of reading in and analysing the interactions.

TLDR: This changes how the internal result set calculations are done, so that they are calculated more efficiently. This happens by doing the following:

1. The various match by match calculations (the winner of a match, the cooperation count, the score etc...) are done by the tournament, mainly thanks to `Tournament._calculate_results`.
2. These calculations are written to file (at present we just write the actual actions of an interaction), thanks to `Tournament._write_interaction_to_file`.
3. The form of this file has also changed: for every match there are 2 rows, one row corresponding to each player. This might seem costly storage wise but is done to enable faster analysis (more on that later).
4. The analysis is now done entirely using `dask`: "Dask is a flexible parallel computing library for analytic computing." (https://dask.pydata.org/). This ensures all calculations are done on disk (so no huge memory problems) but also that they can be done **in parallel**. This is all done using the nice Pandas-like API that dask has, so essentially all calculations for the result set are just done by a few `groupby` statements. The tasks to be run are built with `ResultSet._build_tasks` and then computed (in parallel) with `ResultSet._compute_tasks`. (A rough sketch of this style of computation is shown after the list of user facing changes below.)
5. There is *some* work being done outside of dask, but that's just reshaping data: `dask` outputs `pandas.Series` and, to be consistent with our current setup, these are changed to lists of lists etc... This is all done in `ResultSet._reshape_out`.

**Some user facing changes** (which is why I suggest this would be for a `v4.0.0` release):

- `result_set.interactions` is no longer possible. This is a choice and not forced by the redesign: I don't think this is ever necessary or helpful. The data file can always be read in.
- The ability to `tournament.play(in_memory=True)` has been removed. Again, not entirely a forced change (although it would be a tiny bit of work to allow this). Given the recent work to make everything work on Windows I don't think this is necessary, and it has allowed for a big deletion of code (which is a good thing re maintenance).
- The interactions data file is now in a different format; this is forced by the design choice.
- I have made a slight modification to `result_set.cooperation`. Currently this sets all self interactions to be 0 but I think that's not the right way to display it (note that the cooperation rates were all being done correctly).
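To make the `dask` step above concrete, here is a minimal, hedged sketch of this style of computation (not the PR's actual code): the filename is a placeholder and the column names follow the "Player index" / "Opponent index" / "Actions" layout that appears in the review snippets further down.

```python
import dask.dataframe as dd

# Placeholder filename; in practice this is the interactions file the
# tournament writes, with one row per player per match.
df = dd.read_csv("interactions.csv")

# With one row per player per match, per-player statistics reduce to groupbys.
df["Cooperation count"] = df["Actions"].str.count("C")
df["Turns"] = df["Actions"].str.len()
task = df.groupby("Player index")[["Cooperation count", "Turns"]].sum()

# Nothing is read or computed until .compute() is called; the work is then
# done chunk by chunk, in parallel, without loading the whole file in memory.
totals = task.compute()
cooperation_rates = totals["Cooperation count"] / totals["Turns"]
```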
**As well as ensuring the work done in series is reduced and the parallel workers also calculate the scores (which I think is more efficient)**, this also seems to be faster.

On this branch:

```python
import axelrod as axl
players = [s() for s in axl.strategies if s.classifier["memory_depth"] == 1]
tournament = axl.Tournament(players, turns=200, repetitions=100)
results = tournament.play(processes=4)
```

Took: 1min 49s

```python
import axelrod as axl
players = [s() for s in axl.short_run_time_strategies]
tournament = axl.Tournament(players, turns=200, repetitions=20)
results = tournament.play(processes=4)
```

Took: 21min 2s

On current master:

```python
import axelrod as axl
players = [s() for s in axl.strategies if s.classifier["memory_depth"] == 1]
tournament = axl.Tournament(players, turns=200, repetitions=100)
results = tournament.play(processes=4)
```

Took: 2min 36s

```python
import axelrod as axl
players = [s() for s in axl.short_run_time_strategies]
tournament = axl.Tournament(players, turns=200, repetitions=20)
results = tournament.play(processes=4)
```

Took: 28min 8s

**One final aspect to consider** (I think) is that adding `dask` as a dependency opens up the potential to use its `delayed` decorator to do all our parallel processing. This would have the benefit of being able to use the distributed scheduler that `dask` has. (We'd have to investigate if this actually works for our parallelisation, but at least the possibility is there.)
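To illustrate that final point, here is a purely speculative sketch of what `delayed`-based match play could look like; `play_match` is a stand-in and this is not how the tournament currently parallelises.

```python
import dask
from dask import delayed

@delayed
def play_match(pair):
    # Stand-in for the real match-playing work.
    return sum(pair)

# Build the lazy task graph, then compute it with whichever scheduler dask is
# configured to use (threads, processes, or a distributed cluster).
tasks = [play_match((i, j)) for i in range(4) for j in range(i, 4)]
results = dask.compute(*tasks)
```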
Will review more -- github isn't loading the large diffs and I don't want to lose the comments I've already made.
Do we want to add an end user function to assist in reading the interactions file?
axelrod/interaction_utils.py (Outdated)

```diff
@@ -239,36 +240,27 @@ def compute_sparklines(interactions, c_symbol='█', d_symbol=' '):
             sparkline(histories[1], c_symbol, d_symbol))


-def read_interactions_from_file(filename, progress_bar=True,
-                                num_interactions=False):
+def read_interactions_from_file(filename,
```
one line?
> Will review more -- github isn't loading the large diffs and I don't want to lose the comments I've already made.
Yeah, no rush with this at all. Small piecemeal chunks are probably the way to go. Sorry the diff is so horrible.
> Do we want to add an end user function to assist in reading the interactions file?
We've got `iu.read_interactions_from_file`, which I've documented in 843b008. I think that could be user facing (and in fact we could have another tutorial going over each function in `iu`?) but let me know if you were thinking of something different.
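For reference, a hedged usage sketch of that function: the filename is a placeholder, and the keyword argument follows the old signature shown in the earlier `interaction_utils.py` diff, so it may differ on this branch.

```python
import axelrod.interaction_utils as iu

# "interactions.csv" is a placeholder for the data file the tournament writes.
interactions = iu.read_interactions_from_file("interactions.csv",
                                               progress_bar=False)

# Keys are (player index, opponent index) pairs; values are the recorded
# interactions for that pairing (one entry per repetition).
for pair, histories in interactions.items():
    print(pair, len(histories))
```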
axelrod/interaction_utils.py
Outdated
for _, d in tqdm.tqdm(groupby): | ||
key = tuple(d[["Player index", "Opponent index"]].iloc[0]) | ||
value = list(map(str_to_actions, zip(*d["Actions"]))) | ||
try: |
Could use a `defaultdict(list)` here.
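For anyone unfamiliar with the pattern, a small self-contained illustration of what the suggestion amounts to (generic data, not the PR's code):

```python
from collections import defaultdict

pairs = [("a", 1), ("a", 2), ("b", 3)]

# Plain dict: the first occurrence of each key needs special handling.
plain = {}
for key, value in pairs:
    try:
        plain[key].append(value)
    except KeyError:
        plain[key] = [value]

# defaultdict(list): the empty list is created automatically on first access.
grouped = defaultdict(list)
for key, value in pairs:
    grouped[key].append(value)

assert plain == dict(grouped)  # {'a': [1, 2], 'b': [3]}
```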
Good call, much nicer 👍
This highlights it as a user facing function but I'm happy to do something further.
Hopefully this now works on Windows.
Attempting to make file comparison work on Windows.
In principle everything looks ok. There were several places where the spacing seems off / isn't the most readable choice IMO.
axelrod/fingerprint.py (Outdated)

```diff
-else:
-    cooperation_rates[opponent_index] = [did_c(player_history)]
+df = dd.read_csv(filename)
+df = df[df["Player index"] == 0][["Opponent index", "Actions"]]
```
A comment here may be useful in the future, particularly explaining the need for the filter on player index.
axelrod/result_set.py (Outdated)

```diff
-from numpy import mean, nanmedian, std
+from numpy import mean, nanmedian, std, array, nan_to_num
```
minor: I'd prefer to `import numpy as np` and then use `np.mean` and such, because names like `mean`, `std`, `array` are common and there could be collisions.
Yeah I agree, I'll change this.
axelrod/result_set.py (Outdated)

```diff
-        del self.repetitions_d  # Manual garbage collection
-        return repetitions
+        (mean_per_reps_player_opponent_df,
```
Is there a reason not to pass these in explicitly as parameters? Or to use `*args` instead?
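A rough sketch of the alternatives being discussed, with entirely hypothetical names:

```python
def reshape_unpacked(computed):
    # Unpacking a single tuple of results inside the function.
    (mean_df, sum_df) = computed
    return [mean_df, sum_df]

def reshape_explicit(mean_df, sum_df):
    # Passing each result in explicitly, so the signature documents them.
    return [mean_df, sum_df]

def reshape_varargs(*dfs):
    # *args, convenient if the set of computed results may grow.
    return list(dfs)
```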
Yeah that's way better 👍
axelrod/result_set.py (Outdated)

```diff
-        [pi1, pi2, pi3, ..., pim]
+    @update_progress_bar
+    def _build_match_lengths(self, length_series):
```
Seems like these functions could be abstracted, they are all very similar.
I thought that and started taking a look at it but they've all got tiny differences that made it a bit painful. I'll take another look :) 👍
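For what it's worth, one possible shape for such an abstraction, with hypothetical names and assuming the `_build_*` methods mostly turn a series keyed by (player index, opponent index) into a nested list:

```python
from typing import Dict, List, Tuple

def matrix_from_pair_dict(pair_dict: Dict[Tuple[int, int], float],
                          num_players: int,
                          default: float = 0) -> List[List[float]]:
    """Turn a dict keyed by (player index, opponent index) into a
    num_players x num_players nested list, filling missing pairs with
    ``default``."""
    return [
        [pair_dict.get((player, opponent), default)
         for opponent in range(num_players)]
        for player in range(num_players)
    ]

# e.g. a _build_* method could then reduce to something like:
#     return matrix_from_pair_dict(series.to_dict(), self.num_players)
```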
```python
for opponent_index in range(self.num_players):
    count = cooperation_dict.get((player_index, opponent_index),0)
    if player_index == opponent_index:
        # Address double count
```
Spacing
axelrod/result_set.py (Outdated)

```diff
-                else:
-                    payoff_stddevs[player][opponent] = 0
+                row.append(good_partner_dict.get((player_index,
```
minor: In a case like this I think wrapping the entire tuple is more readable:

```python
row.append(good_partner_dict.get(
    (player_index, opponent_index), 0))
```
axelrod/result_set.py (Outdated)

```python
@update_progress_bar
def _build_initial_cooperation_count(self, initial_cooperation_count_series):
    initial_cooperation_count_dict = initial_cooperation_count_series.to_dict()
    initial_cooperation_count = [
```
Spacing on this one seems weird.
axelrod/result_set.py (Outdated)

```python
columns = ["Win", "Score"]
sum_per_player_repetition_task = adf.groupby(groups)[columns].sum()

normalised_scores_task = adf.groupby(["Player index",
```
maybe break up into two lines or use different spacing
axelrod/tournament.py (Outdated)

```diff
-            keep_interactions=keep_interactions, in_memory=in_memory
-        )
+        result_set = ResultSet(
```
Spacing
axelrod/tournament.py (Outdated)

```python
states = [(C, C), (C, D), (D, C), (D, D)]
for state in states:
    if index == 1:
```
pull to the outer context; consider using `reverse()` (modifies in place)
Note I did not use `reversed()`, as it doesn't return a tuple; I also didn't use `.reverse()`, as that only works on lists.
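A quick standalone illustration of the distinction (plain strings stand in for the actions):

```python
state = ("C", "D")

tuple(reversed(state))   # ('D', 'C') -- reversed() yields an iterator,
                         # so it must be wrapped to get a tuple back
# state.reverse()        # AttributeError: .reverse() only exists on lists
state[::-1]              # ('D', 'C') -- slicing gives a reversed tuple directly
```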
Thanks for taking the time to look through.
I think I've got them all but I'm not at all precious here, happy to change things further. I believe I've caught all your other suggestions/requests, and I've gotten rid of some of the methods, replacing them with abstractions. I believe the ones that are left are probably at a good level of "reducible" and further abstraction would be unhelpful, but I'm happy to revisit.
All looks good to me. Nice piece of work!
Great, thanks for taking the time @meatballs 👍 Once we're happy with this (@marcharper please don't hesitate to point out anything missing), we can merge and then open a PR from the `possible-v4` branch.