Docs and such #687

Merged: 9 commits, Aug 14, 2016
Changes from 4 commits
24 changes: 16 additions & 8 deletions README.rst
@@ -15,15 +15,24 @@
Axelrod
=======

A repository with the following goals:
A library with the following principles and goals:

1. To enable the reproduction of previous Iterated Prisoner's Dilemma research as easily as possible.
2. To produce the de-facto tool for any future Iterated Prisoner's Dilemma research.
3. To provide as simple a means as possible for anyone to define and contribute
1. Enabling the reproduction of previous Iterated Prisoner's Dilemma research
as easily as possible.
2. Creating the de-facto tool for future Iterated Prisoner's Dilemma
research.
3. Providing as simple a means as possible for anyone to define and contribute
new and original Iterated Prisoner's Dilemma strategies.
4. Emphasizing readability along with an open and welcoming community that
   is accommodating to developers and researchers of all skill levels.

**Please contribute strategies via pull request (or just get in touch
with us).**
Currently the library contains well over 100 strategies and can perform a
variety of tournament types (RoundRobin, Noisy, Spatially-distributed, and
probabilistically ending) and population dynamics while taking advantage
of multi-core processors.


**Please contribute via pull request (or just get in touch with us).**

For an overview of how to use and contribute to this repository, see the
documentation: http://axelrod.readthedocs.org/
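The kind of match the library automates can be sketched in plain Python. This is a hypothetical, standalone illustration, not the library's API: `play_match`, `SCORES`, and the strategy callables are all made up here.

```python
# Hypothetical sketch, NOT the library's API: one iterated Prisoner's
# Dilemma match under the standard payoffs R=3, S=0, T=5, P=1.
C, D = "C", "D"
SCORES = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def play_match(strategy_1, strategy_2, turns):
    """Play `turns` rounds; return each player's cumulative score."""
    history_1, history_2 = [], []
    score_1 = score_2 = 0
    for _ in range(turns):
        # Each strategy sees only the opponent's history.
        move_1 = strategy_1(history_2)
        move_2 = strategy_2(history_1)
        history_1.append(move_1)
        history_2.append(move_2)
        gain_1, gain_2 = SCORES[(move_1, move_2)]
        score_1 += gain_1
        score_2 += gain_2
    return score_1, score_2

tit_for_tat = lambda opponent: opponent[-1] if opponent else C
defector = lambda opponent: D
print(play_match(tit_for_tat, defector, 5))  # (4, 9)
```

Tit For Tat concedes only the first round; mutual defection follows, which is why the defector's lead stays fixed at five points.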
@@ -91,8 +100,7 @@ at http://axelrod-tournament.readthedocs.org.
Contributing
============

All contributions are welcome, with a particular emphasis on
contributing further strategies.
All contributions are welcome!

You can find helpful instructions about contributing in the
documentation:
4 changes: 2 additions & 2 deletions axelrod/_strategy_utils.py
@@ -15,7 +15,7 @@ def detect_cycle(history, min_size=1, offset=0):
Mainly used by hunter strategies.

Parameters

----------
history: sequence of C and D
The sequence to look for cycles within
min_size: int, 1
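A rough, standalone reimplementation guessed from the docstring above; the library's own `detect_cycle` may differ in details (for example, how it treats partial trailing cycles).

```python
def detect_cycle(history, min_size=1, offset=0):
    """Return the shortest repeating block of `history[offset:]` as a
    tuple, or None if no cycle of size >= min_size fills the sequence."""
    sequence = history[offset:]
    for size in range(min_size, len(sequence) // 2 + 1):
        candidate = sequence[:size]
        repeats, remainder = divmod(len(sequence), size)
        # The candidate must tile the whole sequence, including any
        # partial final repetition.
        if candidate * repeats + candidate[:remainder] == sequence:
            return tuple(candidate)
    return None

print(detect_cycle(list("CDCDCD")))  # ('C', 'D')
```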
@@ -83,7 +83,7 @@ def look_ahead(player_1, player_2, game, rounds=10):


class Memoized(object):
"""Decorator. Caches a function's return value each time it is called.
"""Decorator that caches a function's return value each time it is called.
If called later with the same arguments, the cached value is returned
(not reevaluated). From:
https://wiki.python.org/moin/PythonDecoratorLibrary#Memoize
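The memoization pattern the docstring links to can be sketched as follows; this is an illustrative version following the PythonDecoratorLibrary recipe, and the library's own class may differ (for instance, in how unhashable arguments are handled).

```python
import functools

class Memoized(object):
    """Cache a function's return value per argument tuple; repeated
    calls with the same (hashable) arguments return the cached value."""

    def __init__(self, func):
        self.func = func
        self.cache = {}
        functools.update_wrapper(self, func)

    def __call__(self, *args):
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]

calls = []

@Memoized
def square(x):
    calls.append(x)  # record real invocations to show caching works
    return x * x

print(square(4), square(4), len(calls))  # 16 16 1
```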
2 changes: 1 addition & 1 deletion axelrod/actions.py
@@ -10,4 +10,4 @@ def flip_action(action):
elif action == Actions.D:
return Actions.C
else:
raise ValueError("Encountered a invalid action")
raise ValueError("Encountered an invalid action.")
1 change: 0 additions & 1 deletion axelrod/deterministic_cache.py
@@ -144,5 +144,4 @@ def load(self, file_name):
else:
raise ValueError(
'Cache file exists but is not the correct format. Try deleting and re-building the cache file.')

return True
36 changes: 19 additions & 17 deletions axelrod/ecosystem.py
@@ -11,13 +11,13 @@ def __init__(self, results, fitness=None, population=None):
self.payoff_matrix = self.results.payoff_matrix
self.payoff_stddevs = self.results.payoff_stddevs

# Population sizes will be recorded in this nested list, with each internal
# list containing strategy populations for a given turn. The first list,
# representing the starting populations, will by default have all equal
# values, and all population lists will be normalized to one.
# An initial population vector can also be passed. This will be
# normalised, but must be of the correct size and have all
# non-negative values.
# Population sizes will be recorded in this nested list, with each
# internal list containing strategy populations for a given turn. The
# first list, representing the starting populations, will by default
# have all equal values, and all population lists will be normalized to
# one. An initial population vector can also be passed. This will be
# normalised, but must be of the correct size and have all non-negative
# values.
if population:
if min(population) < 0:
raise TypeError("Minimum value of population vector must be non-negative")
@@ -29,8 +29,8 @@ def __init__(self, results, fitness=None, population=None):
else:
self.population_sizes = [[1.0 / self.nplayers for i in range(self.nplayers)]]

# This function is quite arbitrary and probably only influences the kinetics
# for the current code.
# This function is quite arbitrary and probably only influences the
# kinetics for the current code.
if fitness:
self.fitness = fitness
else:
@@ -43,11 +43,12 @@ def reproduce(self, turns):
plist = list(range(self.nplayers))
pops = self.population_sizes[-1]

# The unit payoff for each player in this turn is the sum of the payoffs
# obtained from playing with all other players, scaled by the size of the
# opponent's population. Note that we sample the normal distribution
# based on the payoff matrix and its standard deviations obtained from
# the iterated PD tournament run previously.
# The unit payoff for each player in this turn is the sum of the
# payoffs obtained from playing with all other players, scaled by
# the size of the opponent's population. Note that we sample the
# normal distribution based on the payoff matrix and its standard
# deviations obtained from the iterated PD tournament run
# previously.
payoffs = [0 for ip in plist]
for ip in plist:
for jp in plist:
@@ -56,9 +57,10 @@
p = random.normalvariate(avg, dev)
payoffs[ip] += p * pops[jp]

# The fitness should determine how well a strategy reproduces. The new populations
# should be multiplied by something that is proportional to the fitness, but we are
# normalizing anyway so just multiply times fitness.
# The fitness should determine how well a strategy reproduces. The
# new populations should be multiplied by something that is
# proportional to the fitness, but we are normalizing anyway so
# just multiply times fitness.
fitness = [self.fitness(p) for p in payoffs]
newpops = [p * f for p, f in zip(pops, fitness)]

3 changes: 2 additions & 1 deletion axelrod/game.py
@@ -15,7 +15,8 @@ def __init__(self, r=3, s=0, t=5, p=1):
}

def RPST(self):
"""Return the values in the game matrix in the Press and Dyson notation."""
"""Return the values in the game matrix in the Press and Dyson
notation."""
R = self.scores[(C, C)][0]
P = self.scores[(D, D)][0]
S = self.scores[(C, D)][0]
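A standalone sketch of the scores layout this method reads from, reconstructed from the hunk above (the `(C, D)` entry holds the row player's S and the column player's T; treat the exact dictionary shape as an assumption, since the full constructor is truncated here):

```python
C, D = "C", "D"

class Game(object):
    """Sketch: each action pair maps to
    (row player's payoff, column player's payoff)."""

    def __init__(self, r=3, s=0, t=5, p=1):
        self.scores = {(C, C): (r, r), (D, D): (p, p),
                       (C, D): (s, t), (D, C): (t, s)}

    def RPST(self):
        """Return the values in the game matrix in the Press and Dyson
        notation."""
        R = self.scores[(C, C)][0]
        P = self.scores[(D, D)][0]
        S = self.scores[(C, D)][0]
        T = self.scores[(D, C)][0]
        return R, P, S, T

print(Game().RPST())  # (3, 1, 0, 5)
```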
1 change: 0 additions & 1 deletion axelrod/interaction_utils.py
@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
"""
Functions to calculate results from interactions. Interactions are lists of the
form:
6 changes: 3 additions & 3 deletions axelrod/match.py
@@ -26,14 +26,14 @@ def __init__(self, players, turns, game=None, deterministic_cache=None,
The number of turns per match
game : axelrod.Game
The game object used to score the match
deterministic_cache : dictionary
deterministic_cache : axelrod.DeterministicCache
A cache of resulting actions for deterministic matches
noise : float
The probability that a player's intended action should be flipped
match_attributes : dict
Mapping attribute names to values which should be passed to players.
The default is to use the correct values for turns, game and noise
but these can be overidden if desired.
but these can be overridden if desired.
"""
self.result = []
self.turns = turns
@@ -78,7 +78,7 @@ def players(self, players):
def _stochastic(self):
"""
A boolean to show whether a match between two players would be
stochastic
stochastic.
"""
return is_stochastic(self.players, self.noise)

1 change: 0 additions & 1 deletion axelrod/moran.py
@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
from collections import Counter
import random

6 changes: 3 additions & 3 deletions axelrod/random_.py
@@ -24,7 +24,7 @@ def randrange(a, b):
return a + int(r)


def seed(seed):
def seed(seed_):
"""Sets a seed"""
random.seed(seed)
numpy.random.seed(seed)
random.seed(seed_)
numpy.random.seed(seed_)
4 changes: 2 additions & 2 deletions axelrod/result_set.py
@@ -296,8 +296,8 @@ def build_payoffs(self):

[uij1, uij2, ..., uijk]

Where k is the number of repetitions and uijk is the list of utilities
obtained by player i against player j in each repetition.
Where k is the number of repetitions and uijk is the list of
utilities obtained by player i against player j in each repetition.
"""
plist = list(range(self.nplayers))
payoffs = [[[] for opponent in plist] for player in plist]
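A hypothetical illustration (the utilities are made up) of the structure the docstring describes: `payoffs[i][j]` lists player i's utility against player j, one entry per repetition.

```python
per_repetition = [
    [[3, 0], [5, 1]],  # repetition 1: utility of row player vs column player
    [[3, 0], [5, 1]],  # repetition 2
    [[2, 0], [5, 1]],  # repetition 3
]
plist = list(range(2))
payoffs = [[[] for opponent in plist] for player in plist]
for matrix in per_repetition:
    for i in plist:
        for j in plist:
            payoffs[i][j].append(matrix[i][j])
print(payoffs[0][0], payoffs[1][0])  # [3, 3, 2] [5, 5, 5]
```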
5 changes: 3 additions & 2 deletions axelrod/strategy_transformers.py
@@ -66,13 +66,14 @@ def __call__(self, PlayerClass):
Returns
-------
new_class, class object
A class object that can create instances of the modified PlayerClass
A class object that can create instances of the modified
PlayerClass
"""

args = self.args
kwargs = self.kwargs
try:
#if "name_prefix" in kwargs remove as only want dec arguments
# if "name_prefix" in kwargs remove as only want dec arguments
del kwargs["name_prefix"]
except KeyError:
pass
15 changes: 8 additions & 7 deletions docs/community.rst
@@ -5,11 +5,11 @@ Part of the team
----------------

If you’re reading this you’re probably interested in contributing to and/or
using the Axelrod library! Firstly: **thank you**!
using the Axelrod library! Firstly: **thank you** and **welcome**!

We are not only proud of the library but also of the environment
that surrounds it. Everyone is expected to act in an open and welcoming,
considerate and respectful way.
We are proud of the library and the environment that surrounds it. A primary
goal of the project is to cultivate an open and welcoming community, considerate
and respectful to newcomers to python and game theory.

The Axelrod library has been a first contribution to open source software for
many, and this is in large part due to the fact that we all aim to help and
@@ -18,7 +18,8 @@ You're very welcome and don't hesitate to ask for help.

**With regards to any contribution**, please do not feel the need to wait until
your contribution is perfectly polished and complete: we're happy to offer
early feedback.
early feedback, help with git, and anything else that you need to have a
positive experience.

**If you are using the library for your own work** and there's anything in the
documentation that is unclear: we want to know so that we can fix it. We also
@@ -30,11 +31,11 @@ Communication
There are various ways of communicating with the team:

- `Gitter: a web based chat client, you can talk directly to the users and
mantainers of the library. <https://gitter.im/Axelrod-Python/Axelrod>`_
maintainers of the library. <https://gitter.im/Axelrod-Python/Axelrod>`_
- Irc: we have an irc channel. It's #axelrod-python on freenode.
- `Email forum. <https://groups.google.com/forum/#!forum/axelrod-python>`_
- `Issues: you are also very welcome to open an issue on
github </~https://github.com/Axelrod-Python/Axelrod/issues>`_
- `Twitter. <https://twitter.com/AxelrodPython>`_ This account periodically
tweets out random match and tournament results but you're welcome to get in
tweets out random match and tournament results; you're welcome to get in
touch through twitter as well.
30 changes: 29 additions & 1 deletion docs/index.rst
@@ -6,7 +6,35 @@
Welcome to the documentation for the Axelrod Python library
===========================================================

Here is quick overview of what can be done with the library.
Here is a quick overview of the current capabilities of the library:

* Over 100 strategies from the literature and some exciting original
contributions
* Classic strategies like TiT-For-Tat, WSLS, and variants
* Zero-Determinant and other Memory-One strategies
* Many generic strategies that can be used to define an array of popular
strategies, including finite state machines, strategies that hunt for
patterns in other strategies, and strategies that combine the effects of
many others
* Strategy transformers that augment that abilities of any strategy
Member commented: that augment the
* Head-to-Head matches
* Round Robin tournaments with a variety of options, including:
* noisy environments
* spatial games
Member commented: spatial tournaments?

* probabilistically chosen match lengths
* Population dynamics
* The Moran process
* An ecological model
* Multi-processor support, caching for deterministic interactions, and
  automatic generation of figures and statistics

Every strategy is categorized on a number of dimensions, including:
* Deterministic or Stochastic
* How many rounds of history used
* Whether the strategy has access to the game matrix, the length of the
match, etc.

Member commented: whether the strategy makes use of? The tournament can decide what it has access to right?

Furthermore, the library is extensively tested with 99%+ coverage, ensuring
validity and reproducibility of results!


Quick start
2 changes: 1 addition & 1 deletion docs/tutorials/getting_started/moran.rst
@@ -12,7 +12,7 @@ initial population of players, the population is iterated in rounds consisting
of:

- matches played between each pair of players, with the cumulative total
scores recored
scores recorded
- a player is chosen to reproduce proportional to the player's score in the
round
- a player is chosen at random to be replaced
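The three bullets above can be sketched as a single step of the process. This is a hypothetical illustration, not the library's `MoranProcess` API; the `match_score` callable and all names are made up.

```python
import random

def moran_step(players, match_score):
    """One round: score every pair of players, choose a reproducer with
    probability proportional to cumulative score, then replace a player
    chosen uniformly at random with a copy of the reproducer."""
    n = len(players)
    scores = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            score_i, score_j = match_score(players[i], players[j])
            scores[i] += score_i
            scores[j] += score_j
    reproducer = random.choices(range(n), weights=scores)[0]
    replaced = random.randrange(n)
    players[replaced] = players[reproducer]
    return players

random.seed(0)
population = ["TitForTat", "Defector", "Cooperator"]
for _ in range(10):
    # A trivial match_score where every pairing ties keeps all players
    # equally fit, so only drift changes the population.
    population = moran_step(population, lambda a, b: (1, 1))
print(len(population))  # 3
```

The population size stays fixed; in the real process the loop runs until one strategy has taken over the whole population (fixation).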