Joey Hendricks edited this page Jan 23, 2022 · 31 revisions

The project logo

Contact me - Report Bug or Request Feature - Discussions - Documentation

Welcome to the Python micro-benchmarks documentation. On the right-hand side you can find the legend, which will quickly bring you to the topic of your choice.

Frequently asked questions

What is a microbenchmark?

A microbenchmark is the act of testing the performance of a framework, algorithm, routine, or application to make sure it performs as expected. Python micro-benchmarks is therefore a benchmarking framework for the Python programming language that allows you to create benchmarks for your own code. The scope of what you want to encompass within a benchmark, and how you want to define it, is completely up to you as the user of this framework. The main goal of a benchmark, however, is to stress your logic with a synthetic workload and collect measurements on how well it performs.

It is recommended, however, to keep the majority of your benchmark runs short in duration. That way you receive performance feedback faster and more often, which makes debugging problems more comfortable. Keeping your benchmarks relatively short in overall runtime also makes them easier to integrate alongside your functional unit tests within a CI/CD pipeline.
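As a point of reference, the general idea of a microbenchmark can be sketched with Python's standard-library `timeit` module. Note that this example does not use python-micro-benchmarks itself; the routine under test is purely illustrative:

```python
import timeit

def slow_concat(n):
    """Naive string concatenation: the routine under test."""
    text = ""
    for i in range(n):
        text += str(i)
    return text

# Stress the routine with a synthetic workload many times over
# and collect a single timing measurement for the whole batch.
seconds = timeit.timeit(lambda: slow_concat(1_000), number=100)
print(f"100 runs took {seconds:.4f} seconds")
```

Because each batch finishes in well under a second, a check like this can be rerun constantly during development, which is exactly the fast-feedback loop described above.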

What is possible with this framework?

Within this framework, you can create, compare, analyze, save, and visualize the performance of your code in an automated way by integrating it into a unit test framework. This gives you the option of an automated, self-defined "performance test" for the performance-critical parts of the application you are developing.

This allows you to spot performance degradations earlier in the software development life cycle, helping you deliver a more performant end product to your users. All of the options packed into this framework are documented in this wiki, letting you start leveraging performance benchmarks within the Python programming language.
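To illustrate what such a self-defined performance test inside a unit test framework can look like, here is a minimal sketch using only the standard-library `unittest` and `time` modules. The routine, the time budget, and the test names are all illustrative assumptions, and this sketch does not use python-micro-benchmarks' own API:

```python
import time
import unittest

def critical_routine():
    # Stand-in for a performance-critical part of your application.
    return sum(i * i for i in range(10_000))

class PerformanceTest(unittest.TestCase):
    def test_critical_routine_within_budget(self):
        start = time.perf_counter()
        critical_routine()
        elapsed = time.perf_counter() - start
        # Fail the build when the routine breaches its time budget,
        # surfacing regressions early in the development life cycle.
        self.assertLess(elapsed, 0.5)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Run alongside your functional tests in a CI/CD pipeline, a failing time budget flags a degradation as soon as the offending change is pushed.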

Installation

Installing an official release

Installing the latest official release of the python-micro-benchmarks package is straightforward: you can download the latest version directly from the package index using pip with the following command:

pip install python-micro-benchmarks

Installing a development version

To install the latest development version of python-micro-benchmarks, you first need to obtain a copy of the source. You can do that by cloning the git repository:

git clone git://github.com/JoeyHendricks/python-micro-benchmarks.git
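After cloning, the package can typically be installed from the local checkout with pip. The exact command may vary depending on the repository's packaging setup; this is the usual form, assuming a setup script or pyproject file in the repository root:

```shell
# Enter the cloned directory and install the package from the local source tree.
cd python-micro-benchmarks
pip install .
```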

Checking the installation

To check that python-micro-benchmarks has been properly installed, type python from your shell. Then, at the Python prompt, try to import the Benchmarking module and check the installed version:

>>> import Benchmarking
>>> Benchmarking.__version__
'1.0.1'