Python - Peterson's Algorithm
Description
An implementation of Peterson's Algorithm, a classic mutual-exclusion algorithm, demonstrated with both multi-processing and multi-threading.
Python Environment
This project is tested with Python 3.7. It was written using PyCharm, and tested with PyCharm versions 2017.2.3 and 2019.2.
Creating a New Env
Most installations of Python3 will allow you to simply run `python3.7 -m venv ./.venv` to create a new Python3.7 environment called ".venv" in the current directory.
Once created, load this environment with `source .venv/bin/activate` on Linux, or `. .venv/Scripts/activate` on Windows.
Third Party Libraries
- If any additional libraries have been used, then they will be found in `requirements.txt` at the project root.
- To install, load your local Python environment and run `pip install -r requirements.txt` while at the project root.
Running the Program
Run the program via `python ./main.py` while at the project root.
Expected Output
There are four possible options when running this project.

The project can run as either multi-processing or multi-threading. In the context of this program, both run identically, and even use the same code, other than the single line where the process/thread pool is created.

Then, for each of those, there is a safe and an unsafe version. The "safe" versions use Peterson's Algorithm to create critical sections that are thread safe. The "unsafe" versions use no form of locking, and naively attempt to access/update shared variables.
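For reference, below is a minimal sketch of the two-party locking pattern that the "safe" versions rely on. The names (`PetersonLock`, `acquire`, `release`) are illustrative only, not the identifiers used in this project, and a multiprocessing variant would additionally need `flag` and `turn` placed in shared memory:

```python
class PetersonLock:
    """Sketch of Peterson's Algorithm for exactly two contenders, ids 0 and 1."""

    def __init__(self):
        self.flag = [False, False]  # flag[i] is True while contender i wants the lock
        self.turn = 0               # which contender has priority when both want in

    def acquire(self, i):
        other = 1 - i
        self.flag[i] = True   # announce intent to enter the critical section
        self.turn = other     # defer: give the other contender priority
        # Busy-wait while the other contender wants the lock AND has priority.
        while self.flag[other] and self.turn == other:
            pass              # spin until the other side finishes or yields

    def release(self, i):
        self.flag[i] = False  # no longer want the lock; the other side may proceed
```

Contender 0 brackets its critical section with `lock.acquire(0)` / `lock.release(0)`, and contender 1 with `lock.acquire(1)` / `lock.release(1)`; the `turn` hand-off ensures at most one of them is inside at a time.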
In any of the four possible cases, one process/thread will call function "A" and the other will call function "B". Both functions A and B will try to add 1 to a shared counter, and do so 100 times.
The expected output of this program is that each process/thread will add 100 total to the counter. But the "unsafe" functions perform the read/increment/write steps without any locking, so overlapping updates can be lost, and the counter is unlikely to reach the expected total of 200.
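To make the failure mode concrete, here is a small self-contained sketch of the unsafe pattern (the function name is hypothetical, not taken from this project):

```python
import threading

counter = 0

def add_one_unsafely(times=100):
    """Naively add 1 to the shared counter `times` times, with no locking."""
    global counter
    for _ in range(times):
        value = counter   # read the shared counter
        value += 1        # increment a local copy
        counter = value   # write back, possibly clobbering the other thread's update

threads = [threading.Thread(target=add_one_unsafely) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200; with only 100 iterations CPython may rarely interleave the
# threads, but with a much larger `times` the final count reliably falls short.
print(counter)
```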
Interesting Notes
Oddly enough, at least on both my personal desktop and laptop, I noticed a quirk with this code.
As stated, both "multi-processing" and "multi-threading" use identical code, except for the single line where the process/thread pools are instantiated. Normally, you'd think this means that they both behave essentially identically.
However, after running this program repeatedly, I've noticed that "safe multi-processing" seems to have a 50/50 chance of running either function A or B first. That is, the chance of A's critical section running first is roughly equal to the chance of B's critical section running first.
Meanwhile, "safe multi-threading" seems to be heavily skewed toward function A. There seems to be a roughly 90% chance of A's critical section running first, and it is exceedingly rare for B's critical section to run first.
The only explanation I can think of for this is Python's "GIL" (Global Interpreter Lock). See the older Python project (linked in the references) for more info.
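For what it's worth, one plausible contributing detail is that CPython only offers the GIL to other runnable threads about once per "switch interval", so the thread that is scheduled first can finish its whole critical section before the second thread ever runs. The interval can be inspected with the standard library (this is not project code):

```python
import sys

# CPython hands the GIL to other runnable threads roughly every
# switch interval; the default is 0.005 seconds.
print(sys.getswitchinterval())  # typically 0.005
```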
Running Tests
To run all unittests for this project, open a terminal and run the `run_unit_tests.sh` bash script.
For more information about testing, see the `documents/testing.md` file.
References
See the `documents/references.md` file.