
Installing MPI for Python on a Raspberry Pi Cluster

Jan 26th, 2015
Feature image via Flickr Creative Commons.

We set up a computing cluster of five Raspberry Pis for a project in Africa. The machines ran on solar power, with the idea of supporting 2,000 to 10,000 concurrent connections. The cluster also ran Docker.

We then wanted to allow a Python program to exploit the multiple processors of the cluster to perform various tasks. To accomplish this, we needed to install the message passing interface (MPI) for Python, which provides bindings of the MPI standard for the Python programming language.

This article describes how to install and test MPI for Python, and assumes that the Raspberry Pi cluster is running the latest Raspbian OS. The MPICH2 interface should also be installed and operational.

The MPI for Python software needs to be loaded individually on each cluster node, so being able to log in using a secure shell (SSH) is required. If you already know how to use SSH, you may skip the next section and proceed straight to the MPI for Python installation and testing instructions. Otherwise, let's see how to get going with SSH.

Using SSH

SSH stands for secure shell and is an encrypted remote login protocol. Once it has been set up on each node, it’s used to communicate with the other nodes on that network.

The main benefits of SSH are:

  • SSH uses public-key cryptography (typically the RSA algorithm) to generate public and private keys, making intrusion extremely difficult.

  • Since SSH is a remote login protocol, it can be configured on a laptop, allowing connectivity to the Raspberry Pi cluster, even over WiFi.

  • SCP (Secure Copy) and SFTP (Secure File Transfer Protocol) run on top of SSH, allowing you to transfer files and directories directly from one node to another.

  • SSH supports passwordless login through public-key authentication (see the commands after this list). Once your key is in place on a node, you only have to enter credentials the first time you log in. From the second login onwards, you go right to the shell.
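To set up passwordless logins, generate a key pair on the master node and copy the public key to each worker. A typical sequence looks like this (using one of this article's worker addresses):

ssh-keygen -t rsa
ssh-copy-id pi@192.168.3.216

After this, ssh pi@192.168.3.216 drops you straight into a shell without a password prompt.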

To log in to another node from the master node, use the following command.

ssh pi@192.168.3.216 (replace the IP address with one of your worker node addresses)

Enter your password. Once you log in, all the commands you type will run on that node and not on the master.

SSH can also be used to run commands directly on the other nodes. For example, to change the host name on three of the nodes, use these commands:

ssh pi@192.168.3.216 'echo "client001" | sudo tee /etc/hostname'
ssh pi@192.168.3.217 'echo "client002" | sudo tee /etc/hostname'
ssh pi@192.168.3.218 'echo "client003" | sudo tee /etc/hostname'

Here’s another example.

ssh pi@192.168.3.216 'sudo poweroff'

This command safely shuts down the node at IP address 192.168.3.216.

The following figure shows how SSH is used to log in to a worker node (192.168.3.216) and, from the worker node, return control of the terminal to the master node.

[Screenshot: SSH session logging in to a worker node and returning to the master]

As you can see in the above figure, logging in to a worker node happens directly. But each time control of the terminal comes back to the master node (192.168.3.215), the login credentials have to be re-entered.

After issuing commands via SSH to other nodes, there might be situations where data has to be sent to multiple nodes. If the number of nodes is small, we could manually log in to each node, connect it to a display and keyboard, and copy files over. But this is highly inefficient when the cluster is large.

The easy way is to use SCP to send the files. SCP ships with the OpenSSH client, which is present on Raspbian by default; if it's missing, install it with the following command.

sudo apt-get install openssh-client

The general form of the command is as follows (note the colon between the remote address and the remote path):

scp (path of file on local device) pi@192.168.3.215:(path of remote location)

Take a look at this example.

scp /pi/example.c pi@192.168.3.215:/pi/project

Here, the file is sent to the remote device using its IP address. Many files in a directory can be sent using the recursive option (-r), such as the following.

scp -r /pi/project pi@192.168.3.216:/pi/project

This command recursively transfers all the files in the /pi/project directory from the local host to the directory on the remote host, identified by the IP address.

With remote login using SSH out of the way, we can now turn our attention to installing MPI for Python on the cluster.

Installing MPI for Python

The conventional way to load MPI for Python, known as mpi4py, will not work. It is usually installed with the following command line.

sudo apt-get install python-mpi4py

This approach will crash when executed, because it installs a copy of Open MPI, which conflicts with the already-installed MPICH2 software. Only one MPI implementation can run on the cluster at a time; when a second one is started, the whole cluster fails.

To avoid this grave situation, and the tedious task of restoring the operating system to its previous state, a workaround exists: it's a relatively straightforward matter to build mpi4py manually on each of the nodes in the cluster.

Here are the steps.

1) Download the mpi4py package.

curl -k -O https://mpi4py.googlecode.com/files/mpi4py-1.3.1.tar.gz

You can use wget instead of curl; its --no-check-certificate option skips the same certificate check, working around the certificate issue that hasn't been resolved by the website maintenance team.
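For example:

wget --no-check-certificate https://mpi4py.googlecode.com/files/mpi4py-1.3.1.tar.gz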

2) Unpack it, and then change into the mpi4py directory.

tar -zxf mpi4py-1.3.1.tar.gz

cd mpi4py-1.3.1

3) Before starting the build, it is important to make sure that all the Python development tools are available. This ensures that important header files like Python.h are present and can be used by the build.

This step can be skipped if the Python development tools are already installed.

sudo apt-get update --fix-missing

sudo apt-get install python-dev

4) Now, build the package.

cd mpi4py-1.3.1

sudo python setup.py build --mpicc=/usr/local/mpich2/bin/mpicc

A few things need to be noted here:

  • The --mpicc option provides the build script with the location of the MPI compiler.

  • The --mpicc option is only needed if the compiler's location isn't already in the system path.

  • The path /usr/local/mpich2/bin/mpicc is the location where MPICH2 is installed on my node. It might not be the same for everyone, so replace it with the path where mpicc is located on your system.

5) With the build complete, install the package. From the same mpi4py-1.3.1 directory, run the following command.

sudo python setup.py install
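When the install finishes, you can confirm that the module is importable by printing its version, which should report 1.3.1:

python -c 'import mpi4py; print(mpi4py.__version__)'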

Repeat the process on every other node in the cluster. Then the demo program helloworld.py can be run to test whether mpi4py is installed successfully on all the nodes and is running correctly.

At the command line, execute the following for a quick single-process check (the full multi-node run with mpiexec is covered in the next section).

python helloworld.py

If the nodes of the cluster aren't already built, an easier way is to perform the above procedure on one node, read off the entire OS image, and write it to the SD cards of each of the other nodes. This eliminates building the mpi4py package on each node individually.

Testing mpi4py And Running MPI With Python Programs

We just finished building mpi4py so that we can write and run Python programs using MPICH. We need to test mpi4py and make sure everything is working correctly.

You should have a file named machinefile (example screenshot below) that stores the IP addresses of all the nodes in the network. MPICH uses it to communicate and send/receive messages between the various nodes.

[Screenshot: example machinefile]
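If you need to create the file yourself, it's simply one address per line. Using the example addresses from this article, a four-node machinefile would look like this:

192.168.3.215
192.168.3.216
192.168.3.217
192.168.3.218

MPICH also accepts an optional process count per host, written as address:count (for example, 192.168.3.215:2).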

In the extracted directory mpi4py, there is another directory named demo. The demo directory has a variety of Python programs that can be run to test mpi4py.

A good, basic testing program is helloworld.py. The procedure to run it is:

cd ~/mpi4py/demo

mpiexec -np 4 -machinefile ~/mpitest/machinefile python helloworld.py

The output should look something like the following screenshot.

[Screenshot: output of the helloworld.py run]

If the output looks similar to the above and all the nodes have been included, then it works.

Note that ~/mpi4py/demo is the path to mpi4py on my system and should be replaced with the one under your username. The same goes for the path to your machinefile.
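For reference, helloworld.py just makes each process report itself. A minimal stand-in (not the exact demo source, but built on the same mpi4py calls used later in this article) looks like this:

from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each process prints its rank, the total process count and its host.
print("Hello, World! I am process %d of %d on %s." %
      (comm.rank, comm.size, MPI.Get_processor_name()))

With -np 4, you should see four such lines, one per process.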

There are other programs in the demo directory that can also be used.

The demo directory also includes Python versions of a few benchmark programs created by Ohio State University.

  • osu_bw.py – This program calculates bandwidth. The master node sends out a series of fixed-size messages to a receiving node, and the receiver sends a reply only after all the messages are received. The master node then calculates the bandwidth based on the elapsed time and the number of bytes sent.

  • osu_bibw.py – This program is similar to the above, except that both nodes send and receive a series of messages.

  • osu_latency.py – This program sends a message to another node and waits for a reply, repeating the exchange a number of times; the latency is then calculated.

Many of the other programs in the demo directory can also be used for testing. All of these can be run in a similar way to the helloworld.py program.
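For example, the latency test measures the round trip between a pair of processes, so it is typically run with two of them, from the same demo directory:

mpiexec -np 2 -machinefile ~/mpitest/machinefile python osu_latency.py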

The following screenshot shows the output of the osu_latency.py program.

[Screenshot: output of the osu_latency.py program]

Once testing is done, MPI-compatible programs can be written using Python. All the nodes in the cluster run the same program, and each one, depending on conditions such as its rank, executes only a part of it, thereby allowing parallel execution.

This also means that we can write two different programs and give them the same name on different nodes. You could use this technique to create a server program and store it on the master node, then write a client program and store it, with the same name, on the worker nodes.

The following screenshot shows a sample MPI program.

[Screenshot: sample MPI program]

The program gathers process information through the default communicator, MPI.COMM_WORLD, which connects all the processes started together by mpiexec. Its most useful attributes and methods include the following.

  • comm.rank – The rank of the process running on that processor or node.

  • comm.size – The number of processes in the communicator, here the number of nodes in the cluster.

  • MPI.Get_processor_name() – The name of the processor on which a particular process is running.

  • comm.send() – Sends data to the node indicated by the dest parameter.

  • comm.recv() – Receives data from the node indicated by the source parameter.

These are the basic functions; many others are available for creating MPI-compliant Python programs.

Note an edge condition: if the number of processes exceeds the number of nodes, a program that computes destination ranks naively can fail. To avoid this, ranks can be wrapped around using the modulo (% size) operation, as shown in the screenshot above and in the sketch below, so that work loops back to the first processor.
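To make the pattern concrete, here is a minimal master/worker sketch built only from the calls described above. The task strings and counts are made up for illustration, and the wrap uses % (size - 1) + 1 so that destinations cycle through the worker ranks while skipping the master at rank 0; run it with at least two processes.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank                   # this process's rank
size = comm.size                   # total number of processes
name = MPI.Get_processor_name()    # host this process runs on

if rank == 0:
    # Master: hand out more tasks than there are workers. The modulo
    # wrap cycles destinations through ranks 1..size-1, so no message
    # ever targets a rank that doesn't exist.
    for i in range(8):
        dest = i % (size - 1) + 1
        comm.send("task-%d" % i, dest=dest)
    for dest in range(1, size):
        comm.send(None, dest=dest)  # sentinel: no more work
else:
    # Worker: receive tasks from the master until the sentinel arrives.
    while True:
        task = comm.recv(source=0)
        if task is None:
            break
        print("rank %d on %s got %s" % (rank, name, task))

Save it under the same name on every node (for example, as taskloop.py, a hypothetical name) and launch it like the demos:

mpiexec -np 4 -machinefile ~/mpitest/machinefile python taskloop.py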

Next Steps

We’ve taken a quick tour through using SSH, installing MPI for Python, and testing the finished installation. A lot more can be done with this cluster, including adding more nodes. Give it a try and be sure to share your experiences in the comments.
