Creating a Pioreactor cluster
Pioreactors can be used as individual units, or in concert with other Pioreactors. In either case, one Pioreactor needs to be assigned as the leader. The leader unit controls the other Pioreactors (which may include itself), stores the database, hosts the web interface, and is the interface between users and the hardware.
When you set up your first Pioreactor using our software installation guide, your Pioreactor was set up to be a leader already. You only need one leader in a Pioreactor cluster.
A leader communicates with and controls all the workers (non-leader Pioreactors) in the inventory. The inventory is the list of workers in your cluster, defined in the cluster.inventory section of config.ini.
Workers can be active (available for running activities and housing cultures) or inactive. This is set with 1 or 0, respectively, in the cluster.inventory section.
When you want to remove a Pioreactor from your cluster, remove it from the cluster.inventory section in config.ini.
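As a sketch, a cluster.inventory section might look like the following (the hostnames pioreactor1, pioreactor2, and pioreactor3 are placeholders for your units' hostnames):

```ini
[cluster.inventory]
# 1 = active worker, 0 = inactive worker
pioreactor1=1
pioreactor2=1
pioreactor3=0
```

Here pioreactor3 stays in the cluster but is marked inactive; to remove a worker entirely, delete its line.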
Possible cluster topologies
A cluster can be made up of a single Pioreactor, or can scale to as many Pioreactors as you have. This allows for a few different possible topologies for your cluster of Pioreactor(s).
Single Pioreactor
The simplest topology is when you have a single Pioreactor, and so by default the Pioreactor is both the leader and the only worker.

Cluster, and leader is a worker
When you have multiple Pioreactors, you can nominate one to be the leader, and retain it as a worker, too:
Cluster, and leader is not a worker
You can also choose not to have the leader be a worker. This is useful if you have a spare Raspberry Pi without the Pioreactor hardware, or if the number of Pioreactors grows large and you wish to keep one out of the inventory so it can focus on being a leader only.
How to edit roles
To tell the cluster which computer is the leader, edit the leader_hostname field in config.ini, under the cluster.topology section:
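A minimal sketch, assuming the leader's hostname is pioreactor1:

```ini
[cluster.topology]
leader_hostname=pioreactor1
```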
Inventory is assigned in config.ini, under the cluster.inventory section:
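Continuing the placeholder hostnames above, a two-worker inventory with both workers active might look like:

```ini
[cluster.inventory]
pioreactor1=1
pioreactor2=1
```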
Adding new workers
See the instructions here to add new workers to your cluster.