OpenMPI is now configured on the new cluster. There was an issue with the installation: the package was pre-configured to expect InfiniBand, which we do not have (yet). After several hours spent battling with it, however, we found the configuration parameter that bypasses this requirement, and MPI jobs are now running.
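The post doesn't say which parameter was used, but for reference, OpenMPI can be steered away from InfiniBand either at build time or at run time. A sketch of both approaches (paths and process counts are placeholders):

```shell
# Build-time: configure OpenMPI without InfiniBand (verbs) support
./configure --without-verbs --prefix=/opt/openmpi

# Run-time alternative: restrict the byte-transfer layers to
# shared memory and TCP so OpenMPI never probes for InfiniBand
mpirun --mca btl self,tcp -np 16 ./my_mpi_job
```

The run-time MCA route is handy because it needs no rebuild; the build-time route avoids the warning on every job launch.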
Our newest HPC user is currently submitting jobs on the cluster using MPI-compiled C code. Things seem to be running smoothly, and we'll continue to monitor job progress over the weekend. While we've been running live user jobs for the last two months, this is a major step for us: it represents a maturation in our ability to provision an independent cluster from the ground up, with user and software support, in under 48 hours.
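For readers unfamiliar with what "MPI-compiled C code" looks like, here is a minimal sketch of the kind of program involved (a hypothetical example, not the user's actual code):

```c
/* hello_mpi.c -- minimal MPI C program
 * Build: mpicc hello_mpi.c -o hello_mpi
 * Run:   mpirun -np 8 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime  */
    return 0;
}
```

Each process launched by `mpirun` runs the same binary and learns its identity from its rank, which is the basis for dividing work across the cluster's CPUs.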
We also anticipate increasing the CPU count by eight early next week with the addition of two extra servers. We will use the new kit in a proof-of-concept arrangement to test partitioning the cluster, segregating resources for specific user groups.
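The post doesn't name the job scheduler, but if the cluster runs Slurm, that kind of partitioning is typically expressed in `slurm.conf`. A sketch, with hypothetical node and group names:

```
# slurm.conf excerpt (node names and groups are placeholders)
PartitionName=general Nodes=node[01-08] Default=YES MaxTime=24:00:00 State=UP
PartitionName=groupA  Nodes=node[09-10] AllowGroups=groupa Default=NO State=UP
```

Here the two new servers would form their own partition, with `AllowGroups` restricting submission to the designated user group while the existing nodes stay in the general pool.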