Our cluster is a heterogeneous mixture of three blade types and two operating systems, the latter being Scientific Linux 5.4 and 5.5. Unfortunately these two OS versions ship with slightly different versions of Open MPI. To allow jobs to span all blade architectures we have bypassed SL's stock openmpi packages and manually upgraded every node to openmpi-1.4-4.el5.
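To confirm that every node picked up the same package after the manual upgrade, a quick check such as the following can be run from a head node. This is only a sketch: the hostnames are placeholders, not actual node names.

    # Query the installed openmpi package on one node from each series.
    # node201/node301/node401 are hypothetical hostnames.
    for h in node201 node301 node401; do
        echo -n "$h: "; ssh "$h" rpm -q openmpi
    done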
Pros: Users can now create a hostfile referencing any hosts in the 200, 300 and 400 series, allowing a single job to span up to 96 cores. Run mpi-selector-menu from the command line to select the manually installed version of Open MPI; a sketch follows below.
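As a minimal sketch, a hostfile spanning all three series might look like the following; the hostnames, slot counts, rank count and application name are illustrative assumptions, not real cluster entries, and should be replaced with the actual 200-, 300- and 400-series hosts.

    # myhosts -- example Open MPI hostfile (hostnames and slots hypothetical)
    node201 slots=8
    node202 slots=8
    node301 slots=8
    node401 slots=8

    # Select the manually installed Open MPI via the interactive menu,
    # then launch one job across every host listed above.
    mpi-selector-menu
    mpirun --hostfile myhosts -np 32 ./my_mpi_app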
Cons: Memory size and CPU speed differ between series, so completion times for the same workload will vary depending on which resource (RAM or clock speed) a job's algorithm is bound by. Because a job is not finished until its slowest thread completes, every node it was scheduled on remains marked as in use until then, leaving the faster nodes allocated but idle.
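One rough way to soften this imbalance for statically decomposed jobs is to weight the hostfile so slower blades receive fewer ranks per node. The sketch below assumes the higher-numbered series are the faster ones; the hostnames, slot counts and that speed ordering are all assumptions, and the right ratios depend on the actual workload.

    # Weighted hostfile sketch -- undersubscribe the slower series
    # (hostnames, slot counts and relative speeds are hypothetical).
    node201 slots=4    # slowest series: fewer ranks than cores
    node301 slots=6
    node401 slots=8    # fastest series: use every core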