MPI standardization
UCT HPC, Andrew Lewis, 28 July 2011
https://ucthpc.uct.ac.za/index.php/2011/07/28/mpi-standardization/

Our cluster is a heterogeneous mixture of 3 blade types and 2 operating systems, the latter being Scientific Linux 5.4 and 5.5. Unfortunately these two OS versions ship with slightly differing versions of openmpi. To allow jobs to span all blade architectures we have bypassed SL's install of openmpi and upgraded it manually to 1.4-4.el5.

Pros: Users can now create a hostfile referencing all hosts in the 200, 300 and 400 series. This allows jobs to span up to 96 cores. Run mpi-selector-menu from the CLI to select the installed version of openmpi.

Cons: Memory sizes and CPU speeds differ between series, which will cause a discrepancy in completion times depending on which critical resource (RAM or MHz) a job's algorithm demands. Until all threads are finished, all nodes will be marked as in use.
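The hostfile workflow above might look something like the following sketch. The node names and slot counts are illustrative assumptions, not the cluster's actual inventory; substitute real hostnames from the 200, 300 and 400 series:

```shell
# Build a hostfile listing one (hypothetical) node from each blade series.
# "slots" tells Open MPI how many ranks each host may run.
cat > hostfile <<'EOF'
node201 slots=8
node301 slots=8
node401 slots=8
EOF

# Pick the manually installed Open MPI build (interactive menu):
#   mpi-selector-menu
# Then launch a job across every listed host, e.g.:
#   mpirun --hostfile hostfile -np 24 ./my_mpi_app
```

Because the series differ in RAM and clock speed, a run spanning all three will finish only as fast as its slowest ranks, which is the completion-time discrepancy noted above.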