{"id":925,"date":"2012-08-21T12:45:39","date_gmt":"2012-08-21T10:45:39","guid":{"rendered":"http:\/\/oldblogs.uct.ac.za\/blog\/big-bytes\/2012\/08\/21\/molecular-visualization"},"modified":"2019-01-03T08:43:13","modified_gmt":"2019-01-03T06:43:13","slug":"molecular-visualization","status":"publish","type":"post","link":"https:\/\/ucthpc.uct.ac.za\/index.php\/2012\/08\/21\/molecular-visualization\/","title":{"rendered":"Molecular visualization"},"content":{"rendered":"Why is molecular visualization cool?\u00a0 Take a look at this animation from <a href=\"http:\/\/www.wehi.edu.au\/education\/wehitv\/\">WEHI<\/a>:\r\n<br>\r\n<br>\r\n<object id=\"ltVideoYouTube\" data=\"http:\/\/www.youtube.com\/v\/OjPcT1uUZiE\" type=\"application\/x-shockwave-flash\" width=\"395\" height=\"323\"><param name=\"movie\" value=\"http:\/\/www.youtube.com\/v\/OjPcT1uUZiE\" \/><param name=\"wmode\" value=\"transparent\" \/><param name=\"allowScriptAcess\" value=\"sameDomain\" \/><param name=\"quality\" value=\"best\" \/><param name=\"bgcolor\" value=\"#FFFFFF\" \/><param name=\"FlashVars\" value=\"playerMode=embedded\" \/><\/object>\r\n<br>\r\n<br>\r\nOne of the molecular visualization tools we support is NAMD.\u00a0 We recently updated our version from 2.7 to 2.9 and compiled it to run with OpenMPI rather than NAMD's own multi processor controler, CHARM.\u00a0 This makes resource requesting and job submission much easier.\u00a0 Below is the native NAMD method of addressing multiple cores in a distributed architecture:\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">#PBS -N NAMD-CHARMRUN\r\n#PBS -l nodes=node1:ppn=4+node2:ppn=4+node3:ppn=4+node4:ppn=4+node5:ppn=4\r\ncharmrun namd2 +p20 ++remote-shell ssh ++nodelist machine.file test.namd &gt; charmout.txt\r\n<\/span>\r\n<br>\r\n<br>\r\nNow instead of using the above cumbersome method the standard mpirun executable can be called:\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">#PBS -N NAMD=MPIRUN\r\n#PBS -l 
nodes=5:ppn=4:series200\r\nmpirun -hostfile $PBS_NODEFILE namd2 test.namd &gt; mpiout.txt<\/span>\r\n<br>\r\n<br>\r\nThis also functions much better with PBStorque as it's designed to work hand in glove with OpenMPI.\u00a0 In the 2nd example you'll note that it's no longer necessary to tell mpirun or namd how many processors are required, it works that out from the job requirements, 5 nodes x 4 cores = 20 cores total.\r\n<br>\r\n<br>\r\n<strong>Below is the method we used to compile NAMD with OpenMPI support:<\/strong>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Unpack NAMD and matching Charm++ source code and enter directory:\r\ntar xzf NAMD_2.9_Source.tar.gz\r\ncd NAMD_2.9_Source\r\ntar xf charm-6.4.0.tar\r\ncd charm-6.4.0<\/span>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Build and test the Charm++\/Converse library (MPI version):\r\nenv MPICXX=mpicxx .\/build charm++ mpi-linux-x86_64 --with-production\r\ncd mpi-linux-x86_64\/tests\/charm++\/megatest\r\nmake pgm\r\nmpirun -n 4 .\/pgm\u00a0\u00a0\u00a0 (run as any other MPI program on your cluster)\r\ncd ..\/..\/..\/..\/..<\/span>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Download and install TCL and FFTW libraries:\r\n(cd to NAMD_2.9_Source if you're not already there)\r\nwget <\/span><a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/fftw-linux-x86_64.tar.gz\"><span style=\"font-size: xx-small;\">http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/fftw-linux-x86_64.tar.gz<\/span><\/a>\r\n<span style=\"font-size: xx-small;\">\u00a0 tar xzf fftw-linux-x86_64.tar.gz\r\nmv linux-x86_64 fftw\r\nwget <\/span><a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/tcl8.5.9-linux-x86_64.tar.gz\"><span style=\"font-size: xx-small;\">http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/tcl8.5.9-linux-x86_64.tar.gz<\/span><\/a>\r\n<span style=\"font-size: xx-small;\">\u00a0 wget <\/span><a 
href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/tcl8.5.9-linux-x86_64-threaded.tar.gz\"><span style=\"font-size: xx-small;\">http:\/\/www.ks.uiuc.edu\/Research\/namd\/libraries\/tcl8.5.9-linux-x86_64-threaded.tar.gz<\/span><\/a>\r\n<span style=\"font-size: xx-small;\">\u00a0 tar xzf tcl8.5.9-linux-x86_64.tar.gz\r\ntar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz\r\nmv tcl8.5.9-linux-x86_64 tcl\r\nmv tcl8.5.9-linux-x86_64-threaded tcl-threaded<\/span>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Set up build directory and compile:\r\nMPI version: .\/config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64\r\ncd Linux-x86_64-g++\r\nmake<\/span>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Quick tests using one and two processes (network version):\r\n(this is a 66-atom simulation so don't expect any speedup)\r\n.\/namd2\r\n.\/namd2 src\/alanin\r\n.\/charmrun ++local +p2 .\/namd2\r\n.\/charmrun ++local +p2 .\/namd2 src\/alanin\r\n(for MPI version, run namd2 binary as any other MPI executable)<\/span>\r\n<br>\r\n<br>\r\n<span style=\"font-size: xx-small;\">Longer test using four processes:\r\nwget <\/span><a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/utilities\/apoa1.tar.gz\"><span style=\"font-size: xx-small;\">http:\/\/www.ks.uiuc.edu\/Research\/namd\/utilities\/apoa1.tar.gz<\/span><\/a>\r\n<span style=\"font-size: xx-small;\">\u00a0 tar xzf apoa1.tar.gz\r\n.\/charmrun ++local +p4 .\/namd2 apoa1\/apoa1.namd\r\n(FFT optimization will take a several seconds during the first run.)<\/span>\r\n\r\n&nbsp;","protected":false},"excerpt":{"rendered":"<p>Why is molecular visualization cool?&nbsp; Take a look at this animation from <a href=\"http:\/\/www.wehi.edu.au\/education\/wehitv\/\">WEHI<\/a>: <\/p>\n<\/p>\n<p>&nbsp;<\/p>\n<p>One of the molecular visualization tools we support is NAMD.&nbsp; We recently updated our version from 2.7 to 2.9 and compiled it to run with OpenMPI rather than NAMD&#8217;s own multi processor controler, 
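The core count that mpirun works out for itself (5 nodes x 4 cores = 20) can be made explicit by parsing the resource request string; the sed patterns here are an illustrative sketch, not any Torque API.

```shell
# Hypothetical parse of the Torque resource request used in the post.
REQUEST="nodes=5:ppn=4:series200"

# Pull the node count and processors-per-node out of the request string.
NODES=$(echo "$REQUEST" | sed -n 's/.*nodes=\([0-9]*\).*/\1/p')
PPN=$(echo "$REQUEST" | sed -n 's/.*ppn=\([0-9]*\).*/\1/p')
TOTAL=$((NODES * PPN))

echo "total cores: $TOTAL"    # total cores: 20
```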