Why is molecular visualization cool? Take a look at this animation from WEHI:
One of the molecular dynamics tools we support is NAMD. We recently updated our installation from version 2.7 to 2.9 and compiled it to run with OpenMPI rather than NAMD's own multi-processor controller, Charm++. This makes resource requesting and job submission much easier. Below is the native NAMD method of addressing multiple cores in a distributed architecture:
#PBS -N NAMD-CHARMRUN
#PBS -l nodes=node1:ppn=4+node2:ppn=4+node3:ppn=4+node4:ppn=4+node5:ppn=4
charmrun namd2 +p20 ++remote-shell ssh ++nodelist machine.file test.namd > charmout.txt
Now, instead of the cumbersome method above, the standard mpirun executable can be called:
#PBS -N NAMD-MPIRUN
#PBS -l nodes=5:ppn=4:series200
mpirun -hostfile $PBS_NODEFILE namd2 test.namd > mpiout.txt
This also works much better with Torque (PBS), which is designed to work hand in glove with OpenMPI. In the second example you'll note that it's no longer necessary to tell mpirun or namd2 how many processors are required; that is worked out from the job request: 5 nodes x 4 cores per node = 20 cores in total.
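The reason this works is that $PBS_NODEFILE contains one line per core slot allocated to the job, and mpirun launches one process per line when it is given as a hostfile. Below is a minimal sketch of a complete submission script along those lines; the walltime and the echo line are additions for illustration, and the series200 property is specific to our queues, so adjust these to your own cluster.

#!/bin/bash
#PBS -N NAMD-MPIRUN
#PBS -l nodes=5:ppn=4:series200
#PBS -l walltime=24:00:00
cd $PBS_O_WORKDIR
# $PBS_NODEFILE lists one line per allocated core, so this reports 20 for 5 nodes x 4 ppn
echo "Running on $(wc -l < $PBS_NODEFILE) cores"
mpirun -hostfile $PBS_NODEFILE namd2 test.namd > mpiout.txt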
Below is the method we used to compile NAMD with OpenMPI support:
Unpack NAMD and matching Charm++ source code and enter directory:

tar xzf NAMD_2.9_Source.tar.gz
cd NAMD_2.9_Source
tar xf charm-6.4.0.tar
cd charm-6.4.0
Build and test the Charm++/Converse library (MPI version):

env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 --with-production
cd mpi-linux-x86_64/tests/charm++/megatest
make pgm
mpirun -n 4 ./pgm    (run as any other MPI program on your cluster)
cd ../../../../..
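The parenthetical above matters on clusters where MPI jobs can't be launched from the login node; in that case the megatest can be submitted as a short batch job instead. A sketch, in the same Torque style as the job scripts earlier (job name and walltime are illustrative):

#!/bin/bash
#PBS -N CHARM-MEGATEST
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:10:00
# Submit from the megatest directory inside the Charm++ build tree
cd $PBS_O_WORKDIR
mpirun -hostfile $PBS_NODEFILE ./pgm > megatest.out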
Download and install TCL and FFTW libraries:
(cd to NAMD_2.9_Source if you're not already there)

wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
mv tcl8.5.9-linux-x86_64 tcl
mv tcl8.5.9-linux-x86_64-threaded tcl-threaded
Set up build directory and compile (MPI version):

./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64
cd Linux-x86_64-g++
make
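A quick way to confirm the build picked up the MPI-based Charm++ (rather than a net build) is to check that the resulting namd2 binary links against the OpenMPI libraries. A small sketch, assuming OpenMPI is installed as shared libraries:

# From inside Linux-x86_64-g++ after make finishes
ls -l namd2 charmrun
# With a shared-library OpenMPI install, libmpi should appear in the output
ldd ./namd2 | grep -i mpi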
Quick tests using one and two processes (network version):
(this is a 66-atom simulation so don't expect any speedup)

./namd2
./namd2 src/alanin
./charmrun ++local +p2 ./namd2
./charmrun ++local +p2 ./namd2 src/alanin

(for the MPI version, run the namd2 binary as any other MPI executable)
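For the MPI build described above, that last note simply means launching namd2 through mpirun; for example (process counts are illustrative):

mpirun -n 1 ./namd2 src/alanin
mpirun -n 2 ./namd2 src/alanin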
Longer test using four processes:

wget http://www.ks.uiuc.edu/Research/namd/utilities/apoa1.tar.gz
tar xzf apoa1.tar.gz
./charmrun ++local +p4 ./namd2 apoa1/apoa1.namd

(FFT optimization will take several seconds during the first run.)
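To run the same ApoA1 benchmark with the MPI build across the cluster, the job script from the start of this post applies unchanged apart from the input file. A sketch (job name, walltime and the series200 property are illustrative):

#!/bin/bash
#PBS -N NAMD-APOA1
#PBS -l nodes=5:ppn=4:series200
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
mpirun -hostfile $PBS_NODEFILE namd2 apoa1/apoa1.namd > apoa1.out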