{"id":1067,"date":"2011-08-04T16:06:01","date_gmt":"2011-08-04T14:06:01","guid":{"rendered":"http:\/\/oldblogs.uct.ac.za\/blog\/big-bytes\/2011\/08\/04\/musings-on-torque-pbs-and-openmpi"},"modified":"2022-09-26T20:00:17","modified_gmt":"2022-09-26T18:00:17","slug":"musings-on-torquepbs-and-openmpi","status":"publish","type":"post","link":"https:\/\/ucthpc.uct.ac.za\/index.php\/2011\/08\/04\/musings-on-torquepbs-and-openmpi\/","title":{"rendered":"Musings on Torque\/PBS and OpenMPI"},"content":{"rendered":"We've spent most of this week working on mpiBlast, and have bumped our heads against a few problems.\u00a0 Fortunately we have a very patient user in computational biology who's assisted in running test jobs.\u00a0 It's been a learning experience so we thought we'd jot down a few notes...\r\n\r\nRunning OpenMPI jobs in Torque\/PBS is not quite the same as running them directly from a head node.\u00a0 Firstly the initial worker node that the job is launched from is considered a 'head node' from OpenMPI's perspective.\u00a0 This means that when setting up key sharing in the cluster a many-to-many relationship is required between worker nodes.\r\n\r\nAdditionally the way that PBS and mpirun are invoked are slightly different.\u00a0 When dealing with OpenMPI jobs it's best to specify only the number of cores the job needs.\u00a0 However in order to do this the PBS nodes argument to the -l parameter is considered to be CPUs, not servers.\r\n\r\nThere are two other crucial elements to bare in mind.\u00a0 Firstly the machine or host file should be referenced from PBS, rather than user-created.\u00a0 This is done by using the $PBS_NODEFILE variable.\u00a0 Secondly PBS should be allowed to supply the cores, rather then request them via mpirun's -np argument. 
The number of nodes versus threads that users can consume can be controlled via the maui.cfg file.

Below is a screenshot of multiple MPI jobs requesting 5 CPUs each on worker nodes of any series. The starting node for the initial job was unspecified and turned out to be 300. Nodes 300, 206, 205 and 204 show high CPU but no threads advertised, as they're just winding down from 3 completed jobs totalling 15 cores. The 1 thread on 204 is the first of 5 spread "left" into 203.

<img src="https://ucthpc.uct.ac.za/wp-content/uploads/2015/07/mpi-dist.png" alt="Distributed MPI jobs" border="0" />

Another item to consider is heterogeneous environments. Not all clusters are composed of identical equipment, so allowing auto-assignment of resources in MPI jobs can produce unpredictable results. In the image above, the 300 series CPUs are taking longer to spool down than the 200 series. To constrain where jobs run, use can be made of the free-form node_spec tag in the nodes file. Remember, though, that in this form nodes = servers, so you'll also need the ppn directive.

So to reserve 20 cores on the BL460 servers, use the directive: <span style="font-size: xx-small; font-family: arial,helvetica,sans-serif;">#PBS -l nodes=4:series400:ppn=5</span>

If there are any inaccuracies in the above, please feel free to point them out. Here are two articles we found useful for running <a href="http://www.adaptivecomputing.com/resources/docs/torque/7.1mpi.php">OpenMPI</a> under <a href="http://www.adaptivecomputing.com/resources/docs/torque/7.1mpi.php">Torque</a>.
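For reference, the node property used in a spec like that is just a free-form tag appended to each server's entry in TORQUE's nodes file. A sketch, with hypothetical host names and core counts:

```shell
# /var/spool/torque/server_priv/nodes (illustrative entries)
# format: hostname  np=<cores>  <free-form properties...>
srvr401  np=8  series400
srvr402  np=8  series400
srvr403  np=8  series400
srvr404  np=8  series400

# Matching job directive: in property form, nodes counts servers,
# so 4 servers x 5 cores each (ppn=5) reserves 20 cores on
# series400 machines only.
#PBS -l nodes=4:series400:ppn=5
```

This is a configuration fragment rather than runnable code; the point is that the property name is arbitrary, and any job may then request it as a filter in its -l resource list.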