{"id":333,"date":"2015-06-22T10:53:01","date_gmt":"2015-06-22T10:53:01","guid":{"rendered":"http:\/\/blogs.uct.ac.za\/blog\/big-bytes\/2015\/06\/22\/gpus-and-gres"},"modified":"2015-08-14T10:01:13","modified_gmt":"2015-08-14T08:01:13","slug":"gpus-and-gres","status":"publish","type":"post","link":"https:\/\/ucthpc.uct.ac.za\/index.php\/2015\/06\/22\/gpus-and-gres\/","title":{"rendered":"GPUs and GRES"},"content":{"rendered":"<p>Our current cluster, hex, runs Torque with Maui as the scheduler. While Maui is GPU aware, it does not allow GPUs to be scheduled: you can list the nodes that have GPUs, but you cannot submit a job against these resources, nor can you lock a GPU. The Moab scheduler for Torque can do this, but its license costs several hundred thousand dollars. Fortunately, SLURM has this functionality built in and, what&#8217;s more, it is free. GPU cards are defined as generic resource (GRES) objects, listed by type and number, and each card is assigned to a certain set of cores in its server. One needs to enter the following line in the slurm.conf file:<\/p>\n<pre>GresTypes=gpu\r\n<\/pre>\n<p>and also add Gres information to the node configurations, for example:<\/p>\n<pre>NodeName=hpc406 ... Gres=gpu:kepler:2\r\n<\/pre>\n<p>One must also create a gres.conf file on each node that actually houses GPU cards:<\/p>\n<pre>Name=gpu Type=kepler File=\/dev\/nvidia0 CPUs=0,1,2,3\r\nName=gpu Type=kepler File=\/dev\/nvidia1 CPUs=4,5,6,7\r\n<\/pre>\n<p>This indicates which cores are assigned to which card. To request a GPU resource, one adds the following requirement to sbatch, salloc or srun:<\/p>\n<pre>#SBATCH --gres=gpu:2 --nodes=1 --ntasks=1\r\n<\/pre>\n<p>When the job runs, SLURM sets an environment variable, for example:<\/p>\n<pre>CUDA_VISIBLE_DEVICES=0,1\r\n<\/pre>\n<p>whose value depends on how many GPU cards have been requested. Below are three jobs, each requesting 2 cores and a single GPU card; since hpc406 houses only two GPU cards, the third job must wait. 
Only two are running even though there are cores free:<\/p>\n<pre>JOBID PARTITION     NAME   USER  ST   TIME     NODELIST\r\n 1937  ucthimem GresTest   andy  PD   0:00  (Resources)\r\n 1935  ucthimem GresTest   andy   R   1:08       hpc406\r\n 1936  ucthimem GresTest   andy   R   1:08       hpc406\r\n<\/pre>\n<p>Examining 1935 shows us that cores are set to CPU_IDs=1-2 while 1936&#8217;s cores are set to CPU_IDs=4-5. Additionally CUDA_VISIBLE_DEVICES=0 and CUDA_VISIBLE_DEVICES=1 are set for jobs 1935 and 1936 respectively.<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[7,6,4,5],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>GPUs and GRES - UCT HPC<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ucthpc.uct.ac.za\/index.php\/2015\/06\/22\/gpus-and-gres\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"GPUs and GRES - UCT HPC\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ucthpc.uct.ac.za\/index.php\/2015\/06\/22\/gpus-and-gres\/\" \/>\n<meta property=\"og:site_name\" content=\"UCT HPC\" \/>\n<meta property=\"article:published_time\" content=\"2015-06-22T10:53:01+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2015-08-14T08:01:13+00:00\" \/>\n<meta name=\"author\" content=\"Andrew Lewis\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Andrew Lewis\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/333"}],"collection":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/comments?post=333"}],"version-history":[{"count":4,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/333\/revisions"}],"predecessor-version":[{"id":2018,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/333\/revisions\/2018"}],"wp:attachment":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/media?parent=333"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/categories?post=333"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/tags?post=333"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
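The GRES request and the CUDA_VISIBLE_DEVICES behaviour described in the post can be combined into a complete job script. Below is a minimal sketch, not a site-provided template: the partition name `ucthimem` is taken from the squeue listing above, the core and GPU counts match the example jobs, and the final `nvidia-smi` call assumes NVIDIA drivers are present (it is skipped otherwise).

```shell
#!/bin/bash
# Hypothetical GPU job script illustrating the GRES request from the post.
#SBATCH --job-name=GresTest
#SBATCH --partition=ucthimem   # partition shown in the squeue output above
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2      # two cores, as in the example jobs
#SBATCH --gres=gpu:1           # one GPU of any type; gpu:kepler:1 pins the type

# When the job starts, SLURM exports CUDA_VISIBLE_DEVICES for the
# allocated card(s), e.g. CUDA_VISIBLE_DEVICES=0. Count what was granted:
ngpus=$(echo "${CUDA_VISIBLE_DEVICES:-}" | tr ',' '\n' | grep -c .)
echo "Allocated ${ngpus} GPU(s): ${CUDA_VISIBLE_DEVICES:-none}"

# Any CUDA application launched from here sees only the allocated card(s).
command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi -L
```

Because SLURM restricts the job to the cores listed for that card in gres.conf, a job granted `/dev/nvidia0` lands on cores 0-3 and its CUDA code sees only device 0, which is exactly the CPU_IDs / CUDA_VISIBLE_DEVICES pairing observed for jobs 1935 and 1936.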