{"id":713,"date":"2014-12-20T01:53:15","date_gmt":"2014-12-19T23:53:15","guid":{"rendered":"http:\/\/oldblogs.uct.ac.za\/blog\/big-bytes\/2014\/12\/20\/fhgfs-metadata-server-migration"},"modified":"2017-03-03T10:00:34","modified_gmt":"2017-03-03T08:00:34","slug":"fhgfs-metadata-server-migration","status":"publish","type":"post","link":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/","title":{"rendered":"FhGFS Metadata server migration"},"content":{"rendered":"FhGFS is an awesome distributed parallel filesystem. It is simple yet powerful, and its RDMA backend knocks the performance socks off other distributed filesystems, IMHO. Today, however, I needed to migrate the metadata server from one host (a machine on loan from Dell) to another server. FhGFS provides a tool on the Management node called \"fhgfs-ctl\", which manages your FhGFS environment. The migration options it lists, however, pertain only to the storage node types, not the metadata node types. FhGFS also does not provide usable, warm-fuzzy-feeling documentation for migrating the metadata server. So I am documenting my migration step by step, for my own sanity and for others. Fraunhofer are welcome to criticise my guide. <strong>&lt;disclaimer&gt; The step-by-step guide listed here is my own work and not that of Fraunhofer. I am also not affiliated with Fraunhofer, and I suggest that all work is performed in a sandpit environment first.&lt;\/disclaimer&gt;<\/strong>\r\n\r\nNB: You will require a maintenance window to perform the migration.\r\n\r\n1. Stop all clients \/ storage nodes \/ metadata services. Do not shut down the Management service, as we need the \"fhgfs-ctl\" tool to manage the environment.\r\n\r\n2. 
Back up your metadata environment - http:\/\/www.fhgfs.com\/wiki\/FAQ#ea_backup\r\n- \/beegfs_meta (this contains your inodes\/dentries)\r\n- \/etc\/fhgfs\/fhgfs-meta.conf\r\n- Make sure that you are backing up the Extended Attributes, for example with \" rsync -aX \" or a recent GNU tar with \" --xattrs \". Please read the URL above.\r\n\r\n3. Install the FhGFS Metadata RPM and FhGFS Client RPM on the new Metadata server, then restore the metadata (inodes\/dentries) and configuration files from the previous server. Do not forget to change the hostname in the \/etc\/fhgfs\/fhgfs-meta.conf file to the new server.\r\n\r\n4. Once you have successfully restored the file structure, it's time to decommission the current metadata server. To do this, we need to determine the NodeID and TargetID.\r\n\r\nExecute the following to obtain the NodeID - \" fhgfs-ctl --listnodes --nodetype=meta \"\r\nExecute the following to obtain the TargetID - \" fhgfs-ctl --listtargets \"\r\n\r\nMatch the NodeID to the TargetID.\r\n\r\n5. Now remove the node from the FhGFS cluster via the management tool.\r\n\r\nExecute the following to remove the NodeID - \" fhgfs-ctl --removenode --nodetype=meta &lt;NodeID&gt; \"\r\nExecute the following to remove the TargetID - \" fhgfs-ctl --unmaptarget &lt;TargetID&gt; \"\r\n<strong>\r\nNB: Set the following options in fhgfs-mgmt.conf. These ensure that new servers are allowed to register. You are welcome to return them to your preferred settings once your FhGFS clients are mounted. <\/strong>\r\n\r\nstoreAllowFirstRunInit = true\r\nsysAllowNewServers = true\r\n\r\n6. Since we have already populated \/etc\/fhgfs\/fhgfs-meta.conf with the new server's hostname, restarting the management service will commission the new metadata service. Confirm that the Management service is running by checking its status. On confirmation, start the Metadata server. Executing \" fhgfs-ctl --listnodes --nodetype=meta \" should list the new metadata server.\r\n\r\n7. 
Start up all storage servers and ensure that they are all running.\r\n\r\n8. Start the helperd and client services.\r\n\r\nThanks to Dell (Marc \/ Lyle) for negotiating the loan of the equipment for our FhGFS POC. We are almost ready to expand and rubber-stamp this as a production environment.","protected":false},"excerpt":{"rendered":"<p>FhGFS is an awesome distributed parallel filesystem. It is simple yet powerful, and its RDMA backend knocks the performance socks off other distributed filesystems, IMHO. Today, however, I needed to migrate the metadata server from one host (a machine on loan from Dell) to another server. FhGFS provides a tool on the Management node called &#8220;fhgfs-ctl&#8221;, which manages your FhGFS environment. The migration options it lists, however, pertain only to the storage node types, not the metadata node types. FhGFS also does not provide usable, warm-fuzzy-feeling documentation for migrating the metadata server. So I am documenting my migration step by step, for my own sanity and for others. Fraunhofer are welcome to criticise my guide. <strong>&lt;disclaimer&gt; The step-by-step guide listed here is my own work and not that of Fraunhofer. I am also not affiliated with Fraunhofer, and I suggest that all work is performed in a sandpit environment first.&lt;\/disclaimer&gt;<\/strong><\/p>\n<p>NB: You will require a maintenance window to perform the migration. <\/p>\n<p>1. Stop all clients \/ storage nodes \/ metadata services. Do not shut down the Management service, as we need the &#8220;fhgfs-ctl&#8221; tool to manage the environment. <\/p>\n<p>2. Back up your metadata environment &#8211; http:\/\/www.fhgfs.com\/wiki\/FAQ#ea_backup<br \/>&#8211; \/beegfs_meta (this contains your inodes\/dentries) <br \/>&#8211; \/etc\/fhgfs\/fhgfs-meta.conf<br \/>&#8211; Make sure that you are backing up the Extended Attributes. 
Please read the URL above.<\/p>\n<p>3. Install the FhGFS Metadata RPM and FhGFS Client RPM on the new Metadata server, then restore the metadata (inodes\/dentries) and configuration files from the previous server. Do not forget to change the hostname in the \/etc\/fhgfs\/fhgfs-meta.conf file to the new server. <\/p>\n<p>4. Once you have successfully restored the file structure, it's time to decommission the current metadata server. To do this, we need to determine the NodeID and TargetID. <\/p>\n<p>Execute the following to obtain the NodeID &#8211; &#8220;fhgfs-ctl --listnodes --nodetype=meta&#8221;<br \/>Execute the following to obtain the TargetID &#8211; &#8220;fhgfs-ctl --listtargets&#8221;<br \/>Match the NodeID to the TargetID.<\/p>\n<p>5. Now remove the node from the FhGFS cluster via the management tool. <\/p>\n<p>Execute the following to remove the NodeID &#8211; &#8220;fhgfs-ctl --removenode --nodetype=meta &lt;NodeID&gt;&#8221;<br \/>Execute the following to remove the TargetID &#8211; &#8220;fhgfs-ctl --unmaptarget &lt;TargetID&gt;&#8221;<br \/><strong>NB: Set the following options in fhgfs-mgmt.conf. These ensure that new servers are allowed to register. You are welcome to return them to your preferred settings once your FhGFS clients are mounted. <\/strong><br \/>storeAllowFirstRunInit = true<br \/>sysAllowNewServers = true <\/p>\n<p>6. Since we have already populated \/etc\/fhgfs\/fhgfs-meta.conf with the new server's hostname, restarting the management service will commission the new metadata service. Confirm that the Management service is running by checking its status. 
On confirmation, start the Metadata server. Executing &#8221; fhgfs-ctl &#8211;listnodes &#8211;nodetype=meta &#8221; should list the new MD server. <\/p>\n<p>7. Start up all storage servers and ensure that they are all running. <\/p>\n<p>8. Start the helperd and client services. <\/p>\n<p>Thanks to Dell (Marc \/ Lyle ) for negotiating the loan of the equipment for our FhGFS POC. We are almost ready to expand and rubber stamp as a production environment.<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[25,4],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>FhGFS Metadata server migration - UCT HPC<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"FhGFS Metadata server migration - UCT HPC\" \/>\n<meta property=\"og:description\" content=\"FhGFS is a awesome distributed parallel filesystem. Its simple and so powerful with a RDMA backend that knocks the performance socks off other distributed filesystems, IMHO. Today however I needed to migrate the metadata server from one host ( a machine on loan from Dell ) to another server. FhGFS provides a tool on the Management node called &quot; fhgfs-ctl &quot; which manages your FHGFS environment. The migration options which are listed however only pertain to the storage node types and not the metadata node types. FhGFS also do not provide usable, warm fuzzy feeling documentation for the migration of the metadata server. 
So I plan to document my migration plan step by step for my own sanity and others. Fraunhofer are welcome to critize my guide. &lt;disclaimer&gt; The step by step guide listed here is my own work and not that from Fraunhofer. I am also not affiliated with Fraunhofer and suggest that all work is performed in a sandpit environment first.&lt;\/disclaimer&gt;NB: You will require a maintenance window to perform the migration. 1. Stop all clients \/ storage nodes \/ metadata services. Do not shutdown the Management service as we need to make use of the &quot;fhgfs-ctl &quot; tool to manage the environment. 2. Backup your metadata environment - http:\/\/www.fhgfs.com\/wiki\/FAQ#ea_backup&nbsp;&nbsp; &nbsp;- \/beegfs_meta ( This contains your inodes\/dentries\/ ) &nbsp;&nbsp; &nbsp;- \/etc\/fhgfs\/fhgfs-meta.conf&nbsp;&nbsp; &nbsp;- Make sure that you are backing up the Extended Attributes. Please read the URL above.2. Install the FhGFS Metadata RPM and FhGFS Client RPM on the Metadata server.3. Restore the metadata (inodes\/dentries\/) and configuration files from the previous server. Do not forget to change the hostname in the \/etc\/fhgfs\/fhgfs-meta.conf file to the new server. 4. Once you have successfully restored the file structure its time to decommission the current metadata server. In order todo this we need to determine what the NodeID and TargetID is. &nbsp;&nbsp; &nbsp;Execute the following to obtain the NodeID&nbsp;&nbsp;&nbsp; - &quot; fhgfs-ctl --listnodes --nodetype=meta&nbsp; &quot; &nbsp;&nbsp; &nbsp;Execute the following to obtain the TargetID&nbsp; - &quot; fhgfs-ctl --listtargets &quot; &nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;Match the NodeID to the TargetID5. 
Remove the node now from the FhGFS cluster via the management tool &nbsp;&nbsp; &nbsp;Execute the following to remove the NodeID&nbsp;&nbsp;&nbsp; - &quot; fhgfs-ctl --removenode --nodetype=meta &lt;NodeID&gt; &quot;&nbsp;&nbsp;&nbsp; Execute the following to remove the TargetID&nbsp; - &quot; fhgfs-ctl --unmaptarget &lt;TargetID&gt; &quot;&nbsp;&nbsp; NB: Set the following options in the fhgfs-mgmt.conf. This ensure that new servers are allowed to register. You are welcome to return it to your preferred setting once your FhGFS clients are mounted. &nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;storeAllowFirstRunInit = true&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;sysAllowNewServers = true 6. Since we already populated \/etc\/fhgfs\/fhgfs-meta.conf with the new hostname of the server, restarting the management service will commission the new metadata service. Confirm that the Management service is running by checking its status. On confirmation, start the Metadata server. Executing &quot; fhgfs-ctl --listnodes --nodetype=meta &quot; should list the new MD server. 7. Start up all storage servers and ensure that they are all running. 8. Start the helperd and client services. Thanks to Dell (Marc \/ Lyle ) for negotiating the loan of the equipment for our FhGFS POC. 
We are almost ready to expand and rubber stamp as a production environment.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\" \/>\n<meta property=\"og:site_name\" content=\"UCT HPC\" \/>\n<meta property=\"article:published_time\" content=\"2014-12-19T23:53:15+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2017-03-03T08:00:34+00:00\" \/>\n<meta name=\"author\" content=\"Timothy Carr\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Timothy Carr\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\"},\"author\":{\"name\":\"Timothy Carr\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/41f6cd039836d7741f2b82a7b7cfe8d0\"},\"headline\":\"FhGFS Metadata server migration\",\"datePublished\":\"2014-12-19T23:53:15+00:00\",\"dateModified\":\"2017-03-03T08:00:34+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\"},\"wordCount\":527,\"publisher\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#organization\"},\"articleSection\":[\"BeeGFS\",\"hpc\"],\"inLanguage\":\"en-ZA\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\",\"url\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\",\"name\":\"FhGFS Metadata server migration - UCT 
HPC\",\"isPartOf\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#website\"},\"datePublished\":\"2014-12-19T23:53:15+00:00\",\"dateModified\":\"2017-03-03T08:00:34+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#breadcrumb\"},\"inLanguage\":\"en-ZA\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/ucthpc.uct.ac.za\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"FhGFS Metadata server migration\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#website\",\"url\":\"https:\/\/ucthpc.uct.ac.za\/\",\"name\":\"UCT HPC\",\"description\":\"University of Cape Town High Performance Computing\",\"publisher\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/ucthpc.uct.ac.za\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-ZA\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#organization\",\"name\":\"University of Cape Town High Performance Computing\",\"url\":\"https:\/\/ucthpc.uct.ac.za\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-ZA\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/ucthpc.uct.ac.za\/wp-content\/uploads\/2015\/09\/logocircless.png\",\"contentUrl\":\"https:\/\/ucthpc.uct.ac.za\/wp-content\/uploads\/2015\/09\/logocircless.png\",\"width\":450,\"height\":423,\"caption\":\"University of Cape Town High Performance 
Computing\"},\"image\":{\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/41f6cd039836d7741f2b82a7b7cfe8d0\",\"name\":\"Timothy Carr\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-ZA\",\"@id\":\"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/7e94dcf3a408e6ada008042fc29d4b15?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/7e94dcf3a408e6ada008042fc29d4b15?s=96&d=mm&r=g\",\"caption\":\"Timothy Carr\"},\"sameAs\":[\"http:\/\/ucthpc.uct.ac.za\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"FhGFS Metadata server migration - UCT HPC","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/","og_locale":"en_US","og_type":"article","og_title":"FhGFS Metadata server migration - UCT HPC","og_description":"FhGFS is a awesome distributed parallel filesystem. Its simple and so powerful with a RDMA backend that knocks the performance socks off other distributed filesystems, IMHO. Today however I needed to migrate the metadata server from one host ( a machine on loan from Dell ) to another server. FhGFS provides a tool on the Management node called \" fhgfs-ctl \" which manages your FHGFS environment. The migration options which are listed however only pertain to the storage node types and not the metadata node types. FhGFS also do not provide usable, warm fuzzy feeling documentation for the migration of the metadata server. So I plan to document my migration plan step by step for my own sanity and others. Fraunhofer are welcome to critize my guide. &lt;disclaimer&gt; The step by step guide listed here is my own work and not that from Fraunhofer. 
I am also not affiliated with Fraunhofer and suggest that all work is performed in a sandpit environment first.&lt;\/disclaimer&gt;NB: You will require a maintenance window to perform the migration. 1. Stop all clients \/ storage nodes \/ metadata services. Do not shutdown the Management service as we need to make use of the \"fhgfs-ctl \" tool to manage the environment. 2. Backup your metadata environment - http:\/\/www.fhgfs.com\/wiki\/FAQ#ea_backup&nbsp;&nbsp; &nbsp;- \/beegfs_meta ( This contains your inodes\/dentries\/ ) &nbsp;&nbsp; &nbsp;- \/etc\/fhgfs\/fhgfs-meta.conf&nbsp;&nbsp; &nbsp;- Make sure that you are backing up the Extended Attributes. Please read the URL above.2. Install the FhGFS Metadata RPM and FhGFS Client RPM on the Metadata server.3. Restore the metadata (inodes\/dentries\/) and configuration files from the previous server. Do not forget to change the hostname in the \/etc\/fhgfs\/fhgfs-meta.conf file to the new server. 4. Once you have successfully restored the file structure its time to decommission the current metadata server. In order todo this we need to determine what the NodeID and TargetID is. &nbsp;&nbsp; &nbsp;Execute the following to obtain the NodeID&nbsp;&nbsp;&nbsp; - \" fhgfs-ctl --listnodes --nodetype=meta&nbsp; \" &nbsp;&nbsp; &nbsp;Execute the following to obtain the TargetID&nbsp; - \" fhgfs-ctl --listtargets \" &nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;Match the NodeID to the TargetID5. Remove the node now from the FhGFS cluster via the management tool &nbsp;&nbsp; &nbsp;Execute the following to remove the NodeID&nbsp;&nbsp;&nbsp; - \" fhgfs-ctl --removenode --nodetype=meta &lt;NodeID&gt; \"&nbsp;&nbsp;&nbsp; Execute the following to remove the TargetID&nbsp; - \" fhgfs-ctl --unmaptarget &lt;TargetID&gt; \"&nbsp;&nbsp; NB: Set the following options in the fhgfs-mgmt.conf. This ensure that new servers are allowed to register. You are welcome to return it to your preferred setting once your FhGFS clients are mounted. 
&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;storeAllowFirstRunInit = true&nbsp;&nbsp; &nbsp;&nbsp;&nbsp; &nbsp;sysAllowNewServers = true 6. Since we already populated \/etc\/fhgfs\/fhgfs-meta.conf with the new hostname of the server, restarting the management service will commission the new metadata service. Confirm that the Management service is running by checking its status. On confirmation, start the Metadata server. Executing \" fhgfs-ctl --listnodes --nodetype=meta \" should list the new MD server. 7. Start up all storage servers and ensure that they are all running. 8. Start the helperd and client services. Thanks to Dell (Marc \/ Lyle ) for negotiating the loan of the equipment for our FhGFS POC. We are almost ready to expand and rubber stamp as a production environment.","og_url":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/","og_site_name":"UCT HPC","article_published_time":"2014-12-19T23:53:15+00:00","article_modified_time":"2017-03-03T08:00:34+00:00","author":"Timothy Carr","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Timothy Carr","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#article","isPartOf":{"@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/"},"author":{"name":"Timothy Carr","@id":"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/41f6cd039836d7741f2b82a7b7cfe8d0"},"headline":"FhGFS Metadata server migration","datePublished":"2014-12-19T23:53:15+00:00","dateModified":"2017-03-03T08:00:34+00:00","mainEntityOfPage":{"@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/"},"wordCount":527,"publisher":{"@id":"https:\/\/ucthpc.uct.ac.za\/#organization"},"articleSection":["BeeGFS","hpc"],"inLanguage":"en-ZA"},{"@type":"WebPage","@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/","url":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/","name":"FhGFS Metadata server migration - UCT HPC","isPartOf":{"@id":"https:\/\/ucthpc.uct.ac.za\/#website"},"datePublished":"2014-12-19T23:53:15+00:00","dateModified":"2017-03-03T08:00:34+00:00","breadcrumb":{"@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#breadcrumb"},"inLanguage":"en-ZA","potentialAction":[{"@type":"ReadAction","target":["https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/ucthpc.uct.ac.za\/index.php\/2014\/12\/20\/fhgfs-metadata-server-migration\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/ucthpc.uct.ac.za\/"},{"@type":"ListItem","position":2,"name":"FhGFS Metadata server migration"}]},{"@type":"WebSite","@id":"https:\/\/ucthpc.uct.ac.za\/#website","url":"https:\/\/ucthpc.uct.ac.za\/","name":"UCT HPC","description":"University of Cape Town High Performance 
Computing","publisher":{"@id":"https:\/\/ucthpc.uct.ac.za\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/ucthpc.uct.ac.za\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-ZA"},{"@type":"Organization","@id":"https:\/\/ucthpc.uct.ac.za\/#organization","name":"University of Cape Town High Performance Computing","url":"https:\/\/ucthpc.uct.ac.za\/","logo":{"@type":"ImageObject","inLanguage":"en-ZA","@id":"https:\/\/ucthpc.uct.ac.za\/#\/schema\/logo\/image\/","url":"https:\/\/ucthpc.uct.ac.za\/wp-content\/uploads\/2015\/09\/logocircless.png","contentUrl":"https:\/\/ucthpc.uct.ac.za\/wp-content\/uploads\/2015\/09\/logocircless.png","width":450,"height":423,"caption":"University of Cape Town High Performance Computing"},"image":{"@id":"https:\/\/ucthpc.uct.ac.za\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/41f6cd039836d7741f2b82a7b7cfe8d0","name":"Timothy Carr","image":{"@type":"ImageObject","inLanguage":"en-ZA","@id":"https:\/\/ucthpc.uct.ac.za\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/7e94dcf3a408e6ada008042fc29d4b15?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/7e94dcf3a408e6ada008042fc29d4b15?s=96&d=mm&r=g","caption":"Timothy 
Carr"},"sameAs":["http:\/\/ucthpc.uct.ac.za"]}]}},"_links":{"self":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/713"}],"collection":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/comments?post=713"}],"version-history":[{"count":3,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/713\/revisions"}],"predecessor-version":[{"id":2545,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/posts\/713\/revisions\/2545"}],"wp:attachment":[{"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/media?parent=713"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/categories?post=713"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ucthpc.uct.ac.za\/index.php\/wp-json\/wp\/v2\/tags?post=713"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}