Target Management

In contrast to the metadata nodes, storage nodes can manage multiple storage targets. It is possible to add, remove, and move targets.

See also

Node Management

Adding Storage Targets

Adding a new storage target to BeeGFS is simple and does not require downtime. Just take the following steps, and please read the whole procedure before taking any action.

  1. Make sure that the following property is set to true in the file /etc/beegfs/beegfs-mgmtd.conf, so that the management service allows new targets to be added to the system. You will find detailed documentation of this property at the bottom of the file.

    sysAllowNewTargets = true
    
  2. If you change the property above, please restart the management service afterward.

    # systemctl restart beegfs-mgmtd
    
  3. Add the storage target on the storage server, as described in Manual Installation.
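
    As described there, the beegfs-setup-storage helper can be used to initialize and register the new target directory. A sketch, where the path, service ID, target ID, and management host are all placeholders for your setup:

    # /opt/beegfs/sbin/beegfs-setup-storage -p /data/mynewdisk -s 3 -i 302 -m mgmtd01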

  4. Restart the storage service.

    # systemctl restart beegfs-storage
    
  5. Use the command beegfs-df to list all nodes and targets in your system and check whether your new target is listed. If it is not, check the log files of the added service and of the management service for error messages indicating why the target could not be added.

    $ less /var/log/beegfs-mgmtd.log
    $ less /var/log/beegfs-storage.log
    
  6. Set the management property back to false to prevent accidental registration of new targets in your system. Restart the management service afterward.

    sysAllowNewTargets = false
    

Removing Storage Targets

Note

Targets that are members of a mirror group cannot be removed.

  1. Stop (unmount) all clients to prevent filesystem access during this procedure.

  2. Migrate all data off the target you want to remove.

    • See Data Migration for details.

    • You can parallelize the migration by running several instances of beegfs-ctl on different directories. It also helps to distribute these instances over several client nodes; the BeeGFS client needs to be running on those nodes.
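
    • As a sketch, migrating everything below a directory away from the target with ID 5 could look like this (the target ID and path are placeholders):

      # beegfs-ctl --migrate --targetid=5 /mnt/beegfs/mydir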

  3. Check one last time that the target is empty:

    # find /data/storage_target/chunks -type f
    
  4. Stop the corresponding storage service:

    # systemctl stop beegfs-storage
    
  5. Stop all metadata services.

  6. De-register the target from the system:

    # beegfs-ctl --removetarget <targetID>
    
  7. Remove the path of the target from the configuration file of the storage service.

  8. You can now remove the target from the underlying file system. It is advisable to not outright delete the directory, but move it to a different location. This way it can be restored if anything goes wrong (e.g., a wrong path was accidentally entered).
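
    For example, assuming the target directory was /data/storage_target (a placeholder), it could be moved aside like this:

    # mv /data/storage_target /data/storage_target.removed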

  9. If the storage service also serves other targets, restart it:

    # systemctl start beegfs-storage
    

    If the removed target was the only one on that storage service, the storage service can also be removed:

    # beegfs-ctl --removenode --nodetype=storage <nodeID>
    
  10. Check that everything is back online:

    # beegfs-ctl --listtargets
    # beegfs-ctl --listnodes --nodetype=storage
    
  11. Start all metadata services.

  12. Restart all clients.

Moving a Storage Target to Another Node

Take the following steps to move a storage target to a different storage server instance. Please read the whole procedure before taking any action. We suggest scheduling downtime for moving a storage target to a new server!

  1. Edit the file /etc/beegfs/beegfs-storage.conf on the current machine and remove the path of the moving storage target from the comma-separated list of storage target directories, defined by option storeStorageDirectory. If the service is configured to run in Multi Mode, be careful to use the configuration file and directory of the right instance.
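
    A sketch of such an edit, with placeholder paths, where /data/mydisk is the target being moved:

    # before the move
    storeStorageDirectory = /data/disk1,/data/mydisk
    # after removing the moved target
    storeStorageDirectory = /data/disk1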

  2. Stop the storage service on the current machine and unmount the storage target device.

    # systemctl stop beegfs-storage
    # umount /data/mydisk
    
  3. Afterward, start the storage service again if it has remaining storage targets. If you don't need the storage service running on this machine, uninstall it.

    # systemctl start beegfs-storage
    
  4. Check if the storage service is already installed on the new machine. If not, please install it. Do not start the service at this point.

    If the storage service is already installed on the new server machine, and you previously ran multiple service instances on different machines that you now want to run on the same machine, each using a different node ID, see Multi Mode.

    If you don't mind having the storage target associated with the node ID used by the new server, you don't need to configure the storage service for multi mode; in a later step of this procedure, you will simply add the storage target to the existing instance of the storage service. Only configure the service for multi mode if you really want to keep the moved storage target associated with its previous node ID.

  5. Check if the storage target device can be moved or connected to the new machine. If so, mount it on a directory of the new machine and make sure it is configured to be mounted at boot time.
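
    A sketch, assuming the device is /dev/sdb1, formatted with xfs, and the chosen mount point is /data/mydisk (all placeholders):

    # mkdir -p /data/mydisk
    # mount /dev/sdb1 /data/mydisk
    # echo '/dev/sdb1 /data/mydisk xfs defaults 0 0' >> /etc/fstab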

  6. Otherwise, if the storage target device cannot be moved or connected to the new machine, copy the data from a backup (or from the device, remounted somewhere else) to a storage device of the new machine.
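
    A copy sketch, assuming the old device is temporarily remounted at /mnt/olddisk and the new storage device at /data/mydisk (both placeholders); the rsync options preserve permissions, ownership, ACLs, extended attributes, and hard links:

    # rsync -aAXH /mnt/olddisk/ /data/mydisk/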

  7. Edit the file /etc/beegfs/beegfs-storage.conf on the new machine and add the path of the moved storage target to the comma-separated list of storage target directories, defined by option storeStorageDirectory. If the service is configured to run in multi-mode, be careful to use the configuration file and directory of the right instance.

  8. Make sure that the service directory contains the right nodeID file.

    If you are moving a storage target to a machine that already has a storage service running and this service is not in multi mode, remove all files whose names match the patterns *node*ID and *Node*ID located in the storage target directory being moved. This causes the storage service to recreate those files with the node ID used by the existing storage service daemon.

    In any other case, make sure that the file nodeID exists in the service directory and that it contains the ID that the storage service daemon should use to identify itself to the management service. If the file does not exist, create it with the same content as the originalNodeID file, for example as shown below.
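
    A sketch of that second case, assuming the service directory is /data/mydisk (a placeholder):

    # cat /data/mydisk/originalNodeID
    # cp -p /data/mydisk/originalNodeID /data/mydisk/nodeID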

  9. Start or restart the storage service.

    # systemctl restart beegfs-storage
    
  10. Test if the target is working properly.

    • Check if log files contain any error message.

      $ less /var/log/beegfs-mgmtd.log
      $ less /var/log/beegfs-meta.log
      $ less /var/log/beegfs-storage.log
      $ less /var/log/beegfs-client.log
      
    • List all storage targets of your system and check if the moved one appears online.

      $ beegfs-ctl --listtargets --longnodes --state
      
    • If the storage node instance the target was moved from has no targets left, it should be removed from the system. Stop the daemon, disable it in systemd, and run:

      # beegfs-ctl --removenode <NodeID>
      

When I add new storage targets to BeeGFS, what happens to the data stored on the old targets?

Initially, the data remains on its current storage targets. Over time, as files are overwritten, truncated, copied, removed, and created, the new targets gradually fill up and the data becomes more evenly distributed.

In addition, when choosing storage targets for new files, BeeGFS always picks the ones classified as least used in your system, according to the usage limits defined by the following options in the configuration file /etc/beegfs/beegfs-mgmtd.conf:

tuneStorageSpaceLowLimit
tuneStorageSpaceEmergencyLimit
tuneStorageInodesLowLimit
tuneStorageInodesEmergencyLimit
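
An illustrative excerpt; these sizes are examples only, not recommendations, and must be sized to the capacity of your targets:

    tuneStorageSpaceLowLimit         = 1T
    tuneStorageSpaceEmergencyLimit   = 20G
    tuneStorageInodesLowLimit        = 10M
    tuneStorageInodesEmergencyLimit  = 1M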

These options define three levels of usage: normal, low, and emergency. Initially, all targets are classified as part of the normal pool. When a storage target crosses the low limit, it is classified as part of the low pool and stops being chosen for new files, until either some of its data is deleted (and it becomes part of the normal pool again) or all other storage targets are also classified as low. The same applies to the emergency limit.

Storage targets can also be assigned to the low and emergency pools if the option tuneStorageDynamicPools is set to true and the targets hit the maximum difference in free space defined by the spread threshold options below.

tuneStorageSpaceNormalSpreadThreshold
tuneStorageSpaceLowSpreadThreshold

When that happens, the pools’ limits are temporarily raised to new values, defined by the options below. This makes it more likely that the targets with less free space will be moved to a lower pool, if they ever cross the new limits.

tuneStorageSpaceLowDynamicLimit
tuneStorageSpaceEmergencyDynamicLimit
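
Again purely illustrative values, combining the dynamic pool options mentioned above:

    tuneStorageDynamicPools               = true
    tuneStorageSpaceNormalSpreadThreshold = 1T
    tuneStorageSpaceLowSpreadThreshold    = 512G
    tuneStorageSpaceLowDynamicLimit       = 2T
    tuneStorageSpaceEmergencyDynamicLimit = 1T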

These options are documented at the bottom of the config file /etc/beegfs/beegfs-mgmtd.conf. Changing these limits and thresholds (and restarting the management service) may help you redistribute data among the targets. The default values of these options may be too low for the capacity of your storage devices and should be adjusted accordingly.

In case you want to speed up the data redistribution, you can take the following steps. Please consider that this procedure may not be necessary and may not have a significant impact on performance.

  • Prevent files from being created on the old storage targets by creating a file named free_space.override in their storage target directories, containing the value 0 (zero), as in the example below. This tells the BeeGFS management service that the old storage targets have no space left. Consequently, they are placed in the emergency pool, causing the new targets from the normal pool (with free space) to be chosen for new files.

    $ echo 0 > /mnt/myraid1/beegfs_storage/free_space.override
    
  • Copy files and directories to temporary locations (within the BeeGFS file system) and then move them back to their original locations. This forces other targets to be used for the newly created files and frees space on the original targets, as in the sketch below.
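
    A sketch of this rewrite trick for a single directory, where /mnt/beegfs is the client mount point and all names are placeholders:

    $ cp -a /mnt/beegfs/mydir /mnt/beegfs/mydir.tmp
    $ rm -rf /mnt/beegfs/mydir
    $ mv /mnt/beegfs/mydir.tmp /mnt/beegfs/mydir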

  • Check how evenly the data is distributed among the storage targets by executing the command beegfs-df, and repeat the previous step until the distribution reaches an acceptable level.

  • Remove the files free_space.override created earlier.