Target Management

In contrast to the metadata nodes, storage nodes can manage multiple storage targets. It is possible to add, remove, and move targets.

See also

Node Management

Adding Storage Targets

Adding a new storage target to BeeGFS is very simple and does not require downtime. You just have to take the following steps. Please read the whole procedure before taking any action.

  1. Make sure that the following parameter is set to false in the file /etc/beegfs/beegfs-mgmtd.toml, so that the management service allows new targets to be added to the system.

    registration-disable = false
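
    To check the current setting, you can for example grep for the option:

    $ grep registration-disable /etc/beegfs/beegfs-mgmtd.toml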
    
  2. If you change the parameter above, please restart the management service afterward.

    $ systemctl restart beegfs-mgmtd
    
  3. Add the storage target on the storage server, as described in Manual Installation.

  4. Restart the storage service.

    $ systemctl restart beegfs-storage
    
  5. Use the command beegfs health capacity to list all nodes and targets in your system and check if your new target is listed. If not, check the logs on the server where you added the target and the server running the management service to see if there are any errors indicating why the target could not be added.

    $ journalctl -u beegfs-mgmtd
    $ journalctl -u beegfs-storage
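
    The capacity overview itself can be displayed with:

    $ beegfs health capacity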
    
  6. Set registration-disable back to true and restart the management service to prevent accidental registration of new targets in your system.

    registration-disable = true
    

Removing Storage Targets

Note

Targets that are members of a mirror group cannot be removed.

  1. Stop (unmount) all clients to prevent filesystem access during this procedure.

  2. Migrate all data off the target you want to remove.

    • See Data Migration for details.

    • You can further parallelize the migration by running several instances of beegfs entry migrate on different directories, as sketched below. It is also helpful to distribute these instances over several client nodes; the BeeGFS client needs to be running on those nodes.
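
      A minimal sketch of such a parallel run, assuming the file system is mounted at /mnt/beegfs (the directory names are only examples; the options that select the source target are described in Data Migration):

      $ beegfs entry migrate <options> /mnt/beegfs/projects &
      $ beegfs entry migrate <options> /mnt/beegfs/home &
      $ wait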

  3. Check one last time that the target is empty:

    $ find /data/storage_target/chunks -type f
    
  4. Stop the corresponding storage service:

    $ systemctl stop beegfs-storage
    
  5. Stop all metadata services.

  6. De-register the target from the system:

    $ beegfs target delete <target>
    
  7. Remove the path of the target from the configuration file of the storage service.

  8. You can now remove the target from the underlying file system. It is advisable not to delete the directory outright, but to move it to a different location, so that it can be restored if anything goes wrong (for example, if the wrong path was accidentally entered), as shown below.
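
    For example (the path is illustrative and matches the check in step 3):

    $ mv /data/storage_target /data/storage_target.removed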

  9. If the storage service also serves other targets, restart it:

    $ systemctl start beegfs-storage
    

    If the removed target was the only one on that storage service, the storage service can also be removed:

    $ beegfs node delete <node>
    
  10. Check that the storage nodes and targets are back online:

    $ beegfs node list
    $ beegfs target list
    
  11. Start all metadata services.

  12. Restart all clients.

Moving a Storage Target to Another Node

Take the following steps to move a storage target to a different storage server instance. Please read the whole procedure before taking any action. Downtime is recommended when moving a storage target to a new server.

  1. Edit the file /etc/beegfs/beegfs-storage.conf on the current machine and remove the path of the moving storage target from the comma-separated list of storage target directories, defined by option storeStorageDirectory. If the service is configured to run in Multi Mode, be careful to use the configuration file and directory of the right instance.
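
    For illustration, with hypothetical target paths, the relevant line might change from

    storeStorageDirectory = /data/target01,/data/mydisk

    to

    storeStorageDirectory = /data/target01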

  2. Stop the storage service on the current machine and unmount the storage target device.

    $ systemctl stop beegfs-storage
    $ umount /data/mydisk
    
  3. Start the storage service afterward if it has some remaining storage targets. If you don’t need the storage service running on the machine, please uninstall it.

    $ systemctl start beegfs-storage
    
  4. Check if the storage service is already installed on the new machine. If not, please install it. Do not start the service at this point.

    If the storage service is already installed on the new machine and you want to run multiple instances of the service on that machine, each using a different node ID (for example, because the instances previously ran on separate machines), see Multi Mode.

    If you do not mind the storage target becoming associated with the node ID used by the new server, you do not need to configure the storage service for multi-mode. In a later step of this procedure, you can simply add the storage target to the existing instance of the storage service. Only configure the service for multi-mode if you want to keep the moved storage target associated with its previous node ID.

  5. Check if the storage target device can be moved or connected to the new machine. If so, mount it on a directory of the new machine and make sure it is configured to be mounted at boot time.
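
    For example, assuming the device and mount point below (both illustrative), mount the target and add a matching line to /etc/fstab:

    $ mount /dev/sdb1 /data/mydisk
    $ echo '/dev/sdb1  /data/mydisk  xfs  defaults,nofail  0 0' >> /etc/fstab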

  6. Otherwise, if the storage target device cannot be moved or connected to the new machine, copy the data from a backup (or from the device, remounted somewhere else) to a storage device of the new machine.

  7. Edit the file /etc/beegfs/beegfs-storage.conf on the new machine and add the path of the moved storage target to the comma-separated list of storage target directories, defined by option storeStorageDirectory. If the service is configured to run in multi-mode, be careful to use the configuration file and directory of the right instance.
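
    For illustration (paths here are hypothetical), if the new machine already serves a target at /data/target03 and the moved target is mounted at /data/mydisk, the line would become:

    storeStorageDirectory = /data/target03,/data/mydisk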

  8. Make sure that the service directory contains the right node ID file.

    If you are moving a storage target to a machine that already has a storage service running and this service is not in multi-mode, remove the nodeNumID file located in the storage target directory being moved. This causes the storage service to recreate the file with its own node ID and to update the storage service associated with the moved storage target on the management node.

    Caution: Do not remove the targetNumID file, or the moved target will lose its identity and likely register under a new ID, which will cause problems.
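
    For example, assuming the moved target is mounted at /data/mydisk:

    $ rm /data/mydisk/nodeNumID    # do not remove targetNumID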

  9. Start or restart the storage service.

    $ systemctl restart beegfs-storage
    
  10. Test if the target is working properly.

    • Check if log files contain any error message.

      $ journalctl -u beegfs-mgmtd
      $ journalctl -u beegfs-meta
      $ journalctl -u beegfs-storage
      $ journalctl -u beegfs-client
      $ journalctl -k    # for client kernel module logs
      
    • List all storage targets of your system and check if the moved one appears online.

      $ beegfs target list --debug
      
    • If the storage node instance that the target was moved from has no targets left, it should be removed from the system. Stop the daemon, remove it from systemd startup, and run:

      $ beegfs node delete <node>
      

When I add new storage targets to BeeGFS, what happens to the data stored on the old targets?

Initially, the data remains stored on its current storage targets. Over time, as files are continuously overwritten, truncated, copied, removed, and created, the new targets gradually fill up and the data becomes more evenly distributed.

In addition, when choosing storage targets for new files, BeeGFS always picks the ones classified as least used in your system, based on the capacity pool settings in the management configuration file /etc/beegfs/beegfs-mgmtd.toml:

[cap-pool-storage-limits]
inodes-low = "10M"
inodes-emergency = "1M"
space-low = "512GiB"
space-emergency = "10GiB"

These options define three levels of usage: normal, low, and emergency. Targets are assigned to the corresponding pools based on the limits defined above. For example, with the defaults shown, a target is moved to the low pool once its free space falls below 512 GiB or its number of free inodes falls below 10 million. If a storage target crosses the low limit and is classified as part of the low pool, it stops being chosen for new files until some of its data is deleted (and it becomes part of the normal pool again) or all other storage targets are also classified as low. The same applies to the emergency limit.

If the [cap-pool-dynamic-storage-limits] block is defined in beegfs-mgmtd.toml, targets are additionally assigned to the low and emergency pools dynamically, once the difference in free space between targets exceeds the configured spread thresholds:

[cap-pool-dynamic-storage-limits]
inodes-normal-threshold = "10M"
inodes-low-threshold = "1M"
space-normal-threshold = "512GiB"
space-low-threshold = "10GiB"

When that happens, the pools’ limits are temporarily raised to new values, based on the following configuration options (also defined under [cap-pool-dynamic-storage-limits]):

inodes-low = "20M"
inodes-emergency = "2M"
space-low = "1024GiB"
space-emergency = "20GiB"

This makes it more likely that the targets with less free space will be moved to a lower pool, if they ever cross the new limits. For more details refer to the management configuration file /etc/beegfs/beegfs-mgmtd.toml.

Note

The default values of these options may be too low or too high, depending on the capacity of your storage devices and your preferences, and should be adjusted accordingly.
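
For example, for very large storage targets you might raise the space limits (the values below are purely illustrative):

[cap-pool-storage-limits]
space-low = "2048GiB"
space-emergency = "100GiB"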

Enabling or changing these limits and thresholds (and restarting the management service) may help you redistribute data among the targets. If you want to speed up the data redistribution, you can take the following steps. Please consider that this procedure may not be necessary and may not have a significant impact on performance.

  • Prevent files from being created on the old storage targets by creating a file named free_space.override in their storage target directories, containing the value 0 (zero), as shown in the example below. This tells the BeeGFS management service that the old storage targets have no space left. Consequently, they will be placed in the emergency pool, causing the new targets from the normal pool (which have free space) to be chosen for storing new files.

    $ echo 0 > /mnt/myraid1/beegfs_storage/free_space.override
    
  • Copy files and directories to temporary locations (within the BeeGFS file system) and then move them back to their original locations, as sketched below. This forces other nodes to be used for storing the new files and frees space on the original nodes.
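
    A minimal sketch, assuming the file system is mounted at /mnt/beegfs and the directory name is only an example:

    $ cp -a /mnt/beegfs/projects /mnt/beegfs/projects.tmp
    $ rm -rf /mnt/beegfs/projects
    $ mv /mnt/beegfs/projects.tmp /mnt/beegfs/projects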

  • Check how evenly the data is distributed among the storage targets by executing the command beegfs health capacity, and repeat the previous step until the distribution reaches an acceptable level.

  • Remove the free_space.override files created earlier.
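
    For example, using the path from the first step:

    $ rm /mnt/myraid1/beegfs_storage/free_space.override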