System Requirements¶
Supported Distributions and Kernels¶
List of Supported Linux Distributions and Kernels
Kernel support¶
Officially supported are:
The latest kernels that come with the distributions mentioned above
The latest mainline LTS kernels
If you want to use a different kernel, we provide a toolset to compile the BeeGFS Client module yourself.
Self-compiled kernels are not officially supported.
Disk Space Requirements¶
Metadata nodes¶
The required metadata capacity depends on factors such as the average file size and the total number of files you want to be able to create.
In general, we recommend reserving about 0.3% to 0.5% of the total storage capacity for metadata. However, this number is based on statistics gathered from file systems at different cluster sites and thus might or might not fit your case. Sometimes it is far more than needed, so you might want to start with a smaller metadata capacity and simply be prepared to add more metadata capacity later if it is actually needed.
As a rule of thumb, 500GB of metadata capacity is sufficient for about 150 million files, provided that the underlying metadata storage is formatted with ext4 according to the recommendations in the metadata server tuning guide: Metadata Node Tuning.
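For illustration, here is a quick back-of-the-envelope estimate based on the two rules of thumb above. The 2 PB total capacity and 500 million files are purely example assumptions, not recommendations:

    # Hypothetical example: 2 PB (2000 TB) total capacity and 500 million files
    echo "2000 * 0.003" | bc     # lower bound: 6 TB of metadata capacity
    echo "2000 * 0.005" | bc     # upper bound: 10 TB of metadata capacity
    echo "500 * 500 / 150" | bc  # ~1666 GB for 500 million files (500GB per ~150 million files)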
More specifically, for every file that a user creates, one metadata file is created on one of the metadata servers. For every directory that a user creates, two directories and two metadata files are created on one of the metadata servers.
For file metadata, if the underlying local metadata server file system (e.g., ext4) is formatted according to our recommendations with large inodes (e.g., mkfs.ext4 -I 512, as described in Metadata Server Tuning), then the BeeGFS metadata is stored as an extended attribute that fits completely into the inode of the underlying local file system and does not use any additional disk space. If the underlying metadata server file system is not formatted with large inodes, the underlying local file system needs to allocate a full block (usually 4KB) to store the BeeGFS metadata in addition to using up one inode.
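As a minimal sketch of what formatting with large inodes might look like (the device name /dev/sdX is a placeholder; the complete set of recommended formatting options is listed in Metadata Node Tuning):

    # Format a metadata target with 512-byte inodes so that BeeGFS metadata
    # can be stored inline as an extended attribute (/dev/sdX is a placeholder)
    mkfs.ext4 -I 512 /dev/sdX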
Access Control Lists (ACLs) and user extended attributes are also stored as extended attributes in the corresponding metadata files and thus add to the required disk space, depending on whether they still fit into the inode or whether a new block needs to be allocated.
For each directory, one inode and one directory contents block (usually 4KB) are used on the underlying local file system until there are so many sub-entries in the directory that another directory contents block needs to be allocated. How many entries fit into one block depends on factors such as file name length, but usually 10 or more entries fit into one directory block. So if a user creates, for example, many directories with only one file inside each, this significantly increases the number of inodes and the amount of disk space used on the underlying local file system compared to storing the same number of files in fewer directories.
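The effect can be observed on any local ext4 file system (independent of BeeGFS); the mount point /mnt/test below is just an example:

    df -i /mnt/test                  # inode usage before
    # 1000 directories with one file each consume roughly 2000 inodes,
    # while 1000 files in a single directory would only consume about 1001
    for i in $(seq 1 1000); do
        mkdir /mnt/test/dir$i
        touch /mnt/test/dir$i/file
    done
    df -i /mnt/test                  # inode usage after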
Note that while ext4 is generally recommended for metadata storage because of its performance advantages for BeeGFS metadata workloads compared to other local Linux file systems, XFS has the advantage of using a dynamic number of inodes, meaning new inodes can be created as long as there is free disk space. Ext4, on the other hand, has a static maximum number of inodes that is defined when the file system is formatted (e.g., mkfs.ext4 -i <number>). So with ext4 it can happen that you still have disk space left but run out of available inodes, or vice versa. Use df -ih to see information on available inodes for mounted file systems.
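For example, to check the remaining inodes and, if necessary, format an ext4 target with a lower bytes-per-inode ratio (the device name and the ratio below are placeholders, not tuning recommendations):

    df -ih                        # shows used and available inodes per mounted file system
    mkfs.ext4 -i 2048 /dev/sdX    # one inode per 2048 bytes of disk space (placeholder values)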
Storage targets¶
The disk space of the storage targets is completely usable for file contents. Chunks of buddy mirrored files are written to two targets and thus consume twice their size in disk space. Sparse files are handled efficiently: empty blocks are not allocated on the targets and do not consume any space.
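A simple way to see the sparse file handling in action, assuming a BeeGFS mount point at /mnt/beegfs (an example path):

    truncate -s 10G /mnt/beegfs/sparse_file    # create a 10 GiB sparse file
    ls -lh /mnt/beegfs/sparse_file             # reports the logical size: 10G
    du -h /mnt/beegfs/sparse_file              # reports the allocated size: close to 0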
Other requirements¶
Synchronized system clocks¶
The system time of all BeeGFS client and server nodes needs to be synchronized for various reasons, e.g., to provide consistent file modification timestamps. Make sure all server clocks are set to the correct time and date (e.g., with date or ntpdate) before starting up BeeGFS services. A service like ntp can then be used to keep the clocks of all machines in sync.
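For example, a one-time synchronization before starting the services and a persistent time service afterwards might look like this (pool.ntp.org and the service name ntpd are placeholders; use your site's time server and the time service shipped with your distribution):

    ntpdate pool.ntp.org            # one-time synchronization against a time server
    date                            # verify the resulting time and date
    systemctl enable --now ntpd     # keep the clock in sync from now on (service name varies by distribution)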