A vendor-neutral, high-performance parallel filesystem for Linux HPC clusters.
High-performance parallel storage.
Lustre treats each data file as a sequence of blocks. Each block may be stored on one or more object storage servers (OSSs), while file metadata (names, permissions, layout) is recorded by a dedicated metadata server (MDS). Compute nodes mount the Lustre filesystem and use it like any other POSIX-compliant filesystem under Linux, so most Linux applications work without modification.
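The block-placement idea above can be sketched in a few lines. This is a conceptual model only, not Lustre's implementation or API: the function and variable names (`place_file`, `OSS_COUNT`) are illustrative, and real Lustre stripes objects across storage targets with configurable stripe size and count.

```python
# Conceptual sketch of the OSS/MDS split: a file's blocks are distributed
# round-robin across object storage servers, and a separate metadata record
# (the MDS's job) tracks which block lives where.
OSS_COUNT = 3  # three object storage servers (hypothetical)

def place_file(name, num_blocks, oss_count=OSS_COUNT):
    """Return an MDS-style metadata record mapping each block to an OSS."""
    layout = {block: block % oss_count for block in range(num_blocks)}
    return {"name": name, "blocks": num_blocks, "layout": layout}

meta = place_file("results.dat", 7)
# Blocks 0..6 land on OSS 0,1,2,0,1,2,0 -- reads and writes of different
# blocks can therefore be served by different servers in parallel.
print(meta["layout"])
```

Because consecutive blocks sit on different servers, a single large file can be read or written by several OSSs at once, which is the basis of the parallel performance described next.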
As users write files to the filesystem, the compute nodes send blocks of data to the OSSs, which write them to disk. Because the I/O is served by multiple storage devices in parallel, throughput grows with the number of OSSs: adding more OSSs yields near-linear scaling in aggregate storage performance across the HPC cluster.
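The near-linear scaling claim can be illustrated with a back-of-the-envelope model, assuming each OSS sustains a fixed bandwidth and the cluster interconnect imposes an overall cap. All figures here are hypothetical, not measured Lustre numbers.

```python
def aggregate_bandwidth(oss_count, per_oss_mb_s, network_limit_mb_s):
    """Aggregate throughput: the sum of OSS bandwidths, capped by the fabric."""
    return min(oss_count * per_oss_mb_s, network_limit_mb_s)

# With a hypothetical 500 MB/s per OSS and a 10,000 MB/s fabric, scaling is
# linear up to 20 servers; beyond that the network becomes the bottleneck.
for n in (2, 4, 8, 16, 32):
    print(n, aggregate_bandwidth(n, 500, 10_000))
```

The model makes the design point explicit: until the interconnect saturates, each added OSS contributes its full bandwidth to the aggregate.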
Lustre supports a wide range of communication networks, including both Ethernet and InfiniBand. Servers providing MDS and OSS services are typically connected to the same cluster interconnect as the head and compute nodes. Files written by any node can be read and updated by any other node in the cluster, much as with traditional NFS or CIFS storage.
To expand the capacity or performance of the Lustre filesystem, additional OSS machines can be added dynamically when required; the filesystem simply expands into the new space as it is attached.
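Online expansion can be sketched with the same kind of toy model, assuming filesystem capacity is the sum of the attached servers' capacities; the sizes are hypothetical and nothing here is a Lustre interface.

```python
# Toy model of online expansion: attaching another OSS immediately grows
# the filesystem, and newly created files can stripe across the larger set.
osss = [100, 100, 100]        # three OSSs, 100 TB each (hypothetical sizes)

def capacity(servers):
    """Total filesystem capacity in TB."""
    return sum(servers)

before = capacity(osss)       # 300 TB with three servers
osss.append(100)              # attach a fourth OSS at runtime
after = capacity(osss)        # 400 TB -- the filesystem grows in place

# New files spread their blocks round-robin over all four servers.
new_file_layout = [block % len(osss) for block in range(8)]
print(before, after, new_file_layout)
```

Because new writes are distributed over the enlarged server set, expansion grows performance as well as capacity, matching the scaling behaviour described earlier.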
Our Lustre solutions
Industry leading scalability, performance and reliability.
We provide Lustre solutions with:
- Vendor neutral storage and server support
- Native InfiniBand and 10/40-gigabit Ethernet transport
- No single-point-of-failure deployment options
- Scaling from terabytes to petabytes in a single filesystem
- Online expansion to grow performance and capacity
- Packaged solutions available
- Non-Linux support via NFS/CIFS gateway
- Commercial support and training packages