Digital File Storage
HPCC provides a variety of secure file storage options for research data, along with fast connections for high-speed file input/output (I/O). Users have access to replicated, high-capacity storage that can be shared among group members or kept private to each user. High-performance storage is also available for temporarily staging data that needs to be accessed quickly. In addition, each compute node has an attached local disk that can be used with Hadoop on Demand for data-intensive jobs. Storage options include:
- Home directory
- Research space
- Local disk
- Local RAM disks
- Scratch space
- Small I/O Scratch space
- Additional Storage Options: Storage Quota Increase Request
See the User Documentation for more information.
Personal data should be stored in /mnt/home/[MSU NetID]. Here, each user has their own 50 GB of space (more available upon request), which is backed up daily; a ZFS snapshot is also taken hourly.
A 1 TB block of space to be shared among members of a research group may be obtained if the group can provide adequate justification. Additional space may also be purchased. Files on this system, located at /mnt/research/[groupname], are also backed up daily. Both /mnt/home and /mnt/research have compression enabled, which improves performance and yields significant space savings.
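To see how much of an allocation a directory tree is using, a generic `du` check works from any login node. This is a minimal sketch: on HPCC the directory of interest would be /mnt/home/[MSU NetID] or /mnt/research/[groupname], but `$HOME` stands in here so the example runs anywhere.

```shell
# Sketch: check on-disk usage of a directory tree against its quota.
# On HPCC, point TARGET at /mnt/home/$USER or /mnt/research/<groupname>;
# $HOME is an illustrative stand-in.
TARGET="${TARGET:-$HOME}"
du -sh "$TARGET"    # total usage of the tree, human-readable
```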
Between 100 GB and 250 GB (or more) of local scratch space is available on each cluster node. This space is native to each node and is not accessible from other nodes. It is transient, volatile storage optimized for smaller-scale I/O. Files may be stored for a maximum of 8 days on /mnt/local. This space is regularly and routinely erased to ensure a maximum amount of free space for users.
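The usual pattern with node-local storage is to do the heavy small-file I/O on the local disk and move only the final result to shared storage. The sketch below assumes this pattern; all file names are illustrative, and /tmp stands in for the node's /mnt/local (or $TMPDIR) so the example runs anywhere.

```shell
# Sketch: do heavy small-file I/O on the node-local disk, then copy only
# the final result to shared storage. On a compute node the work area
# would live under /mnt/local or $TMPDIR; file names are illustrative.
WORK="${TMPDIR:-/tmp}/localwork_$$"
mkdir -p "$WORK"

# Many small writes stay on the local disk, off the shared filesystems.
for i in 1 2 3; do
    echo "chunk $i" > "$WORK/part_$i.txt"
done
cat "$WORK"/part_*.txt > "$WORK/combined.txt"

# Copy only the combined result to shared storage (e.g. /mnt/home/$USER).
DEST="${DEST:-$(mktemp -d)}"
cp "$WORK/combined.txt" "$DEST/"
rm -rf "$WORK"    # /mnt/local is purged regularly anyway
```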
Scratch space is used for fast temporary storage while jobs are running. Using scratch space allows users to exceed their disk quota for jobs that require a large amount of disk space. We recommend using scratch space while a job is running and copying the results back to the user's home directory afterwards. A user should store files here when a job requires those files to be accessible from all nodes during a computation. Users have a 1 million file quota and a 50 TB quota on the /mnt/scratch and /mnt/ls15 space. Users needing higher limits may request increases via the contact form.
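The stage-in/stage-out workflow described above can be sketched as a short script. This is an illustrative sketch only: on HPCC the scratch root would be /mnt/scratch/$USER, but a temporary directory stands in here so the sketch runs anywhere, and the "computation" is a placeholder.

```shell
# Sketch of the stage-in / stage-out pattern for scratch space.
# On HPCC, SCRATCH would be /mnt/scratch/$USER; the mktemp fallback and
# all file names are illustrative stand-ins.
SCRATCH="${SCRATCH:-$(mktemp -d)}"    # e.g. /mnt/scratch/$USER on HPCC
JOB_DIR="$SCRATCH/job_$$"
mkdir -p "$JOB_DIR"

# Stage input into scratch before the run.
echo "sample input" > "$JOB_DIR/input.dat"

# Run the job against scratch (placeholder computation).
tr 'a-z' 'A-Z' < "$JOB_DIR/input.dat" > "$JOB_DIR/output.dat"

# Copy results back to home-directory storage when the job finishes;
# scratch is not backed up and files are purged after 45 days.
RESULTS="${RESULTS:-$(mktemp -d)}"    # e.g. $HOME/results on HPCC
cp "$JOB_DIR/output.dat" "$RESULTS/"
rm -rf "$JOB_DIR"
```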
Unlike the home and research directories, the scratch space is not intended for long-term storage, and thus is not backed up. Files are automatically purged after 45 days. The parallel file system used for the scratch space is Lustre.
ffs17 is intended for fast temporary storage when running jobs/programs that generate or read many small files. This space is not backed up. Like /mnt/local or $TMPDIR, it is optimized for high I/O on a large number of small files, but it can also be accessed from multiple nodes at the same time. ffs17 has a maximum storage capacity of 15 TB, and users have a hard limit of 100 GB for files on /mnt/ffs17/users or /mnt/ffs17/groups. Users needing higher limits may request increases via the contact form. Unlike the scratch and home directories, you must create your own directory here before running jobs. See Flash File System for more information on creating a space, running jobs, and changing file permissions on ffs17.
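Creating your own directory on ffs17 amounts to a one-time `mkdir`. As a hedged sketch: on HPCC the parent would be /mnt/ffs17/users, but a temporary directory stands in here so the example runs anywhere, and the permission bits chosen are an illustrative default, not a documented requirement.

```shell
# Sketch: create your personal directory on ffs17 before running jobs.
# On HPCC, FFS_ROOT would be /mnt/ffs17/users (or /mnt/ffs17/groups for
# a group space); the mktemp fallback and mode 700 are illustrative.
FFS_ROOT="${FFS_ROOT:-$(mktemp -d)}"   # e.g. /mnt/ffs17/users on HPCC
ME="${USER:-$(id -un)}"
mkdir -p "$FFS_ROOT/$ME"
chmod 700 "$FFS_ROOT/$ME"              # keep the space private to you
```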
Additional Storage Options
For MSU researchers, up to 1 TB of secure storage can be allocated per PI for free. Backups of data files are stored off-site. Additional storage can be purchased at an annual rate of $125/TB. External buyers may be required to pay an additional overhead charge. Please contact iCER about current rates.
To increase your storage quota up to 1 TB, please complete the Quota Increase Request Form.
To make a storage increase request beyond the first terabyte, please complete the Large Quota Increase Request Form.