Kennesaw State University HPC Facilities and Resources (Fall 2023)

The Kennesaw State University (KSU) HPC computing resources represent the University's commitment to research computing. The KSU HPC is a Red Hat Linux based cluster that offers a total capacity of over 50 teraflops to KSU faculty researchers and their teams. The cluster consists of 47 nodes with 118 processors providing 1,704 cores (excluding GPU cores) and 16.7 TB of RAM, and has both CPU and GPU capabilities. Queues are available for standard, high-memory, and GPU jobs. The HPC is built on a fast network for data and interconnect traffic. A large storage array is provided for user home directories, and fast storage is available for use during job runtime. Power for cooling and servers is backed by battery systems and natural gas generators. On- and off-campus access to the cluster is allowed only through secure protocols.
 
Software is provided through environment modules, which allow multiple versions of the same software to coexist and avoid dependency conflicts. There are 150 software programs available, including titles for Astronomy, Biology, Chemistry, Math, Statistics, Engineering, and programming languages. Some popular titles include Gaussian, MATLAB, Mathematica, R, TensorFlow, COMSOL, HH-Suite, MAFFT, LAMMPS, OpenFOAM, PHYLIP, and Trinity. Cluster management and job scheduling software is used to provide free access to this shared resource.
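As a sketch of how the environment-module workflow described above typically looks in practice (the document does not name the module system, and the specific version strings below are illustrative, not a listing of what is actually installed):

```shell
# Discover what software modules the cluster provides
module avail

# Load a specific version to avoid dependency conflicts
# (R is listed among the available titles; the version is hypothetical)
module load R/4.2.1

# Show which modules are currently loaded in this session
module list

# Unload everything before configuring a different job environment
module purge
```

Loading an explicit version rather than a bare name (e.g. `module load R/4.2.1` instead of `module load R`) keeps job scripts reproducible when the cluster's default version changes.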

KSU has established a high-speed pathway to Internet2 and other heavily used commercial content providers. The Kennesaw and Marietta campuses are now directly connected through SoX to Internet2 and have established connections for both the Regional Research and Education Network (R&E) routes and Internet2 Peer Exchange (I2PX) routes. The current connection speed is 10 Gb/s. This connection allows rapid sharing of large amounts of data between KSU and other participating research institutions worldwide. It is now available to on-campus researchers, and eligible traffic is routed through it automatically.

Kennesaw State University recommends that users of the university-level HPC include
the following acknowledgement statement: “This work was supported in part by research
computing resources and technical expertise via a partnership between Kennesaw State
University’s Office of the Vice President for Research and the Office of the CIO and Vice
President for Information Technology [1].” and cite using the appropriate citation format.

  • Node configuration by queue (NODES lists node numbers; GPU cores are not
    included in the CORES totals):

        QUEUE   NODES   CPUs PER NODE                           CORES   RAM (GB)
        batch   34-51   2 Xeon Gold 6148 (2.4 GHz)              40      192
        batch   52-70   2 Xeon Gold 6126 (2.6 GHz)              24      192
        batch   71-77   4 Xeon Gold 6226 (2.70 GHz)             48      768
        himem   78      4 Xeon Gold 6226 (2.70 GHz)             48      1,537
        gpu     79-82   4 NVIDIA V100S (5,120 GPU cores each)   -       768
        Total   47 nodes, 118 processors                        1,704   16,705
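The document does not name the job scheduler, only that queues exist for standard, high-memory, and GPU work. As a hedged sketch only, assuming a Slurm-style scheduler, a job submission script targeting the queues in the table above might look like the following (job name, resource amounts, module version, and script name are all hypothetical):

```shell
#!/bin/bash
# Illustrative job script; the actual scheduler and its directive
# syntax are not specified in this document.
#SBATCH --job-name=example-analysis
#SBATCH --partition=batch        # queue names from the table: batch, himem, gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24     # per-node core counts vary by queue (24-48)
#SBATCH --mem=64G                # himem nodes offer up to ~1.5 TB
#SBATCH --time=04:00:00

# Set up the software environment inside the job (version illustrative)
module purge
module load R/4.2.1

Rscript analysis.R
```

For GPU work, the same sketch would target the `gpu` partition and request an accelerator (e.g. a `--gres=gpu:1`-style directive under Slurm); jobs needing more than 768 GB of RAM would target `himem`.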


Previous Facilities Statements

  • Authors:

    Tom Boyle, Data Compliance and Computing Operations, Center for Research Computing, 黑料网, tboyle@kennesaw.edu.

    Dr. Ramazan Aygun, Director of Center for Research Computing, Associate Professor, Department of Computer Science, College of Computing and Software Engineering, Kennesaw State University, raygun@kennesaw.edu.

    The Kennesaw State University HPC computing resources represent the University’s commitment to research computing. The KSU HPC is a RedHat Linux based cluster that offers a total capacity of over 50 Teraflops to KSU faculty researchers and their teams. The cluster consists of around 50 nodes with 120 processors having 1768 cores (excluding GPU cores) and 12.8TB RAM and has both CPU and GPU capabilities. There are queues available for standard, high memory and GPU jobs. The HPC is built on a fast network for data and interconnect traffic. A large storage array is provided for user home directories and a fast storage is available for use by each node during job runtime. Power for Cooling and Servers is backed by battery systems and natural gas generators. On and off campus access to the cluster is allowed only through secure protocols and utilizes Duo Authentication.

    Software is provided through environment modules to help provide versions of the same software and avoid conflicts with dependencies. There are around 200 software programs available that include titles for Astronomy, Biology, Chemistry, Math, Statistics, Physics, Engineering and programming languages. Some popular titles include: Gaussian, MATLAB, Mathematica, R, TensorFlow, COMSOL, LS-DYNA, HH-Suite, MAFFT, LAMMPS, OpenFoam, PHYLIP and Trinity. There is cluster management and job scheduling software used to provide free access to this shared resource.

    Kennesaw State University recommends that users of the university-level HPC include the following acknowledgement statement: “This work was supported in part by research computing resources and technical expertise via a partnership between Kennesaw State University’s Office of the Vice President for Research and the Office of the CIO and Vice President for Information Technology.”

  • The Kennesaw State University HPC computing resources represent the University’s commitment to research computing. The KSU HPC is a RedHat Linux based cluster that offers a total capacity of over 50 Teraflops to KSU faculty researchers and their teams. The cluster consists of over 50 nodes with 110 processors having 1512 cores (excluding GPU cores) and 10.3TB RAM and has both CPU and GPU capabilities. There are queues available for standard, high memory and GPU jobs. The HPC is built on a fast Infiniband network for data and interconnect traffic. A large storage array is provided for user home directories and a fast storage is available for use during job runtime. Power for Cooling and Servers is backed by battery systems and natural gas generators. On and off campus access to the cluster is allowed only through secure protocols.

    Software is provided through environment modules to help provide versions of the same software and avoid conflicts with dependencies. There are 150 software programs available that include titles for Astronomy, Biology, Chemistry, Math, Statistics, Engineering and programming languages. Some popular titles include: Gaussian, MATLAB, Mathematica, R, TensorFlow, COMSOL, LS-DYNA, HH-Suite, MAFFT, LAMMPS, OpenFoam, PHYLIP and Trinity. There is cluster management and job scheduling software used to provide free access to this shared resource.

    Kennesaw State University recommends that users of the university-level HPC include the following acknowledgement statement: “This work was supported in part by research computing resources and technical expertise via a partnership between Kennesaw State University’s Office of the Vice President for Research and the Office of the CIO and Vice President for Information Technology [1].” and cite using the appropriate citation format.