Research Hosts & Clusters
Keck Computational Research Cluster
The Keck Computational Research Cluster is the premier computational resource available to Chapman researchers. Located in the Keck Center for Science and Engineering and extending into multiple public clouds, it provides CPU and GPU processing for research workloads of all kinds.
Cluster Summary (for grant applications):
The Keck High Performance Computing (HPC) cluster is available for researchers to run compute-intensive, large-memory, and high-bandwidth workloads quickly and efficiently. Located in Chapman University's Keck Center for Science & Engineering, the cluster contains over 1,800 Intel Xeon and AMD Epyc compute cores, 6 TiB of RAM, and direct access to all-flash and disk-storage SAN arrays over 10 and 40 Gb/s Ethernet and InfiniBand networks. 16 NVIDIA Tesla A100 and 16 NVIDIA Tesla V100 GPUs are available for workloads that benefit from GPU-accelerated processing. Access to and management of the cluster and its associated applications and services are supported by the University's IS&T Research Technology Support team.
Cluster Overview
- 20 physical on-premises servers
- 1,800 total Intel Xeon and AMD Epyc CPU cores
- 6 TiB of RAM
- 45,568 NVIDIA GPU CUDA cores
- 5,408 NVIDIA GPU Tensor cores

Detailed server breakdown:
- Two Deep-Learning Asus nodes
  - 384 GB of RAM each
  - 40 Intel Xeon cores each
  - 4 NVIDIA Tesla V100 GPUs each
- Two Machine-Learning Supermicro nodes
  - 4 NVIDIA GTX 980 GPUs (2,048 Maxwell CUDA cores, 4 GB RAM) each
- Fifteen General-Purpose Supermicro nodes
  - 256 GB of RAM each
  - 72 Intel Xeon cores each
- One Supermicro Head/Login node
  - 40 Intel Xeon CPU cores
  - 128 GB of RAM
- Bright Computing cluster management engine with SLURM scheduler
- Additional on-demand nodes available, instantiated as needed in AWS, Azure, or Google public clouds
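Because the cluster is scheduled with SLURM, a typical GPU workload is submitted as a batch script. The sketch below is illustrative only: the partition name, GRES specifier, module names, and `train.py` script are assumptions, not the cluster's actual configuration, which should be confirmed with `sinfo` or with Research Technology Support.

```shell
#!/bin/bash
#SBATCH --job-name=train-model    # name shown in squeue output
#SBATCH --partition=gpu           # assumed partition name; run `sinfo` to list real partitions
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8         # CPU cores for data loading/preprocessing
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --mem=64G                 # memory for the job
#SBATCH --time=04:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out        # log file named from job name and job ID

# Load site-provided software; module names here are placeholders
module load cuda python

# Run the workload; train.py is a hypothetical user script
srun python train.py
```

Submit with `sbatch train.slurm` and monitor with `squeue -u $USER`.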
Dedicated Research Server and Cluster Reservations
Dedicated physical, virtual, and cloud server instances can be reserved for the use of individual researchers, research teams, or specific projects.
Reserved research server categories:
- Small
- Medium
- Large

Dedicated physical, virtual, and cloud clusters (composed of multiple dedicated server instances) can also be reserved.
Cloud-based secure research enclave
NSF ACCESS Program
The National Science Foundation (NSF) ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support) program supports all US researchers through various advanced computing and cyberinfrastructure programs. The program offers researchers access to supercomputers, data storage, and other resources through allocation requests.
To request allocations through programs like ACCESS, please contact James Kelly (jakelly@chapman.edu), who is the Chapman University ACCESS liaison.