The PARAM SHAVAK Supercomputing Facility for the AI and ML Centre of Excellence consists of dedicated and hardworking students along with highly skilled faculty of the department. It aims to provide practical knowledge of, and familiarity with, how industry is adopting and utilizing AI and ML. The students are provided dedicated cabins equipped with all the necessary equipment, including systems installed with the latest version of Linux.
For high-performance computation, they have been provided a Param Shavak desktop supercomputer by CDAC. The Param Shavak has dual Intel Xeon Gold 6132 server-grade processors with 14 cores each and a minimum clock speed of 2.6 GHz per core, 96 GB (expandable) DDR4 2666 MHz RAM in a balanced configuration, two 1 GbE network ports, and two x16 PCIe Gen3 slots for GPUs/co-processors, populated with NVIDIA P5000 GPUs. For deployment of their projects, students are provided NVIDIA Jetson devices. They also have access to the latest content published by top publishers, including O'Reilly and Wiley.
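The configuration described above can be checked directly from a terminal session on the machine. The following minimal Python sketch is an illustrative assumption rather than part of the facility documentation; it reports the logical CPU cores, total memory, and installed NVIDIA GPUs on a Linux system such as the Param Shavak, provided the NVIDIA driver (and hence the nvidia-smi utility) is installed.

# Illustrative sketch: report CPU, memory, and GPU resources on a Linux host.
import os
import subprocess

# Logical CPU cores visible to the OS (2 x 14 physical cores,
# plus hyper-threading if it is enabled).
print("Logical CPU cores:", os.cpu_count())

# Total system memory, read from /proc/meminfo on Linux.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal"):
            print(line.strip())
            break

# Installed NVIDIA GPUs, queried via the nvidia-smi utility
# that ships with the NVIDIA driver.
print(subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
    capture_output=True, text=True).stdout)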
The DGX A100 is built on the NVIDIA A100 Tensor Core GPU architecture, which features third-generation Tensor Cores for AI acceleration, enabling unprecedented performance and versatility. It also incorporates NVIDIA NVLink™ and NVIDIA NVSwitch™ technologies for high-speed GPU-to-GPU communication and scalability across large-scale AI workloads, and delivers up to 5 petaFLOPS of AI performance in a single system. The system comes with a comprehensive software stack optimized for AI and data science workloads, including the NVIDIA CUDA-X AI libraries, NVIDIA TensorRT™ for inference optimization, and NVIDIA Triton™ for deploying AI models in production environments. It also includes the NVIDIA Deep Learning SDK, NVIDIA RAPIDS™ for accelerated data analytics, and support for popular deep learning frameworks such as TensorFlow, PyTorch, and MXNet.
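As a small illustration of how this stack is typically exercised, the sketch below uses PyTorch (one of the frameworks named above) to run a matrix multiplication under mixed precision, the class of operation the A100's third-generation Tensor Cores accelerate. The matrix sizes and settings are arbitrary assumptions chosen only for brevity, not a benchmark.

# Illustrative sketch: a mixed-precision matmul of the kind Tensor Cores accelerate.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device == "cuda":
    # autocast casts eligible ops to half precision, letting cuBLAS
    # dispatch the matmul to Tensor Cores on GPUs that have them.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        c = a @ b
else:
    c = a @ b  # CPU fallback so the sketch still runs without a GPU

print(c.shape, c.dtype)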
It offers integrated monitoring and management capabilities, allowing administrators to monitor system health, performance, and resource utilization in real time. The DGX A100 is well-suited for a wide range of AI applications, including deep learning training for image and speech recognition, natural language processing, recommendation systems, and autonomous vehicles. It is also used for data analytics, scientific computing, and other high-performance computing (HPC) workloads requiring massive parallel processing capabilities. The NVIDIA DGX A100 has 8x NVIDIA A100 Tensor Core GPUs with 40 GB of HBM2 memory per GPU, dual AMD EPYC CPUs with up to 128 CPU cores, up to 2 TB of DDR4 memory, up to 30 TB of SSD storage, dual 200 GbE and HDR InfiniBand networking, and a rack-mount server form factor.
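As one hedged example of such monitoring, the sketch below assumes the pynvml Python bindings (the nvidia-ml-py package) are installed and polls each GPU's utilization and memory through NVML, the same library that nvidia-smi and DGX monitoring tools build on.

# Illustrative sketch: per-GPU utilization and memory via NVML (pynvml bindings).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percent
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # bytes
        print(f"GPU {i} ({name}): {util.gpu}% busy, "
              f"{mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB memory")
finally:
    pynvml.nvmlShutdown()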