Transform your PC into an AI supercomputer
NVIDIA recently introduced TITAN V, billed as the world’s most powerful GPU for PCs, driven by its advanced GPU architecture, NVIDIA Volta.
Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation.
Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, nine times that of its predecessor.
Huang states, “Our vision for Volta was to push the outer limits of high-performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links.
“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
TITAN V’s Volta architecture features a major redesign of the streaming multiprocessor that is at the centre of the GPU.
It doubles the energy efficiency of the previous generation Pascal design, enabling dramatic boosts in performance in the same power envelope.
New Tensor Cores designed specifically for deep learning deliver up to nine times higher peak teraflops.
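At their core, these units perform a fused matrix multiply-accumulate on small tiles: the inputs are half precision (FP16), while the running sum is kept in full FP32 precision. As a rough CPU-side sketch (this is an illustration in NumPy, not actual GPU code, and the 4×4 tile size reflects the publicly described Volta operation):

```python
import numpy as np

def tensor_core_mma(a_fp16, b_fp16, c_fp32):
    """Emulate one 4x4 Tensor Core operation, D = A @ B + C.

    A and B arrive as FP16; the multiply-accumulate happens in FP32,
    mirroring the mixed-precision path that preserves accuracy while
    halving input bandwidth.
    """
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)).astype(np.float16)
b = rng.standard_normal((4, 4)).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)

d = tensor_core_mma(a, b, c)
print(d.dtype)  # float32
```

In deep learning frameworks this pattern is applied wholesale to the large matrix multiplies inside training and inference, which is where the claimed teraflops uplift comes from.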
With independent parallel integer and floating-point data paths, Volta is also much more efficient on workloads with a mix of computation and addressing calculations.
Its new combined L1 data cache and shared memory unit significantly improve performance while also simplifying programming.
Fabricated on a new TSMC 12-nanometer FFN high-performance manufacturing process customized for NVIDIA, TITAN V also incorporates Volta’s highly tuned 12GB HBM2 memory subsystem for advanced memory bandwidth utilization.
Users of TITAN V can gain immediate access to the latest GPU-optimized AI, deep learning and HPC software by signing up at no charge for an NVIDIA GPU Cloud account.
This container registry includes NVIDIA-optimized deep learning frameworks, third-party managed HPC applications, NVIDIA HPC visualisation tools and the NVIDIA TensorRT inferencing optimiser.