Nvidia’s Quadro GP100 Announced, Will Supercharge Your Deep Learning and Design Capabilities
Kicking off the SolidWorks World conference currently being held in Los Angeles, California, Nvidia announced that its granddaddy GP100 GPU has finally trickled its way down into a Quadro workstation card.
Known by its street name, “Big Pascal”, the GP100 is the massive GPU powering the Tesla P100, featuring 15.3 billion transistors spread across a 610mm² die. In comparison, the GP102 GPU used in the flagship consumer Titan X has just 12 billion transistors on a 471mm² die.
The new card, known as the Quadro GP100, is so powerful and so much in a class of its own that Nvidia didn’t even bother changing its name to line up with the rest of the Quadro Pxxx naming scheme.
| Model | Quadro GP100 | Quadro P6000 |
|---|---|---|
| Transistor Count | 15.3 billion | 12 billion |
| FP16 | 2x FP32 | 1/64 FP32 |
| FP32 | >10 TFLOPS | 12 TFLOPS |
| FP64 | 1/2 FP32 | 1/32 FP32 |
| VRAM | 16GB HBM2 | 24GB GDDR5X |
At first glance, compared against the GP102 used in the Quadro P6000, the Quadro GP100 doesn’t look all that special: it has a lower CUDA core count and only 16GB of VRAM. Look closer, however, and the Quadro GP100 is actually a beast designed for applications the Quadro P6000 simply isn’t capable of covering.
You see, the Quadro GP100’s appeal lies in its ability to perform computations other than single-precision FP32. In cases where only FP32 is necessary, it will lose out to the Quadro P6000, but it compensates by running FP64 at 1/2 rate and FP16 at 2x rate. FP16 is currently heavily used in deep learning, where the quantity of data matters more than the precision of individual values. FP64, on the other hand, is necessary for high-precision computations, such as applications in 3D modeling and finance. The HBM2 memory in the Quadro GP100 also gives it a leg up in memory bandwidth, crucial for moving data quickly or pushing out pixels in CAD/CAE applications. For those who find 16GB of VRAM lacking, two Quadro GP100s can be combined over NVLink, scaling the combined pool up to 32GB of HBM2 VRAM.
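To see why the precision tiers matter, here is a minimal NumPy sketch (a CPU-side illustration of IEEE half/single/double float widths, not GPU code) showing how FP16 loses small increments that FP64 retains:

```python
import numpy as np

# Near 1.0, FP16 can only resolve steps of ~0.001, so a tiny
# increment is rounded away entirely.
bumped = np.float16(1.0) + np.float16(0.0004)
print(bumped)  # stays at 1.0 -- the increment is below FP16 precision

# Accumulating many small values is where FP64 earns its keep:
# FP16 rounding error compounds until the sum stops growing,
# while FP64 stays essentially exact.
total16 = np.float16(0.0)
total64 = np.float64(0.0)
for _ in range(10000):
    total16 += np.float16(0.1)
    total64 += np.float64(0.1)
print(total16, total64)  # FP16 ends up far below 1000; FP64 is ~1000.0
```

For a neural network, the lost low-order bits are usually tolerable noise; for a financial or engineering simulation that accumulates millions of small terms, they are not, which is the gap the GP100’s 1/2-rate FP64 is built to fill.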
Overall, the Quadro GP100 doesn’t aim to be the best solution for everyone, but rather a solution for those who need the capabilities it offers without having to pay the sky-high price of the Tesla P100. It also puts Nvidia in a better spot to compete with whatever workstation graphics cards AMD may launch following its Vega GPU announcement, expected in Q2. Pricing on the Nvidia Quadro GP100 is undisclosed at this time; however, it’s expected to come in higher than the Quadro P6000, which is currently available online for ~$4,529.
During the event, Nvidia also rounded out the rest of the Pascal Quadro lineup with the Quadro P4000, P2000, P1000, P600, and P400, which will complement the previously launched Quadro P6000 and P5000.