In an era of ever-increasing need for computational power, companies are beginning to embrace the latest trend in GPU-based processing: The Cloud.
By combining the scale of cloud infrastructure with the computational prowess of GPUs, research teams in deep learning, physical simulation and molecular modeling, to name a few, are cutting the time required to complete complex, compute-intensive workloads. So, how does it work?
To begin, we need to differentiate between a few concepts. The first is the distinction between the GPU and the CPU:
- GPU (Graphics Processing Unit): a specialised electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. In short, the GPU is responsible for rendering visual content.
- CPU (Central Processing Unit): the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by those instructions.
There are several important differences to note between the GPU and CPU. Architecturally, the GPU is composed of hundreds of cores that can handle thousands of threads simultaneously whilst the CPU is composed of just a few cores with copious cache memory that can handle a few software threads at a time. The GPU’s advanced capabilities were initially utilized for 3D game rendering.
Insight64’s principal analyst Nathan Brookwood described GPUs as “…optimised for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place.”
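Brookwood's point can be made concrete. A data-parallel workload applies one operation independently to every element of a large batch, which is exactly the pattern a GPU's thousands of hardware threads exploit. The sketch below is a hypothetical CPU-side illustration of that programming pattern using Python's standard library; it shows the style of computation, not GPU performance.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(x, factor=2.0):
    # One simple operation, applied identically to every element of the
    # batch -- the "same operation over and over" that GPUs accelerate.
    return x * factor

data = list(range(8))

# Sequential: a single core walks the batch element by element.
sequential = [scale(x) for x in data]

# Data-parallel: the batch is split across workers, each applying the
# same operation to its share. On a GPU, thousands of threads would do
# this simultaneously instead of a handful of CPU workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(scale, data))

assert sequential == parallel  # same result either way
```

Because each element is processed independently, the work scales naturally with the number of execution units, which is why GPUs with hundreds of cores outperform CPUs with a few cores on this class of problem.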
Recently, GPUs have been increasingly utilized for compute-intensive operations such as modeling and simulations. However, with the advent of Cloud Computing and its plethora of applications, a new genre of GPU-based uses found its way to the surface: GPU Cloud.
More researchers, individuals and organizations than ever are leveraging the GPU Cloud to carry out high-performance computing (HPC) workloads. It is worth noting that, historically, GPU-based computing required substantial upfront investment in capable local hardware.
This cost has generally deterred researchers and individuals from running their computations on GPUs. What Cloud Computing offers instead is access to existing hardware in a Hardware-as-a-Service fashion, without the upfront capital expense.
Many of the big players, including Google, Amazon and NVIDIA, are now competing to deliver the best value for GPU Cloud services. The race between these companies has intensified as new GPU hardware and accompanying software bundles are brought to market.
For instance, Google Cloud offers virtual machines equipped with NVIDIA GPUs such as the Tesla K80 and P100, delivering tens of teraflops of performance per instance, and claims that workloads which once took days on CPU-only machines can now complete in hours.
Iliass, Consultant, Leyton UK