IBM and Stone Ridge Technology have reduced the number of processors needed to run reservoir simulations for the oil, gas and water industries from more than 700,000 to just 60, along with 120 graphics processing units (GPUs).

Working with Nvidia, the companies said they had managed to run the simulation using one-tenth of the power and one-hundredth of the space.

Nvidia said the demonstration showed how GPUs could be used to simulate billion-cell models in a fraction of the previously published time, while delivering 10 times the performance and efficiency of legacy CPU-based high-performance computers.

The breakthrough used 60 Power processors and 120 GPU accelerators, shattering the previous supercomputer record, according to IBM.

Sumit Gupta, IBM vice-president, high-performance computing, artificial intelligence and analytics, said: “The previous record used more than 700,000 processors in a supercomputer installation that occupied nearly half a football field. Stone Ridge did this calculation on two racks of IBM Power Systems machines that could fit in the space of half a ping-pong table.”

Stone Ridge, developer of the Echelon petroleum reservoir simulation software, completed the billion-cell reservoir simulation in 92 minutes using 30 IBM Power Systems S822LC for HPC servers equipped with 60 Power processors and 120 Nvidia Tesla P100 GPU accelerators.

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer,” said Vincent Natoli, president of Stone Ridge Technology. “That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously, so they can have insights to produce oil more efficiently, open up fewer new fields and make responsible use of limited resources. By increasing compute performance and efficiency by more than an order of magnitude, we are democratising HPC for the reservoir simulation community.”

According to Nvidia, the demonstration proves that GPUs can be used to accelerate complex application codes such as reservoir simulators. Previously, GPUs had mainly been applied to simpler, readily parallelisable applications such as seismic imaging.
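Echelon's numerics are proprietary, but the property that makes reservoir simulation amenable to GPUs can be sketched with a toy example: each grid cell's next state depends only on its immediate neighbours, so every cell can be updated at the same time, one GPU thread per cell. The sketch below uses an illustrative explicit diffusion update on a 1-D pressure field (the function name `diffuse_step` and coefficient `alpha` are assumptions for illustration, not part of any real simulator).

```python
import numpy as np

def diffuse_step(p, alpha=0.1):
    """One explicit finite-difference update of a 1-D pressure field.

    Every interior cell reads only its two neighbours from the previous
    step, so all cells can be updated independently and in parallel --
    the structure a GPU exploits by assigning one thread per cell.
    """
    new = p.copy()
    # Vectorised stencil: new[i] = p[i] + alpha * (p[i-1] - 2*p[i] + p[i+1])
    new[1:-1] = p[1:-1] + alpha * (p[:-2] - 2 * p[1:-1] + p[2:])
    return new

# A tiny five-cell "reservoir" with a pressure spike in the middle cell.
p = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
p = diffuse_step(p)
# The spike spreads symmetrically to the neighbouring cells.
```

A billion-cell model is the same idea at scale: the update for each cell is cheap, but there are a billion of them per time step, which is exactly the workload GPUs are built for.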

GPUs are also being used by AI researchers and developers to train and deploy machines that can classify images, analyse videos, recognise speech and process natural language.
