Graphics hardware accelerated transmission line matrix procedures

Date

2010-08-11T16:31:53Z

Authors

Rossi, Filippo Vincenzo

Abstract

The past decade has seen Graphics Processing Units (GPUs) transition from special-purpose graphics processors to general-purpose computational accelerators. This work investigates the use of the highly parallel GPU architecture to accelerate the computation of the Transmission Line Matrix (TLM) method in two and three dimensions. The design uses two GPU programming languages, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), to implement the TLM methods on NVIDIA GPUs. The GPU-accelerated two-dimensional shunt-node TLM method (2D-TLM) achieves a throughput of 340 million nodes per second (MNodes/sec), 25 times faster than a commercially available 2D-TLM solver. Initial attempts to adapt the three-dimensional Symmetrical Condensed Node (3D-SCN) TLM method yielded a peak performance of 47 MNodes/sec, a 7-times speed-up. Further refinement of the 3D-SCN TLM algorithm, together with advanced GPU optimization strategies, raised performance to 530 MNodes/sec, a 120-times speed-up over a commercially available 3D-SCN TLM solver.
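For orientation, each time step of the 2D shunt-node TLM method mentioned above alternates a local scatter step (each node reflects its four incident voltage pulses) with a connect step (reflected pulses travel to the matching ports of neighbouring nodes); this per-node locality is what makes the method amenable to GPU parallelization. The following is a minimal NumPy sketch of those two steps, not the thesis's CUDA/OpenCL implementation; the port ordering and the zeroed boundary pulses are simplifying assumptions.

```python
import numpy as np

def scatter(V):
    """Shunt-node scatter: V has shape (4, ny, nx), one incident
    pulse per port per node. Reflected pulse on port k is
    0.5*(sum of all four incident pulses) - V[k]."""
    return 0.5 * V.sum(axis=0) - V

def connect(V):
    """Reflected pulses become incident pulses on neighbouring nodes.
    Assumed port order: 0=west, 1=north, 2=east, 3=south."""
    Vi = np.zeros_like(V)
    Vi[0, :, 1:] = V[2, :, :-1]   # east-going pulse -> west port of right neighbour
    Vi[2, :, :-1] = V[0, :, 1:]   # west-going pulse -> east port of left neighbour
    Vi[1, 1:, :] = V[3, :-1, :]   # south-going pulse -> north port of node below
    Vi[3, :-1, :] = V[1, 1:, :]   # north-going pulse -> south port of node above
    # Boundary ports receive nothing (simplified absorbing edges).
    return Vi
```

A full time step is then `V = connect(scatter(V))`; since the shunt-node scatter matrix is a symmetric involution, the scatter step conserves pulse energy and applying it twice recovers the original field.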

Keywords

Transmission Line Matrix, GPU
