This work seeks to autonomously optimize input computational resource parameters for arbitrary big-data computed tomography (CT) configurations. Graphics processing units (GPUs) have been a boon to many high-performance applications, including CT, yet the reconstruction task has colossal computational and data-throughput requirements that easily tax even high-end GPUs to their limits. For big-data industrial and research applications, the burden is exacerbated by high-pixel-count detectors (≥ 16 megapixels) and the large number of projections needed to meet Nyquist sampling requirements, resulting in datasets up to terabytes in size. Previous work has shown that GPU kernels can be optimized to handle big data efficiently; however, as this work will show, reconstruction performance is sensitive to the tunable input parameters, which can exact an exaggerated toll when chosen poorly. This work investigates the input parameter space for various current and future-sized datasets and presents a calibration approach that optimizes reconstruction performance across detector sizes, geometries, and graphics processing resources. It has the potential to dramatically improve many non-destructive evaluation and inspection applications in industry, security, and research where reconstruction rate is the main bottleneck of the resource chain.

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.