ACM Transactions on Parallel Computing (TOPC)

Latest Articles

Scalable Deep Learning via I/O Analysis and Optimization

Scalable deep neural network training has been gaining prominence because of the increasing importance of deep learning in a multitude of scientific...

I/O Scheduling Strategy for Periodic Applications

With the ever-growing data needs of HPC applications, congestion at the I/O level is becoming critical in supercomputers. Architectural enhancement...

Scheduling Mutual Exclusion Accesses in Equal-Length Jobs

A fundamental problem in parallel and distributed processing is the partial serialization imposed by the need for mutually exclusive...

Modeling Universal Globally Adaptive Load-Balanced Routing

Universal globally adaptive load-balanced (UGAL) routing has been proposed for various interconnection networks and has been deployed in a number of...

NEWS

ACM Transactions on Parallel Computing Names David Bader as Editor-in-Chief

ACM Transactions on Parallel Computing (TOPC) welcomes David Bader as its new Editor-in-Chief, for the term November 1, 2018 to October 31, 2021. David is Professor and Chair of the School of Computational Science and Engineering in the College of Computing at the Georgia Institute of Technology.

About TOPC

ACM Transactions on Parallel Computing (TOPC) is a forum for novel and innovative work on all aspects of parallel computing, including foundational and theoretical aspects, systems, languages, architectures, tools, and applications. It addresses all classes of parallel-processing platforms, including concurrent, multithreaded, multicore, accelerated, multiprocessor, cluster, and supercomputer platforms.

Forthcoming Articles

Tight Bounds for Clairvoyant Dynamic Bin Packing

Pagoda: A GPU Runtime System for Narrow Tasks

Tapir: Embedding Recursive Fork-Join Parallelism into LLVM's Intermediate Representation

On Energy Conservation in Data Centers

Near Optimal Parallel Algorithms for Dynamic DFS in Undirected Graphs

Processor-Oblivious Record and Replay

TOPC Introduction to the Special Issue for SPAA'17

Distributed Graph Clustering and Sparsification

Hyperqueues: Design and Implementation of Deterministic Concurrent Queues

The hyperqueue is a programming abstraction for queues that yields deterministic, scale-free parallel programs. Hyperqueues extend the concept of Cilk++ hyperobjects to provide thread-local views on a shared data structure: while hyperobjects are organized around private local views, a hyperqueue's views are tied to a shared underlying queue. Hyperqueues thereby guarantee determinism for programs that use concurrent queues. We define the programming API and semantics of two instances of the hyperqueue concept, which differ in their API and in the degree of concurrency they extract. We describe the implementation of hyperqueues in a work-stealing scheduler and demonstrate scalable performance on pipeline-parallel benchmarks from PARSEC and StreamIt.
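As a rough, hypothetical sketch only (the class name toy_hyperqueue and its push/pop interface are stand-ins, not the paper's API), the following sequential C++ fragment mimics the deterministic FIFO semantics a hyperqueue exposes; the real abstraction merges per-task views inside a work-stealing scheduler, which this sketch does not attempt to model.

    // Sequential toy model of hyperqueue semantics (illustrative only).
    #include <cstdio>
    #include <deque>

    template <typename T>
    class toy_hyperqueue {
    public:
        void push(const T& item) { items_.push_back(item); } // producer view
        bool pop(T& out) {                                   // consumer view
            if (items_.empty()) return false;
            out = items_.front();
            items_.pop_front();
            return true;
        }
    private:
        std::deque<T> items_;
    };

    int main() {
        toy_hyperqueue<int> q;
        // In a hyperqueue program the two stages below could be spawned as
        // concurrent tasks; determinism means the output would be unchanged.
        for (int i = 0; i < 5; ++i) q.push(i * i);          // producer stage
        for (int v = 0; q.pop(v); ) std::printf("%d\n", v); // consumer stage
        return 0;
    }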

Using Butterfly-Patterned Partial Sums to Draw from Discrete Distributions

New Cover Time Bounds for the Coalescing-Branching Random Walk on Graphs

The Mobile Server Problem

Extracting SIMD Parallelism from Recursive Task-Parallel Programs

The pursuit of computational efficiency has led to the proliferation of throughput-oriented hardware, from GPUs to increasingly wide vector units on commodity processors and accelerators. This hardware is designed to efficiently execute data-parallel computations in a vectorized manner. However, many algorithms are more naturally expressed as divide-and-conquer, recursive, task-parallel computations. In the absence of data parallelism, it seems that such algorithms are not well suited to throughput-oriented architectures. This paper presents a set of novel code transformations that expose the data parallelism latent in recursive, task-parallel programs. These transformations facilitate straightforward vectorization of task-parallel programs on commodity hardware. We also present scheduling policies that maintain high utilization of vector resources while limiting space usage. Across several task-parallel benchmarks, we demonstrate both efficient vector resource utilization and substantial speedup on chips using Intel's SSE4.2 vector units, as well as on accelerators using Intel's AVX-512 units. We then show through rigorous sampling that, in practice, our vectorization techniques are effective for a much larger class of programs.
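As a rough illustration of the underlying idea only (not the paper's actual transformations; the cutoff constant and names below are assumptions), the following C++ sketch cuts a divide-and-conquer reduction off at a block size, leaving a flat, stride-1 inner loop that a compiler can map onto SSE or AVX-512 vector units. In a task-parallel runtime, the two recursive calls would be spawned as parallel tasks.

    // Recursion with a vectorizable leaf: a simplified view of how latent
    // data parallelism can be exposed in divide-and-conquer code.
    #include <cstdio>
    #include <vector>

    static const int kBlock = 256;  // illustrative cutoff, tuned in practice

    double block_sum(const double* a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)  // flat loop: auto-vectorizable
            s += a[i];
        return s;
    }

    double tree_sum(const double* a, int n) {
        if (n <= kBlock)
            return block_sum(a, n);  // data-parallel leaf work
        int half = n / 2;            // halves could be spawned as tasks
        return tree_sum(a, half) + tree_sum(a + half, n - half);
    }

    int main() {
        std::vector<double> xs(10000, 0.5);
        std::printf("sum = %g\n", tree_sum(xs.data(), (int)xs.size()));
        return 0;
    }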
