In The 17th ACM SIGPLAN International Conference on Functional Programming, ACM Press, 2012

**Abstract**

Existing approaches to *higher-order vectorisation*, also known as *flattening nested data parallelism*, do not preserve the asymptotic work complexity of the source program. Straightforward examples, such as sparse matrix-vector multiplication, can suffer a severe blow-up in both time and space, which limits the practicality of this method. We discuss why this problem arises, identify the mishandling of index space transforms as the root cause, and present a solution using a refined representation of nested arrays. We have implemented this solution in Data Parallel Haskell (DPH) and present benchmarks showing that realistic programs, which used to suffer the blow-up, now have the correct asymptotic work complexity. In some cases, the asymptotic complexity of the vectorised program is even better than that of the original.
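To make the sparse matrix-vector example concrete, here is a minimal sketch of the nested data-parallel structure the abstract refers to, written with plain Haskell lists rather than DPH's parallel-array API; the names and representation are illustrative assumptions, not taken from the paper:

```haskell
-- Each row is a sparse list of (column index, value) pairs, so the
-- nesting is irregular: rows have different lengths. The outer
-- comprehension over rows and the inner one over each row's elements
-- are the two levels of nested parallelism that vectorisation must
-- flatten without losing work efficiency.
type SparseRow    = [(Int, Double)]
type SparseMatrix = [SparseRow]

smvm :: SparseMatrix -> [Double] -> [Double]
smvm m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]

main :: IO ()
main = print (smvm [[(0, 1.0), (2, 2.0)], [(1, 3.0)]] [10, 20, 30])
-- → [70.0,60.0]
```

Because each row only touches a few entries of the dense vector `v`, a work-efficient flattening must avoid replicating `v` across all rows; naive flattening does exactly that replication, which is the blow-up the paper addresses.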

ICFP'12 PDF (12 pages)

TR PDF (20 pages — slightly older unabridged technical report version)

This page is part of Manuel Chakravarty's WWW-stuff.