It is widely agreed that the management of data movement will be a
key, if not the key factor in both achieving high performance and
controlling energy consumption on future large-scale systems.
Consequently, the software research community confronts the formidable
problem of finding an execution model for HPC that can not only deliver a
thousand times more parallelism than is currently available, but also
optimize data locality to an unprecedented degree. The
leading idea of the PULSAR project is to create a virtual systolic array
architecture that can solve this problem for some important application
types. Specifically, PULSAR tests the following hypothesis:
If we create a data-driven execution model that virtualizes
classic systolic array architectures and supports flexible control over
the granularity of operations, then we will find innovative
algorithms and implementations for dense linear algebra that can use
this platform to achieve outstanding performance and scalability on the
massively parallel and data-starved HPC systems of the future.
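To make the hypothesis concrete, the sketch below illustrates what a data-driven virtualization of a classic systolic array might look like at the smallest granularity. It is a minimal illustration, not PULSAR's actual API or implementation: the class and function names (`VDP`, `systolic_matvec`) are hypothetical, and a real system would operate on tiles rather than scalars and distribute the virtual processors across physical nodes. Each virtual processor holds one stationary operand and fires only when data arrives on its input channel, so data movement alone drives execution.

```python
from collections import deque

class VDP:
    """Hypothetical Virtual Data Processor: holds one stationary operand
    x_i and fires only when a packet arrives on its input channel."""
    def __init__(self, x_i, index):
        self.x = x_i          # stationary operand kept local to this cell
        self.i = index        # which element of each streaming row to use
        self.inbox = deque()  # input channel from the previous cell

    def ready(self):
        return bool(self.inbox)

    def fire(self, out):
        # Data-driven step: consume one (row, partial_sum) packet,
        # perform a local multiply-accumulate, and forward the result.
        row, acc = self.inbox.popleft()
        out.append((row, acc + row[self.i] * self.x))

def systolic_matvec(A, x):
    """Compute y = A @ x on a 1-D virtual systolic chain: each row of A
    streams through the chain, gaining one product term per cell."""
    n = len(x)
    cells = [VDP(x[i], i) for i in range(n)]
    channels = [cells[i].inbox for i in range(n)] + [deque()]
    for row in A:                       # inject all rows at the left edge
        channels[0].append((row, 0.0))
    # Data-driven scheduler: fire any cell whose input is non-empty,
    # until every row has drained into the rightmost (output) channel.
    while len(channels[-1]) < len(A):
        for i, cell in enumerate(cells):
            if cell.ready():
                cell.fire(channels[i + 1])
    return [acc for _, acc in channels[-1]]

print(systolic_matvec([[1, 2], [3, 4]], [5, 6]))  # → [17.0, 39.0]
```

In this toy model the "flexible granularity" of the hypothesis would correspond to replacing the scalar multiply-accumulate in `fire` with a tile-sized matrix operation, trading parallelism for data locality without changing the execution model.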