Traditional solutions for data and graph analytics tend to be highly fragmented, taking the form of stand-alone frameworks. In this poster session, we shall describe our approach, which is centered around a suite of advanced parallel primitives embedded within SPM.Python. These primitives augment the serial Python language with concepts like parallel generators and emitters, parallel exceptions, and parallel data structures (synchronous and asynchronous).
Our solution is anchored in a basic concept called Hybrid Flow, whereby the traditional data flow is augmented with a specialized form of parallel control flow. The net result is a general-purpose solution that can be used to navigate a herd of compute resources across the parallel landscape in real time.
In this poster, we shall describe four distinct forms of Hybrid Flow that may be used to perform both data and graph analytics efficiently. Each flow may be implemented in Python in as little as 160 lines of code.
Each resource processes its local task queue in a completely asynchronous fashion, without having to coordinate with any other resource.
The local generator depopulates the local task queue, while the local emitter can populate the task queue of any resource. The parallel exception and parallel data structures may be used to track global attributes.
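As a rough illustration only (this is plain Python, not SPM.Python's actual API), the asynchronous flow can be sketched with one deque per resource standing in for the local task queues; the names `run_async_flow`, `handler`, and `emit` are hypothetical:

```python
from collections import deque

def run_async_flow(num_resources, seed_tasks, handler):
    # One local task queue per resource; draining a queue requires no
    # coordination with other resources, but an emitter may push work
    # into *any* resource's queue.
    queues = [deque(tasks) for tasks in seed_tasks]
    results = []

    def emit(resource, task):
        # Local emitter: populate the task queue of any resource.
        queues[resource].append(task)

    # Simulate the resources: each fully drains its own queue, then we
    # loop until every queue is empty (handlers must eventually stop
    # emitting, or this would run forever).
    while any(queues):
        for rank, q in enumerate(queues):
            while q:
                task = q.popleft()  # local generator: depopulate
                results.append(handler(rank, task, emit))
    return results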
Each resource processes a global task queue in a synchronous fashion, enabling speculative work.
The global task queue is depopulated so that all local generators return the same task. The resources are expected to apply different heuristics, options, effort levels, and/or algorithms to that same task.
The first resource to throw the parallel exception stops all other resources from continuing with the current task.
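The speculative flow can be approximated in standard Python with threads racing on the same task, where a shared `threading.Event` plays the role of the parallel exception; `speculative_solve` and the strategy-callback shape are assumptions for illustration, not SPM.Python's API:

```python
import threading

def speculative_solve(task, strategies, timeout=5.0):
    # All resources receive the *same* task (synchronous global queue);
    # each applies a different strategy. The first to produce a result
    # signals the others to abandon the task.
    done = threading.Event()   # stands in for the parallel exception
    winner = {}

    def worker(name, strategy):
        # Each strategy polls `done.is_set` so it can abandon the task
        # as soon as some other resource has won.
        result = strategy(task, done.is_set)
        if result is not None and not done.is_set():
            winner.setdefault("result", (name, result))
            done.set()  # stop everybody else

    threads = [threading.Thread(target=worker, args=(name, s))
               for name, s in strategies.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout)
    return winner.get("result")
```

A losing strategy returns `None` once it observes the cancellation flag, mirroring how a raised parallel exception would abort the remaining resources mid-task.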
Each resource processes its local task queue in a synchronous fashion; in other words, all resources execute N global iterations in lock step. The value of N may be hard-coded; alternatively, the parallel exception may be raised by any resource to terminate early.
During each global iteration, all local generators provide tasks that were generated in the previous iteration, while all local emitters push new tasks that will become available in the next iteration. A local emitter may populate the next-iteration task queue of any resource.
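A minimal single-process sketch of the lock-step flow, under the assumption that a step callback returning `True` models raising the parallel exception (again, illustrative plain Python rather than SPM.Python itself):

```python
def lockstep_run(num_resources, seed_tasks, step, max_iters=10):
    # `current[rank]` holds the tasks generated in the previous
    # iteration; emitters fill `nxt[...]` for the next iteration.
    current = [list(tasks) for tasks in seed_tasks]
    processed = []
    for it in range(max_iters):
        nxt = [[] for _ in range(num_resources)]

        def emit(rank, task):
            # Local emitter: tasks become visible only next iteration,
            # and may target any resource's next-iteration queue.
            nxt[rank].append(task)

        stop = False
        for rank in range(num_resources):
            for task in current[rank]:  # local generator
                processed.append((it, rank, task))
                if step(it, rank, task, emit):
                    stop = True  # parallel exception: terminate early
        if stop or not any(nxt):
            break
        current = nxt  # global barrier: advance all resources together
    return processed
```

Here the iteration boundary acts as the lock-step barrier: nothing emitted during iteration `it` is visible before iteration `it + 1`.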
A general-purpose, real-time data flow is mapped over all the compute resources, thus enabling the processing of streams of data.
At each resource, the local generator depopulates the local task queue, while the local emitter populates the local task queue of some resource downstream, per user-defined policies (e.g., random order, pseudo-sorted, sorted, round-robin).
The parallel exception and data structures may be used to share knowledge across all resources.
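The data-flow form can be sketched as a staged pipeline in plain Python, with a round-robin policy routing each emitter's output downstream; `run_pipeline` and the `(num_resources, fn)` stage shape are hypothetical names chosen for this illustration, not part of SPM.Python:

```python
import itertools
from collections import deque

def run_pipeline(stages, stream):
    # stages: list of (num_resources, fn); each stage has one local
    # task queue per resource, and emitters forward results to the
    # next stage downstream using a round-robin policy.
    queues = [[deque() for _ in range(n)] for n, _ in stages]
    rr = [itertools.cycle(range(n)) for n, _ in stages]
    out = []

    # Stream items enter stage 0, distributed round-robin.
    for item in stream:
        queues[0][next(rr[0])].append(item)

    for s, (n, fn) in enumerate(stages):
        for rank in range(n):
            q = queues[s][rank]
            while q:
                result = fn(rank, q.popleft())  # local generator
                if s + 1 < len(stages):
                    # Local emitter: push downstream per policy.
                    queues[s + 1][next(rr[s + 1])].append(result)
                else:
                    out.append(result)
    return out
```

For simplicity the sketch drains one stage at a time; in a real-time deployment all stages would of course run concurrently, with the same routing policy deciding which downstream queue receives each result.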