By Philip J. Hatcher
MIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language can be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers.

The authors provide enough data that the reader can judge the feasibility of architecture-independent programming in a data-parallel language. For each benchmark application they present the source code listing, absolute execution times on both a multiprocessor and a multicomputer, and speedup relative to a sequential program. They frequently present multiple solutions to the same problem, to better illustrate the strengths and weaknesses of the compilers.

The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and on the Intel and nCUBE hypercubes, respectively. The authors document the performance of these compilers on a number of benchmark programs and present several case studies.

Philip J. Hatcher is Assistant Professor in the Department of Computer Science at the University of New Hampshire. Michael J. Quinn is Associate Professor of Computer Science at Oregon State University.

Contents: Introduction. Dataparallel C Language Description. Design of a Multicomputer Dataparallel C Compiler. Design of a Multiprocessor Dataparallel C Compiler. Writing Efficient Programs. Benchmarking the Compilers. Case Studies. Conclusions.
Best compilers books
Originally published in 1981, this was the first textbook on programming in the Prolog language and remains the definitive introductory text on Prolog. Although many Prolog textbooks have been published since, this one has withstood the test of time because of its comprehensiveness, tutorial approach, and emphasis on general programming applications.
- A Tight, Practical Integration of Relations and Functions
- Reachability Problems: 8th International Workshop, RP 2014, Oxford, UK, September 22-24, 2014. Proceedings
- Verified Software: Theories, Tools, Experiments: Third International Conference, VSTTE 2010, Edinburgh, UK, August 16-19, 2010, Proceedings (Lecture Notes ... Programming and Software Engineering)
- Integrated Formal Methods: 12th International Conference, IFM 2016, Reykjavik, Iceland, June 1-5, 2016, Proceedings
Additional info for Data-Parallel Programming on MIMD Computers
During each iteration the algorithm divides the active processors into two sets of equal size. The processors in the upper half send their values to the processors in the lower half. The processors in the lower half add the two values, while the processors in the upper half become inactive. If a processor is sending a value, variable dest contains the number of the processor receiving the value; if a processor is receiving a value, variable source contains the number of the processor sending the value.
The overhead associated with implementations of functional programming languages makes it extremely difficult for their programs to be competitive with programs handcrafted for a particular parallel machine. Regardless of whether the overhead of functional programming languages is ultimately removed, we feel there is a role, particularly in the short run, for an imperative language that is explicitly parallel, yet quite similar to existing sequential languages. An easy-to-learn data-parallel language such as Dataparallel C may help popularize parallel computing.
Many important features of C++ do not appear in Dataparallel C. For example, Dataparallel C has no notion of inheritance. The Dataparallel C programming model is based upon virtual processors, a global name space, and synchronous execution of a single instruction stream. The first three sections of this chapter describe these high-level features in detail. We then discuss pointers, functions, virtual topologies, and parallel I/O. In August 1990 Thinking Machines Corporation announced a new C* language.