Theme and Laboratory Outline
Construction of Virtual Private Distributed Systems by Comet
Parallel and Distributed Systems Fujitsu Laboratory
http://www.pds-flab.rwcp.or.jp/comet/ResearchAchieve.html

The Parallel and Distributed Systems Fujitsu Laboratory is developing an ultra-high-speed communication technology called Comet (COMmunication Enterprising Technology), aiming to realize VPDS (Virtual Private Distributed Systems). A VPDS connects computers distributed across the Internet using gigabit transmission technologies such as Gigabit Ethernet, SONET, and WDM, allowing them to function as a single private virtual parallel and distributed system. At RWC 2000, we will exhibit the Comet NP communication processor, the Comet VIA communication architecture, the Comet DV/IP application system, and other technologies under development. (http://www.pds-flab.rwcp.or.jp)
A Programming Environment for Heterogeneous Parallel and Distributed Systems
Parallel and Distributed Systems NEC Laboratory
http://www.rwcp.or.jp/activities/achievements/PD/nec/index.htm

The Parallel and Distributed Systems NEC Laboratory is carrying out research and development on a programming support environment that facilitates the development of applications for "heterogeneous parallel and distributed systems" created by connecting supercomputers. The environment consists of three parts: a library, a compiler, and programming tools. The demonstration will focus on the programming tools, which include a function for automatically allocating data and processing to parallel processors, a function for estimating performance before execution, and a function for optimizing a program using runtime information. With a simple visual operation, the tools can transform a program written in Fortran for a conventional single-processor system into a parallel program for heterogeneous parallel systems.
Interprocedural Parallelizing Compiler WPP
Multi-Processor Computing Hitachi Laboratory
http://www.rwcp.or.jp/activities/achievements/MP/hitachi/PDC-hitachi.html

The mainstream of supercomputers at present is the parallel computer, which boosts program performance by running multiple CPUs simultaneously. To perform high-speed computation on parallel computers, programmers must parallelize a wide range of the program, that is, write it so that multiple CPUs can be used efficiently over that range.

The Multi-Processor Computing Hitachi Laboratory has been prototyping WPP (Whole Program Parallelizer), an automatic parallelization tool. WPP can parallelize a wider range of a program by analyzing all the procedures that make up the program together, and it outputs the result as an OpenMP program.

By using WPP, users can execute existing programs without modification on the various types of parallel computers for which an OpenMP compiler is available.

In our demonstration, using an industry-standard benchmark program as an example, we will graphically show both WPP's analysis process and the resulting OpenMP program.