To meet the operational requirements of cloud computing, improving the performance of parallel applications on a cloud platform is an active research topic. Applications running on a cloud platform fall into two types: computation-intensive and communication-intensive. For a computation-intensive program, differences in hardware performance between nodes can limit the parallelism of the work; for a communication-intensive program, the communication speed among tasks determines the efficiency of data exchange. To optimize communication, tasks that exchange large amounts of data should be mapped to processing units with high network performance between them. This technique is called communication-aware task mapping.
In this paper, we take communication-aware task mapping as our starting point, with the goal of reducing communication latency. Because network speed is highly uncertain, and it is impractical to analyze the underlying network topology and hardware performance directly, we propose a task-mapping method based on building a model of the cloud's connection types. With it, the user can dispatch cloud computing resources so that processes with high data-communication rates are collected on the same node. We also define the load status of each node according to its heterogeneity. To analyze the complex communication among tasks effectively and allocate computing resources more easily, we adopt MPI Kernel Cluster (MPIKC) running on Kernel Distributed Computing Management (KDCM), proposed by Chiu and Guo. Building on its ability to analyze and dispatch tasks, we improve MPIKC's communication system to provide a communication channel capable of handling complex data-transfer topologies. Because MPIKC and KDCM integrate tightly, programs can be designed as pipelines, which significantly improves the performance of communication-intensive programs.
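The core idea of collecting processes with high data-communication rates on the same node can be sketched as a greedy heuristic. This is an illustrative sketch only: the communication matrix, the `node_capacity` parameter, and the function names are assumptions for exposition, not the actual MPIKC/KDCM interface.

```python
def map_tasks(comm_matrix, num_nodes, node_capacity):
    """Greedily co-locate the most heavily communicating process pairs.

    comm_matrix[i][j] is the (assumed) data volume exchanged between
    processes i and j; node_capacity caps processes per node.
    """
    n = len(comm_matrix)
    # Sort process pairs by communication volume, heaviest first.
    pairs = sorted(
        ((comm_matrix[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    assignment = {}            # process -> node
    load = [0] * num_nodes     # processes currently on each node

    def place(p, node):
        if load[node] < node_capacity:
            assignment[p] = node
            load[node] += 1
            return True
        return False

    for _, i, j in pairs:
        if i in assignment and j not in assignment:
            place(j, assignment[i])          # pull j next to i
        elif j in assignment and i not in assignment:
            place(i, assignment[j])          # pull i next to j
        elif i not in assignment and j not in assignment:
            node = min(range(num_nodes), key=lambda k: load[k])
            if place(i, node):               # start a new group on the
                place(j, node)               # least-loaded node
    # Any process still unplaced goes to the least-loaded node.
    for p in range(n):
        if p not in assignment:
            place(p, min(range(num_nodes), key=lambda k: load[k]))
    return assignment
```

Using the least-loaded node as a tie-breaker is a simple stand-in for the load status derived from node heterogeneity described above; a real scheduler would weight nodes by their measured capability.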
Finally, we evaluate our approach with two communication-intensive programs: a neural-network training program and an Advanced Encryption Standard (AES) encryption program. We compare running times under various distribution conditions, and the results show that our method reduces running time by up to a factor of 12. We conclude that our approach can effectively reduce the running time of parallel programs.