Literature Survey : A number of research efforts have emerged in the area of improving the performance of distributed applications in a cluster computing environment [2][7][8]. Martin, Vahdat, Culler and Anderson [9] showed that communication latency, overhead, and bandwidth can be varied independently to observe their effects on a wide range of applications; applications demonstrated strong sensitivity to overhead, slowing down as overhead increased. Lange [10] discussed the impact of cluster size and the underlying network on the performance of distributed applications. Our research effort (i) evaluates the performance of an application as a function of problem size, network parameters and cluster size, and (ii) provides features for monitoring the cluster. The developed MCLUSTER model concentrates on reducing communication overhead by using the mobility of code and bridging all system characteristics. There are many cluster configurations, but a simple architecture is shown in Figure 1. The ability of mobile agents to cope with system heterogeneity and to deploy user-customized procedures at remote sites is well suited to cluster computing environments [12]. By interacting with a remote host after migrating to it, an agent can perform complex operations on remote data without transferring that data, because the agent carries the application logic to where it is needed. The master node monitors the status of each slave node. A slave node within a cluster provides the cluster with computing and data storage capability. These nodes are derived from fully operational, standalone computers, typically marketed as desktop or server systems, that are off-the-shelf commodity systems. The framework consists of personal computers, a high-speed communication network, and sequential and parallel applications, as shown in Figure 2. PCs are connected to the network using Ethernet network interface cards.
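The idea that an agent carries application logic to the data can be sketched as a small serializable task that a node receives and runs locally. This is a minimal illustration only; the `Task` interface, the `SeriesSum` class and the method names are assumptions for this sketch, not MCLUSTER's actual API.

```java
import java.io.Serializable;

// A mobile-code task carries its application logic to the node that
// holds the data, so only the small result travels back to the server.
// (Interface and class names here are illustrative, not from MCLUSTER.)
interface Task extends Serializable {
    long execute(long lo, long hi); // process the assigned data range
}

// Example payload: sum the integers in the range [lo, hi].
class SeriesSum implements Task {
    public long execute(long lo, long hi) {
        long sum = 0;
        for (long i = lo; i <= hi; i++) sum += i;
        return sum;
    }
}

public class MobileTaskDemo {
    public static void main(String[] args) {
        // In a real cluster the serialized Task object would be shipped
        // to a client node; here we run it locally to show the contract.
        Task task = new SeriesSum();
        System.out.println(task.execute(1, 100)); // prints 5050
    }
}
```

Because only the task object and a single `long` result cross the network, the data range itself never has to be transferred, which is the overhead reduction the mobile-code approach aims at.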
The cluster middleware is implemented in Java so that it can provide a single system image of the cluster to computers with different OS platforms, once the Java Virtual Machine is installed. The JVM makes it easier to implement, migrate and execute the mobile code on a remote computer in the cluster. The user is guided through the creation and management of the cluster via a GUI, which frees the user from having to identify the network topology of the cluster framework. The framework has been designed in such a way that incremental changes can easily enhance its generality and usability. AppManager coordinates all functions related to application description, distribution and execution. It provides a GUI from which the user can select a particular application, provide the input data and specify parameters such as the number of nodes to be used for executing the application. Once the application is selected, AppManager divides the data range among the selected nodes and distributes the mobile code to those client nodes, as shown in Figure 4. ClientManager listens for requests from MCLUSTER related to cluster membership and application execution, and services those requests. In response to an invitation packet, the ClientManager running on a node communicates its IP address to the server. On receiving the mobile code from the server, it loads and executes the code and sends the results back. The performance of the cluster during the execution of distributed applications without a significant amount of data (such as ranges between 1 and 20,000) deteriorates, as indicated by the overhead point in Figure 6. At these data ranges, the communication overhead between the MCLUSTER server and the client nodes overwhelms the advantage of the distributed processing power obtained from the client nodes.
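The data-range division performed by AppManager can be sketched as a simple partitioning routine that splits a range into nearly equal sub-ranges, one per client node. The class and method names below are illustrative assumptions, not taken from the MCLUSTER implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how an AppManager-style coordinator might split a data
// range [lo, hi] into nearly equal sub-ranges, one per client node.
// (Names and the long[]{start, end} encoding are assumptions.)
public class RangeSplitter {
    public static List<long[]> split(long lo, long hi, int nodes) {
        List<long[]> parts = new ArrayList<>();
        long total = hi - lo + 1;
        long base = total / nodes;   // minimum items per node
        long extra = total % nodes;  // first 'extra' nodes get one more
        long start = lo;
        for (int n = 0; n < nodes; n++) {
            long len = base + (n < extra ? 1 : 0);
            parts.add(new long[] { start, start + len - 1 });
            start += len;
        }
        return parts;
    }

    public static void main(String[] args) {
        // Example: the data range 1..100000 spread over 3 client nodes.
        for (long[] p : split(1, 100_000, 3))
            System.out.println(p[0] + " .. " + p[1]);
    }
}
```

Each sub-range would then be packaged with the mobile code and shipped to one client node; the sub-ranges cover the full range with no overlap, so the partial results can simply be combined at the server.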
When the data range is extensive, the time consumed by an application executed in a distributed manner is several times smaller than the execution time of the same application processed on a single node. The MCLUSTER model is designed in such a way that even for a large problem size (such as between 1 and 5,000,000) and an average cluster size (up to N = 11), the performance of the distributed application does not deteriorate, as shown in Figure 7.

Communication Overhead Analysis : As the cluster scales up, the communication links near the server become congested due to the large volume of transmitted data, thereby degrading the performance of the cluster. Communication overhead includes the overhead due to the exchange of data between nodes and the message delay caused by network congestion. Figure 8 presents the results generated by executing the application Series for the data range 1-100,000 on a cluster of 11 client nodes. On the assumption that the external network load is low and constant, the communication overhead for the MCLUSTER model can be parameterized as a simple linear function of the number of bytes transmitted and the number of client nodes selected, as represented in Figure 8.

![Figure 1. Cluster Computing Architecture](image-2.png "Figure 1")
As shown in Figure 1, node participation in the cluster falls into a master or head node and computing or slave nodes. The master node is the unique server in the cluster system; it is responsible for running the file system and also serves as the key system for the clustering middleware to route processes and duties and to monitor the status of each slave node.

![Figure 2. Cluster Computing Framework](image-3.png "Figure 2")

![Fig. 3. Cluster middleware](image-4.png "Fig. 3")
MCLUSTER and the client nodes interact with each other through message passing, and any parallel application is executed using mobile code which contains the input data/application code.

![Figure 4. Communication between master node and client node](image-5.png "Figure 4")

![Figure 5. Statistical Data of Application: Series (columns: Data Range, No. of Nodes, Time in Milliseconds)](image-6.png "Figure 5")

Figure 6. Effect of data range and number of nodes on execution time.

© 2012 Global Journals Inc. (US). Global Journal of Computer Science and Technology, Volume XII, Issue V, Version I, March 2012.

References :
* J. B. Weissman and A. S. Grimshaw, "Network partitioning of data parallel computations," Proc. Third International Symposium on High Performance Distributed Computing (HPDC '94), San Francisco, CA, USA, IEEE Computer Society, April 2-5, 1994.
* J. Lemeire and E. Dirkx, "Causes of blocking overhead in message-passing programs," 10th EuroPVM/MPI 2002 Conference, Venice, Italy, 2002.
* R. Martin, A. Vahdat, D. Culler and T. Anderson, "Effects of communication latency, overhead and bandwidth in a cluster architecture," Proceedings of the 24th Annual International Symposium on Computer Architecture (ISCA), 1997.
* D. Lange, "Mobile Objects and Mobile Agents: The Future of Distributed Computing?," General Magic, Inc., California.
* B. Walker and D. Steel, "Implementing a full single system image UnixWare cluster: Middleware vs. underware," Proceedings of the International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA '99), Las Vegas, USA, 1999.
* Ichiro Satoh, "Configurable Network Processing for Mobile Agents on the Internet," Cluster Computing: The Journal of Networks, Software Tools and Applications, Kluwer, 2004.