Thursday, April 4, 2019
MapReduce for Distributed Computing
1.) Introduction
A distributed computing system can be defined as a collection of processors connected by a communication network such that each processor has its own local memory. Communication between any two or more processors of the system takes place by passing messages over the communication network. This model finds application in various fields, such as Hadoop and MapReduce, which we will be discussing further in detail. Hadoop is becoming the technology of choice for enterprises that need to effectively collect, store and process large amounts of structured and complex data. The purpose of this thesis is to look into the possibility of using a MapReduce framework to implement Hadoop. All of this is made possible by the storage strategy used by Hadoop, namely HDFS, the Hadoop Distributed File System. HDFS is a distributed file system designed to run on commodity hardware. It is similar to existing distributed file systems, and its main advantage over other distributed file systems is that it is designed to be deployed on low-cost hardware and to be highly fault-tolerant. HDFS provides high-throughput access for applications with large data sets. Originally it was built as infrastructure for the Apache Nutch web search engine. Applications that run on HDFS have extremely large data sets, from a few gigabytes up to terabytes in size; thus, HDFS is designed to support very large files. It provides high aggregate data bandwidth, can connect hundreds of nodes in a single cluster, and supports tens of millions of files in a single system. We will discuss all of the above in detail, and will also look at various places where Hadoop is deployed, such as the storage facilities of Facebook and Twitter, HIVE, PIG etc.

2.) Serial vs.
Parallel Programming
In the early decades of computing, programs were serial or sequential; that is, a program consisted of a sequence of instructions, where each instruction executed one after the other, as the name suggests. It ran from start to finish on a single processor. Parallel programming (grid computing) developed as a means of improving performance and efficiency. In a parallel program, the work is broken up into several parts, each of which is executed simultaneously. The instructions from each part run simultaneously on different CPUs. These CPUs can exist on a single machine, or they can be CPUs in a set of computers connected via a network. Not only are parallel programs faster, they can also be used to solve problems on large datasets using non-local resources. When you have a set of computers connected on a network, you have a vast pool of CPUs, and you often have the ability to read and write very large files (assuming a distributed file system is also in place). Parallelism is nothing but a strategy for performing complex and large tasks faster than is possible serially. A large task can either be performed serially, one step at a time, or can be decomposed into smaller tasks to be performed simultaneously using concurrency mechanisms in parallel systems. Parallelism is achieved by:
- Breaking up the task into smaller tasks
- Assigning the smaller tasks to multiple processors to work on simultaneously
- Coordinating the processors
Parallel problem solving can be seen in real-life applications too. Examples: an automobile manufacturing plant, operating a large organization, building construction.

3.) History of Clusters
Clustering is the use of a group of computers, typically PCs or workstations, storage devices, and interconnections, that appears to an outsider (user) as a single, highly capable system.
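The MapReduce model mentioned in the introduction applies exactly the decomposition described in the parallel-programming section: map work is split across processors, and the results are combined in a reduce step. Below is a minimal single-process sketch of the classic word-count job in plain Python; it is an illustration of the programming model, not Hadoop itself, and the document contents and function names are invented for the example.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    """Shuffle: group all intermediate values by their key (the word)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine the grouped values, here by summing the counts."""
    return {word: sum(counts) for word, counts in groups.items()}

# Each document would normally be mapped on a different node in parallel.
documents = ["hadoop stores large data", "mapreduce processes large data"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
result = reduce_phase(shuffle_phase(intermediate))
print(result["large"])   # 2
print(result["hadoop"])  # 1
```

In a real Hadoop job the map calls run on the nodes that hold the HDFS blocks, and the shuffle moves intermediate pairs across the network, but the three phases are the same.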
Cluster computing can be used for high availability and load balancing. It can be used as a relatively low-cost form of parallel processing system for scientific and other related applications. Computer cluster technology puts a cluster of systems together to provide better system reliability. Cluster server systems can connect a group of systems together in order to provide combined processing service for the clients in the cluster. Cluster operating systems distribute the tasks amongst the available systems. Clusters of systems or workstations can connect a group of systems together to share critically demanding and tough tasks. Theoretically, a cluster operating system can provide seamless optimization in every case. At the present time, cluster server and workstation systems are mostly used in high-availability applications and in scientific applications such as numerical computations. A cluster is a type of parallel or distributed system that:
- consists of a collection of interconnected whole computers, and
- is used as a single, unified computing resource.
The "whole computer" in the above definition can have one or more processors built into a single operating system image.

Why a Cluster?
Lower cost: In general, small sized systems profit from using commodity technology. Both hardware and software costs tend to be significantly lower for small systems. However, one must consider the entire cost of ownership of your computing environment while making a buying decision. The next section points to some issues which may counterbalance some of the gains from the lower initial cost of acquiring a cluster.
Vendor independence: Though it is generally advisable to use similar components across the servers in a cluster, it is worthwhile to retain a certain degree of vendor independence, especially if the cluster is being organized for long term usage.
A Linux cluster built on mostly commodity hardware permits much better vendor independence than a large multi-processor system using a proprietary operating system.
Scalability: In several environments the problem load is so large that it simply cannot be processed on a single system within the time limits of the organization. Clusters likewise provide a hassle-free path for increasing the computational capacity as the load rises over time. Most large systems scale only to a certain number of processors and then require a costly upgrade.
Reliability, Availability and Serviceability (RAS): A larger system is typically more vulnerable to failure than a smaller system. A major hardware or software component failure brings the whole system down. Hence if a large single system is deployed as the computational resource, a module failure will bring down substantial computing power. In the case of a cluster, a single module failure only affects a small part of the overall computational resources. A system in the cluster can be repaired without bringing the rest of the cluster down. Also, additional computational resources can be added to a cluster while it is running the user workload. Hence a cluster maintains continuity of user operations in both of these cases. In similar situations an SMP system would require a complete shutdown and restart.
Adaptability: It is much easier to adapt the topology of a cluster (the pattern of linking the compute nodes together) to best suit the application requirements of a computer center. Vendors typically support a much more restricted set of topologies for MPPs because of design, or sometimes testing, issues.
Faster technology innovation: Clusters benefit from thousands of researchers all around the world, who typically work on smaller systems rather than expensive high-end systems.

Limitations of Clusters
It is worthwhile to mention certain shortcomings of using clusters as opposed to a single large system.
These should be carefully considered while deciding on the best computational resource for the organization. System administrators and programmers of the organization should actively take part in evaluating the following trade-offs.
A cluster increases the number of individual components in a computer center. Every server in a cluster has its own independent network ports, power supplies, etc. The increased number of components and cables going across servers in a cluster partially counterbalances some of the RAS advantages described above.
It is easier to manage a single system as opposed to numerous servers in a cluster. There are a lot more system services available to manage computing resources within a single system than there are to help manage a cluster. As clusters progressively find their way into commercial organizations, more cluster-savvy tools will become available over time, which will bridge some of this gap.
In order for a cluster to scale to make effective use of numerous CPUs, the workload needs to be properly balanced on the cluster. Workload imbalance is easier to handle in a shared-memory environment, because switching tasks across processors doesn't involve much data movement. On the other hand, on a cluster it tends to be very difficult to move an already running task from one node to another. If the environment is such that workload balance cannot be controlled, a cluster may not provide good parallel efficiency.
Programming models used on a cluster are typically different from those used on shared-memory systems. It is relatively easier to use parallelism in a shared-memory system, since the shared data is readily available. On a cluster, as in an MPP system, either the programmer or the compiler has to explicitly transport data from one node to another.
Before deploying a cluster as a key resource in your environment, you should make sure that your system administrators and programmers are comfortable working in a cluster environment.

Getting Started With Linux Clusters
Although clustering can be performed on various operating systems like Windows, Macintosh, Solaris etc., Linux has its own advantages, which are as follows:
- Linux runs on a wide range of hardware
- Linux is exceptionally stable
- Linux source code is freely distributed
- Linux is relatively virus free
- It has a wide variety of tools and applications available for free
- It is a good environment for developing cluster infrastructure

Cluster Overview and Terminology
A compute cluster comprises many different hardware and software modules with complex interfaces between the various modules. In fig 1.3 we show a simplified view of the key layers that form a cluster. The following sections give a brief overview of these layers.

4.) Parallel Computing and Distributed Computing Systems
Parallel computing
Parallel computing is the concurrent execution of some combination of multiple instances of program instructions and data on multiple processors in order to achieve results faster. A parallel computing system is a system in which the computer has more than one processor for parallel processing. In the past, each processor of a multiprocessing system always came in its own processor package, but recently introduced multicore processors contain multiple logical processors in a single package. There are many diverse kinds of parallel computers. They are distinguished by the kind of interconnection among the processors (processing elements, or PEs) and memory.

Distributed Computing Systems
There are two types of distributed computing systems:
Tightly coupled systems: In these systems, there is a single system-wide primary memory (address space) that is shared by all the processors. In these systems, any communication between the processors usually takes place through the shared memory.
In tightly coupled systems, the number of processors that can be usefully deployed is usually small and is limited by the bandwidth of the shared memory. Tightly coupled systems are referred to as parallel processing systems.
Loosely coupled systems: In these systems, the processors do not share memory, and each processor has its own local memory. In these systems, all physical communication between the processors is done by passing messages across the network that interconnects the processors. In this type of system the number of processors is expandable and can be almost unlimited. Loosely coupled systems are referred to as distributed computing systems.

Various models are used for building distributed computing systems:

4.1) Minicomputer Model
This is a simple extension of the centralized time-sharing system. A distributed computing system based on this model consists of a few minicomputers or large supercomputers interconnected by a communication network. Each minicomputer usually has many users simultaneously logged on to it through several terminals connected to it, with every user logged on to one specific minicomputer and with remote access to the other minicomputers. The network allows a user to access remote resources that are available on some machine other than the one onto which the user is currently logged. The minicomputer model is used when resource sharing with remote users is desired. The early ARPAnet is an example of a distributed computing system based on the minicomputer model.

4.2) Workstation Model
The workstation model consists of several workstations interconnected by a communication network.
The best example of the workstation model is a company's office or a university department, which may have quite a few workstations scattered throughout a building or campus, each workstation equipped with its own disk and serving a single user. At certain times, especially during the night, many workstations are idle (not being used), resulting in the waste of large amounts of CPU time. The idea behind the workstation model is to connect all these workstations by a high-speed LAN so that idle workstations may be used to process jobs of users who are logged onto other workstations and do not have adequate processing power at their own workstations to get their jobs done efficiently. A user logs onto one of the workstations, called his home workstation, and submits jobs for execution. If the system does not have sufficient processing power for executing the processes of the submitted jobs efficiently, it transfers one or more of the processes from the user's workstation to some other workstation that is currently idle and gets the processes executed there, and finally the result of execution is returned to the user's workstation without the user being aware of it.
The main issue arises if a user logs onto a workstation that was idle until now and was being used to perform a process of another workstation: how is the remote process to be handled at this point? To handle this type of problem we have three solutions:
The first method is to allow the remote process to share the resources of the workstation along with the logged-on user's own processes. This method is easy to implement, but it defeats the main idea of workstations serving as personal computers, because if remote processes are permitted to execute concurrently with the logged-on user's own processes, the logged-on user does not get his or her guaranteed response.
The second method is to kill the remote process.
The main disadvantage of this technique is that all the processing done for the remote process is lost, and the file system may be left in an inconsistent state, making this method unattractive.
The third method is to migrate the remote process back to its home workstation, so that its execution can be continued there. This method is difficult to implement because it requires the system to support a preemptive process migration facility, that is, the ability to stop the currently running process when a higher priority process arrives for execution.
Thus we can say that the workstation model is a network of individual workstations, each with its own disk and a local file system. The Sprite system and an experimental system developed at Xerox PARC are two examples of distributed computing systems based on the workstation model.

4.3) Workstation-Server Model
The workstation-server model consists of a few minicomputers and numerous workstations (both diskful and diskless, though most of them are diskless) connected by a high-speed communication network. A workstation with its own local disk is usually called a diskful workstation, and a workstation without a local disk is called a diskless workstation. The file system used by these workstations is implemented either by a diskful workstation or by a minicomputer equipped with a disk for file storage. One or more of the minicomputers are used for implementing the file system. Other minicomputers may be used for providing other types of services, such as database service and print service. Thus, each minicomputer is used as a server machine to provide one or more types of services.
Therefore, in the workstation-server model, in addition to the workstations, there are dedicated machines (possibly specialized workstations) for running server processes (called servers) that manage and provide access to shared resources. A user logs onto a workstation called his home workstation. Normal computation activities required by the user's processes are performed at the user's home workstation, but requests for services provided by special servers, such as a file server or a database server, are sent to a server providing that type of service, which performs the requested activity and returns the result of the request processing to the user's workstation. Therefore, in this model, the user's processes need not be migrated to the server machines to get the work done by those machines. For better overall system performance, the local disk of a diskful workstation is normally used for such purposes as storage of temporary files, storage of unshared files, storage of shared files that are rarely changed, paging activity in virtual-memory management, and caching of remotely accessed data.
The workstation-server model is better than the workstation model in the following ways:
It is much cheaper to use a few minicomputers equipped with large, fast disks than a large number of diskful workstations, each with a small, slow disk.
Diskless workstations are also preferred to diskful workstations from a system maintenance point of view.
Backup and hardware maintenance are easier to perform with a few large disks than with many small disks scattered about. Furthermore, installing new releases of software (such as a file server with new functionalities) is easier when the software is to be installed on a few file server machines than on every workstation.
In the workstation-server model, since all files are managed by the file servers, users have the flexibility to use any workstation and access files in the same manner irrespective of which workstation the user is currently logged onto. This is not true of the workstation model, in which each workstation has its own local file system, because different mechanisms are needed to access local and remote files. Unlike the workstation model, this model does not need a process migration facility, which is difficult to implement.
In this model, a client process (on a workstation) sends a request to a server process (on a minicomputer) to obtain some service, such as reading a block of a file. The server executes the request and sends back a reply to the client that contains the result of the request processing. A user has guaranteed response time because workstations are not used for executing remote processes. However, the model does not utilize the processing capability of idle workstations.
The V-System (Cheriton 1988) is an example of a distributed computing system that is based on the workstation-server model.

4.4) Processor-Pool Model
In the processor-pool model the processors are pooled together to be shared by the users as needed. The pool of processors consists of a large number of microcomputers and minicomputers attached to the network. Each processor in the pool has its own memory to load and run a system program or an application program of the distributed computing system.
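The demand-based sharing of pooled processors described above can be sketched as a toy allocator: users submit jobs, processors are temporarily assigned from the pool, and they return to the pool when the job finishes. The class and method names are invented for illustration; a real run server would also handle scheduling, faults, and accounting.

```python
class ProcessorPool:
    """Toy model of a run server allocating pooled processors on demand."""

    def __init__(self, num_processors):
        self.free = list(range(num_processors))  # processor ids in the pool
        self.jobs = {}                           # job name -> assigned processors

    def submit(self, job, needed):
        """Temporarily assign `needed` processors to a user's job."""
        if needed > len(self.free):
            raise RuntimeError("not enough free processors in the pool")
        self.jobs[job] = [self.free.pop() for _ in range(needed)]
        return self.jobs[job]

    def finish(self, job):
        """Return a finished job's processors to the pool."""
        self.free.extend(self.jobs.pop(job))

pool = ProcessorPool(8)
pool.submit("recompile", 6)   # a short burst of heavy demand from one user
print(len(pool.free))         # 2
pool.finish("recompile")
print(len(pool.free))         # 8
```

The point of the sketch is that no processor belongs to any one user; the whole pool serves whoever currently needs computing power.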
The processor-pool model is based on the observation that most of the time a user does not need any computing power, but once in a while he may need a very large amount of computing power for a short time (e.g., when recompiling a program consisting of a large number of files after changing a basic shared declaration). In the processor-pool model, the processors in the pool have no terminals attached directly to them, and users access the system from terminals that are attached to the network via special devices. These terminals are either small diskless workstations or graphics terminals. A special server called a run server manages and allocates the processors in the pool to different users on a demand basis. When a user submits a job for computation, an appropriate number of processors are temporarily assigned to his or her job by the run server. In this model there is no concept of a home machine; when a user logs on, he is logged on to the whole system by default.
The processor-pool model allows better utilization of the available processing power of a distributed computing system, because the entire processing power of the system is available for use by the currently logged-on users, whereas this is not true of the workstation-server model, in which several workstations may be idle at a particular time but cannot be used for processing the jobs of other users. Furthermore, the processor-pool model provides greater flexibility than the workstation-server model, as the system's services can be easily expanded without the need to install any more computers; the processors in the pool can be allocated to act as extra servers to carry any additional load arising from an increased user population, or to provide new services. However, the processor-pool model is usually considered unsuitable for high-performance interactive applications, because of the communication required between the program of a user being executed and the terminal via which the user is interacting with the system.
The workstation-server model is generally considered to be more suitable for such applications. Amoeba (Mullender et al. 1990), Plan 9 (Pike et al. 1990), and the Cambridge Distributed Computing System (Needham and Herbert 1982) are examples of distributed computing systems based on the processor-pool model.

5) ISSUES IN DESIGNING A DISTRIBUTED OPERATING SYSTEM
Designing a distributed operating system is a more difficult task than designing a centralized operating system, for several reasons. In the design of a centralized operating system, it is assumed that the operating system has access to complete and accurate information about the environment in which it is functioning. In a distributed system, the resources are physically separated, there is no common clock among the multiple processors, the delivery of messages is delayed, and the system does not have up-to-date, consistent knowledge about the state of its various components. This lack of up-to-date, consistent information makes many things (such as management of resources and synchronization of cooperating activities) much harder in the design of a distributed operating system. For example, it is hard to schedule the processors optimally if the operating system is not sure how many of them are up at the moment. Therefore a distributed operating system must be designed to provide all the advantages of a distributed system to its users. That is, the users should be able to view a distributed system as a virtual centralized system that is flexible, efficient, reliable, secure, and easy to use. To meet this challenge, designers of a distributed operating system must deal with several design issues. Some of the key design issues are:

5.1) Transparency
The main goal of a distributed operating system is to make the existence of multiple computers invisible (transparent), that is, to give each user the feeling that he is the only user working on the system.
That is, a distributed operating system must be designed in such a way that a collection of distinct machines connected by a communication subsystem appears to its users as a virtual uniprocessor.
Access Transparency
Access transparency means that users should not need, or be able, to recognize whether a resource (hardware or software) is remote or local. This implies that the distributed operating system should allow users to access remote resources in the same way as local resources. That is, the user should not be able to distinguish between local and remote resources, and it should be the responsibility of the distributed operating system to locate the resources and to arrange for servicing user requests in a user-transparent manner.
Location Transparency
Location transparency is achieved if the name of a resource is kept hidden and user mobility is provided, that is:
Name transparency: This refers to the fact that the name of a resource (hardware or software) should not reveal any hint as to the physical location of the resource. Furthermore, resources that are capable of being moved from one node to another in a distributed system (such as a file) must be allowed to move without having their names changed. Therefore, resource names must be unique system-wide.
User mobility: This refers to the fact that no matter which machine a user is logged onto, he should be able to access a resource with the same name; he should not require two different names to access the same resource from two different nodes of the system. In a distributed system that supports user mobility, users can freely log on to any machine in the system and access any resource without making any extra effort.
Replication Transparency
Replicas, or copies, of files and other resources are created by the system for better performance and for reliability of the data in case of any loss. These replicas are placed on different nodes of the distributed system.
Both the existence of multiple copies of a replicated resource and the replication activity should be transparent to the users. Two important issues related to replication transparency are naming of replicas and replication control. It is the responsibility of the system to name the various copies of a resource and to map a user-supplied name of the resource to an appropriate replica of the resource. Furthermore, replication control decisions, such as how many copies of a resource should be created, where each copy should be placed, and when a copy should be created or deleted, should be made entirely automatically by the system in a user-transparent manner.
Failure Transparency
Failure transparency deals with masking partial failures in the system from the users, such as a communication link failure, a machine failure, or a storage device crash. A distributed operating system having failure transparency will continue to function, perhaps in a degraded form, in the face of partial failures. For example, suppose the file service of a distributed operating system is to be made failure transparent. This can be done by implementing it as a group of file servers that closely cooperate with each other to manage the files of the system, and that function in such a manner that the users can utilize the file service even if only one of the file servers is up and working. In this case, the users cannot notice the failure of one or more file servers, except for slower performance of file access operations. However, an attempt to design a completely failure-transparent distributed system would result in a very slow and highly expensive system, due to the large amount of redundancy required for tolerating all types of failures.
Migration Transparency
An object is migrated from one node to another for better performance, reliability, or greater security.
The aim of migration transparency is to ensure that the movement of the object is handled automatically by the system in a user-transparent manner. Three important issues in achieving this goal are as follows:
- Migration decisions, such as which object is to be moved from where to where, should be made automatically by the system.
- Migration of an object from one node to another should not require any change in its name.
- When the migrating object is a process, the interprocess communication mechanism should ensure that a message sent to the migrating process reaches it without the need for the sender process to resend it if the receiver process moves to another node before the message is received.
Concurrency Transparency
In a distributed system, multiple users use the system concurrently. In such a situation, it is economical to share the system resources (hardware or software) among the concurrently executing user processes. However, since the number of available resources in a computing system is restricted, one user's processes must necessarily influence the actions of other concurrently executing processes. For example, concurrent updates to the same file by two different processes should be prevented. Concurrency transparency means that each user has the feeling that he is the sole user of the system and that other users do not exist in the system.
For providing concurrency transparency, the resource sharing mechanisms of the distributed operating system must have the following properties:
- An event-ordering property ensures that all access requests to various system resources are properly ordered, to provide a consistent view to all users of the system.
- A mutual-exclusion property ensures that at any time at most one process accesses a shared resource, which must not be used simultaneously by multiple processes if program operation is to be correct.
- A no-starvation property ensures that if every process that is granted a resource (which must not be used simultaneously by multiple processes) eventually releases it, then every request for that resource is eventually granted.
- A no-deadlock property ensures that a situation will never occur in which competing processes prevent their mutual progress even though no single one requests more resources than are available in the system.
Performance Transparency
The aim of performance transparency is
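The mutual-exclusion property listed above (for example, preventing lost updates when two processes modify the same data concurrently) can be illustrated with a lock in Python. Threads on one machine stand in for processes in the distributed system; a real distributed operating system would use a distributed mutual-exclusion algorithm rather than a local lock, so this is only a local sketch of the property itself.

```python
import threading

counter = 0                 # the shared resource
lock = threading.Lock()     # enforces mutual exclusion on it

def update(times):
    """Increment the shared counter; the lock serializes each read-modify-write."""
    global counter
    for _ in range(times):
        with lock:          # at most one thread in the critical section at a time
            counter += 1

threads = [threading.Thread(target=update, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 (without the lock, concurrent updates could be lost)
```

With the lock, every one of the 400,000 increments is applied; without it, two threads could read the same value and one update would overwrite the other, which is exactly the situation the mutual-exclusion property rules out.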