Distributed Systems and Parallel Programming
Modern computing continues to create new capabilities that in turn raise new needs. Trends such as distributed systems and parallel programming are being embraced by organizations to link workstations into a unified whole, addressing technical issues such as system downtime, device sharing, and security through appropriate network infrastructures. Unfortunately, the network architectures commonly used to tackle these issues often fall short. This outline explores alternative solutions showing how modern forms of network infrastructure can improve computer performance.
Modern computing has encouraged the rise of further technological inventions. These inventions have urged system administrators to devise new solutions to problems such as system downtime, device sharing, and security through distributed systems. Distributed systems are preferred because they offer a platform for executing work in parallel and are adaptable to new computing trends. The need for networks flexible enough to work with existing devices therefore urges organizations to be proactive in creating infrastructures that make it possible to realize organizational objectives (Wang, Jie & Chen, 2009).
Statement of the Problem
The problem is whether organizations can harness the power of their systems through a single application that makes optimum use of each computer via proper network connectivity. Wang, Jie & Chen (2009) note that distributed systems differ from parallel-purpose programmed computers in that they exhibit a higher likelihood of processor failure, lack synchrony in communication and computation, lack homogeneity in their architecture, and communicate over high-latency, low-bandwidth networks. The challenge falls on network developers as they ask whether new computing infrastructures can make systems effective by creating robust platforms that allow an organization to share facilities while solving the stated problems.
Previous research by Meghanathan (2010) suggests that among the most flexible existing networks for solving this problem is grid computing, a technology that allows computers within a network to share resources equally. The network's connectivity embraces a synchronized model that allows all computers within the network to share data. While other authors such as Foster & Kesselman (2003) recommend that the programming language applied also matters to how the computers respond in speed and in answering requests, the type of network an organization embraces likewise determines how it shares data among the nodes. Properly set network grids avoid deadlocks, since a request sent by a computer takes the shortest route to its destination (Carstensen, Morgenthal & Golden, 2012). The heterogeneity of grid computing allows a high level of flexibility in how the computers share facilities such as mainframes, servers, supercomputers, or any other devices that a user group with a common agenda may intend to share (Wang, Jie & Chen, 2009).
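The idea of a grid dividing a shared workload among nodes can be sketched with a local simulation: a hypothetical `grid_sum_of_squares` helper splits data into chunks and farms them out to worker processes that stand in for grid nodes. This is an illustrative sketch using Python's standard multiprocessing pool, not actual grid middleware.

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Each "node" in the grid handles one chunk of the shared workload.
    return sum(x * x for x in chunk)

def grid_sum_of_squares(data, workers=4):
    # Split the data into roughly equal chunks, one per worker node.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        partials = pool.map(process_chunk, chunks)  # parallel execution
    return sum(partials)  # combine the partial results

if __name__ == "__main__":
    print(grid_sum_of_squares(list(range(100))))  # 328350
```

The pattern is the same split-compute-combine structure that grid schedulers apply at network scale, where each chunk would travel to a remote machine instead of a local process.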
New concepts in this field
New approaches are being developed in this field, all with the aim of improving computing infrastructure and serving other purposes such as data security, backup plans in case the main server or facilities fail, and increased reliability and productivity. Although it falls within the category of grid computing, a concept discussed in this outline, cloud computing is considered a unique and the latest form of grid computing (Carstensen, Morgenthal & Golden, 2012). This infrastructure adopts the client-server approach, where large, remotely located servers allow both centralization and online access to data.
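The client-server pattern behind this model can be sketched in miniature: a toy server holds a centralized record and a client fetches it over HTTP. The server, the `fetch_from_cloud` helper, and the stored data are all hypothetical names for illustration, assuming only Python's standard library.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy "cloud" server holding a centralized record that clients fetch remotely.
DATA = b"centralized record"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(DATA)

    def log_message(self, *args):
        pass  # suppress request logging in this example

def fetch_from_cloud():
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()  # the client receives the centralized data
    finally:
        server.shutdown()

if __name__ == "__main__":
    print(fetch_from_cloud())
```

In a real cloud deployment the server side is replicated across remote data centers, but the request-response exchange the sketch shows is the same.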
Existing Approaches and Their Shortcomings
Different forms of computing infrastructure have been applied in most organizations for decades. Unfortunately, as Meghanathan (2010) notes, most of these infrastructures have their own shortcomings and are not preferred, since they tend to slow down computers and affect other aspects such as data security. The need for newer systems that address new demands, such as company growth, has led most companies to begin adopting swifter forms of infrastructure that accommodate interconnecting their devices across many locations (Tselentis, 2010). Some of the previous trends that have served as computing infrastructure are:
- Peer-to-peer: an architecture without central control or coordination. All participants in this model are both suppliers and consumers of resources. This form of infrastructure is slow because there is no coordination among the nodes, and a simple request passes through all the computers before a query is resolved
- Utility computing: as the name implies, it follows the structure of public utilities. One major disadvantage of this method is that nodes closest to the source are at an advantage and can make requests that are processed faster
- Cluster computing: this method connects machines with similar features, such as Linux clusters
- Distributed computing: the most widely known method, which connects computers to several resources. This is the method adopted by this outline
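The cost difference the first bullet describes can be sketched directly: an uncoordinated peer-to-peer query visits peers one by one, while a coordinating directory resolves the same query in one step. The peer names, resources, and lookup helpers below are invented for illustration.

```python
# Peers each hold some resources; without coordination a request is
# forwarded from peer to peer until one of them can answer it.
peers = [
    {"node": "A", "resources": {"file1"}},
    {"node": "B", "resources": {"file2"}},
    {"node": "C", "resources": {"file3"}},
]

def p2p_lookup(resource):
    # Uncoordinated search: count how many peers the request visits.
    hops = 0
    for peer in peers:
        hops += 1
        if resource in peer["resources"]:
            return peer["node"], hops
    return None, hops

def indexed_lookup(resource, index):
    # A coordinating directory (as in a managed distributed system)
    # answers the same query in a single step.
    return index.get(resource), 1

index = {r: p["node"] for p in peers for r in p["resources"]}
print(p2p_lookup("file3"))          # ('C', 3): visited all three peers
print(indexed_lookup("file3", index))  # ('C', 1): one directory hit
```

With three peers the gap is small, but the uncoordinated lookup grows with the number of nodes while the directory lookup does not, which is why pure peer-to-peer query resolution scales poorly.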