Distributed Computing
What is Distributed Computing?
Distributed computing, in the simplest terms, is handling compute tasks via a network of computers or servers, rather than a single computer and processor (referred to as a monolithic system).
How does distributed computing work?
Distributed computing works by sharing processing workloads across a large, elastically scalable pool of computing resources, connected via the Internet or a cloud-based network. Each processing node handles its own workload, but the overall compute load is shared dynamically among all nodes. Nodes can be brought online in real time to handle process-intensive workloads, and any point of failure remains isolated from the rest of the distributed computing system.
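The workload-sharing and failure-isolation ideas above can be sketched in a few lines of Python. This is a minimal simulation, not a real cluster: each thread-pool worker stands in for a node, and one node (node 2) is hard-coded to fail so we can see that its failure stays isolated while the other nodes finish their shares.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_chunk(node_id, chunk):
    """Simulate one node handling its share of the overall workload."""
    if node_id == 2:  # simulate an isolated node failure
        raise RuntimeError(f"node {node_id} failed")
    return sum(chunk)

# Split one big job into per-node chunks
workload = [list(range(i * 10, (i + 1) * 10)) for i in range(4)]

results, failures = [], []
with ThreadPoolExecutor(max_workers=4) as pool:  # each worker stands in for a node
    futures = {pool.submit(process_chunk, i, chunk): i
               for i, chunk in enumerate(workload)}
    for fut in as_completed(futures):
        try:
            results.append(fut.result())
        except RuntimeError:
            failures.append(futures[fut])  # the failure stays isolated; others finish

print(sorted(results), failures)  # three chunks succeed, node 2 is recorded as failed
```

In a real distributed system the failed chunk would be rescheduled onto a healthy node; here it is only recorded, to keep the sketch short.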
Distributed computing vs. cloud computing
The critical difference between distributed computing and cloud computing is the location and ownership of the computing resources. In distributed computing, the resources are typically owned and operated locally and connected via a network. In cloud computing, all the resources (hardware, software, infrastructure) are provided by a cloud provider and delivered over the network.
What is distributed tracing?
Distributed tracing, sometimes referred to as distributed request tracing, is a method for tracking a request as it moves through the many separate processes of a distributed system. This is critical for identifying points of failure such as bugs, bottlenecks, or throttling in a larger distributed computing scenario. As the name suggests, it traces each step of a request to give greater insight into the workings of a larger, complex system.
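The core mechanism behind distributed tracing can be sketched without any tracing library: a single trace ID travels with the request across every hop, and each hop records a timed span. The service names (frontend, checkout, payment) are hypothetical; a real system would use something like OpenTelemetry and export spans to a tracing backend rather than a list.

```python
import time
import uuid

spans = []  # collected trace records; a real system exports these to a backend

def traced(service, trace_id, work):
    """Record a span (trace ID, service name, duration) around a unit of work."""
    start = time.perf_counter()
    result = work(trace_id)  # the trace ID travels with the request
    spans.append({"trace_id": trace_id, "service": service,
                  "ms": (time.perf_counter() - start) * 1000})
    return result

# Hypothetical call chain: frontend -> checkout -> payment
def payment(trace_id):
    return "paid"

def checkout(trace_id):
    return traced("payment", trace_id, payment)

def frontend(trace_id):
    return traced("checkout", trace_id, checkout)

trace_id = uuid.uuid4().hex
traced("frontend", trace_id, frontend)

# Every span shares the trace ID, so the whole request can be reassembled
assert all(s["trace_id"] == trace_id for s in spans)
print([s["service"] for s in spans])  # innermost spans finish (and record) first
```

Because all spans carry the same trace ID, a backend can reassemble the full request path and show exactly where time was spent or where a hop failed.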
What is the difference between horizontal and vertical scaling?
Vertical scaling strengthens processing power without increasing the footprint, meaning that one would add RAM, upgrade the CPU, or add storage to an existing computer or server. Horizontal scaling strengthens computing power by increasing the overall footprint, such as adding servers or chief/worker computers to a network.
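A back-of-the-envelope sketch makes the contrast concrete. The throughput figures below are made up purely for illustration: vertical scaling multiplies the capacity of one machine, while horizontal scaling multiplies the number of machines.

```python
# Assumed baseline: one server handles 100 requests/second (illustrative figure).
baseline_rps = 100

# Vertical scaling: same footprint, stronger hardware (assume a 2x faster CPU).
vertical_rps = baseline_rps * 2

# Horizontal scaling: same hardware, bigger footprint (assume 4 servers
# behind a load balancer with perfectly even distribution).
horizontal_rps = baseline_rps * 4

print(vertical_rps, horizontal_rps)  # 200 400
```

In practice, horizontal scaling rarely distributes perfectly evenly, but unlike vertical scaling it has no single-machine hardware ceiling, which is why distributed systems favor it.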
What are the types of distributed computing?
A variety of complex architectures are used in distributed computing, based on resources and required tasks. Because distributed computing is scalable, there can be nuanced differences in large networks, but many will fall into one of the following basic categories:
Client-server
A client-server network consists of a central server, handling processing and storage duties, with clients functioning as terminals that send and receive messages to/from the server. The most common example of a client-server network is email.
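The client-server pattern can be sketched with Python's standard socket library. This is a minimal localhost sketch, not a production server: the central server accepts one connection and replies to one message (an echo stands in for real processing and storage work), and the client acts as a terminal that sends a message and receives the reply.

```python
import socket
import threading

def serve(server_sock):
    """Central server: receive one client message and send a reply."""
    conn, _ = server_sock.accept()
    with conn:
        msg = conn.recv(1024)
        conn.sendall(b"server got: " + msg)

# Server listens on an OS-assigned port on localhost
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=serve, args=(server_sock,), daemon=True).start()

# Client: a terminal that sends a message and receives the server's response
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

server_sock.close()
print(reply.decode())  # server got: hello
```

All processing happens on the server side; the client only sends and receives, which is exactly the division of labor in an email system, where clients fetch and submit messages while the mail server stores and routes them.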
Three-tier
In this type of distributed computing network, the first tier is called the presentation tier and is the interface through which an end user sends and receives messages. The middle section is called the application tier, middle tier, or logic tier and controls the application’s functionality. The final tier is the data tier: the database servers or file shares that house the data needed to complete tasks. The most common example of a three-tier system is an e-commerce site.
Note that there is some crossover between “multitier” or “n-tier” distributed systems and three-tier systems, since multitier and n-tier architectures are generalizations of the three-tier model. The main distinction is that each tier runs in a separate physical location and is responsible for specialized, localized tasks within the larger computing architecture.
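The three-tier separation can be sketched as three layers of functions, using the article's e-commerce example. This is an in-process sketch with made-up product data; in a real deployment each tier would run on separate machines, with the tiers communicating over the network.

```python
# Data tier: stands in for a database server (product record is made up)
PRODUCTS = {"sku-1": {"name": "Widget", "price": 9.99, "stock": 3}}

def data_get_product(sku):
    return PRODUCTS.get(sku)

# Application (logic) tier: business rules live here, not in the UI or database
def logic_order(sku, quantity):
    product = data_get_product(sku)
    if product is None or product["stock"] < quantity:
        return {"ok": False, "reason": "unavailable"}
    product["stock"] -= quantity
    return {"ok": True, "total": round(product["price"] * quantity, 2)}

# Presentation tier: formats the result for the end user
def present_order(sku, quantity):
    result = logic_order(sku, quantity)
    return f"Total: ${result['total']}" if result["ok"] else "Sorry, unavailable."

print(present_order("sku-1", 2))  # Total: $19.98
print(present_order("sku-1", 5))  # Sorry, unavailable.
```

Notice that the presentation tier never touches the product table and the data tier knows nothing about ordering rules; each tier can be scaled or replaced independently, which is the point of the architecture.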
Peer-to-peer
In this distribution architecture model, peers are equally privileged and share workloads equally. In this environment, the peers, users, or machines are called nodes and require no centralized coordination. The most famous use of peer-to-peer networking was the file-sharing application Napster, which launched in 1999 as a means of sharing music between listeners over the Internet.
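A tiny sketch shows what "no centralized coordination" means in practice: every node both stores files and serves requests, and a node that lacks a file asks its peers directly rather than a central server. The node names and file names are invented for illustration.

```python
class Node:
    """A peer: stores some files, knows its peers, no central coordinator."""
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)  # filename -> content
        self.peers = []

    def fetch(self, filename):
        """Serve from local storage, or ask peers directly."""
        if filename in self.files:
            return self.name, self.files[filename]
        for peer in self.peers:
            if filename in peer.files:
                return peer.name, peer.files[filename]
        return None

# Three equally privileged nodes sharing (made-up) tracks
a = Node("alice", {"track1.mp3": b"..."})
b = Node("bob", {"track2.mp3": b"..."})
c = Node("carol", {})
for node in (a, b, c):
    node.peers = [p for p in (a, b, c) if p is not node]

print(c.fetch("track2.mp3")[0])  # bob
```

Every node is both client and server, which is what made systems like Napster scale: adding listeners also added storage and serving capacity.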
What are the benefits of distributed computing?
Distributed computing has a wide variety of benefits, which explains why just about every modern computing process beyond simple calculations uses a distributed computing architecture.
Scalability
For starters, the network can be architected to meet the needs of its tasks, and it can also scale dynamically in real time, onboarding nodes to meet demand and returning them to inactive states when demand subsides.
Reliability
Because of the nature of a distributed system, redundancy is inherent in the architecture. Just as nodes can be brought online to support computing tasks, those same nodes can cover for a failed or malfunctioning node, contributing to zero-downtime operation. In an e-commerce scenario, if a shopping cart server failed mid-transaction, a healthy server could step in and complete the sale.
Speed
The single most important benefit of distributed computing systems is the speed at which complex tasks are handled. Where a single server might get bogged down in heavy traffic, a distributed system can scale in real time to handle the same tasks with more computing power. Essentially, the distributed system can be architected to balance workloads by dynamically matching demand with resources.
How does HPE enhance distributed computing with modern data management and cloud solutions?
HPE has decades of experience working with global organizations to build modern data management strategies and solutions. The HPE portfolio spans on-premises to cloud-enabled, end-to-end intelligent and workload-optimized solutions to help you make sense of your data and unlock business value faster.
HPE GreenLake for Compute
Multi-generational IT environments are complex, not optimized for cost or speed, span various locations, and often require overprovisioning. The move to a cloud platform with an innovative compute foundation can unify and modernize data everywhere, from edge to cloud. With a cloud operational experience, you’ll gain the speed needed for today’s digital-first world, be able to act on data-first modernization initiatives, and have full visibility and control over costs, security, and governance.
Configuring, installing, and operating compute resources is labor- and capital-intensive. This cloud service approach from HPE GreenLake offers end-to-end cloud-like simplicity and efficiency, with workload-optimized modules delivered directly to your data center or edge location and installed for you by HPE. Your IT staff will be freed to focus on higher-value tasks, and trusted HPE experts will provide proactive and reactive support.