
Distributed Computing vs. Edge Computing vs. Cloud Computing: What Network Engineers Need to Know


Organizations that rely heavily on data – virtually every major organization in every industry – are increasingly likely to turn to centralized cloud services, distributed computing systems and edge computing infrastructures. These architectures let organizations meet growing computation, connectivity and bandwidth requirements by taking advantage of a wider variety of computing and data storage resources. However, what the proliferation of these technologies means for network engineers isn't always clear.

Cloud, distributed and edge computing seem similar, but their implementation in network engineering can be quite different. As the demand for network resources increases, organizations will look toward network engineers to find ways to implement these technologies to ensure network stability and reliability and to scale operations. Understanding how cloud networks, distributed networks and edge networks are alike and different requires staying abreast of the changes taking place in networking. Pursuing a 100 percent online Master of Science in Network Engineering (MSNE) at SMU Lyle School of Engineering is one way to develop and refine the skills you'll need to keep up.

How Computer Networking Is Changing

As new paradigms disrupt traditional IT infrastructures, network engineers have to adapt to keep up. Traditional networks made up of physically linked hardware in a centralized location are giving way to hardware that is virtualized and programmable. That change, which is happening at all levels of the networking stack, affects everything from the network architecture of data centers to the network applications on individual servers. These changes are helping network engineers craft highly responsive network ecosystems that are automated, virtualized, programmable, secure and scalable to meet ever-increasing bandwidth demands.

The uptick in virtualized network implementation is likely the result of several factors, including the growing number of networked devices, more applications connecting in real time and the explosion of Big Data. It's not just computers and phones driving this movement. Embedded systems in everything from medical devices and autonomous vehicles to virtual assistants, smart home systems and products not yet invented are connecting to networks. As more real-time applications test the limits of computing power, organizations will have to upgrade their networks to ensure high bandwidth, low latency and robust security.

The deployment of next-generation wireless networks is also changing networking. Technologies like 5G cellular networks and Wi-Fi 6 are boosting data coverage and signal penetration and supporting greater device density, making augmented reality and widespread Internet of Things (IoT) connectivity possible. They're also taxing legacy systems and revealing the limits of current networking infrastructure.

Software-defined wide-area networking (SD-WAN) is making boundary-driven, hardware-based networking obsolete. Today, network engineers can combine cloud servers, data centers and branch offices to build private LAN analogs that function as single, seamless systems. Organizations can move large amounts of data at LAN speeds across vast geographical distances – good news for widely distributed enterprises that need to process data quickly.

Meanwhile, machine learning-driven network automation is playing a growing role in networking. Artificial intelligence software can be used to predict network behaviors and make whatever changes are necessary to keep networks stable, while still giving users access to the resources they need.
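The idea behind this kind of automation can be illustrated with a minimal sketch. The example below is not any vendor's product; it simply flags link-utilization readings that deviate sharply from the recent trend, the sort of signal an automation system might act on by rerouting traffic or adjusting capacity. The window and threshold values are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Flag utilization samples that deviate sharply from the recent trend.

    A reading is anomalous if it falls more than `threshold` standard
    deviations from the mean of the preceding `window` samples.
    """
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Link utilization (%) sampled once per minute; the spike at the end is
# the kind of event an automated system could respond to.
utilization = [41, 43, 40, 42, 44, 41, 43, 95]
print(detect_anomalies(utilization))  # → [7]
```

Production systems use far richer models than a rolling z-score, but the pattern is the same: learn a baseline from recent behavior, then react when observations depart from it.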

And public cloud adoption is on the rise, which means on-premises solutions are giving way to cloud storage as organizations realize how transitioning to public cloud infrastructure can reduce costs, increase scalability and ensure business continuity. Cloud service providers are exploring ways to provide Network-as-a-Service (NaaS), freeing organizations to farm out their networking needs.

Other key changes in networking are related to network security and information security. Network engineers are responsible for securing the entire IT continuum as a flood of new devices connect to the network. Data security is more important than ever and also more challenging than ever to guarantee, given how many people now work remotely on their own devices. Beyond that, IoT devices have further increased the need for security as more network connections expose potential network vulnerabilities. Every device on a network widens the network attack surface.

In the midst of all this change, three primary areas of disruption have evolved: cloud computing, distributed computing and edge computing. Network engineers must understand the use cases for all three moving forward to make strategic decisions about the design, procurement and implementation of appropriate architectures for mission-critical applications.

The applications of cloud, distributed and edge computing may differ, but the technologies are not mutually exclusive. Network engineers can't think only in terms of cloud computing vs. edge computing or distributed computing vs. cloud computing. For example, edge computing is a specific type of distributed computing, public and private clouds can be employed in distributed computing applications, and edge computing can be employed both on the cloud and other distributed networks.

What Is Cloud Computing?

Cloud networks can deliver data more rapidly, reliably and securely than on-site physical networks. A cloud is a cluster of computers, computing devices or servers that are networked and available remotely to provide scalable, high-capacity computing resources and IT services. Cloud computing treats computing as a service, so resources are available over the internet and on demand. Because of this, organizations can purchase cloud computing resources – applications, operating systems, programming environments, storage and processing power – as needed, buying more or fewer as their needs change.

Cloud networking is a type of network infrastructure in which an organization’s network capabilities and resources are hosted on a cloud platform made up of programmable and virtualized hardware. Organizations can use on-premises cloud networking resources to build a private cloud network that network engineers manage in-house, virtual networking resources in a public cloud or a combination of both, known as hybrid cloud. Private clouds limit access and provide hosted services to a limited number of individuals or organizations, while public clouds sell services to anyone. Public cloud providers include Amazon, Google, IBM, Microsoft, Oracle and others.

Deciding whether to use a public cloud, create a private cloud or implement a hybrid cloud involves evaluating not only use cases but also privacy, control and the most efficient use of human and technological resources.

What Network Engineers Need to Know About Cloud Networks

Cloud networking is a way to use virtualization to create a network of servers that delivers data more rapidly, reliably and securely than on-site physical networks. Virtual servers can be on-premises resources that engineers and administrators manage or NaaS resources purchased as services from a vendor. Cloud network resources can include virtual routers, firewalls and bandwidth and network management software, with other tools and functions available on demand as required.

Network engineers must factor in cost, personnel, available network resources and privacy issues when determining whether to build a private network, access a public network or build systems that integrate both. They must also plan to secure any interfaces between the public cloud and private networks.

While traditional network engineers had to design and implement internal networks, today's network engineers must be comfortable using public cloud resources to create virtual private networks (VPNs), which function and appear like an internal network while providing greater security and mitigating the performance problems that can arise when resources are shared. Similarly, third-party virtual private clouds (VPCs) isolate an organization's resources as though they were hosted in house. Public cloud providers also offer content delivery network (CDN) services to move data, APIs and applications quickly and securely.

What Is Distributed Computing?

Clouds are just one example of distributed computing, which coordinates software across multiple computing devices to accomplish a single task. Coordination in distributed computing systems is so seamless that users don't realize multiple machines are working together; in effect, the network functions as a single computer. Distributed computing systems do not share common memory or a physical clock, and the computers in the system don't need to have the same processors or run the same operating systems. This makes them highly flexible and scalable, because the network can be expanded simply by adding more devices.
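The core pattern is easy to sketch: split one task into independent pieces, let separate workers handle them, and aggregate the results so the caller sees a single answer. In this toy example, thread workers stand in for the independent machines of a real distributed system; nothing is shared between them except the final aggregation step.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Each worker handles its own slice of the data independently."""
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    """Split one task across several workers; the caller sees one result.

    Thread workers here are stand-ins for the separate machines of a
    real distributed system -- no shared memory or clock is assumed
    beyond the final aggregation step.
    """
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    return sum(partials)  # aggregate partial results into one answer

print(distributed_sum_of_squares(list(range(10))))  # → 285
```

Real frameworks add scheduling, data movement and fault recovery on top of this divide-and-aggregate skeleton, but the user-facing effect is the same: many machines, one answer.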

The advantages of distributed networks over traditional networks are similar to those of distributed computing. Distributed networks form a united whole that delivers more processing power and storage than any single machine could. They offer greater fault tolerance, enhanced scalability, increased speed and better security.

What Network Engineers Need to Know About Distributed Networks

Although both traditional decentralized networks and distributed networks connect servers in remote locations, they differ. While decentralized networks are characterized by dispersed physical components, distributed networks are defined by concurrent program execution. Distributed networking architecture includes multiple equal interconnected nodes configured to evenly share load over connected network sites.

There's no central server or separate set of master nodes in a distributed network, so data processing is crowdsourced across the network. Software monitors and manages data routing, network bandwidth allocation, access control, load balancing and other networking processes, but there is no top-down node hierarchy. Computing devices in distributed systems can be computers, physical servers, virtual devices, containers or any other node with local memory. Those devices can be loosely coupled in a WAN or tightly coupled in a LAN.
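One way nodes can share work without a central server or master node is rendezvous (highest-random-weight) hashing: every node runs the same deterministic function and independently agrees on which peer owns a given key. The sketch below illustrates the idea; the node names are hypothetical.

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical peer nodes

def owner(key, nodes=NODES):
    """Pick the node responsible for a key via rendezvous hashing.

    Every node can run this same function and agree on the owner
    without consulting a central coordinator.
    """
    def score(node):
        digest = hashlib.sha256(f"{node}:{key}".encode()).hexdigest()
        return int(digest, 16)
    return max(nodes, key=score)

# Any peer that receives a request can route it deterministically,
# with no top-down hierarchy deciding placement.
print(owner("flow-1234"))
```

A useful property of this scheme is stability: if a non-owning node leaves the network, keys it didn't own keep the same owner, so only a minimal share of the load moves.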

What Is Edge Computing?

Edge computing is a type of distributed computing that puts servers, storage and resources closer to the points at which data is generated. Global research and advisory firm Gartner defines edge computing as "a part of a distributed computing topology in which information processing is located close to the edge – where things and people produce or consume that information." It involves pushing client services out as close as possible to the edge of a network by adding additional platforms between cloud and user – or even pushing some services onto the devices themselves.

Putting processing power at the edge of the network reduces latency in applications that need to process massive amounts of data in real time. Interest in edge computing has been driven mainly by the growing number of IoT devices creating massive amounts of data that push network bandwidth requirements to the limit. In many of these applications, it's not feasible, or even desirable, to move that data to the core network for processing or storage. For example, autonomous or semi-autonomous vehicle applications can't wait to send necessary data across a network. They need to take information and process it on the spot.

What Network Engineers Need to Know About Edge Networks

Edge networking uses devices such as modems, routers, routing switches, integrated access devices and multiplexers to control access to and from the core network. To create effective edge systems, network engineers must design networks so that they route some requests to data centers and others to edge servers that act as micro-cloud platforms nearer to users. Different portions of the network may be wired or wireless and can include cloud or on-premises networks. Dividing processes between the core network and the edge servers improves network performance by pushing some network traffic away from the core network. This reduces network latency, saves bandwidth, lowers costs and dramatically enhances the end-user experience.
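The routing decision described above can be sketched as a simple classifier: latency-sensitive request types are served at the nearest edge site when one exists, and everything else goes to the core data center. The service and site names below are illustrative assumptions, not any particular vendor's taxonomy.

```python
# Hypothetical request classifier: latency-sensitive traffic is served
# at the nearest edge site, while bulk work goes to the core data center.

EDGE_SERVICES = {"telemetry", "video-frame", "sensor-event"}  # assumed names

def route(request_type, edge_sites, client_region):
    """Return the site that should handle this request."""
    if request_type in EDGE_SERVICES and client_region in edge_sites:
        return edge_sites[client_region]   # micro-cloud close to the user
    return "core-datacenter"               # everything else hits the core

sites = {"us-west": "edge-sfo", "eu-central": "edge-fra"}
print(route("sensor-event", sites, "us-west"))   # → edge-sfo
print(route("batch-report", sites, "us-west"))   # → core-datacenter
```

In practice this split is enforced by DNS steering, anycast routing or application-layer load balancers rather than a single function, but the design question is the same: which requests justify a nearby server, and which can tolerate the trip to the core.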

The Only Constant in Networking Is Change

The skills a network engineer needs to get hired today are not the same skills they needed a decade ago, nor are they the same skills they'll need to get hired a decade from now. Network engineering is technically complex and time-consuming, and it's not always easy to look past challenges to see trends. While the basics of networking tend to change slowly, innovations in computing – for example, fog computing, mist computing, multi-cloud computing – are prompting sweeping transformations in the field. There's no way to know what kinds of computer and network technologies will emerge next. What is clear is that the speed of technological change is making disruptive breakthroughs more common.

A successful network engineer should know how to use the technologies of today and learn how to apply networking principles to tomorrow's technologies – including those that haven't been invented yet. Network engineers must learn the fundamentals of networking, reskill based on developments in the field such as virtualization and automation, and then most importantly, master the art of ongoing professional development. Network engineers are, by necessity, lifelong learners because bachelor's degree and master's degree programs in networking can only speak to the technologies of today.

That said, the very best network engineering master's programs teach IT professionals not just network engineering skills but also how to adapt to future changes. The 100 percent online Master of Science in Network Engineering program at SMU Lyle School of Engineering combines a didactic curriculum designed by leading experts such as Bhalaji Kumar and Dr. M. Scott Kingsley with extensive hands-on work made up of labs, industry projects and research – all designed to teach network engineers how to excel in this rapidly changing field. Apply now.