Edge computing is a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible.
Data is the lifeblood of modern business, providing valuable business insight and supporting real-time control over critical business processes and operations. Today's businesses are awash in an ocean of data, and huge amounts of data can be routinely collected from sensors and IoT devices operating in real time from remote locations and inhospitable operating environments almost anywhere in the world.
But this virtual flood of data is also changing the way businesses handle computing. The traditional computing paradigm built on a centralized data center and everyday internet isn't well suited to moving endlessly growing rivers of real-world data. Bandwidth limitations, latency issues and unpredictable network disruptions can all conspire to impair such efforts. Businesses are responding to these data challenges through the use of edge computing architecture.
In simplest terms, edge computing moves some portion of storage and compute resources out of the central data center and closer to the source of the data itself. Rather than transmitting raw data to a central data center for processing and analysis, that work is instead performed where the data is actually generated -- whether that's a retail store, a factory floor, a sprawling utility or across a smart city. Only the result of that computing work at the edge, such as real-time business insights, equipment maintenance predictions or other actionable answers, is sent back to the main data center for review and other human interactions.
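To make that concrete, here is a minimal sketch in TypeScript of edge-side aggregation. The sensor readings, field names and endpoint URL are all hypothetical; the point is only that the raw stream stays local while a compact result crosses the network.

```typescript
// Minimal sketch of edge-side processing (hypothetical data and endpoint).
// Instead of shipping every raw reading to a central data center, the edge
// node computes a summary locally and transmits only that result.

interface SensorReading {
  sensorId: string;
  celsius: number;
  timestamp: number;
}

interface EdgeSummary {
  site: string;
  count: number;
  meanCelsius: number;
  maxCelsius: number;
}

function summarize(site: string, readings: SensorReading[]): EdgeSummary {
  if (readings.length === 0) throw new Error("no readings to summarize");
  const temps = readings.map((r) => r.celsius);
  return {
    site,
    count: temps.length,
    meanCelsius: temps.reduce((a, b) => a + b, 0) / temps.length,
    maxCelsius: Math.max(...temps),
  };
}

async function reportToDataCenter(summary: EdgeSummary): Promise<void> {
  // "central.example.com" is a placeholder for the main data center.
  await fetch("https://central.example.com/summaries", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(summary), // a few hundred bytes instead of the raw stream
  });
}
```

Only the small summary travels back to the data center; the raw readings never leave the site.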
Thus, edge computing is reshaping IT and business computing. This article takes a comprehensive look at what edge computing is, how it works, the influence of the cloud, edge use cases, tradeoffs and implementation considerations.
Key capabilities for edge computing
- Manage the distribution of software at massive scale: Reduce the number of administrators needed, save the associated costs and deploy software where and when it's needed.
- Leverage open-source technology: Choose an edge computing solution that nurtures the ability to innovate and can handle the diversity of equipment and devices in today's marketplace.
- Address security concerns: Know that the right workloads are on the right machine at the right time, and make sure there's an easy way to govern and enforce your enterprise's policies.
- Engage a trusted partner with deep industry expertise: Find a vendor with a proven multicloud platform and a comprehensive portfolio of services designed to increase scalability, accelerate performance and strengthen security in your edge deployments, and ask about extended services that maximize intelligence and performance at the edge.
What is an example of edge computing?
Consider a building secured with dozens of high-definition IoT video cameras. These are "dumb" cameras that simply output a raw video signal and continuously stream that signal to a cloud server. On the cloud server, the video output from all the cameras is put through a motion-detection application to ensure that only clips featuring activity are saved to the server's database. This means there is a constant and significant strain on the building's internet infrastructure, as significant bandwidth gets consumed by the high volume of video footage being transferred. Additionally, there is very heavy load on the cloud server that has to process the video footage from all the cameras simultaneously.
Now imagine that the motion-detection computation is moved to the network edge. What if each camera used its own internal computer to run the motion-detection application and then sent footage to the cloud server only as needed? This would result in a significant reduction in bandwidth use, because much of the camera footage would never have to travel to the cloud server.
Additionally, the cloud server would now only be responsible for storing the important footage, meaning that the server could communicate with a higher number of cameras without getting overloaded. This is what edge computing looks like.
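A minimal sketch of that edge-side logic, assuming grayscale frames arrive as raw byte arrays: the frame format, threshold values and upload endpoint below are illustrative, not a real camera API.

```typescript
// Sketch of on-camera motion detection via simple frame differencing.
// Frames are assumed to be grayscale Uint8Arrays of equal length;
// both thresholds are illustrative values, not tuned constants.

const PIXEL_DELTA_THRESHOLD = 25; // per-pixel change that counts as "different"
const MOTION_PIXEL_RATIO = 0.01;  // fraction of changed pixels that counts as motion

function hasMotion(previous: Uint8Array, current: Uint8Array): boolean {
  let changed = 0;
  for (let i = 0; i < current.length; i++) {
    if (Math.abs(current[i] - previous[i]) > PIXEL_DELTA_THRESHOLD) {
      changed++;
    }
  }
  return changed / current.length > MOTION_PIXEL_RATIO;
}

// Hypothetical upload step: only clips that contain motion ever leave
// the camera, so idle footage consumes no bandwidth at all.
async function processFrame(
  previous: Uint8Array,
  current: Uint8Array,
  clip: Blob
): Promise<void> {
  if (hasMotion(previous, current)) {
    await fetch("https://storage.example.com/clips", { method: "POST", body: clip });
  }
}
```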

What are the benefits of edge computing?
Cost savings
As seen in the example above, edge computing helps minimize bandwidth use and server resources. Bandwidth and cloud resources are finite and cost money. With every household and office becoming equipped with smart cameras, printers, thermostats and even toasters, the number of connected devices keeps climbing; Statista predicts that by 2025 there will be over 75 billion IoT devices installed worldwide. To support all those devices, significant amounts of computation will have to move to the edge.
Performance
Another significant benefit of moving processes to the edge is reduced latency. Every time a device needs to communicate with a distant server, that creates a delay. For example, two coworkers in the same office chatting over an IM platform might experience a sizable delay because each message has to be routed out of the building, travel to a server somewhere across the globe and come back before it appears on the recipient's screen. If that process is brought to the edge, with the company's internal router in charge of transferring intra-office chats, that noticeable delay would not exist.
Similarly, when users of all kinds of web applications run into processes that have to communicate with an external server, they encounter delays. The duration of these delays varies with available bandwidth and the server's location, but the delays can be avoided altogether by bringing more processes to the network edge.
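To see the effect for yourself, a small sketch like the one below times a round trip to a nearby endpoint versus a distant one. The URLs are placeholders, and the actual numbers depend entirely on your network.

```typescript
// Rough round-trip timing sketch; run as an ES module (top-level await).
// Both URLs are placeholders, not real endpoints.
async function roundTripMs(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" });
  return performance.now() - start;
}

// A nearby edge endpoint typically answers in a few milliseconds,
// while a distant origin can take tens to hundreds of milliseconds.
const edgeMs = await roundTripMs("https://edge.example.com/ping");
const originMs = await roundTripMs("https://faraway-origin.example.com/ping");
console.log(`edge: ${edgeMs.toFixed(0)} ms, origin: ${originMs.toFixed(0)} ms`);
```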
New functionality
In addition, edge computing can provide new functionality that wasn't previously practical. For example, a company can process and analyze data at the point where it is collected, which makes real-time analysis possible because the data no longer has to make a round trip to a distant data center first.
To recap, the key benefits of edge computing are:
- Decreased latency
- Decrease in bandwidth use and associated cost
- Decrease in server resources and associated cost
- Added functionality
What are the drawbacks of edge computing?
One drawback of edge computing is that it can expand the attack surface. With the addition of more "smart" devices to the mix, such as edge servers and IoT devices with robust built-in computers, there are new opportunities for malicious attackers to compromise these devices.
Another drawback of edge computing is that it requires more local hardware. For example, while an IoT camera needs only a basic built-in computer to send its raw video data to a web server, it would require a much more sophisticated computer with more processing power to run its own motion-detection algorithms. However, the dropping costs of hardware are making it cheaper to build smarter devices.
One way to eliminate the need for extra local hardware is to take advantage of edge servers. For example, with Cloudflare's network of 320 geographically distributed edge locations, Cloudflare customers can have edge code running worldwide using Cloudflare Workers.
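As a sense of scale, a minimal Worker in the documented module format is only a few lines of TypeScript; deployed once, it runs at whichever edge location is closest to each visitor. The response text here is just an example.

```typescript
// A minimal Cloudflare Worker in the module format.
// Deployed once, this handler runs at the edge location nearest
// to the visitor; no customer-managed hardware is involved.
export default {
  async fetch(request: Request): Promise<Response> {
    // Respond directly from the edge, with no origin server round trip.
    const url = new URL(request.url);
    return new Response(`Hello from the edge! You requested ${url.pathname}`);
  },
};
```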