Fog vs Edge Computing: What Are the Differences and Do They Matter?
The last few decades have seen a massive shift from on-premises software to cloud computing. By storing data and performing computation elsewhere, we have freed ourselves to do more on our phones, computers, and IoT devices without needing the corresponding extra memory or computing power. However, things are about to begin swinging back in the other direction.
This is happening for a variety of reasons, including the need for extremely low latency in certain applications such as self-driving cars. Shifting computing power nearer to the edge of the network can also reduce costs and improve security. According to Matt Vasey, who focuses on IoT strategy at Microsoft, “The ideal use cases [for both fog and edge computing] require intelligence near the edge where ultra low latency is critical, run in geographically dispersed areas where connectivity can be irregular, or create terabytes of data that are not practical to stream to the cloud and back.”
The Two Types of Computing Share Many Similarities
The terms edge and fog computing are often used interchangeably, and they do share several key similarities. Both shift the processing of data towards the source of data generation, and both attempt to reduce the amount of data sent to the cloud. The goals are to decrease latency, and thereby improve system response time in remote, mission-critical applications; to improve security, as less data needs to be sent across the public internet; and to reduce costs.
Some applications gather a huge amount of data, which would be costly to send to a central cloud service, yet only a small fraction of that data may be relevant. If some processing is done at the edge of the network and only the relevant information is sent to the cloud, costs fall accordingly.
Think of a security camera. Sending 24 hours of video to a central server would be hugely expensive, and 23 of those hours may show nothing more than an empty hallway. With edge computing, you can choose to send only the one hour in which something is actually happening.
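The filtering step can be sketched in a few lines. This is a minimal illustration, not a real motion detector: frames are represented here as flat lists of pixel intensities, and the function names and threshold are assumptions for the example.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel difference between two consecutive frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def frames_to_upload(frames, threshold=10.0):
    """Return indices of frames that differ noticeably from the previous
    frame -- i.e. the footage where something is actually happening.
    Everything else stays on the edge device and is never uploaded."""
    keep = []
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            keep.append(i)
    return keep

# Example: a mostly static hallway, with the scene changing at frame 3.
static = [100] * 16
changed = [100] * 8 + [180] * 8
frames = [static, static, static, changed, changed]
print(frames_to_upload(frames))  # prints [3]
```

In a real deployment the comparison would run on actual camera frames (e.g. via an on-device vision library), but the principle is the same: the edge node decides locally what is worth transmitting.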
The main difference between edge computing and fog computing is where the data is processed.
Both fog and edge computing involve processing data closer to where it originates; the key difference is exactly where that processing takes place. Fog computing processes data at the LAN (local area network) level of the network architecture, using a centralized system that interacts with industrial gateways and embedded computer systems.
Edge computing, by contrast, processes much of the data that IoT devices generate directly on the devices themselves.
How Fog and Edge Computing are Used Differently
As we can see, these two technologies are very similar. To differentiate them, consider the use case of a smart city, complete with smart traffic management infrastructure. Each traffic light has a connected sensor which can detect how many cars are waiting on each side of a junction and prioritize turning the light green for the side with the greatest number of cars. This is a fairly simple calculation which can be performed in the traffic light itself using edge computing. This reduces the amount of data which needs to be sent over the network, cutting operating and storage costs.
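The calculation performed in the light itself really is this simple. A minimal sketch, assuming hypothetical approach names and sensor counts:

```python
def next_green(car_counts):
    """Edge-side logic running in the traffic light itself:
    return the approach with the most waiting cars."""
    return max(car_counts, key=car_counts.get)

# Illustrative sensor readings for one junction.
counts = {"north": 4, "south": 9, "east": 2, "west": 5}
print(next_green(counts))  # prints "south"
```

No network round trip is needed: the decision is made, and acted on, entirely at the edge.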
Now imagine that those traffic lights are part of a network of connected objects which includes more traffic lights, pedestrian crossings, pollution monitors, bus GPS trackers, and so on. The decision about whether to turn that traffic light green in five seconds or ten becomes more sophisticated. Perhaps a bus is running late on the side of the junction with less traffic. Maybe it has started raining and, in a bid to encourage residents to travel more actively, the city has decided to give priority to pedestrians and cyclists at lights when it rains. Is there a pedestrian crossing or a cycle path nearby? Is anyone using it? Is it raining?
In this more complex scenario, micro-data centers can be deployed locally to analyze data from multiple edge nodes. These data centers act like a local mini cloud within the local area network, and this arrangement is considered fog computing.
So Which is “Better”? Fog or Edge Computing?
In summary, as the IoT continues to grow and more data is generated, processing that data close to the point of generation will become crucial. Market analysts agree: according to a recent report by Million Insights, the global edge computing market is predicted to be valued at around $3.24 billion by 2025.
Edge and fog computing will both play a vital part in the future of IoT. Like many IoT considerations, such as which type of connectivity to choose, the answer is not black and white. Whether fog or edge computing is “better” will depend on the specific IoT application, its requirements, and the desired outcomes.