have an additional 6.4 billion “things” technically connected to the cloud worldwide by 2016. With that huge amount of data, some implementations will need a separate cloud for “things” only, to serve the specific needs of IoT & optimize bandwidth. In the long run, existing cloud models are not really designed for the 3 V’s (volume, variety & velocity) of the data that IoT devices generate (see Cisco’s white paper).
That’s why we need “Fog Computing” for IoT. Acquiring sensor data from a single commercial jet means storing & analyzing 10 TB of data for every 30 minutes of flight. A wearable device that measures your heart beat, physiological & motion activities will probably generate GBs of raw data in less than a week of continuous monitoring. In most cases, the data will be lost if it is not pushed to the cloud continuously for analytics & long-term storage. Less-than-ideal bandwidth or infrastructure issues in connecting IoT edge nodes to the cloud further complicate the situation & accelerate the eventual loss of data or connectivity.
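To put the jet example in perspective, a quick back-of-the-envelope calculation (assuming decimal units, i.e. 1 TB = 10^12 bytes) shows the sustained uplink such a stream would demand if pushed straight to the cloud:

```python
# Back-of-the-envelope: sustained bandwidth needed to stream 10 TB
# of jet sensor data per 30 minutes of flight directly to the cloud.
TB = 10**12                 # decimal terabyte, in bytes
data_bytes = 10 * TB        # 10 TB per window
window_s = 30 * 60          # 30-minute flight window, in seconds

throughput_gbps = data_bytes * 8 / window_s / 10**9
print(f"Required uplink: {throughput_gbps:.1f} Gbps")  # ~44.4 Gbps
```

No airborne or typical last-mile link sustains anything close to that rate continuously, which is exactly why the data has to be stored or reduced at the edge.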
With fog computing or “fogging”, the analytics & storage usually provided by the cloud are moved closer to the data source or edge nodes. Other than providing “local” storage to maintain continuous & real-time data acquisition, the fog can also solve the following:
- Redundancy – the latest or buffered data can be uploaded & synchronized with the cloud after a loss in connectivity. In scenarios where earlier data (with time-stamps) is received later than the current data, the redundant data can be chronologically reconstructed.
- Security – additional cryptography & multi-level authentication can be executed at the fog without additional computation needed from the cloud
- Data compression – raw data from sensors & device statuses can be compressed before sending to the cloud to reduce bandwidth requirements
- Lower latency – the fog provides immediate computing resources for less complex analytics & faster responses
- Offline configuration – re-programming or configuration of edge nodes can be done through the cloud; however, network congestion & stability issues might cause failures during updates, possibly rendering the edge nodes useless. The same tasks can be done more reliably through the fog.
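Several of the points above (buffering for redundancy, chronological reconstruction & compression before upload) can be sketched in a few lines of Python. This is a minimal illustration, not a real fog stack: the `FogBuffer` class, its method names & the JSON-over-zlib encoding are all assumptions made for this example.

```python
import json
import zlib

class FogBuffer:
    """Minimal fog-node buffer: holds readings locally while the
    cloud link is down, then releases them compressed & in
    timestamp order when connectivity returns."""

    def __init__(self):
        self._buffer = []  # list of (timestamp, payload) pairs

    def ingest(self, timestamp, payload):
        # Readings may arrive out of order (e.g. delayed retransmits).
        self._buffer.append((timestamp, payload))

    def flush(self):
        # Chronological reconstruction: sort by timestamp so a
        # late-arriving older reading ends up in its proper place.
        batch = sorted(self._buffer, key=lambda r: r[0])
        self._buffer.clear()
        raw = json.dumps(batch).encode("utf-8")
        # Compress before the payload crosses the uplink.
        return zlib.compress(raw)

def decompress_batch(blob):
    """Cloud-side counterpart: recover the ordered batch."""
    return json.loads(zlib.decompress(blob))
```

Usage: ingest readings as they arrive (even out of order), then flush once the cloud is reachable & the batch comes back chronologically ordered.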
So, where does the “fog” reside? Any gateway, router, switch or edge node with a powerful enough embedded processor can host a “fog”. As fog computing is still in its infancy, a lot of on-going development is needed to realize its full potential & adoption. At the core is the standardization or virtualization of software that can run on tiny embedded edge nodes, & protocols similar to what cloud computing has, to make integration seamless. Nevertheless, increasingly powerful edge hardware that can run complex algorithms, larger storage capabilities (see previous post) & flexible embedded OSes make setting up a “fog” easier & more cost-effective compared to “cloud” infrastructure.
So, will “fog computing” eventually take over from “cloud computing”? At the moment, “cloud” & “fog” work hand-in-hand to deliver more robust & reliable IoT services, opening doors to solving other issues plaguing the adoption of IoT like security, QoS, redundancy, etc.
The future of “fog computing” is not so foggy anymore & actually quite clear – it is key to implementing a successful IoT solution.