Abstract:
The huge volume of data produced by IoT devices requires the processing power and storage capacity provided by cloud, edge, and fog computing systems. Each of these computing paradigms has benefits as well as drawbacks. Cloud computing provides abundant storage and computational capability but increases communication latency. Edge and fog computing offer similar advantages with lower latency, but they have limited storage, computational capacity, and coverage. Initially, conventional optimization techniques were employed to address the task offloading problem. However, conventional optimization cannot keep up with the tight latency requirements of decision-making in complex systems, which range from sub-seconds down to milliseconds. As a result, machine learning algorithms, particularly reinforcement learning, are gaining popularity because they can quickly solve offloading problems in dynamic environments with partially unknown information. We survey the literature to examine the different techniques used to tackle the latency-aware intelligent task offloading problem in cloud, edge, and fog computing. The lessons learned from this survey are then presented in this report. Lastly, we identify open research directions and challenges that must be overcome to achieve the lowest latency in task offloading systems.
Keywords: task offloading, cloud computing, edge computing, fog computing, Internet of Things, latency.