Code Offloading in Opportunistic Computing

Staff - Faculty of Informatics

Date: 28 November 2017 / 16:30 - 17:30

USI Lugano Campus, room CC-250, Main building (Via G. Buffi 13)

You are cordially invited to attend the PhD Dissertation Defense of Alan FERRARI on Tuesday, November 28th 2017 at 15h45 in room CC-250 (Main building)

Abstract:

With the advent of cloud computing, applications are no longer tied to a single device: they can be migrated to a high-performance machine located in a distant data centre. The key advantage is improved performance and, consequently, a better user experience.

This activity is commonly referred to as computational offloading and it has been extensively investigated in recent years. The natural candidate for computational offloading is the cloud, but recent results point out the hidden costs of cloud reliance in terms of latency and energy; Cuervo et al. illustrate the limitations of cloud-based computational offloading caused by WAN latency. The dissertation confirms the results of Cuervo et al. and illustrates more use cases where the cloud may not be the right choice.

This dissertation addresses the following question: is it possible to build a novel approach to computation offloading that overcomes the limitations of the state of the art? In other words, is it possible to create a computational offloading solution that can use local resources when the cloud is not usable, and that removes the strong bond with the local infrastructure?

To this end, I propose a novel paradigm for computation offloading named AnyRun Computing, whose goal is to use any piece of higher-end hardware (locally or remotely accessible) to offload a portion of the application.

With AnyRun Computing I remove the boundaries that tie the solution to a fixed infrastructure: locally available devices are added to the pool of candidates, increasing the chances of successful offloading.

To achieve the goals of the dissertation it is fundamental to have a clear view of all the steps that take part in the offloading process. To this end, I first provide a categorization of these activities, their interactions, and their impact on the system.

The outcome of this analysis is the mapping of the problem to a combinatorial optimization problem that is notoriously known to be NP-hard. A set of well-known approaches exists for solving such problems, but they cannot be used in this scenario because they require a global view that only a centralised infrastructure can maintain. Thus, local solutions are needed.
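To make the combinatorial nature of the problem concrete, the following is a minimal illustrative sketch (not the dissertation's exact formulation): assigning application tasks to devices under capacity constraints is a generalized-assignment-style problem, solvable by brute force only for tiny instances. All names and numbers below are hypothetical.

```python
# Illustrative sketch: task-to-device assignment as combinatorial optimization.
# Each task runs on exactly one device; minimize total cost subject to
# per-device capacity. This is NP-hard in general, hence the brute force
# below is viable only for tiny instances.
from itertools import product

def best_assignment(cost, capacity, load):
    """cost[t][d]: cost of task t on device d; load[t]: resource demand of
    task t; capacity[d]: resource budget of device d."""
    n_tasks, n_devices = len(cost), len(capacity)
    best, best_cost = None, float("inf")
    for assign in product(range(n_devices), repeat=n_tasks):
        used = [0] * n_devices
        for t, d in enumerate(assign):
            used[d] += load[t]
        if any(used[d] > capacity[d] for d in range(n_devices)):
            continue  # violates a device's capacity
        total = sum(cost[t][d] for t, d in enumerate(assign))
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost

# Device 0 = local phone, device 1 = nearby laptop (hypothetical numbers).
cost = [[5, 2], [4, 3], [6, 1]]
load = [2, 1, 3]
capacity = [4, 4]
print(best_assignment(cost, capacity, load))  # → ((0, 1, 1), 9)
```

Note that the cheapest per-task choice (everything on device 1) is infeasible here, which is exactly why the problem is combinatorial rather than greedy.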

Moving further, to tackle the AnyRun Computing paradigm empirically, I propose ARC, a novel software framework whose objective is to decide whether offloading to any resource-rich device willing to lend assistance is advantageous compared to local execution, with respect to a rich array of performance dimensions.

The core of ARC is the Inference Model, which receives a rich set of information about the available remote devices from SCAMPI and employs it to profile each device; in other words, it decides whether offloading is advantageous compared to local execution, i.e. whether it can reduce the local footprint in the dimensions of interest (CPU and RAM usage, execution time, and energy consumption).
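A decision of this kind can be sketched as follows. This is only an assumed, simplified illustration of the idea (the profile fields, the dominance rule, and all numbers are hypothetical, not ARC's actual model): offload only if the predicted remote execution reduces the local footprint along every dimension of interest.

```python
# Hypothetical sketch of an offloading decision in the spirit of an
# inference model: compare the predicted local footprint of running the task
# locally against running it remotely (including the transfer overhead paid
# locally), along several performance dimensions.
DIMENSIONS = ("cpu", "ram", "time_s", "energy_j")

def should_offload(local_profile, remote_profile):
    """Return True if offloading dominates local execution on all dimensions.
    Each profile maps a dimension to its predicted cost on the local device."""
    return all(remote_profile[d] < local_profile[d] for d in DIMENSIONS)

# Illustrative profiles for one task and one candidate device.
local = {"cpu": 0.9, "ram": 0.6, "time_s": 12.0, "energy_j": 30.0}
remote = {"cpu": 0.2, "ram": 0.1, "time_s": 5.0, "energy_j": 9.0}
print(should_offload(local, remote))  # → True: remote wins on every dimension
```

A strict-dominance rule is deliberately conservative; a weighted score over the dimensions would be a natural relaxation.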

To evaluate ARC empirically, I present a set of experimental results in the cloud, cloudlet, and opportunistic domains. In the cloud domain I used state-of-the-art cloud solutions over a set of significant benchmark problems and three WAN access technologies (3G, 4G, and high-speed WAN). The main outcome is that the cloud is an appealing solution for a wide variety of problems, but there is a set of circumstances in which it performs poorly.

The evaluation also shows, in terms of latency, the main limitation of adopting a cloud-based approach, which is strictly tied to the balance between computation and transmission costs: problems with high transmission costs tend to perform poorly, unless they also have high computational needs. This confirms the results demonstrated by Cuervo et al.
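The computation/transmission trade-off can be captured by a back-of-the-envelope model (illustrative only, not the dissertation's evaluation model): offloading pays off when the compute time saved exceeds the time spent shipping the data over the network. All parameters below are hypothetical.

```python
# Illustrative break-even model for the computation vs. transmission trade-off:
# offloading is faster only when the remote compute savings outweigh the
# round-trip latency plus the data-transfer time.
def offload_speedup(cycles, local_hz, remote_hz, data_bytes, bandwidth_bps, rtt_s):
    local_time = cycles / local_hz
    remote_time = rtt_s + data_bytes * 8 / bandwidth_bps + cycles / remote_hz
    return local_time / remote_time  # > 1 means offloading is faster

# High computation, small payload over a fast link: offloading helps.
print(offload_speedup(2e10, 1e9, 1e10, 1e5, 10e6, 0.05))   # ≈ 9.4x speedup
# Large payload, light computation over a 3G-like link: offloading hurts.
print(offload_speedup(1e9, 1e9, 1e10, 5e7, 2e6, 0.2))      # well below 1
```

This mirrors the observation above: high transmission cost dominates unless the computational demand is large enough to amortize it.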

The second part of the evaluation takes place in opportunistic/cloudlet scenarios, where I used my custom-made testbed to compare ARC with MAUI, the state of the art in computation offloading. To this end, I performed two distinct experiments: the first in a cloudlet environment and the second in an opportunistic environment. The key outcome is that ARC virtually matches the performance of MAUI in the cloudlet environment and improves on it by 50% to 60% in the opportunistic domain.


Dissertation Committee:

  • Prof. Luca Maria Gambardella, Istituto Dalle Molle di Studi sull’Intelligenza Artificiale, Switzerland (Research Advisor)
  • Prof. Silvia Giordano, SUPSI, Switzerland (Research co-Advisor)
  • Prof. Mehdi Jazayeri, Università della Svizzera italiana, Switzerland (Internal Member)
  • Prof. Cesare Pautasso, Università della Svizzera italiana, Switzerland (Internal Member)
  • Prof. Mario Gerla, University of California, USA (External Member)
  • Prof. Bernhard Plattner, ETH Zurich, Switzerland (External Member)