
Luca Giacomoni
Doctoral Tutor (Informatics)

Research

Research topic and interests
My current research is on data-driven congestion control algorithms, with a focus on networking in data centres. The aim is to enhance transport-layer protocols in data centres and to perform traffic engineering using machine learning techniques. My work builds on the research presented in [1] and [2]. Key aspects to be explored include: applying Bayesian optimisation [3] to enhance Remy's [1] policy optimisation; a study of the exploration/exploitation trade-off over the action space; the implications and risks of the on-line and off-line learning approaches implemented by PCC [2] and Remy, respectively; the impact that network state representations have on the learning model, and how to optimise them using deep reinforcement learning [4]; and network utility function estimation and prediction with Gaussian processes [5], as sketched below.
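To make the last of these directions concrete, here is a minimal sketch of utility estimation with a Gaussian process, using scikit-learn's GaussianProcessRegressor. The `true_utility` function and all numbers are illustrative placeholders standing in for real network measurements, not part of any published method.

```python
# Minimal sketch: estimating a network utility function with a Gaussian
# process (GP), in the spirit of [5]. All values here are toy placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_utility(rate):
    # Hypothetical ground truth: log-throughput reward minus a
    # latency-like penalty that grows with the sending rate.
    return np.log(rate) - 0.05 * rate

# Noisy utility observations at a handful of probed sending rates (Mbps).
rates = rng.uniform(1.0, 40.0, size=(15, 1))
utilities = true_utility(rates).ravel() + rng.normal(0.0, 0.05, size=15)

# Fit a GP so utility can be predicted, with uncertainty, at unprobed rates.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(rates, utilities)

query = np.linspace(1.0, 40.0, 200).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
print(f"rate with highest predicted utility: {query[np.argmax(mean), 0]:.1f} Mbps")
```

The predictive uncertainty (`std`) is what makes GPs attractive here: it indicates which sending rates are worth probing next.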
 
My research interests therefore range from computer networks to machine learning. On one side, I am interested in transport-layer protocols, traffic engineering, network utility maximisation and data-centre networking. On the other, I am interested in function approximation, Gaussian processes, Bayesian optimisation and reinforcement learning [6].
 
Project description 
A growing body of research and operational experience has found that TCP often performs poorly across different settings, leading to a proliferation of domain-specific protocol implementations. The TCP family has little hope of achieving consistently high performance because it cannot adapt to new scenarios, a consequence of hard-wiring control responses to packet-level events. Recently, research has leveraged machine learning techniques in many fields, networking included. In particular, two approaches [1,2] have been proposed to overcome TCP's limitations.
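To make "hard-wired control responses" concrete, the sketch below shows the additive-increase/multiplicative-decrease (AIMD) rule at the heart of classic TCP congestion control: the reaction to each packet-level event is fixed in advance, whatever the network looks like. The constants are the textbook ones, not those of any particular TCP implementation.

```python
# Minimal sketch of TCP-style AIMD: the response to each packet-level
# event is fixed in advance, regardless of the underlying network.
def aimd_update(cwnd, event):
    """Return the new congestion window after a packet-level event."""
    if event == "ack":        # additive increase: ~one segment per RTT
        return cwnd + 1.0 / cwnd
    elif event == "loss":     # multiplicative decrease: halve the window
        return cwnd / 2.0
    return cwnd

cwnd = 10.0
for event in ["ack", "ack", "loss", "ack"]:
    cwnd = aimd_update(cwnd, event)
    print(f"{event}: cwnd = {cwnd:.2f}")
```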
 
Both approaches, however, come with limitations. In the first, the congestion control algorithm is generated off-line by trial and error over simulated random networks. The parameters of such networks, often unknown, are given to the optimiser as uniform distributions, and the mapping between network events and control responses is then optimised for that range of scenarios. This approach therefore still relies on prior assumptions about the network which, if violated, can drastically degrade the protocol's performance.
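As a sketch of this set-up, the toy code below scores a candidate policy (reduced here to a single fixed sending rate) by averaging a simulated utility over networks whose unknown parameters are drawn from uniform priors. `sample_network`, `run_simulation` and all constants are hypothetical placeholders; Remy's actual rule tables and simulator are far richer.

```python
# Sketch of the off-line approach: score a candidate policy by trial
# and error over simulated networks drawn from uniform priors.
import random

def sample_network():
    # Unknown network parameters drawn from uniform prior distributions.
    return {"bandwidth_mbps": random.uniform(10, 1000),
            "rtt_ms": random.uniform(1, 100)}

def run_simulation(rate_mbps, net):
    # Toy stand-in for a network simulator: reward throughput, penalise
    # the queueing that builds up when the sender overshoots capacity.
    throughput = min(rate_mbps, net["bandwidth_mbps"])
    overshoot = max(0.0, rate_mbps - net["bandwidth_mbps"])
    queueing_delay_ms = overshoot / net["bandwidth_mbps"] * net["rtt_ms"]
    return throughput - 10.0 * queueing_delay_ms

def offline_objective(rate_mbps, n_networks=200):
    # Expected utility over the prior: the black-box function that the
    # off-line optimiser maximises.
    return sum(run_simulation(rate_mbps, sample_network())
               for _ in range(n_networks)) / n_networks

best_score, best_rate = max((offline_objective(r), r) for r in (50, 100, 200, 400))
print(f"best fixed rate under this prior: {best_rate} Mbps (score {best_score:.1f})")
```

The fragility described above is visible even in this toy: the chosen rate is only good relative to the assumed priors, and a deployment network outside that range can make it perform badly.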

In the second approach, the end-point is completely unaware of the underlying network. During a connection, the sender has to decide whether to transmit and at what rate. Each time, the sender evaluates the performance of the action it has taken and decides on the next action based on that information. The decision-making process depends only on the performance score and not on knowledge of the underlying network, so the protocol performs equally well on every type of network. On the other hand, each action is taken prudently to avoid a system collapse, which can result in suboptimal performance.
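The loop below illustrates this, loosely in the spirit of PCC [2]: in each monitor interval the sender tests a slightly higher and a slightly lower rate, observes only the resulting utility scores, and moves in the better direction. `measure_utility` is a hypothetical stand-in for scores derived from live throughput, loss and latency measurements, and the hidden capacity is a toy value.

```python
# Sketch of utility-driven on-line rate control: decisions use only the
# observed utility scores, never knowledge of the network itself.

def measure_utility(rate):
    # Placeholder: in reality this would be computed from throughput,
    # loss and latency measured over a monitor interval.
    capacity = 120.0                       # unknown to the sender
    loss = max(0.0, rate - capacity) / rate
    return rate * (1.0 - loss) - 50.0 * loss

rate, step = 10.0, 0.05                    # start cautiously, small steps
for _ in range(60):
    up, down = rate * (1 + step), rate * (1 - step)
    # Compare the two measured utilities and move in the better direction.
    rate = up if measure_utility(up) > measure_utility(down) else down
print(f"sending rate after 60 intervals: {rate:.1f} (hidden capacity 120.0)")
```

The cautious, multiplicative step size is exactly the prudence mentioned above: it prevents collapse but slows convergence, which is one source of suboptimal performance.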

The main goal of my research is to design a congestion control protocol that performs optimally and uniformly regardless of the underlying network topology and parameters, by combining the off-line and on-line methodologies. With off-line learning, we expect to generate an optimal protocol over a (more or less) limited range of networks; this is possible via global optimisation techniques for black-box functions. With on-line learning, we expect the protocol to adapt to unforeseen network scenarios and dynamically modify its policy to maintain optimality over time.
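For the off-line half, a minimal sketch of global black-box optimisation is shown below, using Bayesian optimisation [3] via scikit-optimize's gp_minimize. The objective is the same kind of toy "expected utility over random networks" as above; the priors, constants and search space are illustrative assumptions, not the protocol itself.

```python
# Sketch: off-line global optimisation of a black-box objective with
# Bayesian optimisation [3], via scikit-optimize's gp_minimize.
import random
from skopt import gp_minimize

def negative_expected_utility(params):
    # gp_minimize minimises, so return minus the expected utility.
    rate = params[0]
    total = 0.0
    for _ in range(50):                   # networks drawn from the prior
        bw = random.uniform(10, 1000)     # unknown bandwidth (Mbps)
        loss = max(0.0, rate - bw) / rate
        total += rate * (1.0 - loss) - 100.0 * loss
    return -total / 50

result = gp_minimize(negative_expected_utility,
                     dimensions=[(10.0, 1000.0)],   # search space for the rate
                     n_calls=30, random_state=0)
print("rate chosen off-line:", result.x[0])
```

A GP surrogate suits this problem because each objective evaluation (a batch of network simulations) is expensive, and the surrogate's uncertainty guides where to evaluate next; the on-line component would then adjust the off-line result as live measurements arrive.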

 
References
[1] Keith Winstein and Hari Balakrishnan. TCP ex machina: Computer-generated congestion control. In ACM SIGCOMM Computer Communication Review, volume 43, pages 123–134. ACM, 2013.
 
[2] Mo Dong, Qingxi Li, Doron Zarchy, Philip Brighten Godfrey, and Michael Schapira. PCC: Re-architecting congestion control for consistent high performance. In NSDI, pages 395–408, 2015.
 
[3] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
 
[4] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
 
[5] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
 
[6] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.