Improving TCP CUBIC Congestion Control with Machine Learning
Keywords: Software Engineering, Transmission Control Protocol, Machine Learning
Transmission Control Protocol (TCP) is commonly used for reliable data transfer over the Internet. However, TCP connections can suffer packet loss when the network becomes congested and data fails to reach its destination. In recent years there has been a growing inclination towards novel, clean-slate learning-based designs as alternatives to traditional Internet congestion control mechanisms. However, it is posited that integrating machine learning techniques with existing congestion control schemes can achieve comparable, if not superior, outcomes. This project endeavoured to address this gap by implementing a system that can be utilised with TCP CUBIC. The proposed method aims to enhance the efficiency of TCP CUBIC congestion control by incorporating machine learning techniques. TCP CUBIC, the default congestion control variant in the current Linux kernel, adjusts the congestion window size using a loss-based algorithm, thereby influencing the rate of data transmission, and it uses a parameter, beta, to modify the rate at which the congestion window grows. The approach employs a model-free reinforcement learning algorithm, specifically Q-learning, to optimise the TCP CUBIC beta parameter, targeting an increase in throughput for TCP CUBIC connections. Extensive testing performed under various simulated network conditions demonstrates the performance and adaptability of the Q-learning algorithm. Furthermore, this report details the various development decisions undertaken and their driving influences. It also provides an insight into the project's results, expands on the existing system design, and elaborates on the potential for future work in this area.
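As a minimal illustration of the kind of tabular Q-learning described above, the sketch below maintains a Q-table over coarse network states and candidate beta values, and updates it from a throughput-based reward. The state names, candidate beta values, and reward signal here are hypothetical assumptions for illustration, not the project's actual design.

```python
import random

# Hypothetical sketch: tabular Q-learning to select TCP CUBIC's beta
# parameter. States, actions, and rewards are illustrative assumptions.

BETAS = [0.5, 0.6, 0.7, 0.8]           # candidate beta values (actions)
STATES = ["low_loss", "high_loss"]     # coarse network-condition states
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated long-run (throughput-based) reward per (state, action)
Q = {(s, a): 0.0 for s in STATES for a in range(len(BETAS))}

def choose_action(state):
    """Epsilon-greedy selection of a beta index for the given state."""
    if random.random() < EPSILON:
        return random.randrange(len(BETAS))
    return max(range(len(BETAS)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in range(len(BETAS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a real deployment, the agent would observe the connection (e.g. loss rate), pick a beta via `choose_action`, apply it to CUBIC for a measurement interval, then call `update` with the achieved throughput as the reward.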