Implementation of I-D Two-State Bufferbloat Solution in Linux Kernel
Bufferbloat is a phenomenon in which latency becomes extremely high because overly large buffers are deployed in the network. A two-state DRWA (Dynamic Receive Window Adjustment) scheme was previously proposed to solve the bufferbloat problem by adjusting the TCP receive window dynamically. With the two-state mechanism, we can approach the maximum bandwidth while also reducing latency. We implemented this algorithm in the Linux kernel and evaluated it on a real network, and the results matched the earlier simulations. Furthermore, we carried the two-state concept over to the sender side and built a new congestion control algorithm to mitigate bufferbloat. Compared with another sender-based solution, its latency is not always optimal; however, its throughput and fairness toward competing traffic are much better.
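The two-state idea described above can be sketched roughly as follows. This is a minimal illustration only, not the kernel implementation from the paper: the state names (PROBE, DRAIN), the thresholds, and the per-ACK update rules are all assumptions made for the sake of the example. The general principle, keeping the advertised window large enough to fill the path while shrinking it when queuing delay builds up, is what the sketch shows.

```python
# Hypothetical two-state receive-window adjustment (illustration only;
# state labels, thresholds, and update rules are assumptions, not the
# kernel code described in the paper).

PROBE, DRAIN = 0, 1           # assumed state labels
QUEUE_DELAY_THRESH = 0.8      # assumed threshold, as a fraction of min RTT

class TwoStateRwnd:
    def __init__(self, init_rwnd, mss):
        self.state = PROBE
        self.rwnd = init_rwnd
        self.mss = mss
        self.min_rtt = float("inf")

    def on_ack(self, rtt):
        """Update the advertised receive window from one RTT sample."""
        self.min_rtt = min(self.min_rtt, rtt)
        queuing_delay = rtt - self.min_rtt
        if self.state == PROBE:
            # Probe state: grow the window to approach full bandwidth.
            self.rwnd += self.mss
            if queuing_delay > QUEUE_DELAY_THRESH * self.min_rtt:
                self.state = DRAIN  # queue is building up
        else:
            # Drain state: shrink the window so the bottleneck queue empties.
            self.rwnd = max(self.mss, self.rwnd - self.mss)
            if queuing_delay < 0.5 * QUEUE_DELAY_THRESH * self.min_rtt:
                self.state = PROBE  # queue has drained, probe again
        return self.rwnd
```

As a usage example, feeding the controller a low RTT sample keeps it probing (the window grows), while an RTT well above the minimum switches it into the drain state and the window shrinks; the same intuition applies whether the window being adjusted is the receiver's advertised window (receiver side) or the congestion window (sender side).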