Title: A Study for Providing Scalable QoS over the Internet
Authors: Chun-Liang Lee, Yaw-Chung Chen
Keywords: Internet; Differentiated Services; Quality of Service; Weighted Proportional Fairness; Core-Stateless Fair Queueing
Issue Date: 2001
Abstract:
Nowadays the Internet provides only one service class, the best-effort service, which guarantees neither timeliness nor transmission rate. With the transition to a commercial infrastructure, there is an increasing need to provide better service quality in the Internet. In this dissertation, we present four efficient approaches for supporting scalable quality of service (QoS) over the Internet. The proposed approaches aim at achieving weighted fair rate allocations; more specifically, each flow is assigned a weight that determines the service rate it receives. The proposed approaches follow the design philosophy of the differentiated services (Diffserv) model, that is, keeping the core network as simple as possible for good scalability. The first approach addresses QoS issues from the perspective of end-hosts. As in the current Internet architecture, traffic control is accomplished mainly through end-host algorithms; routers in the network do not have to provide any particular support for service quality. Instead of the reactive flow control commonly used in end-to-end protocols, the proposed approach uses proactive flow control: by keeping, for each flow, an amount of extra packets in the network proportional to its weight, the approach achieves weighted fair rate allocations with a simple and efficient distributed algorithm. We evaluate its performance through both simulations and experiments on Linux. The main advantage of the first approach is its simplicity, which eases deployment. However, since it is difficult to require every user to follow the same flow control rule, the first approach cannot guarantee the throughput of a well-behaved user if any ill-behaved traffic is present.
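The end-host scheme above, which keeps each flow's extra packets in the network proportional to its weight, can be sketched as a per-interval window update. This is only an illustrative sketch, assuming a Vegas-style estimate of queued packets; the function and parameter names (`alpha`, `step`) are our own assumptions, not the dissertation's actual algorithm:

```python
def update_window(cwnd, rtt, base_rtt, weight, alpha=1.0, step=1.0):
    """One control-interval update of a proactive, window-based scheme.

    Keeps roughly `alpha * weight` extra packets queued in the network,
    so that bottleneck bandwidth is shared in proportion to flow weights.
    (Illustrative sketch; not the dissertation's exact algorithm.)
    """
    # Extra packets this flow keeps in the network: the current window
    # minus the bandwidth-delay product (sending rate times base RTT).
    extra = cwnd - (cwnd / rtt) * base_rtt
    target = alpha * weight
    if extra < target:
        cwnd += step                      # too few queued packets: grow
    elif extra > target:
        cwnd = max(cwnd - step, 1.0)      # too many queued packets: shrink
    return cwnd
```

When every flow follows this rule, flows with larger weights keep proportionally more packets queued at the bottleneck and therefore receive proportionally more of its bandwidth.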
Therefore, we extend the first approach to an edge-to-edge flow control algorithm, in which the packets of each flow are queued at ingress edge routers before being forwarded into the core network. In contrast to the window-based flow control used in the first approach, the second approach uses rate-based flow control. Core routers do not have to provide any particular support, so this approach can be considered an extreme realization of the Diffserv model. In the third approach, we propose the Multi-Level Fair Queueing (MLFQ) algorithm, in which both edge routers and core routers must be modified. Compared with existing approaches such as Core-Stateless Fair Queueing (CSFQ) and Rainbow Fair Queueing (RFQ), MLFQ achieves better fairness of rate allocations and higher throughput; moreover, it supports layer-encoded applications without incurring extra implementation complexity. Finally, we propose a closed-loop flow control scheme based on the idea of a virtual flow, which avoids the potential bandwidth waste in fair queueing algorithms. Through simulations, we show that it significantly improves network throughput. We also give an analytical argument for the convergence of the proposed approach: for a single-bottleneck configuration, we prove that the system is guaranteed to achieve weighted fair rate allocations within O(log N) control cycles, where N is the number of flows.
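For reference, the weighted fair rate allocation that all four approaches target can be computed centrally by weighted max-min "water-filling": each unsatisfied flow receives capacity in proportion to its weight, flows whose demand is already met are removed, and the leftover capacity is redistributed. The sketch below is illustrative only; the function name and data layout are our own assumptions:

```python
def weighted_maxmin(capacity, flows):
    """Weighted max-min fair allocation at a single bottleneck link.

    flows: list of (demand, weight) pairs.
    Returns a list of allocated rates, one per flow.
    """
    alloc = [0.0] * len(flows)
    active = set(range(len(flows)))
    cap = capacity
    while active and cap > 1e-9:
        total_w = sum(flows[i][1] for i in active)
        share = cap / total_w  # rate per unit of weight
        # Flows whose demand is below their weighted share are satisfied.
        done = [i for i in active if flows[i][0] <= share * flows[i][1]]
        if not done:
            # Remaining flows are all bottlenecked: split in weight ratio.
            for i in active:
                alloc[i] = share * flows[i][1]
            return alloc
        for i in done:
            alloc[i] = flows[i][0]   # satisfy the flow's full demand
            cap -= flows[i][0]       # and redistribute what is left
            active.remove(i)
    return alloc
```

With capacity 9 and two always-backlogged flows of weights 1 and 2, this yields rates 3 and 6: exactly the weight-proportional split that the dissertation's distributed approaches converge to without central computation.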