Response Paper
Receipt no.: 11-H074f
Title: DP-FEC: Dynamic Probing FEC for High-Performance Real-Time Interactive Video Streaming

Thank you for your indications and comments. We made a thorough review of the points and revised our paper as follows.

=======================================================
=================== ===================
=======================================================

■ Requirements for acceptance
All the reviewers reported that the evaluation section has some points that must be revised before publication. Please read the review reports and fix the problems mentioned in the reviews.

------ [Reply] ---------------------------------------
In accordance with the reports and comments, we modified our paper as described below.
------------------------------------------------------

■ Other comments
This paper proposes a mechanism for controlling the data transmission rate of video streaming so that the transmission will be friendly to other TCP streams. The idea is to dynamically adjust the window size for Forward Error Correction (FEC) by monitoring the packet-loss ratio. The proposed technique is interesting, but the evaluation is not very convincing. For example, the goal of the proposed mechanism is to keep sufficient streaming quality while doing TCP-friendly rate control. However, the proposed mechanism only controls the transmission of packets for error correction, which should be relatively smaller than the packets for video data in the stream. The reviewer wonders why changing the control mechanism for a relatively small amount of data transmission largely improves the TCP-friendliness. How was the video data sent? Did the authors use the same algorithm for sending video data in the experiments?

English writing should be improved. The reviewer felt non-trivial difficulty in reading this paper.
------ [Reply] ---------------------------------------
As described below (comment 2 of the 1st reviewer), the approach of our proposed DP-FEC differs from the TFRC approach. Our approach does not rely on the TFRC rate, so as to avoid the degradation of video quality caused by regulating the sending rate of source packets. By successively observing the variation in the intervals between packet loss events, DP-FEC tries to 1) effectively utilize network resources that competing TCP flows cannot consume and 2) control the degree of FEC redundancy so as to recover lost data packets at the highest data transmission rate. Thus, DP-FEC achieves a relatively high TCP-friendliness index, especially under a TCP load of 25%.

In all simulation experiments, a DP-FEC sender uses the same algorithm for sending video data. It transmits smoothed video data packets at a rate of 30 Mbps. The packet size is 1500 bytes. We added a description about this point in Section 4.1, as follows: "In all simulation experiments, a DP-FEC sender takes the same algorithm for sending data packets. It transmits smoothed data packets at a rate of 30 Mbps."

In addition, we improved the English writing as much as possible.
------------------------------------------------------

=======================================================
=================== <1st reviewer> ====================
=======================================================

■ Requirements for acceptance
1. Authors should change the evaluation of figure 1(a) to one that shows the problems of the TCP congestion control mechanism for high-performance TCP protocols discussed in the paper. The degradation of performance at 10 ms RTT seems to be caused by the exhaustion of the 50-packet buffer of the routers, not by the congestion control. Authors should also show the problem using improved congestion control algorithms such as TCP Vegas.
------ [Reply] ---------------------------------------
As you pointed out, we changed the evaluation of figure 1(a) to show the problems of TCP congestion control mechanisms for high-performance TCP protocols and a delay-based TCP protocol (i.e., TCP-Vegas). We changed the buffer size of the routers from a fixed 50 packets to max{100, Bandwidth-Delay Product (BDP)}. As a major high-performance TCP protocol, we used High-speed TCP (HSTCP). In Section 2.1, we reviewed the evaluation results as follows: "The maximum throughput that a TCP-Vegas flow achieves under the RTTs becomes about 25 Mbps. Although an HSTCP flow with an RTT of 1 ms achieves an average throughput of about 30 Mbps, the average throughputs under RTTs of more than 1 ms are less than 30 Mbps. Because of losses caused by short-lived TCP flows, both the TCP-Vegas flow and the HSTCP flow cause regulation and fluctuation of the streaming data rate. Since these behaviors lead to severe degradation of video quality, these TCP protocols are not suitable for a high-performance streaming application."

In addition, since TCP-Vegas was used in the evaluation, we added an explanation of the characteristics of delay-based congestion control mechanisms, as follows: "Estimated RTT variations are also utilized as a congestion indicator to avoid packet losses caused by traffic congestion. Yet, delay-based congestion control mechanisms13),14) (e.g., TCP Vegas) react to RTT variations incorrectly and lead to severe quality degradation, especially in high-bandwidth paths15),16)."
------------------------------------------------------

2. This paper discusses the TFRC characteristics only for the FEC traffic, but TFRC should be satisfied for the whole data and error correction traffic. Authors should show that TFRC is not satisfied with a fixed length of FEC traffic (the size should be the largest length observed in the DP-FEC) together with TCP-friendly data traffic.
Then authors should show that the proposed mechanism satisfies TFRC for the whole data and error correction traffic.

------ [Reply] ---------------------------------------
As described in Section 2, in the presence of packet losses, standard TFRC readily reduces the data transmission rate at the expense of video quality, without effectively using network resources. Even if a certain degree of FEC redundancy is applied to a high-performance streaming flow with TFRC, the flow cannot improve the video quality in congested networks. Therefore, in this paper, we focus largely on investigating how a flow should control the degree of FEC redundancy to recover lost data packets while maintaining the highest data transmission rate. We added a description about this point in Section 2.2, as follows: "Even if a certain degree of FEC redundancy is applied to a high-performance streaming flow with TFRC, the flow cannot improve the video quality in congested networks as described before. However, ascertaining and controlling optimal FEC redundancy at the highest data transmission rate is a real challenge, because 1) it is difficult for a sender to determine the packet loss pattern at each moment (as there is a feedback delay) and to predict the future packet loss pattern, and 2) increasing FEC redundancy may disturb both the streaming flow and competing flows such as TCP when the conditions of competing flows oscillate sensitively in the network."

For this reason, in Section 4, we evaluated the performance (i.e., data loss rate and TCP-friendliness index) of DP-FEC, which maintains the highest data transmission rate, compared to that of TFRC. In addition, we added a description about the difference between TFRC and DP-FEC in Section 5: "In contrast, our approach does not rely on the TFRC rate, so as to avoid the degradation of video quality caused by regulating the sending rate of source packets."
------------------------------------------------------

3.
The error correction packets seem to be decreased in the lossy (congested) environment under the proposed mechanism. Authors should show the impact of the error recovery with the small FEC window.

------ [Reply] ---------------------------------------
As you pointed out, we evaluated the impact of the error recovery with a small FEC window under TCP loads of 25%/50%/75% in Section 4.4. Then, we described it as follows: (Section 4.4) Fig. 9(a) shows the average loss recovery rates of DP-FEC flows (for Threshold FEC Impact values of 0.1/0.3) under 25%/50%/75% load of TCP flows. The loss recovery rate is defined as the ratio of the number of recovered data packets to the number of lost data packets in the network. We can see that under 25% load of TCP flows, DP-FEC flows with a Threshold FEC Impact of 0.1 maintain average loss recovery rates of more than 0.98. However, under 50% and 75% load of TCP flows, the average loss recovery rates decrease because the attainable FEC window size of each DP-FEC flow tends to become small, as described in Section 4.2.
------------------------------------------------------

■ Other comments
You will find the journal version of the research referred to in 17 and 23: "Dynamic FEC Algorithms for TFRC Flows" in IEEE Transactions on Multimedia, Vol. 12, No. 8 (Dec. 2010).

------ [Reply] ---------------------------------------
We referred to the paper in Section "5. Related Work". The description of the paper is as follows: "Seferoglu, et al.17) proposed TFRC with FEC to deal with packet losses induced by competing TCP flows. The mechanism utilized the correlation between packet losses and the estimated RTT fluctuation as a network indicator.
Moreover, they provided a rate-distortion optimized way to decide the best allocation of the available TFRC rate between source and FEC packets, and performed a significantly extended performance evaluation18)."
------------------------------------------------------

=======================================================
=================== <2nd reviewer> ====================
=======================================================

■ Requirements for acceptance
1. Although the mechanism has some parameters such as the threshold of FEC impact, the estimated RTT of competing TCP flows, MinLI, and the multiplicative factor beta of Fwnd, the effect of these parameters on the performance of the proposed mechanism is unclear. This point should be described and/or discussed clearly.

------ [Reply] ---------------------------------------
As you pointed out, we investigated the effect of these parameters on the DP-FEC performance again and described it. With respect to the estimated RTT, MinLI, and the multiplicative factor beta, we added new descriptions in Section 3, as follows. Meanwhile, we made a new section, "4.3 Effect of the Value of Threshold FEC Impact", in which new descriptions about the value of Threshold FEC Impact are added, because it is a considerably important parameter for DP-FEC to adjust the FEC window size. In addition, we added deeper analysis regarding the DP-FEC parameters as our future work, as follows: (6. Conclusion and Future Work) In our future work, we will make a deeper analysis regarding the DP-FEC parameters, such as 1) the value of Threshold FEC Impact, 2) the estimated RTT of competing TCP flows, 3) MinLI, and 4) the multiplicative factor β, to optimally set the DP-FEC parameters for ever-changing network conditions.

[about estimated RTT] (Section 3.2.1) If the RTT value is large, DP-FEC tends to behave conservatively in congested networks and cause a number of non-recovered data packets due to a small FEC window size.
As of now, to avoid such behavior, DP-FEC estimates ΔFEC impact assuming that all competing TCP flows have RTT = 0.01 (sec). Thus, in competition with TCP flows with RTT > 0.01, DP-FEC behaves aggressively due to the underestimation of ΔFEC impact.

[about MinLI] (Section 3.2.2) When DP-FEC uses an extremely low value of MinLI, the attainable FEC window size becomes large even in highly congested networks, due to the slow response to network congestion. Such a situation degrades the performance of competing TCP flows. To avoid it, as of now we set MinLI to 0.035 (sec) based on our simulation experiments.

[about beta of Fwnd] (Section 3.2.3) In congestion, DP-FEC immediately decreases Fwnd using the multiplicative factor β. DP-FEC with a low value of β tends to fail to recover lost data packets in continuously congested networks, because the attainable window size becomes low. Since 1) DP-FEC should try to recover data loss as much as possible, and 2) the FEC impact can be gradually suppressed in network conditions where added FEC redundancy disturbs TCP performance, DP-FEC sets β to a relatively high value, 0.8 (i.e., more than 0.5).

[about the Threshold FEC Impact] We additionally evaluated the performance of DP-FEC with a Threshold FEC Impact of 0.3 under 25%/50%/75% load of TCP flows, and compared the results with those of DP-FEC with the default Threshold FEC Impact (i.e., 0.1). We made a new section, "4.3 Effect of the Value of Threshold FEC Impact", and described the effect of the Threshold FEC Impact parameter on the DP-FEC performance.
------------------------------------------------------

2. The font size in each graph is too small. The readability of the graphs should be improved.

------ [Reply] ---------------------------------------
As you pointed out, we improved the font size in each graph.
------------------------------------------------------

■ Other comments
1.
In the last paragraph of Section 1, "NS-1 simulation" should be "NS-2 simulation."

------ [Reply] ---------------------------------------
As you pointed out, we changed "NS-1" to "NS-2".
------------------------------------------------------

2. In Equation (2), ")" is missing.

------ [Reply] ---------------------------------------
As you pointed out, we modified Equation (2).
------------------------------------------------------

3. In Section 3.2.2, "Fwnd" is used without explanation.

------ [Reply] ---------------------------------------
As you pointed out, we changed "Fwnd" in Section 3.2.2 to "FEC window size".
------------------------------------------------------

4. In the first paragraph of Section 3.2.2 (last sentence), "Restime" should be "ResTime."

------ [Reply] ---------------------------------------
As you pointed out, we changed "Restime" to "ResTime".
------------------------------------------------------
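[Supplementary note] For the reviewers' convenience, the parameter behavior summarized in our replies above (the multiplicative decrease of Fwnd by β = 0.8 under congestion, probing bounded by the Threshold FEC Impact of 0.1, the MinLI bound of 0.035 sec, and the Section 4.4 loss recovery rate metric) can be illustrated with the following minimal Python sketch. This is only an illustration of those stated rules: the function names and the single-packet additive probing step are assumptions made here, not the exact update rules of DP-FEC as specified in the paper.

```python
BETA = 0.8                    # multiplicative decrease factor (set relatively high)
MIN_LI = 0.035                # sec; minimum loss-event interval before backing off
THRESHOLD_FEC_IMPACT = 0.1    # default bound on the estimated FEC impact

def update_fwnd(fwnd, loss_interval, fec_impact):
    """Return the next FEC window size (illustrative rule, not the paper's
    exact algorithm) given the latest loss-event interval in seconds and
    the estimated FEC impact on competing TCP flows."""
    if loss_interval < MIN_LI:
        # Congestion: immediately decrease Fwnd multiplicatively by beta.
        return max(1, int(fwnd * BETA))
    if fec_impact < THRESHOLD_FEC_IMPACT:
        # Network appears underutilized: probe with one more repair packet
        # (the additive step size here is an assumption for illustration).
        return fwnd + 1
    # FEC impact at the threshold: hold Fwnd to avoid disturbing TCP flows.
    return fwnd

def loss_recovery_rate(recovered, lost):
    """Section 4.4 metric: recovered data packets / lost data packets."""
    return recovered / lost if lost else 1.0
```

For example, with Fwnd = 10, a loss-event interval shorter than MinLI shrinks the window to 8 (10 × 0.8), while a long interval with low FEC impact probes upward to 11.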