Change default parameters for the low-latency video pipeline

min_pacing:8ms, to avoid the situation where network jitter causes
bursts of frames to be sent to the decoder at once. Such bursts filled
the queues further down the processing pipeline, which then dropped
all frames.
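
As an illustration of the pacing idea, here is a minimal C++ sketch;
PacedFrameQueue and every name in it are hypothetical, not the actual
WebRTC classes. Frames that arrive in a burst are handed to the decoder
no faster than one per min_pacing interval:

    // Hypothetical sketch of min_pacing; illustrative only.
    #include <algorithm>
    #include <cstdint>
    #include <queue>

    struct Frame {
      int64_t receive_time_ms;
    };

    class PacedFrameQueue {
     public:
      explicit PacedFrameQueue(int64_t min_pacing_ms)
          : min_pacing_ms_(min_pacing_ms) {}

      void Insert(Frame frame) { frames_.push(frame); }

      // How long to wait before releasing the next frame. A burst is
      // spread out so consecutive releases are >= min_pacing_ms_ apart.
      int64_t MaxWaitTimeMs(int64_t now_ms) const {
        if (frames_.empty())
          return 0;
        int64_t earliest_release_ms = last_release_ms_ + min_pacing_ms_;
        return std::max<int64_t>(0, earliest_release_ms - now_ms);
      }

      Frame Pop(int64_t now_ms) {
        Frame frame = frames_.front();
        frames_.pop();
        last_release_ms_ = now_ms;
        return frame;
      }

     private:
      const int64_t min_pacing_ms_;
      int64_t last_release_ms_ = 0;
      std::queue<Frame> frames_;
    };

With min_pacing at 8ms, a burst of five frames received within the same
millisecond is spread over roughly 32ms instead of hitting the
downstream queues all at once.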

max_decode_queue_size:8, so that if too many frames have piled up, the
pipeline falls back to the previous behavior and sends all frames to
the decoder at once, to avoid building up latency.
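
A sketch of the fallback, with the same caveat that all names are
hypothetical: once the backlog exceeds max_decode_queue_size, pacing is
bypassed and the queued frames are released immediately, which matches
the pre-change behavior:

    // Hypothetical sketch of the max_decode_queue_size fallback.
    #include <cstddef>

    // Returns true when pacing should be skipped because too many
    // frames have piled up; the caller then drains the whole queue
    // to the decoder to avoid adding latency.
    bool BypassPacing(std::size_t queued_frames,
                      std::size_t max_decode_queue_size) {
      return queued_frames > max_decode_queue_size;
    }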

These settings only affect the low-latency video pipeline, which is
enabled by setting the playout delay RTP header extension to min=0ms,
max>0ms.
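
In other words, the enabling condition amounts to a check like the
following (the function name is hypothetical; the real pipeline derives
the values from the received extension):

    // Hypothetical gate for the low-latency path; illustrative only.
    bool UseLowLatencyPipeline(int min_playout_delay_ms,
                               int max_playout_delay_ms) {
      return min_playout_delay_ms == 0 && max_playout_delay_ms > 0;
    }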

Bug: chromium:1138888
Change-Id: I8154bf3efe7450b770da8387f8fb6b23f6be26bd
Reviewed-on: https://webrtc-review.googlesource.com/c/src/+/233220
Commit-Queue: Johannes Kron <kron@webrtc.org>
Reviewed-by: Ilya Nikolaevskiy <ilnik@webrtc.org>
Cr-Commit-Position: refs/heads/main@{#35119}
diff --git a/modules/video_coding/timing_unittest.cc b/modules/video_coding/timing_unittest.cc
index cc87a3b..71de1fe 100644
--- a/modules/video_coding/timing_unittest.cc
+++ b/modules/video_coding/timing_unittest.cc
@@ -130,13 +130,14 @@
 
 TEST(ReceiverTimingTest, MaxWaitingTimeIsZeroForZeroRenderTime) {
   // This is the default path when the RTP playout delay header extension is set
-  // to min==0.
+  // to min==0 and max==0.
   constexpr int64_t kStartTimeUs = 3.15e13;  // About one year in us.
   constexpr int64_t kTimeDeltaMs = 1000.0 / 60.0;
   constexpr int64_t kZeroRenderTimeMs = 0;
   SimulatedClock clock(kStartTimeUs);
   VCMTiming timing(&clock);
   timing.Reset();
+  timing.set_max_playout_delay(0);
   for (int i = 0; i < 10; ++i) {
     clock.AdvanceTimeMilliseconds(kTimeDeltaMs);
     int64_t now_ms = clock.TimeInMilliseconds();