NetEq background noise generation off by default
This CL turns the background noise generation in NetEq off by default. The noise generation used to kick in during long-duration packet losses, when there was no point in extrapolating the latest audio any longer. However, this sometimes produced annoying noise in situations where silence would have been preferable.
With this change, a long packet-loss concealment will be faded out to zeros instead of to a low-level noise.
Reference files are updated where needed.
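Applications that prefer the previous behavior can opt back in through the NetEq config. A minimal sketch, assuming the NetEq::Create factory declared in neteq.h:

    #include "webrtc/modules/audio_coding/neteq/interface/neteq.h"

    // Sketch: re-enable background noise generation (the old default).
    webrtc::NetEq::Config config;
    config.background_noise_mode = webrtc::NetEq::kBgnOn;  // default is now kBgnOff
    webrtc::NetEq* neteq = webrtc::NetEq::Create(config);
    // ... insert packets and pull audio as usual ...
    delete neteq;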
BUG=3519
R=tina.legrand@webrtc.org
Review URL: https://webrtc-codereview.appspot.com/20109004
git-svn-id: http://webrtc.googlecode.com/svn/trunk@6882 4adac7df-926f-26a2-2b94-8c16560cd09d
diff --git a/webrtc/modules/audio_coding/neteq/interface/neteq.h b/webrtc/modules/audio_coding/neteq/interface/neteq.h
index fdd000a..7196bc1 100644
--- a/webrtc/modules/audio_coding/neteq/interface/neteq.h
+++ b/webrtc/modules/audio_coding/neteq/interface/neteq.h
@@ -74,7 +74,7 @@
max_packets_in_buffer(50),
// |max_delay_ms| has the same effect as calling SetMaximumDelay().
max_delay_ms(2000),
- background_noise_mode(kBgnOn) {}
+ background_noise_mode(kBgnOff) {}
int sample_rate_hz;  // Initial value. Will change with input data.
bool enable_audio_classifier;