ACM/NetEq: Restructure how post-decode VAD is enabled

This change avoids calling neteq_->EnableVad() and DisableVad() from
the AcmReceiver constructor. Instead, the new member
enable_post_decode_vad is added to NetEq's config struct. It is
disabled by default, but ACM enables it. This preserves the behavior
both of NetEq stand-alone (i.e., in tests) and of ACM.

BUG=webrtc:3520

Review URL: https://codereview.webrtc.org/1425133002

Cr-Commit-Position: refs/heads/master@{#10476}
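
For context, a minimal sketch (not part of this CL) of how a NetEq
client such as ACM could opt in via the new config member. The
NetEq::Create() factory call and the exact AcmReceiver wiring are
assumed from the surrounding NetEq API and are not shown in this diff.

  #include "webrtc/modules/audio_coding/neteq/include/neteq.h"

  webrtc::NetEq* CreateNetEqWithPostDecodeVad() {
    webrtc::NetEq::Config config;
    config.sample_rate_hz = 16000;         // Initial value; changes with input.
    config.enable_post_decode_vad = true;  // Defaults to false; ACM turns it on.
    return webrtc::NetEq::Create(config);
  }
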
diff --git a/webrtc/modules/audio_coding/neteq/include/neteq.h b/webrtc/modules/audio_coding/neteq/include/neteq.h
index 205a0df..d6c359b 100644
--- a/webrtc/modules/audio_coding/neteq/include/neteq.h
+++ b/webrtc/modules/audio_coding/neteq/include/neteq.h
@@ -81,6 +81,7 @@
     Config()
         : sample_rate_hz(16000),
           enable_audio_classifier(false),
+          enable_post_decode_vad(false),
           max_packets_in_buffer(50),
           // |max_delay_ms| has the same effect as calling SetMaximumDelay().
           max_delay_ms(2000),
@@ -92,6 +93,7 @@
 
     int sample_rate_hz;  // Initial value. Will change with input data.
     bool enable_audio_classifier;
+    bool enable_post_decode_vad;
     size_t max_packets_in_buffer;
     int max_delay_ms;
     BackgroundNoiseMode background_noise_mode;