Restore the severity precondition check in logging.h.

I mistakenly omitted the checks when logging.h was ported from
libjingle to webrtc. This caused a significant CPU cost for logs that
were later filtered out anyway.
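
For context, the restored precondition short-circuits each log statement
on severity before the stream expression is evaluated, so a filtered-out
message costs little more than an integer comparison. Below is a minimal,
self-contained sketch of that pattern; all names (SKETCH_LOG,
LogMessageSketch, LogMessageVoidifySketch, g_min_sev) are illustrative
and are not the actual logging.h API.

// Illustrative sketch only; names do not match the real logging.h.
#include <iostream>
#include <sstream>
#include <string>

enum LoggingSeverity { LS_VERBOSE, LS_INFO, LS_WARNING, LS_ERROR };

// Runtime filter level (LS_WARNING here is an arbitrary example).
static LoggingSeverity g_min_sev = LS_WARNING;

// Builds the message in a stringstream and emits it on destruction.
class LogMessageSketch {
 public:
  explicit LogMessageSketch(LoggingSeverity sev) : sev_(sev) {}
  ~LogMessageSketch() { std::cerr << sev_ << ": " << stream_.str() << "\n"; }
  std::ostream& stream() { return stream_; }
 private:
  LoggingSeverity sev_;
  std::ostringstream stream_;
};

// Turns the streamed expression into void so both ternary branches agree.
struct LogMessageVoidifySketch {
  void operator&(std::ostream&) {}
};

// The precondition: if the severity is filtered, the right-hand side
// (message construction and every operator<< call) is never evaluated.
#define SKETCH_LOG(sev)  \
  (sev) < g_min_sev      \
      ? (void)0          \
      : LogMessageVoidifySketch() & LogMessageSketch(sev).stream()

static std::string ExpensiveToFormat() { return "lots of formatting work"; }

int main() {
  SKETCH_LOG(LS_VERBOSE) << ExpensiveToFormat();  // filtered; never evaluated
  SKETCH_LOG(LS_ERROR) << "this one is logged";
  return 0;
}

The trick relies on operator precedence: << binds tighter than &, which
binds tighter than ?:, so the entire streaming expression sits on the
false branch of the conditional and is skipped when the severity test fails.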

Verified with LS_VERBOSE logging in neteq4, running:
$ out/Release/modules_unittests \
--gtest_filter=NetEqDecodingTest.TestBitExactness \
--gtest_repeat=50 > time.txt
$ grep "case ran" time.txt | grep "[0-9]* ms" -o | sort

Results on a MacBook Retina, averaged over 5 runs:
Verbose logs disabled:                          666 ms
Existing implementation, verbose logs enabled:  944 ms (1.42x)
New implementation, verbose logs enabled:       673 ms (1.01x)

BUG=2314
R=henrik.lundin@webrtc.org, henrike@webrtc.org, kjellander@webrtc.org, turaj@webrtc.org

Review URL: https://webrtc-codereview.appspot.com/2160005

git-svn-id: http://webrtc.googlecode.com/svn/trunk@4682 4adac7df-926f-26a2-2b94-8c16560cd09d
diff --git a/webrtc/system_wrappers/source/logging_unittest.cc b/webrtc/system_wrappers/source/logging_unittest.cc
index ae92f0b..19f1394 100644
--- a/webrtc/system_wrappers/source/logging_unittest.cc
+++ b/webrtc/system_wrappers/source/logging_unittest.cc
@@ -47,7 +47,7 @@
     Trace::CreateTrace();
     Trace::SetTraceCallback(this);
     // Reduce the chance that spurious traces will ruin the test.
-    Trace::SetLevelFilter(kTraceWarning | kTraceError);
+    Trace::set_level_filter(kTraceWarning | kTraceError);
   }
 
   void TearDown() {