Fix NetEq performance test regression

The test code created an AudioFrame object inside the work loop. This
turned out to be expensive, since the AudioFrame ctor implicitly calls
memset on the entire audio data array. The obvious remedy is to create
the frame once, outside the loop. This has no effect beyond the
performance boost, since the test never inspects the output data from
NetEq.

BUG=chromium:592907,webrtc:5647
TBR=ivoc@webrtc.org
NOTRY=true

Review URL: https://codereview.webrtc.org/1782803002

Cr-Commit-Position: refs/heads/master@{#11940}
commit d72595eeea
parent 9cbebee523
Author: henrik.lundin
Date:   2016-03-10 02:26:29 -08:00
Committed by: Commit bot
2 changed files with 3 additions and 1 deletion


@@ -75,6 +75,7 @@ int64_t NetEqPerformanceTest::Run(int runtime_ms,
   // Main loop.
   webrtc::Clock* clock = webrtc::Clock::GetRealTimeClock();
   int64_t start_time_ms = clock->TimeInMilliseconds();
+  AudioFrame out_frame;
   while (time_now_ms < runtime_ms) {
     while (packet_input_time_ms <= time_now_ms) {
       // Drop every N packets, where N = FLAGS_lossrate.
@@ -104,7 +105,6 @@ int64_t NetEqPerformanceTest::Run(int runtime_ms,
     }
     // Get output audio, but don't do anything with it.
-    AudioFrame out_frame;
     int error = neteq->GetAudio(&out_frame);
     if (error != NetEq::kOK)
       return -1;
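
For reference, the relevant part of NetEqPerformanceTest::Run after
both hunks are applied, reconstructed from the diff above (elided
portions marked with ...):

  // Main loop.
  webrtc::Clock* clock = webrtc::Clock::GetRealTimeClock();
  int64_t start_time_ms = clock->TimeInMilliseconds();
  AudioFrame out_frame;  // constructed (and zeroed) once, before the loop
  while (time_now_ms < runtime_ms) {
    ...
    // Get output audio, but don't do anything with it.
    int error = neteq->GetAudio(&out_frame);
    if (error != NetEq::kOK)
      return -1;
    ...
  }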