[This CL is a rebase of an original CL by solenberg@:
https://codereview.webrtc.org/2948763002/ which in turn was a
rebase of an original CL by peah@:
https://chromium-review.googlesource.com/c/527032/]
Allow an external audio processing module to be used in WebRTC
This CL adds support for optionally using an externally created audio
processing module in a PeerConnection. Ownership of the module is shared
between the PeerConnection and its external creator. As part of this
change, internal ownership of the audio processing module moves from
VoiceEngine to WebRtcVoiceEngine.
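A minimal sketch of the intended usage is below. The
CreatePeerConnectionFactory() overload, its parameter order, and the
include paths are assumptions based on this CL's description rather than
a copy of the diff; the point is only that the application creates the
module, hands it to the factory, and keeps its own reference, so the two
sides share ownership via reference counting.

  // Sketch only: the exact CreatePeerConnectionFactory() overload,
  // parameter order, and include paths are assumptions, not taken from
  // this CL's diff.
  #include "webrtc/api/audio_codecs/builtin_audio_decoder_factory.h"
  #include "webrtc/api/audio_codecs/builtin_audio_encoder_factory.h"
  #include "webrtc/api/peerconnectioninterface.h"
  #include "webrtc/modules/audio_processing/include/audio_processing.h"

  rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface>
  CreateFactoryWithExternalApm() {
    // The application creates the module and may keep its own reference;
    // this assumes APM is reference counted after this CL, as implied by
    // the shared-ownership description above.
    rtc::scoped_refptr<webrtc::AudioProcessing> external_apm(
        webrtc::AudioProcessing::Create());
    return webrtc::CreatePeerConnectionFactory(
        nullptr /* network_thread */, nullptr /* worker_thread */,
        nullptr /* signaling_thread */, nullptr /* default_adm */,
        webrtc::CreateBuiltinAudioEncoderFactory(),
        webrtc::CreateBuiltinAudioDecoderFactory(),
        nullptr /* video_encoder_factory */,
        nullptr /* video_decoder_factory */,
        nullptr /* audio_mixer */, external_apm);
  }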
BUG=webrtc:7775
Review-Url: https://codereview.webrtc.org/2961723004
Cr-Commit-Position: refs/heads/master@{#18837}
This CL adds the following interfaces:
* RtpTransportController
* RtpTransport
* RtpSender
* RtpReceiver
They're implemented on top of the "BaseChannel" object, which is normally
used in a PeerConnection and roughly corresponds to an SDP "m=" section. As
a result, there are several limitations (illustrated in the sketch after
this list):
* You can have only one sender and one receiver of each media type
(audio/video) on top of the same transport controller.
* Senders and receivers of the same media type must use the same RTP
transport.
* You can't change the transport after creating the sender or receiver.
* Some of the parameters aren't supported.
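To make the shape of the API concrete, here is a rough usage sketch. The
factory entry point (OrtcFactoryInterface), the RTCErrorOr/MoveValue()
return style, and the method signatures are assumptions based on this
CL's description; treat it as pseudocode for how the objects compose, not
as the authoritative API.

  // Rough sketch; OrtcFactoryInterface and the signatures below are
  // assumptions, shown only to illustrate how the adapter objects compose:
  // one transport controller, one RTP transport per media type, and at
  // most one sender/receiver of each media type on that controller.
  void BuildSketch(
      rtc::PacketTransportInterface* audio_packet_transport,
      rtc::scoped_refptr<webrtc::AudioTrackInterface> audio_track) {
    auto factory = webrtc::OrtcFactoryInterface::Create().MoveValue();

    // Groups the transports that share things like bandwidth estimation.
    auto controller = factory->CreateRtpTransportController().MoveValue();

    // All audio senders/receivers on this controller must use this
    // transport, and it can't be changed once a sender/receiver exists.
    auto audio_transport =
        factory
            ->CreateRtpTransport(webrtc::RtcpParameters(),
                                 audio_packet_transport,
                                 nullptr /* rtcp_packet_transport */,
                                 controller.get())
            .MoveValue();

    // At most one audio sender (and one audio receiver) per controller.
    auto audio_sender =
        factory->CreateRtpSender(audio_track, audio_transport.get())
            .MoveValue();
    // Not all RTP parameters are supported by the adapter yet.
    audio_sender->Send(webrtc::RtpParameters());
  }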
Later, these "adapter" objects will be gradually replaced by real objects that don't
have these limitations, as "BaseChannel", "MediaChannel" and related code is
restructured. In this CL, we essentially have:
ORTC adapter objects -> BaseChannel -> Media engine
PeerConnection -> BaseChannel -> Media engine
And later we hope to have simply:
PeerConnection -> "Real" ORTC objects -> Media engine
See the linked bug for more context.
BUG=webrtc:7013
TBR=stefan@webrtc.org
Review-Url: https://codereview.webrtc.org/2675173003
Cr-Commit-Position: refs/heads/master@{#16842}