This change refactors WgcWindowCapturer into WgcCapturerWin, a source-agnostic capturer, and finishes the implementation to enable both window and screen capture.

This CL depends on another which must land first:
196622: Add ability to load CreateDirect3DDeviceFromDXGIDevice from d3d11.dll | https://webrtc-review.googlesource.com/c/src/+/196622

This feature remains off by default behind a build flag, because it adds a dependency on the Win10 SDK v10.0.19041, which not all consumers of WebRTC have upgraded to. A follow-up change will enable the rtc_enable_win_wgc build flag, but for now it should remain off.

The basic operation of this class is as follows: consumers call either WgcCapturerWin::CreateRawWindowCapturer or CreateRawScreenCapturer to receive a correctly initialized WgcCapturerWin object suitable for the desired source type. Callers then indicate the desired capture target via SelectSource and a SourceId, and the capturer creates an appropriate WgcCaptureSource for that type (window or screen) using the WgcCaptureSourceFactory supplied at construction.

Next, callers request frames for the currently selected source; the capturer creates a WgcCaptureSession and stores it in a map so that multiple sources can be captured more efficiently. The WgcCaptureSession is supplied with a GraphicsCaptureItem created by the WgcCaptureSource. It uses this item to create a Direct3D11CaptureFramePool and to create and start a GraphicsCaptureSession. Once started, captured frames begin to be deposited into the FramePool. Typically, one would listen for the FrameArrived event and process each frame there, but due to the synchronous nature of the DesktopCapturer interface, and to avoid a more complicated multi-threaded architecture, we ignore the FrameArrived event. Instead, we wait for the caller to request a frame, then check the FramePool for a frame and process it on demand.
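The select-then-capture flow with a per-source session map can be sketched with simplified stand-in types. Everything here (SourceId as a plain integer, FakeCaptureSession, FakeWgcCapturer) is a hypothetical reduction for illustration, not the real WebRTC API, which deals in GraphicsCaptureItems and Direct3D devices:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical stand-ins for the real WGC types, reduced for illustration.
using SourceId = std::intptr_t;

// Plays the role of WgcCaptureSession: one per selected source, cached so
// that switching between sources does not tear sessions down.
struct FakeCaptureSession {
  int frames_delivered = 0;
  // Stands in for polling the Direct3D11CaptureFramePool on demand rather
  // than reacting to the FrameArrived event.
  std::vector<uint8_t> GetFrame() {
    ++frames_delivered;
    return std::vector<uint8_t>(4, 0xFF);  // A dummy 1x1 BGRA frame.
  }
};

class FakeWgcCapturer {
 public:
  void SelectSource(SourceId id) { selected_ = id; }

  // Like the described WgcCapturerWin behavior: look up (or lazily create)
  // the session for the selected source, then synchronously pull a frame.
  std::vector<uint8_t> CaptureFrame() {
    auto it = sessions_.find(selected_);
    if (it == sessions_.end())
      it = sessions_.emplace(selected_, FakeCaptureSession{}).first;
    return it->second.GetFrame();
  }

  std::size_t session_count() const { return sessions_.size(); }

 private:
  SourceId selected_ = 0;
  std::map<SourceId, FakeCaptureSession> sessions_;
};
```

Switching back to a previously selected source reuses its cached session, which is the point of keeping the map rather than one session at a time.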
Processing a frame involves moving the image data from an ID3D11Texture2D stored on the GPU into a texture that is accessible from the CPU, and then copying the data into the new WgcDesktopFrame class. This copy is necessary because otherwise we would need to manage the lifetimes of the CaptureFrame and ID3D11Texture2D objects, lest the buffer be invalidated. Once we have copied the data and returned it to the caller, we can unmap the texture and exit the scope of the GetFrame method, which destructs the CaptureFrame object. At this point, the CaptureSession begins capturing a new frame, soon deposits it into the FramePool, and the cycle repeats.

Bug: webrtc:9273
Change-Id: I02263c4fd587df652b04d5267fad8965330d0f5b
Reviewed-on: https://webrtc-review.googlesource.com/c/src/+/200161
Reviewed-by: Jamie Walch <jamiewalch@chromium.org>
Reviewed-by: Guido Urdaneta <guidou@webrtc.org>
Commit-Queue: Austin Orion <auorion@microsoft.com>
Cr-Commit-Position: refs/heads/master@{#33083}
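The per-frame copy can be sketched as follows. A CPU-mapped D3D11 staging texture exposes its rows at a RowPitch that may be wider than width * 4 bytes, so the copy proceeds row by row into a tightly packed buffer that the frame object then owns. The function name and parameters here are illustrative, not the actual WebRTC code:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative sketch: copy pixel data laid out like a CPU-mapped staging
// texture (as exposed by ID3D11DeviceContext::Map) into a tightly packed
// buffer. `src_pitch` plays the role of RowPitch and may exceed width * 4.
std::vector<uint8_t> CopyMappedTexture(const uint8_t* src,
                                       int width,
                                       int height,
                                       int src_pitch) {
  const int dst_stride = width * 4;  // 4 bytes per pixel (BGRA).
  std::vector<uint8_t> image_data(static_cast<std::size_t>(dst_stride) *
                                  height);
  for (int row = 0; row < height; ++row) {
    std::memcpy(image_data.data() + row * dst_stride,
                src + row * src_pitch, dst_stride);
  }
  // Ownership of `image_data` would now be moved into a frame object, after
  // which the source texture can be unmapped safely.
  return image_data;
}
```

Because the destination buffer is independent of the texture, the texture can be unmapped and released as soon as the copy finishes, which is exactly the lifetime problem the copy avoids.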
/*
 *  Copyright (c) 2020 The WebRTC project authors. All Rights Reserved.
 *
 *  Use of this source code is governed by a BSD-style license
 *  that can be found in the LICENSE file in the root of the source
 *  tree. An additional intellectual property rights grant can be found
 *  in the file PATENTS. All contributing project authors may
 *  be found in the AUTHORS file in the root of the source tree.
 */

#include "modules/desktop_capture/win/wgc_desktop_frame.h"

#include <utility>

namespace webrtc {

WgcDesktopFrame::WgcDesktopFrame(DesktopSize size,
                                 int stride,
                                 std::vector<uint8_t>&& image_data)
    : DesktopFrame(size, stride, image_data.data(), nullptr),
      image_data_(std::move(image_data)) {}

WgcDesktopFrame::~WgcDesktopFrame() = default;

}  // namespace webrtc
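The constructor above hands image_data.data() to the DesktopFrame base and only then moves the vector into image_data_. This ordering is safe because moving a std::vector transfers its heap allocation rather than copying it, so the pointer the base class stored stays valid. A minimal standalone demonstration, where MiniFrame and MiniWgcFrame are hypothetical stand-ins for DesktopFrame and WgcDesktopFrame:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stand-in for webrtc::DesktopFrame: holds a raw pointer to
// pixel data that it does not own.
struct MiniFrame {
  explicit MiniFrame(uint8_t* data) : data_(data) {}
  uint8_t* data_;
};

// Mirrors WgcDesktopFrame: the base sees data() before the move, then the
// member takes ownership. The buffer address does not change, because the
// vector move steals the existing allocation instead of reallocating.
struct MiniWgcFrame : MiniFrame {
  explicit MiniWgcFrame(std::vector<uint8_t>&& image_data)
      : MiniFrame(image_data.data()), image_data_(std::move(image_data)) {}
  std::vector<uint8_t> image_data_;
};
```

Note that base classes are initialized before data members, so image_data is still untouched when data() is taken; the subsequent move leaves data_ pointing at the same bytes image_data_ now owns.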