The virtio-gpu is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. 3D mode offloads rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine.

The virtio-gpu is built around the concept of resources that are private to the host; the guest must DMA transfer data into these resources. This is a design requirement in order to interface with 3D rendering. In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them.

Resources are initially simple 2D resources, consisting of a width, height and format [1] along with an identifier. The guest must then attach backing store to the resources in order for DMA transfers to work.

From the guest's point of view, virtio-gpu is a DRM [2] driver for a GPU device. Upon loading, the driver checks whether 3D mode is supported by the host implementation and also requests the screen parameters. The host provides information about screen size and position and whether each display is enabled (i.e. connected) or not.
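For illustration, a minimal sketch of this probe sequence is given below. It is based on the virtio-gpu specification (the VIRTIO_GPU_F_VIRGL feature bit and the VIRTIO_GPU_CMD_GET_DISPLAY_INFO command); the struct layouts are abbreviated from linux/virtio_gpu.h, and negotiated_features()/query_display_info() are hypothetical stand-ins for the guest driver's virtio plumbing, not real kernel APIs.

    #include <stdint.h>
    #include <stdio.h>

    /* Abbreviated from the virtio-gpu specification (see linux/virtio_gpu.h). */
    #define VIRTIO_GPU_F_VIRGL              0       /* feature bit: host offers 3D (virgl) */
    #define VIRTIO_GPU_CMD_GET_DISPLAY_INFO 0x0100  /* query screen parameters */
    #define VIRTIO_GPU_MAX_SCANOUTS         16

    struct virtio_gpu_ctrl_hdr { uint32_t type, flags; uint64_t fence_id; uint32_t ctx_id, padding; };
    struct virtio_gpu_rect     { uint32_t x, y, width, height; };

    /* Response to VIRTIO_GPU_CMD_GET_DISPLAY_INFO: one entry per scanout with
     * its position, size and whether the display is enabled (connected). */
    struct virtio_gpu_resp_display_info {
        struct virtio_gpu_ctrl_hdr hdr;
        struct virtio_gpu_display_one {
            struct virtio_gpu_rect r;
            uint32_t enabled;
            uint32_t flags;
        } pmodes[VIRTIO_GPU_MAX_SCANOUTS];
    };

    /* Hypothetical stand-ins for the guest driver's virtio plumbing. */
    static uint64_t negotiated_features(void) { return 1ull << VIRTIO_GPU_F_VIRGL; }
    static void query_display_info(struct virtio_gpu_resp_display_info *info)
    {
        /* A real driver would send GET_DISPLAY_INFO on the control queue and
         * wait for the response; here we fake a single enabled 1024x768 display. */
        *info = (struct virtio_gpu_resp_display_info){0};
        info->pmodes[0].r.width  = 1024;
        info->pmodes[0].r.height = 768;
        info->pmodes[0].enabled  = 1;
    }

    int main(void)
    {
        int have_3d = (negotiated_features() >> VIRTIO_GPU_F_VIRGL) & 1;
        printf("3D (virgl) mode: %s\n", have_3d ? "available" : "not available");

        struct virtio_gpu_resp_display_info info;
        query_display_info(&info);
        for (int i = 0; i < VIRTIO_GPU_MAX_SCANOUTS; i++)
            if (info.pmodes[i].enabled)
                printf("scanout %d: %ux%u at (%u,%u)\n", i,
                       (unsigned)info.pmodes[i].r.width, (unsigned)info.pmodes[i].r.height,
                       (unsigned)info.pmodes[i].r.x, (unsigned)info.pmodes[i].r.y);
        return 0;
    }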

Allocation of framebuffers, as well as general GPU memory allocation (so-called dumb buffers, see [3]), is done on the guest side in the following way: the guest creates a host-side resource, allocates guest memory for it, and attaches that memory to the resource as backing store (see the sketch below).
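As a minimal sketch (struct layouts abbreviated from the virtio-gpu specification; ctrlq_submit() is a hypothetical stand-in for placing a command on the control virtqueue, and resource id 1 is chosen arbitrarily):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Abbreviated from the virtio-gpu specification; the authoritative,
     * little-endian definitions live in include/uapi/linux/virtio_gpu.h. */
    #define VIRTIO_GPU_CMD_RESOURCE_CREATE_2D      0x0101
    #define VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING 0x0106
    #define VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM       1

    struct virtio_gpu_ctrl_hdr { uint32_t type, flags; uint64_t fence_id; uint32_t ctx_id, padding; };
    struct virtio_gpu_resource_create_2d {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id, format, width, height;
    };
    struct virtio_gpu_mem_entry { uint64_t addr; uint32_t length, padding; };
    struct virtio_gpu_resource_attach_backing {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id, nr_entries;
        /* followed by nr_entries struct virtio_gpu_mem_entry */
    };

    /* Hypothetical stand-in for placing a command on the control virtqueue. */
    static void ctrlq_submit(const void *cmd, size_t len)
    {
        printf("submit cmd type 0x%04x (%zu bytes)\n",
               (unsigned)((const struct virtio_gpu_ctrl_hdr *)cmd)->type, len);
    }

    int main(void)
    {
        uint32_t width = 1024, height = 768;

        /* 1. Create a host-private 2D resource with the desired size and format. */
        struct virtio_gpu_resource_create_2d create = {
            .hdr.type    = VIRTIO_GPU_CMD_RESOURCE_CREATE_2D,
            .resource_id = 1,
            .format      = VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM,
            .width       = width,
            .height      = height,
        };
        ctrlq_submit(&create, sizeof(create));

        /* 2. Allocate guest memory for the dumb buffer... */
        void *fb = calloc((size_t)width * height, 4);

        /* 3. ...and attach it to the resource as backing store so that DMA
         *    transfers into the host resource can work. */
        struct {
            struct virtio_gpu_resource_attach_backing cmd;
            struct virtio_gpu_mem_entry entry;
        } attach = {
            .cmd   = { .hdr.type = VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING,
                       .resource_id = 1, .nr_entries = 1 },
            /* A real driver passes a scatter-gather list of guest-physical
             * addresses here, not a virtual pointer. */
            .entry = { .addr = (uintptr_t)fb, .length = width * height * 4 },
        };
        ctrlq_submit(&attach, sizeof(attach));

        free(fb);
        return 0;
    }

Once backing store is attached, later transfer commands can move pixel data from this guest memory into the host-private resource.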

The actual display operation is then accomplished by the guest like this: the resource is set as the scanout for a display, the guest renders into the backing store, transfers the updated region into the host resource, and asks the host to flush it to the screen (see the sketch below).
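Again as an illustrative sketch (struct layouts abbreviated from the virtio-gpu specification; ctrlq_submit() is a hypothetical stand-in for the driver's control-virtqueue code, and resource id 1 refers to the resource created in the previous sketch):

    #include <stdint.h>
    #include <stdio.h>

    /* Abbreviated from the virtio-gpu specification (see linux/virtio_gpu.h). */
    #define VIRTIO_GPU_CMD_SET_SCANOUT         0x0103
    #define VIRTIO_GPU_CMD_RESOURCE_FLUSH      0x0104
    #define VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D 0x0105

    struct virtio_gpu_ctrl_hdr { uint32_t type, flags; uint64_t fence_id; uint32_t ctx_id, padding; };
    struct virtio_gpu_rect     { uint32_t x, y, width, height; };
    struct virtio_gpu_set_scanout {
        struct virtio_gpu_ctrl_hdr hdr;
        struct virtio_gpu_rect r;
        uint32_t scanout_id, resource_id;
    };
    struct virtio_gpu_transfer_to_host_2d {
        struct virtio_gpu_ctrl_hdr hdr;
        struct virtio_gpu_rect r;
        uint64_t offset;
        uint32_t resource_id, padding;
    };
    struct virtio_gpu_resource_flush {
        struct virtio_gpu_ctrl_hdr hdr;
        struct virtio_gpu_rect r;
        uint32_t resource_id, padding;
    };

    /* Hypothetical stand-in for placing a command on the control virtqueue. */
    static void ctrlq_submit(const void *cmd, size_t len)
    {
        printf("submit cmd type 0x%04x (%zu bytes)\n",
               (unsigned)((const struct virtio_gpu_ctrl_hdr *)cmd)->type, len);
    }

    int main(void)
    {
        struct virtio_gpu_rect full = { .x = 0, .y = 0, .width = 1024, .height = 768 };

        /* 1. Point scanout (display) 0 at the resource created earlier. */
        struct virtio_gpu_set_scanout scanout = {
            .hdr.type = VIRTIO_GPU_CMD_SET_SCANOUT,
            .r = full, .scanout_id = 0, .resource_id = 1,
        };
        ctrlq_submit(&scanout, sizeof(scanout));

        /* 2. The guest renders into the attached backing store (not shown). */

        /* 3. DMA the updated region from the guest backing store into the
         *    host-private resource. */
        struct virtio_gpu_transfer_to_host_2d xfer = {
            .hdr.type = VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D,
            .r = full, .offset = 0, .resource_id = 1,
        };
        ctrlq_submit(&xfer, sizeof(xfer));

        /* 4. Ask the host to flush the updated resource to the display. */
        struct virtio_gpu_resource_flush flush = {
            .hdr.type = VIRTIO_GPU_CMD_RESOURCE_FLUSH,
            .r = full, .resource_id = 1,
        };
        ctrlq_submit(&flush, sizeof(flush));

        return 0;
    }

Step 3 is where the DMA transfer into the host-private resource happens; without it the host never sees the pixels the guest rendered.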

How the framebuffer is processed and displayed on the host side is implementation-defined; QEMU, for instance, uses libpixman for this purpose.


The virtio-gpu 3D implementation is more complex. Besides supporting all of the functionality described above, it also introduces some architectural changes as well as additions to the virtio-gpu command set. It is still a work in progress.

On the guest side some modifications have been made to the Mesa library (an open-source implementation of many graphics APIs, including OpenGL, OpenGL ES (versions 1, 2, 3) and OpenCL). Applications on the guest side still speak unmodified OpenGL to the Mesa library, but instead of Mesa handing commands over to the hardware, they are channeled through virtio-gpu to the backend on the host. The backend receives the raw graphics stack state (Gallium state, see [4]) and uses virglrenderer to translate that raw state into an OpenGL form, which can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format [5] used by Gallium into the GLSL format used by OpenGL. The OpenGL stack on the host side does not even have to be Mesa; it could be some proprietary graphics stack.
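The "channel" mentioned above is, per the virtio-gpu specification, the VIRTIO_GPU_CMD_SUBMIT_3D command: a small header followed by an opaque command buffer produced by the guest's Gallium driver and decoded by virglrenderer on the host. A simplified view of the layout (abbreviated from linux/virtio_gpu.h):

    #include <stdint.h>

    /* Abbreviated from the virtio-gpu specification (see linux/virtio_gpu.h).
     * The header is followed by `size` bytes of command stream produced by the
     * guest's Gallium/virgl driver; the host hands that buffer to virglrenderer,
     * which decodes it and replays it as ordinary OpenGL calls. */
    struct virtio_gpu_ctrl_hdr {
        uint32_t type;      /* VIRTIO_GPU_CMD_SUBMIT_3D */
        uint32_t flags;
        uint64_t fence_id;  /* lets the guest wait for command completion */
        uint32_t ctx_id;    /* rendering context the stream belongs to */
        uint32_t padding;
    };

    struct virtio_gpu_cmd_submit {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t size;      /* length of the command stream that follows */
        uint32_t padding;
        /* ...followed by the opaque command buffer */
    };

Note that the command buffer is opaque to the virtio-gpu transport itself; only the guest's Gallium driver and virglrenderer need to agree on its contents.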

As for the command-set extensions, the following commands were added (sketched below).
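Roughly, per the virtio-gpu specification, the 3D additions cover context management, 3D resource creation, transfers in both directions, and command-stream submission. The listing below abbreviates them from linux/virtio_gpu.h (grouped here for illustration; in the header they are part of enum virtio_gpu_ctrl_type):

    /* 3D (virgl) additions to the control command set, abbreviated from the
     * virtio-gpu specification / linux/virtio_gpu.h. */
    enum virtio_gpu_cmd_3d {
        VIRTIO_GPU_CMD_CTX_CREATE = 0x0200,    /* create a rendering context */
        VIRTIO_GPU_CMD_CTX_DESTROY,            /* destroy a rendering context */
        VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE,    /* attach a resource to a context */
        VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE,    /* detach a resource from a context */
        VIRTIO_GPU_CMD_RESOURCE_CREATE_3D,     /* create a 3D resource (texture, buffer, ...) */
        VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D,    /* DMA guest memory -> host resource */
        VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D,  /* DMA host resource -> guest memory */
        VIRTIO_GPU_CMD_SUBMIT_3D,              /* submit a Gallium/virgl command stream */
    };

In addition, capability-set query commands (VIRTIO_GPU_CMD_GET_CAPSET_INFO and VIRTIO_GPU_CMD_GET_CAPSET) let the guest discover what the 3D backend supports.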

The overall complexity of virtio-gpu 3D is mostly handled by the Mesa library on the guest side and by virglrenderer on the host side.

VirGL architecture, comparison to other approaches, future directions

An architecture presentation from Collabora (VirGL developers) has been attached below. This outlines the current design and approach taken for VirGL development, the reasons behind these choices, a comparison to other approaches (e.g. direct hardware sharing provided by vendor drivers), and some possible future directions for development of open, vendor-neutral, graphics sharing standards.

[1] https://en.wikipedia.org/wiki/FourCC

[2] https://events.static.linuxfound.org/sites/events/files/slides/brezillon-drm-kms.pdf

[3] https://www.systutorials.com/docs/linux/man/7-drm-memory/

[4] https://www.freedesktop.org/wiki/Software/gallium/

[5] https://gallium.readthedocs.io/en/latest/tgsi.html