The virtio-gpu device is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. 3D mode offloads rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine.

The virtio-gpu is based around the concept of resources private to the host; the guest must DMA transfer data into these resources. This is a design requirement in order to interface with 3D rendering. In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them.

Resources are initially simple 2D resources, consisting of a width, height and format [1] along with an identifier. The guest must then attach backing store to the resources in order for DMA transfers to work.
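
A 2D resource is created with a small control command carrying exactly these fields. The sketch below is modeled on the VIRTIO_GPU_CMD_RESOURCE_CREATE_2D request from the virtio-gpu specification; the common command header is omitted and field types are simplified:

    #include <stdint.h>

    /* Sketch of the 2D resource creation command
     * (VIRTIO_GPU_CMD_RESOURCE_CREATE_2D). All fields are little-endian
     * on the wire; the common control header that precedes every command
     * is left out for brevity. */
    struct virtio_gpu_resource_create_2d {
        uint32_t resource_id;   /* guest-chosen identifier for the resource */
        uint32_t format;        /* pixel format of the resource */
        uint32_t width;         /* width in pixels */
        uint32_t height;        /* height in pixels */
    };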

From the guest's point of view virtio-gpu is a DRM [2] driver for some GPU device. Upon loading, the driver checks whether 3D mode is supported by the host implementation and also requests the screen(s) parameters. The host provides information about screen size, position and whether the display is enabled (i.e. connected) or not.
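
The display information is returned per scanout as a rectangle plus an enabled flag. A simplified sketch of the response, modeled on the VIRTIO_GPU_CMD_GET_DISPLAY_INFO reply in the virtio-gpu specification (response header and padding omitted, types simplified):

    #include <stdint.h>

    #define VIRTIO_GPU_MAX_SCANOUTS 16

    /* Position and size of one scanout (i.e. one virtual screen). */
    struct virtio_gpu_rect {
        uint32_t x, y;
        uint32_t width, height;
    };

    /* Simplified sketch of the GET_DISPLAY_INFO response: one entry per scanout. */
    struct virtio_gpu_resp_display_info {
        struct {
            struct virtio_gpu_rect r;   /* screen size and position */
            uint32_t enabled;           /* non-zero if the display is connected */
            uint32_t flags;
        } pmodes[VIRTIO_GPU_MAX_SCANOUTS];
    };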

Allocation of framebuffers, as well as general GPU memory allocations (so-called dumb buffers, see [3]), is done on the guest side in the following way (a sketch of the corresponding command structures follows the list):

  • issue a virtio command to create a resource on the host side (i.e. create a descriptor)
  • allocate a framebuffer from guest RAM
  • construct a scatter-gather list out of the allocated memory (the framebuffer doesn't need to be contiguous in guest physical memory)
  • send a command to make the scatter-gather list available to the host (i.e. map the memory for the host system)
  • send a command to link the framebuffer to a display scanout on the host side (i.e. set the framebuffer as a current one)
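
The scatter-gather list is handed to the host as a list of address/length pairs attached to the resource. A minimal sketch, modeled on the VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING request from the virtio-gpu specification (command header omitted, types simplified):

    #include <stdint.h>

    /* One entry of the scatter-gather list: a guest-physical address range. */
    struct virtio_gpu_mem_entry {
        uint64_t addr;      /* guest-physical address of the range */
        uint32_t length;    /* length of the range in bytes */
        uint32_t padding;
    };

    /* Sketch of VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING: attach guest pages
     * as backing store for a previously created resource. The command is
     * followed on the queue by nr_entries virtio_gpu_mem_entry structures. */
    struct virtio_gpu_resource_attach_backing {
        uint32_t resource_id;   /* resource created by RESOURCE_CREATE_2D */
        uint32_t nr_entries;    /* number of scatter-gather entries that follow */
    };

Linking the framebuffer to a display is then done with a VIRTIO_GPU_CMD_SET_SCANOUT command that names the scanout and the resource id.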

The actual displaying operation is accomplished by the guest like this (see the sketch after the list):

  • render to the memory of the framebuffer in question
  • send a command to flush the updated resource to the display on the host side
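
In line with the DMA model described above, the flush is preceded by a transfer that copies the dirty region from the guest framebuffer into the host-private resource. A minimal sketch of the two commands, modeled on VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D and VIRTIO_GPU_CMD_RESOURCE_FLUSH from the virtio-gpu specification (command headers omitted, types simplified):

    #include <stdint.h>

    struct virtio_gpu_rect {
        uint32_t x, y, width, height;
    };

    /* DMA the dirty rectangle from the guest backing store into the host resource. */
    struct virtio_gpu_transfer_to_host_2d {
        struct virtio_gpu_rect r;   /* region of the resource to update */
        uint64_t offset;            /* byte offset into the backing store */
        uint32_t resource_id;
        uint32_t padding;
    };

    /* Ask the host to flush the updated region of the resource to the display. */
    struct virtio_gpu_resource_flush {
        struct virtio_gpu_rect r;   /* region to present */
        uint32_t resource_id;
        uint32_t padding;
    };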

Actual framebuffer processing and displaying on the host side is implementation defined. QEMU, for instance, uses libpixman for this purpose.

The virtio-gpu 3D implementation is more complex. Besides supporting all the functionality from above, it also introduces some architecture changes as well as additions to the virtio-gpu command set. It is still work in progress.

On the guest side, some modifications to the Mesa library (an open-source implementation of many graphics APIs, including OpenGL, OpenGL ES (versions 1, 2, 3) and OpenCL) have been made. Applications on the guest side still speak unmodified OpenGL to the Mesa library, but instead of Mesa handing commands over to the hardware, they are channeled through virtio-gpu to the backend on the host. The backend receives the raw graphics stack state (Gallium state, see [4]) and uses virglrenderer to interpret the raw state into an OpenGL form, which can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format [5] used by Gallium into the GLSL format used by OpenGL. The OpenGL stack on the host side does not even have to be Mesa; it could be some proprietary graphics stack.

Regarding the command set extensions, the following commands were added (see the sketch after the list):

  • before creating any resources, a guest has to issue a new dedicated command in order to create a Virgl context on the host side, so that resources can be kept separate between processes
  • a new create command with an extended set of parameters has been introduced to create a 3D resource
  • an additional command should be issued to attach a newly allocated resource to a previously allocated Virgl context
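
These additions correspond to dedicated control commands in the virtio-gpu command set. A sketch of the relevant constants, with the values assigned by the virtio-gpu specification (2D commands omitted):

    /* 3D (virgl) additions to the virtio-gpu control command set (subset). */
    enum virtio_gpu_ctrl_type {
        VIRTIO_GPU_CMD_CTX_CREATE          = 0x0200, /* create a Virgl context on the host */
        VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE = 0x0202, /* attach a resource to a context */
        VIRTIO_GPU_CMD_RESOURCE_CREATE_3D  = 0x0204, /* create a 3D resource (extended parameters) */
        VIRTIO_GPU_CMD_SUBMIT_3D           = 0x0207, /* submit a batch of rendering commands */
    };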

The overall complexity of virtio-gpu 3D is mostly handled by the Mesa library on the guest side and by virglrenderer on the host side.

[1] https://en.wikipedia.org/wiki/FourCC

[2] https://events.static.linuxfound.org/sites/events/files/slides/brezillon-drm-kms.pdf

[3] https://www.systutorials.com/docs/linux/man/7-drm-memory/

[4] https://www.freedesktop.org/wiki/Software/gallium/

[5] https://gallium.readthedocs.io/en/latest/tgsi.html

For a virtual block device (i.e. a disk), instead of placing write and read requests with the actual device, the backend needs to establish the connection to it. In consequence, read and write requests (and other exotic requests) are placed in the virtio queue and serviced (probably out of order) by the device, except where noted.
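
Each queued request carries its type, a starting sector, the data buffer and a status byte written back by the device. A minimal sketch of the request layout as described in the virtio-blk part of the virtio specification (types simplified, little-endian on the wire):

    #include <stdint.h>

    /* Request types defined by the virtio-blk specification. */
    #define VIRTIO_BLK_T_IN     0   /* read from the device */
    #define VIRTIO_BLK_T_OUT    1   /* write to the device */
    #define VIRTIO_BLK_T_FLUSH  4   /* flush volatile writes to stable storage */

    /* Sketch of a single virtio-blk request as placed on the virtqueue. */
    struct virtio_blk_req {
        uint32_t type;      /* VIRTIO_BLK_T_IN, _OUT or _FLUSH */
        uint32_t reserved;
        uint64_t sector;    /* starting sector (512-byte units) for reads/writes */
        /* followed by the data buffer and a one-byte status written by the device */
    };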

A write is considered volatile when it is submitted; the contents of sectors covered by a volatile write are undefined in persistent device backend storage until the write becomes stable. A write becomes stable once it is completed and one or more of the following conditions is true (a sketch of the flush-based case follows the list):

  1. neither the VIRTIO_BLK_F_CONFIG_WCE nor the VIRTIO_BLK_F_FLUSH feature was negotiated, but VIRTIO_BLK_F_FLUSH was offered by the device;
  2. the VIRTIO_BLK_F_CONFIG_WCE feature was negotiated and the writeback field in configuration space was 0 all the time between the submission of the write and its completion;
  3. a VIRTIO_BLK_T_FLUSH request is sent after the write is completed and is completed itself.
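
A minimal sketch of the third case, making a completed write stable by issuing an explicit flush; the submit_and_wait() helper is a hypothetical placeholder for "put the request on the virtqueue and wait for its completion":

    #include <stdint.h>
    #include <stddef.h>

    #define VIRTIO_BLK_T_OUT    1
    #define VIRTIO_BLK_T_FLUSH  4

    /* Hypothetical helper: enqueue a request on the virtqueue and block
     * until the device marks it as used (i.e. completed). */
    extern void submit_and_wait(uint32_t type, uint64_t sector,
                                const void *data, uint64_t len);

    void write_stable(uint64_t sector, const void *data, uint64_t len)
    {
        /* The write is merely volatile once it completes... */
        submit_and_wait(VIRTIO_BLK_T_OUT, sector, data, len);

        /* ...and becomes stable when a subsequent FLUSH request,
         * issued after the write completed, itself completes
         * (condition 3 above). */
        submit_and_wait(VIRTIO_BLK_T_FLUSH, 0, NULL, 0);
    }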

In the automotive case, the decision to make data persistent also depends on the platform health status. This is special, as the block devices used are either NOR or NAND flash devices, e.g. QSPI-attached flash devices or eMMC flash devices. Below a certain threshold value or above a certain temperature level, for example, these devices cannot be used anymore. Hence, the general approach of virtio should be refined, i.e. specifically tailored to the aforementioned devices. This would yield virtio NOR flash devices or virtio eMMC devices, with the benefit that the essential features could be supported.

Examples of such features are trimming and block erase commands, data refresh reads, or the use case of wear-leveling and other data persistence management features hidden inside the virtio block device machinery.
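
As an illustration of the direction such a refinement could take: later revisions of the virtio-blk specification already define discard (trim) and write-zeroes request types, and a flash-oriented virtio device could build on a similar segment-based layout. The sketch below shows the discard segment as defined there; any NOR/eMMC-specific erase or refresh command would be a hypothetical extension on top of it:

    #include <stdint.h>

    /* Discard/write-zeroes request types from later virtio-blk revisions. */
    #define VIRTIO_BLK_T_DISCARD       11
    #define VIRTIO_BLK_T_WRITE_ZEROES  13

    /* One discard/write-zeroes segment: a range of sectors to trim or zero.
     * A request of type VIRTIO_BLK_T_DISCARD carries one or more of these. */
    struct virtio_blk_discard_write_zeroes {
        uint64_t sector;        /* first sector of the range */
        uint32_t num_sectors;   /* number of sectors in the range */
        uint32_t flags;         /* bit 0 (unmap) is only valid for write-zeroes */
    };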

