
Automotive Virtual Platform Specification

1. Introduction

Automotive requirements lead to particular choices and needs from the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of their progression comes from IT/server consolidation and, in the Linux world, some comes from virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitutes the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together, these APIs make up the definition of a virtual platform.

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular) don't need to be maintained separately for different hypervisors
  • Moving hypervisor guests between different hypervisor environments is simplified
  • Some potential for shared implementation across guest operating systems
  • Some potential for shared implementation across hypervisors with different license models
  • Industry-shared requirements and test suites, and a common vocabulary and understanding, reduce the complexity of virtualization.

In comparison, the OCI initiative serves a similar purpose for containers.  Just as there are many compatible container runtimes, there could be standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

  • Hypervisors can fulfil the specification while keeping local optimizations / advantages
  • Similarly, guest VMs can be engineered to match the specification.

2. Architecture

Assumptions made about the architecture, use-cases...

Limits to applicability, etc...


3. General requirements

Automotive requirements to be met (general)...

4. Common Virtual Device categories

4.1 Block Device

  [Placeholder, first text from Kai L]


4.1.x Meeting automotive persistence requirements

  Comments on the above, according to previous discussions


4.2 Network Device

...


4.3 GPU Device

virtio-gpu is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. The device architecture is based around the concept of resources that are private to the host; the guest must transfer data into these resources using DMA. This is a design requirement in order to interface with 3D rendering.

4.3.1 GPU Device in 2D Mode

In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them. Resources are initially simple 2D resources, consisting of a width, height and format, along with an identifier. The guest must then attach backing store to a resource in order for DMA transfers to work; the sketch below illustrates the resulting command sequence.
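
To make this concrete, here is a minimal sketch (in C) of the command sequence a guest might issue to bring up a simple framebuffer. The helper send_cmd() is hypothetical shorthand for "build the per-command request struct, queue it on the controlq and wait for the response"; the command values are as we read them in [VIRTIO-GPU] and should be confirmed against chapter 5.7.6.

    #include <stdint.h>
    #include <stdio.h>

    /* Control-queue command types, per chapter 5.7.6 in [VIRTIO-GPU]. */
    enum {
        VIRTIO_GPU_CMD_RESOURCE_CREATE_2D      = 0x0101,
        VIRTIO_GPU_CMD_SET_SCANOUT             = 0x0103,
        VIRTIO_GPU_CMD_RESOURCE_FLUSH          = 0x0104,
        VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D     = 0x0105,
        VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING = 0x0106,
    };

    /* Hypothetical helper: build the request struct for 'type', queue it
     * on the controlq and wait for the device's response. */
    static void send_cmd(uint32_t type, ...)
    {
        printf("controlq command 0x%04x\n", type);
    }

    /* Bring up a displayable 2D resource, following the flow above. */
    static void setup_2d_framebuffer(void *pages, uint32_t npages)
    {
        const uint32_t res_id = 1;   /* guest-chosen resource identifier */

        /* 1. Create a host-private resource: width, height, format, id. */
        send_cmd(VIRTIO_GPU_CMD_RESOURCE_CREATE_2D, res_id, 1024u, 768u);

        /* 2. Attach guest pages as backing store: a scatter-gather list
         *    of (address, length) entries, so the pages need not be
         *    physically contiguous. */
        send_cmd(VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING, res_id, pages, npages);

        /* 3. Associate the resource with scanout 0. */
        send_cmd(VIRTIO_GPU_CMD_SET_SCANOUT, 0u, res_id);

        /* 4. DMA the guest backing store into the host resource ... */
        send_cmd(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D, res_id);

        /* 5. ... and ask the host to display the updated region. */
        send_cmd(VIRTIO_GPU_CMD_RESOURCE_FLUSH, res_id);
    }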

Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

(lightbulb) REQ-3:   The VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU], SHALL NOT be set.

Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

(lightbulb)      REQ-4.1: The implementation SHALL NOT touch the reserved structure field as it is used for the 3D mode.

Device Operation.

(lightbulb) REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

(lightbulb)      REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

(lightbulb)      REQ-5.2: The implementation MUST be capable of performing DMA operations to the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

VGA Compatibility.

(lightbulb) REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is optional.


4.3.2 GPU Device in 3D Mode

3D mode offloads rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine. The guest side requires additional software in order to convert OpenGL commands to raw graphics stack state (Gallium state) and channel it through virtio-gpu to the host; currently the Mesa library is used for this purpose. The backend receives the raw graphics stack state and interprets it, using the virglrenderer library, into an OpenGL form that can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL.
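
For illustration, a hedged sketch of the request structures involved in command submission, assuming the layout described in [VIRTIO-VIRGL] and used by the Linux virtio-gpu driver (the exact layout should be confirmed against those sources):

    #include <stdint.h>

    /* Common request header, per chapter 5.7.6 in [VIRTIO-GPU]. */
    struct virtio_gpu_ctrl_hdr {
        uint32_t type;      /* e.g. VIRTIO_GPU_CMD_SUBMIT_3D */
        uint32_t flags;
        uint64_t fence_id;
        uint32_t ctx_id;    /* rendering context; only meaningful in 3D mode */
        uint32_t padding;
    };

    /* SUBMIT_3D carries an opaque buffer of Gallium state commands that
     * Mesa produced in the guest; virglrenderer decodes it on the host. */
    struct virtio_gpu_cmd_submit {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t size;      /* byte length of the command buffer that
                               follows this header in the request */
        uint32_t padding;
    };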

Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

(lightbulb) REQ-3:   The implementation MUST set the VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU].

Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

(lightbulb)      REQ-4.1: The implementation MUST use the previously reserved config structure field to report the number of capsets supported by the virglrenderer library.

(lightbulb)           REQ-4.1.1: The implementation SHALL NOT report the value of '0', as that is treated as absence of 3D support (see the sketch below).
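
A hedged sketch of this config-space rule, assuming the chapter 5.7.4 layout in [VIRTIO-GPU] with the formerly reserved field reused as num_capsets:

    #include <stdint.h>

    struct virtio_gpu_config {
        uint32_t events_read;
        uint32_t events_clear;
        uint32_t num_scanouts;
        uint32_t num_capsets;   /* formerly 'reserved'; 0 means no 3D support */
    };

    /* Guest-side sanity check: a device advertising VIRTIO_GPU_F_VIRGL
     * must report at least one capability set (REQ-4.1.1). */
    static int gpu_has_3d(const struct virtio_gpu_config *cfg)
    {
        return cfg->num_capsets > 0;
    }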

Device Operation.

(lightbulb) REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

(lightbulb)      REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

(lightbulb)      REQ-5.2: The implementation MUST support the extended command set as described in chapter 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].

(lightbulb)      REQ-5.3: The implementation MUST support the 3D command set as described in chapter 'VIRTIO_GPU_CMD_SUBMIT_3D' in [VIRTIO-VIRGL].

(lightbulb)      REQ-5.4: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET_INFO command set as described in [??? only kernel sources as a reference so far].

(lightbulb)      REQ-5.5: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET command set as described in [??? only kernel sources as a reference so far] (see the sketch after this list).

(lightbulb)      REQ-5.6: The implementation MUST be capable of performing DMA operations to and from the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU] and in 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].
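
Since only the kernel sources document the capset commands of REQ-5.4 and REQ-5.5 so far, here is a hedged sketch of the structures as they appear in the Linux uapi header (linux/virtio_gpu.h); the layouts should be confirmed there. The common virtio_gpu_ctrl_hdr (shown earlier) precedes each struct and is omitted here:

    #include <stdint.h>

    /* VIRTIO_GPU_CMD_GET_CAPSET_INFO: query which capability set is at
     * the given index, and its maximum version and size. */
    struct virtio_gpu_get_capset_info {
        /* struct virtio_gpu_ctrl_hdr hdr;  -- common header, see above */
        uint32_t capset_index;
        uint32_t padding;
    };

    struct virtio_gpu_resp_capset_info {
        /* struct virtio_gpu_ctrl_hdr hdr; */
        uint32_t capset_id;
        uint32_t capset_max_version;
        uint32_t capset_max_size;
        uint32_t padding;
    };

    /* VIRTIO_GPU_CMD_GET_CAPSET: fetch the capability set contents for a
     * given (id, version) pair; the response carries the raw data. */
    struct virtio_gpu_get_capset {
        /* struct virtio_gpu_ctrl_hdr hdr; */
        uint32_t capset_id;
        uint32_t capset_version;
    };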

VGA Compatibility.

(lightbulb) REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is optional.

Additional features.

(lightbulb) REQ-7:  In addition to the command set and features defined in [VIRTIO-GPU] and [VIRTIO-VIRGL], the implementation MAY provide:

    • an additional flag to request more detailed GL error reporting to the client


4.4 IOMMU Device

NOTE: The current specification draft looks quite complete, except that it marks many requirements as SHOULD or MAY and leaves the decision to the implementation. Here we try to provide stricter rules where applicable.

An IOMMU provides virtual address spaces to other devices. Traditionally, devices able to do Direct Memory Access (DMA masters) would use bus addresses, allowing them to access most of the system memory. An IOMMU limits their scope, enforcing the address ranges and permissions of DMA transactions. The virtio-iommu device manages Direct Memory Access (DMA) from one or more physical or virtual devices assigned to a guest; the sketch below illustrates the two central request types.
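
For illustration, a hedged sketch of the ATTACH and MAP requests, assuming the request layout of [VIRTIO-IOMMU] draft 0.8 (field sizes, type values and the exact tail format should be checked against the draft):

    #include <stdint.h>

    #define VIRTIO_IOMMU_T_ATTACH    1
    #define VIRTIO_IOMMU_T_MAP       3
    #define VIRTIO_IOMMU_MAP_F_READ  (1u << 0)
    #define VIRTIO_IOMMU_MAP_F_WRITE (1u << 1)

    /* ATTACH: place an endpoint (a DMA-capable device) under a domain,
     * i.e. into one virtual address space. */
    struct virtio_iommu_req_attach {
        uint8_t  type;          /* VIRTIO_IOMMU_T_ATTACH */
        uint8_t  reserved[3];
        uint32_t domain;        /* address space to attach to */
        uint32_t endpoint;      /* device identifier */
        /* ...followed by a tail whose status field the device fills in */
    };

    /* MAP: create a virtual-to-physical mapping with given permissions in
     * a domain; DMA outside mapped ranges is rejected by the IOMMU. */
    struct virtio_iommu_req_map {
        uint8_t  type;          /* VIRTIO_IOMMU_T_MAP */
        uint8_t  reserved[3];
        uint32_t domain;
        uint64_t virt_start;
        uint64_t virt_end;      /* inclusive end of the range */
        uint64_t phys_start;
        uint32_t flags;         /* VIRTIO_IOMMU_MAP_F_READ | ..._WRITE */
        /* ...followed by a tail whose status field the device fills in */
    };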

Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 2.1 in [VIRTIO-IOMMU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 2.2 in [VIRTIO-IOMMU].

Feature bits.

(lightbulb) REQ-3:   The valid feature bits set is described in chapter 2.3 in [VIRTIO-IOMMU] and is dependent on the particular implementation.

Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 2.4 in [VIRTIO-IOMMU].

Device initialization.

(lightbulb) REQ-5:   The implementation MUST follow the initialisation guideline according to chapter 2.5 in [VIRTIO-IOMMU].

(lightbulb)      REQ-5.1: As for chapter 2.5.2, the requirement regarding the non-accepted VIRTIO_IOMMU_F_BYPASS feature is to be read as SHALL NOT.

Device Operation.

(lightbulb) REQ-6:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 2.6 in [VIRTIO-IOMMU].

(lightbulb)      REQ-6.1: As for chapter 2.6.2, MAY and SHOULD are to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.2: As for chapter 2.6.3.2, when implementing a handler for the ATTACH request, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.3: As for chapter 2.6.4.2, when implementing a handler for the DETACH request, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.4: As for chapter 2.6.5.2, when implementing a handler for the MAP request, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.5: As for chapter 2.6.6.2, when implementing a handler for the UNMAP request, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.6: As for chapter 2.6.7.2, when implementing a handler for the PROBE request, MAY and SHOULD are to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.7: As for chapter 2.6.8.2.2, when implementing support for the RESV_MEM property, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

(lightbulb)      REQ-6.8: As for chapter 2.6.9.2, when implementing support for fault reporting, SHOULD is to be read as MUST, SHOULD NOT is to be read as SHALL NOT.

5. Supplemental Virtual Device categories

5.1 9pfs and host-to-VM filesystem sharing

Host to VM disk sharing

The function of providing disk access in the form of a "shared folder", or full disk passthrough, seems to have appeared mostly to support desktop virtualization of the type where, for example, the user wants to run Microsoft Windows in combination with macOS or Linux by running it in a virtual machine hosted by their main operating system. It might also serve some purpose in server virtualization if that is based on a Type-2 hypervisor, which is itself an operating system kernel that also hosts multiple virtualized environments.

For the automotive use case, the working group found little need for this type of setup, although we summarize the situation here in case the need arises for some particular product.

If network disk access is simply the desired feature, then most systems will be able to implement a larger and more standard protocol, such as NFS, within the normal operating system environment running in the VM, and share storage over the (virtual) network. In other words, for many use cases it need not be implemented in the hypervisor itself.

[VIRTIO] describes one network disk protocol for the purpose of hypervisor-to-VM storage sharing. The 9pfs protocol is mentioned in two ways: a PCI device type can indicate that it is going to use the 9P protocol, and the specification also lists 9P as a specific separate device type. There seems to be no definition of (or even specific reference to) the protocol itself; it is assumed to be well known by name and possible to find online. The specification is complemented by scattered information regarding the specific implementations (Xen, KVM, QEMU, ...)

The 9pfs protocol seems proven and adequate for what it does, though possibly more security features are needed, depending on the use case. [VIRTIO], however, seems to defer the definition completely to "somewhere else"; at least a reference to a canonical specification would seem appropriate.

It is a minimalistic network file-system protocol that the group thinks is appropriate for the task; other network protocols like NFS, SMB/Samba, etc. would be too heavy. 9pfs feels a bit esoteric, and while "reinventing" is usually unnecessary, there might be an appropriate opportunity to do so here, with a new modern protocol plus a reference open-source implementation. Flexibility and security deserve a closer look, as they seem somewhat glossed over in the current 9pfs description, which references only "fixed user" or "pass-through" for mapping ownership of files between guest and host.
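
For reference, mounting a virtio-9p share from the guest side needs nothing beyond the standard mount(2) interface, provided the guest kernel has 9p/virtio support. The mount tag "hostshare" and the mount point below are assumptions chosen for the example:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* source = the mount tag the hypervisor exposes; the data string
         * selects the virtio transport and the 9P2000.L dialect. */
        if (mount("hostshare", "/mnt/host", "9p", 0,
                  "trans=virtio,version=9p2000.L") != 0) {
            perror("mount 9p");
            return 1;
        }
        return 0;
    }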

Links: Virtio 1.0 spec: {PCI-9P, 9P device type}. Kernel support: Xen/Linux 4.12+ FE driver; Xen implementation details.

References:
  • A set of man pages seemingly defining 9P (intro, others)
  • QEMU instructions on how to set up a VirtFS (9P) share
  • Example/info on how to natively mount a 9P network filesystem
  • Source code for a 9pfs FUSE driver



6. References

    [VIRTIO]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 04, released 03 March 2016.

    [VIRTIO-GPU]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 03-virtio-gpu, released 02 August 2015.

    [VIRTIO-VIRGL]  [AN OASIS STANDARD PROPOSAL OR OWN PAPER IS NEEDED]  https://github.com/Keenuts/virtio-gpu-documentation/blob/master/src/virtio-gpu.md


    [VIRTIO-IOMMU]  VIRTIO-IOMMU DRAFT 0.8



