
Automotive Virtual Platform Specification

1. Introduction

Automotive requirements lead to particular choices and needs in the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of the progression comes from IT/server consolidation and, in the Linux world, some comes from virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitutes the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together they make up the definition of a virtual platform.

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular) do not need to be maintained separately for different hypervisors
  • Hypervisor guests can be moved between different hypervisor environments with less effort
  • Some potential for shared implementation across guest operating systems
  • Some potential for shared implementation across hypervisors with different license models
  • Industry-shared requirements and test suites, and a common vocabulary and understanding, reduce the complexity of virtualization.

In comparison, the OCI initiative serves a similar purpose for containers.  Because there are many compatible container runtimes, there is potential for standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

  • Hypervisors can fulfill the specification while retaining local optimizations / advantages
  • Similarly, guest VMs can be engineered to match the specification.


The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119].

2. Architecture

Assumptions made about the architecture, use-cases...

Limits to applicability, etc...


3. General requirements

Automotive requirements to be met (general)...

3. Common Virtual Device categories

3.1 Block Device

When using hypervisor technology, data on storage devices needs to adhere to high-level security and safety requirements such as isolation and access restrictions. Virtio and its layer for block devices provide the infrastructure for sharing block devices and establishing isolation of storage spaces, because actual device access can be controlled by the hypervisor. However, Virtio favors generality over the use of hardware-specific features. This is problematic in the case of specific requirements with respect to the robustness and endurance measures often associated with persistent data storage such as flash devices. In this context, three relevant scenarios can be identified:

  1. Features transparent to the GuestOS.  For these features, the required functionality can be implemented close to the access point, e.g., inside the actual driver. As an example, one may think of a flash device where the flash translation layer (FTL) needs to be provided by software. This is in contrast to, for example, MMC flash devices, SD cards and USB thumb drives, where the FTL is transparent to the software.
  2. Features established via driver extensions and workarounds at the level of the GuestOS. These are features which can be differentiated at the level of (logical) block devices, such that each GuestOS uses different block devices and the driver running in the back-end enforces a dedicated strategy for each (logical) block device. E.g., a GuestOS and its applications may require different write modes, here reliable vs. normal write.
  3. Features which call for an extension of the VIRTIO block device standard. Whereas categories 1 and 2 do not need an augmentation of the Virtio block device standard, a different story needs to be told whenever such workarounds do not exist. An example of this is the erasing of blocks. The respective commands can only be emitted by the GuestOS. The resulting TRIM commands need to be explicitly implemented in both the front-end and the back-end driven by the hypervisor.

3.1.x Meeting automotive persistence requirements

  Comments on the above, according to previous discussions


3.2 Network Device

...


3.3 GPU Device

The virtio-gpu device is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. The device architecture is based around the concept of resources that are private to the host; the guest must transfer data into these resources via DMA. This is a design requirement for interfacing with 3D rendering.

3.3.1 GPU Device in 2D Mode

In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them. Resources are initially simple 2D resources, consisting of a width, height and format, along with an identifier. The guest must then attach backing store to a resource in order for DMA transfers to work.

Device ID.

REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

REQ-3:   The VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU], SHALL NOT be set.

Device configuration layout.

REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

     REQ-4.1: The implementation SHALL NOT touch the reserved structure field, as it is used for the 3D mode.

Device Operation.

REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

     REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

     REQ-5.2: The implementation MUST be capable of performing DMA operations to the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

VGA Compatibility.

REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is OPTIONAL.


3.3.2 GPU Device in 3D Mode

3D mode offloads rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine. The guest side requires additional software to convert OpenGL commands into raw graphics stack state (Gallium state) and channel it through virtio-gpu to the host. Currently the 'mesa' library is used for this purpose. The back-end then receives the raw graphics stack state and, using the virglrenderer library, interprets the raw state into an OpenGL form that can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL.

Device ID.

REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

REQ-3:   The implementation MUST set the VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU].

Device configuration layout.

REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

     REQ-4.1: The implementation MUST use the previously reserved config structure field to report the number of capsets supported by the virglrenderer library.

          REQ-4.1.1: The implementation SHALL NOT report the value '0', as it is treated as absence of 3D support.

Device Operation.

REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

     REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

     REQ-5.2: The implementation MUST support the extended command set as described in chapter 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].

     REQ-5.3: The implementation MUST support the 3D command set as described in chapter 'VIRTIO_GPU_CMD_SUBMIT_3D' in [VIRTIO-VIRGL].

     REQ-5.4: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET_INFO command as described in [??? only kernel sources as a reference so far].

     REQ-5.5: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET command as described in [??? only kernel sources as a reference so far].

     REQ-5.6: The implementation MUST be capable of performing DMA operations to and from the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU] and in 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].

VGA Compatibility.

REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is OPTIONAL.

Additional features.

REQ-7:  In addition to the command set and features defined in [VIRTIO-GPU] and [VIRTIO-VIRGL], the implementation MAY provide:

    • an additional flag to request more detailed GL error reporting to the client


3.4 IOMMU Device

NOTE: The current specification draft is quite complete, except that it marks many requirements as SHOULD or MAY and leaves the choice to the implementation. This section provides stricter rules where applicable.

An IOMMU provides virtual address spaces to other devices. Traditionally devices able to do Direct Memory Access (DMA masters) would use bus addresses, allowing them to access most of the system memory. An IOMMU limits their scope, enforcing address ranges and permissions of DMA transactions. The virtio-iommu device manages Direct Memory Access (DMA) from one or more physical or virtual devices assigned to a guest.

Device ID.

REQ-1:   The device ID MUST be set according to the requirement in chapter 2.1 in [VIRTIO-IOMMU].

Virtqueues.

REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 2.2 in [VIRTIO-IOMMU].

Feature bits.

REQ-3:   The valid feature bits set is described in chapter 2.3 in [VIRTIO-IOMMU] and is dependent on the particular implementation.

Device configuration layout.

REQ-4:   The implementation MUST use the device configuration layout according to chapter 2.4 in [VIRTIO-IOMMU].

Device initialization.

REQ-5:   The implementation MUST follow the initialization guidelines according to chapter 2.5 in [VIRTIO-IOMMU].

     REQ-5.1: When implementing device initialization requirements from chapter 2.5.2, a stricter requirement takes place:

                     a. If the driver does not accept the VIRTIO_IOMMU_F_BYPASS feature, the device SHALL NOT let endpoints access the guest-physical address space.

Device Operation.

REQ-6:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 2.6 in [VIRTIO-IOMMU].

     REQ-6.1: When implementing support for device operation requirements from chapter 2.6.2, stricter requirements take place:

             a. The device SHALL NOT set status to VIRTIO_IOMMU_S_OK if a request didn't succeed.

             b. If a request type is not recognized, the device MUST return the buffers on the used ring and set the len field of the used element to zero.

             c. If the VIRTIO_IOMMU_F_INPUT_RANGE feature is offered and the range described by fields virt_start and virt_end doesn't fit in the range described by input_range, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

             d. If the VIRTIO_IOMMU_F_DOMAIN_BITS feature is offered and bits above domain_bits are set in field domain, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

     REQ-6.2: When implementing a handler for the ATTACH request as described in chapter 2.6.3.2, stricter requirements take place:

             a. If the reserved field of an ATTACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL and SHALL NOT attach the endpoint to the domain.

             b. If the endpoint identified by endpoint doesn't exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

             c. If another endpoint is already attached to the domain identified by domain, then the device MUST attempt to attach the endpoint identified by endpoint to the domain. If it cannot do so, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

             d. If the endpoint identified by endpoint is already attached to another domain, then the device MUST first detach it from that domain and attach it to the one identified by domain. In that case the device behaves as if the driver issued a DETACH request with this endpoint, followed by the ATTACH request. If the device cannot do so, it MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

             e. If properties of the endpoint (obtained with a PROBE request) are incompatible with properties of other endpoints already attached to the requested domain, the device SHALL NOT attach the endpoint and MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

     REQ-6.3: When implementing a handler for the DETACH request as described in chapter 2.6.4.2, stricter requirements take place:

             a. If the reserved field of a DETACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the DETACH operation.

             b. If the endpoint identified by endpoint doesn't exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

             c. If the domain identified by domain doesn't exist, or if the endpoint identified by endpoint isn't attached to this domain, then the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

     REQ-6.4: When implementing a handler for the MAP request as described in chapter 2.6.5.2, stricter requirements take place:

             a. If virt_start, phys_start or (virt_end + 1) is not aligned on the page granularity, the device MUST set the request status to VIRTIO_IOMMU_S_RANGE and SHALL NOT create the mapping.

             b. If the device doesn't recognize a flags bit, it MUST set the request status to VIRTIO_IOMMU_S_INVAL. In this case the device SHALL NOT create the mapping.

             c. If a flag or combination of flags isn't supported, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

             d. The device SHALL NOT allow writes to a range mapped without the VIRTIO_IOMMU_MAP_F_WRITE flag. However, if the underlying architecture does not support write-only mappings, the device MAY allow reads to a range mapped with VIRTIO_IOMMU_MAP_F_WRITE but not VIRTIO_IOMMU_MAP_F_READ.

             e. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

     REQ-6.5: When implementing a handler for the UNMAP request as described in chapter 2.6.6.2, stricter requirements take place:

             a. If the reserved field of an UNMAP request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the UNMAP operation.

             b. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

             c. If a mapping affected by the range is not covered in its entirety by the range (the UNMAP request would split the mapping), then the device MUST set the request status to VIRTIO_IOMMU_S_RANGE, and SHALL NOT remove any mapping.

             d. If part of the range or the full range is not covered by an existing mapping, then the device MUST remove all mappings affected by the range and set the request status to VIRTIO_IOMMU_S_OK.

     REQ-6.6: When implementing a handler for the PROBE request as described in chapter 2.6.7.2, stricter requirements take place:

             a. If the reserved field of a PROBE request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

             b. If the endpoint identified by endpoint doesn't exist, then the device SHOULD set the request status to VIRTIO_IOMMU_S_NOENT.

             c. If the device does not offer the VIRTIO_IOMMU_F_PROBE feature, and if the driver sends a VIRTIO_IOMMU_T_PROBE request, then the device MUST return the buffers on the used ring and set the len field of the used element to zero.

             d. The device MUST set bits [15:12] of property type to zero.

             e. If the properties list is smaller than probe_size, then the device SHALL NOT write any property and MUST set the request status to VIRTIO_IOMMU_S_INVAL.

             f. If the device doesn't fill all probe_size bytes with properties, it MUST terminate the list with a property of type NONE and size 0. The device MAY fill the remaining bytes of properties, if any, with zeroes. If there isn't enough space remaining in properties to terminate the list with a complete NONE property (4 bytes), then the device MUST fill the remaining bytes with zeroes.

     REQ-6.7: When implementing support for the RESV_MEM property as described in chapter 2.6.8.2.2, stricter requirements take place:

             a. The device MUST set reserved to zero.

             b. The device SHALL NOT present more than one VIRTIO_IOMMU_RESV_MEM_T_MSI property per endpoint.

             c. The device SHALL NOT present RESV_MEM properties that overlap each other for the same endpoint.

     REQ-6.8: When implementing support for fault reporting as described in chapter 2.6.9.2, stricter requirements take place:

             a. The device MUST set reserved and reserved1 to zero.

             b. The device MUST set undefined flags to zero.

             c. The device MUST write a valid endpoint ID in endpoint.

             d. If a buffer is too small to contain the fault report (this would happen, for example, if the device implements a more recent version of this specification than the driver, whose fault report contains additional fields), the device SHALL NOT use multiple buffers to describe it. The device MUST fall back to using an older fault report format that fits in the buffer.


4. Supplemental Virtual Device categories

4.1 9pfs and host-to-vm filesystem sharing

Host to VM disk sharing

Providing disk access in the form of a "shared folder" or full disk pass-through is a function that seems mostly used in desktop virtualization, for example where a user wants to run Microsoft Windows in combination with MacOS, or to run Linux in a virtual machine hosted by another main operating system.

It might also serve some purpose in server virtualization if that is based on a Type-2 hypervisor, which is in itself an operating system kernel that also hosts multiple virtualized environments.

For the automotive use case, the working group found little need for this host-to-VM disk sharing, but we summarize the situation here in case the need arises for some particular product.

Most systems will be able to accommodate any network disk protocol needs by implementing the network protocol in one or several of the VMs. Typical systems we deal with are large enough to include a more complete and more standard protocol, such as NFS, within the normal operating system environment running in the VM, and to share storage between VMs over the (virtual) network they have. In other words, for many use cases it need not be implemented in the hypervisor itself.

[VIRTIO] describes one network disk protocol for the purpose of hypervisor-to-VM storage sharing. The protocol 9pfs is mentioned in two ways: a PCI type device can indicate that it is going to use the 9P protocol, and the specification also has 9P as a specific separate device type. There seems to be no definition of (or even a specific reference to) the protocol itself; it is assumed to be well known by name and possible to find online. The specification is complemented by scattered information regarding the specific implementations (Xen, KVM, QEMU, ...)

REQ X:Y: Implementation of host-to-VM disk sharing using the 9pfs protocol is OPTIONAL.

The 9pfs protocol seems proven and adequate for what it does. Possibly more security features are needed, depending on the use case. [VIRTIO], however, seems to defer the definition completely to "somewhere else"; at least a reference to a canonical specification would seem appropriate.

9pfs is a minimalistic network file-system protocol that the working group considers appropriate for the task. Other network protocols like NFS, SMB/SAMBA etc. would be too heavy. 9pfs, however, feels a bit esoteric, and while "reinventing" is usually unnecessary, there might be an appropriate opportunity to do so here, with a new modern protocol plus a reference open-source implementation. Flexibility and security seem somewhat glossed over in the current 9pfs description, which references only "fixed user" or "pass-through" for mapping ownership of files between guest and host.

Links: Virtio 1.0 spec: {PCI-9P, 9P device type}. Kernel support: Xen/Linux 4.12+ front-end driver; Xen implementation details.

References: a set of man pages seemingly defining 9P (intro and others); QEMU instructions on how to set up a VirtFS (9P); an example of how to natively mount a 9P network filesystem; source code for a 9pfs FUSE driver.

5. References

    [RFC 2119] https://www.ietf.org/rfc/rfc2119.txt

    [VIRTIO]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 04, release 03 March 2016.

    [VIRTIO-GPU]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 03-virtio-gpu, release 02 August 2015.

    [VIRTIO-VIRGL]  [AN OASIS STANDARD PROPOSAL OR OWN PAPER IS NEEDED]  https://github.com/Keenuts/virtio-gpu-documentation/blob/master/src/virtio-gpu.md

    [VIRTIO-IOMMU]  VIRTIO-IOMMU DRAFT 0.8



