
Automotive Virtual Platform Specification

1. Introduction

Automotive requirements lead to particular choices and needs in the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of their progression comes from IT/server consolidation, and in the Linux world some comes from virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitutes the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together they make up the definition of a virtual platform.

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular), don't need to be maintained uniquely for different hypervisors
  • Simplify moving hypervisor guests between different hypervisor environments
  • Some potential for shared implementation across guest operating systems
  • Some potential for shared implementation across hypervisors with different license models
  • Industry shared requirements and test-suites, a common vocabulary and understanding to reduce complexity of virtualization. 

In comparison, the OCI initiative serves a similar purpose for containers.  There are many compatible container runtimes → there could be similar potential for standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

  • Hypervisors can fulfill the specification, with local optimizations / advantages
  • Similarly, guest VMs can be engineered to match the specification.


The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC 2119].

2. Architecture

Assumptions made about the architecture, use-cases...

Limits to applicability, etc...


3. General requirements

Automotive requirements to be met (general)...

→ Risk of being imprecise and not useful.


  • Each listed feature is optional, but for each feature that is included, its requirements need to be implemented in their entirety.

Alternative to discuss:  Should certain features be mandatory for compliance with the virtual platform (i.e. available for use if the OEM product requires them)?


Usage of built-in virtualization in hardware

Req: If running on hardware that supports it, then the architectural virtualization interfaces (for interrupts and timers, performance monitoring, ...) shall be used where available.

?? Does this not weaken the standardization?  In some cases supporting a
(VIRTIO) abstraction might be more standard.
Matti: No, these features typically do not overlap with VIRTIO.




2.5 Booting Guests

(warning) Placeholder

Boot protocol between HV and Guest

This provides information to OSes - e.g. where to find the device tree information - abstracting certain HW specifics, and base services such as the real-time clock and the wake-up reason.
Outside the PC world this has always been project specific.  EBBR says you do this by using UEFI APIs.  The small subset of UEFI APIs that is sufficient is not too hard to handle, and this is what EBBR requires.

Requirement:

  • Systems that support a dynamic boot protocol should implement (the mandatory parts of*) EBBR
    • As EBBR allows either an ACPI or a Device Tree implementation, this can be chosen according to what fits best for the chosen hardware architecture.

*TBD: Figure out if some optional EBBR requirements should be mandatory in this specification.

  •  For systems that do not support a dynamic boot protocol (see discussion), the virtual hardware shall be described (by the HV vendor) using a device tree format so that implementors can program custom boot and setup code


Discussion.

Some systems might not realistically implement the EBBR protocol (e.g. some ported legacy AUTOSAR Classic based systems and other RTOS guests).   These are typically implemented using a compile-time definition of the hardware platform.  It is therefore expected that some code needs to be adjusted when porting such systems to a virtual platform.

Option 1)  The HV exposes the expected devices at the expected locations (and with the expected behavior) of an existing legacy system.
Option 2)  The HV decides where to place device features and communicates that to the operating system.   This would be done by statically defined device-tree snippets.

    More to do:  "Standard" for how HV describes hardware/memory map to legacy system guests, and recommendations for how to port legacy systems accordingly.
    Consensus seems to be:  Device Tree  (independent specification at https://devicetree.org) used as a human-readable specification (i.e. the boot code could still be hard-coded and does not need to support a lot of runtime configuration) → see requirement above.
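As a hedged illustration of the requirement above, a hypervisor vendor's static device-tree description of the virtual hardware might look like the following snippet. All compatible strings, addresses and interrupt numbers here are hypothetical examples chosen for illustration, not values mandated by this specification:

```dts
/* Hypothetical example: a HV vendor describing a virtual UART and a
   virtio-mmio block device to a legacy RTOS guest.  All node names,
   addresses and IRQ numbers are examples only. */
/dts-v1/;

/ {
    compatible = "vendor,example-virtual-platform";
    #address-cells = <1>;
    #size-cells = <1>;

    uart0: serial@9000000 {
        compatible = "arm,pl011";
        reg = <0x09000000 0x1000>;
        interrupts = <33>;
    };

    virtio_blk@a000000 {
        compatible = "virtio,mmio";
        reg = <0x0a000000 0x200>;
        interrupts = <48>;
    };
};
```

A legacy guest's boot code could simply hard-code the values found in such a snippet, using the device tree purely as human-readable documentation.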


TBD: Linux and Android have different boot requirements.
Android can be booted using UEFI, so the EBBR requirements will likely work for both.

3. Common Virtual Device categories

3.1 Storage (Block Device)

When using hypervisor technology, data on storage devices needs to adhere to high-level security and safety requirements such as isolation and access restrictions. Virtio and its layer for block devices provide the infrastructure for sharing block devices and establishing isolation of storage spaces, because actual device access can be controlled by the hypervisor. However, Virtio favors generality over using hardware-specific features. This is problematic in the case of specific requirements w.r.t. robustness and endurance measures often associated with the use of persistent data storage such as flash devices. In this context one can identify three relevant scenarios:

  1. Features transparent to the GuestOS.  For these features, the required functionality can be implemented close to the access point, e.g., inside the actual driver. As an example, one may think of a flash device where the flash translation layer (FTL) needs to be provided by software. This is in contrast to, for example, MMC flash devices, SD cards and USB thumb drives, where the FTL is transparent to the software.
  2. Features established via driver extensions and workarounds at the level of the GuestOS. These are features which can be differentiated at the level of (logical) block devices, such that the GuestOS uses different block devices and the driver running in the backend enforces a dedicated strategy for each (logical) block device. E.g., a GuestOS and its applications may require different write modes, here reliable vs. normal write.
  3. Features which call for an extension of the VIRTIO block device driver standard. Whereas categories 1 and 2 do not need an augmentation of the Virtio block device standard, a different story needs to be told whenever such workarounds do not exist. An example of this is the erase of blocks. The respective commands can only be emitted by the GuestOS. The resulting TRIM commands need to be explicitly implemented in both the front-end as well as in the back-end driven by the hypervisor.

Erase, Write Zeroes, Trim, Discard...



3.1.x Meeting automotive persistence requirements

Typical automotive persistence requirements
(to be met by the entire system, i.e. from App to persistence system on Guest, through drivers, HV, to hardware)

1) Ability to write some specific data items, for which it is guaranteed to be stored (within reasonable and bounded time),
i.e. "Write Through mode" (seen from the perspective of the user space program in the guest) while the majority of data is written in "Cached mode"
(Optionally:  Ability to do this using file system, i.e. mount something which guarantees this on a VFS path)

2) Data integrity in case of sudden power loss

3) Flash lifetime (read/write cycle maximums) guarantees (e.g. 10-15 years)


NOTE:  LUNs (defined in UFS) divide the device into parts, so that a Force Unit Access (FUA) request does not mean all caches must be flushed.   eMMC does not provide this; NVMe, SCSI or UFS devices are required.  You could map partitions onto LUNs (some optimized for write-through and some for better average performance) and build from there.


VIRTIO should be sufficient in combination with the right requests being made from user space programs (and running appropriate hardware devices below).

With VIRTIO:

  • Option 1 - the device is in write-through mode (WCE = on, set per device in the VIRTIO block device).
  • Option 2 - the device has WCE = off → BLK_WRITE followed by BLK_SYNC in the driver.  This is possible on a raw block device from user space by opening the device with O_SYNC.
    For filesystems, the applicable O_ option is file system dependent (and only some guarantee to respect it 100%).

The Linux API creates a block request.  One of the flags in the request is FUA.

The conclusion is that VIRTIO does not break the native behavior.  Even in the native case it can be somewhat uncertain, but VIRTIO does not make it worse.

The only thing missing:  VIRTIO might not provide the "total blocks written" data that is available in native systems.  It is arguable whether the guest needs to know this.  Are there preventative measures / diagnostics that would benefit from it?




3.2 Network Device

Standard networks

Standard networks means those that are overwhelmingly standard in the computing world.  In other words, any IP-based network, typically with TCP (UDP for certain cases) as transport.  While it is not really necessary to put limits on that, the physical layer is normally some variation of the Ethernet standard(s), WiFi, or another transport that transparently exposes a TCP/IP network interface.

(warning) PLACEHOLDER:  It is assumed that the [VIRTIO] specification is an adequate definition of how to describe the virtual platform interface, and that operating systems can easily expose physical, virtual and inter-VM networks as network interfaces, i.e. what Linux might list for a physical interface with a name like "eth0".

  • Virtual and hardware (pass-through) interfaces shall be exposed as the operating system's standard network interface concept.
  • The hypervisor/equivalent shall provide the ability to dedicate and expose any hardware network interface to one virtual machine.
  • The hypervisor/equivalent shall(?) be able to configure virtual inter-VM networking interfaces. (question)


Communication between VM and hypervisor is described as vsock in the VIRTIO specification.  (warning) Is there a use case for this?


Automotive networks

All traditional in-car networks and buses, such as CAN, FlexRay, LIN, etc., which are not Ethernet TCP/IP style networks, are treated in the chapter TBD.


Time-sensitive Networking standards

(warning) Placeholder.  How do those requirements affect this, and how are the real-time demands implemented in practice in a virtual environment?



3.3 GPU Device

The virtio-gpu is a virtio-based graphics adapter. It can operate in 2D mode and in 3D (virgl) mode. The device architecture is based around the concept of resources private to the host; the guest must DMA-transfer data into these resources. This is a design requirement in order to interface with 3D rendering.

3.3.1 GPU Device in 2D Mode

In the unaccelerated 2D mode there is no support for DMA transfers from resources, only to them. Resources are initially simple 2D resources, consisting of a width, height and format along with an identifier. The guest must then attach backing store to the resources in order for DMA transfers to work.

        Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

Feature bits.

(lightbulb) REQ-3:   The VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU], SHALL NOT be set.

        Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

(lightbulb)      REQ-4.1: The implementation SHALL NOT touch the reserved structure field as it is used for the 3D mode.

        Device Operation.

(lightbulb) REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

(lightbulb)      REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

(lightbulb)      REQ-5.2: The implementation MUST be capable of performing DMA operations to the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

        VGA Compatibility.

(lightbulb) REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is optional.


3.3.2 GPU Device in 3D Mode

3D mode offloads rendering operations to the host GPU and therefore requires a GPU with 3D support on the host machine. The guest side requires additional software in order to convert OpenGL commands into the raw graphics stack state (Gallium state) and channel them through virtio-gpu to the host; currently the 'mesa' library is used for this purpose. The backend then receives the raw graphics stack state and interprets it using the virglrenderer library, turning the raw state into an OpenGL form which can be executed as entirely normal OpenGL on the host machine. The host also translates shaders from the TGSI format used by Gallium into the GLSL format used by OpenGL.

        Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 5.7.1 in [VIRTIO-GPU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in chapter 5.7.2 in [VIRTIO-GPU].

        Feature bits.

(lightbulb) REQ-3:   The  implementation MUST set the VIRTIO_GPU_F_VIRGL flag, described in chapter 5.7.3 in [VIRTIO-GPU].

        Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 5.7.4 in [VIRTIO-GPU].

(lightbulb)      REQ-4.1: The implementation MUST use the previously reserved config structure field to report the number of capsets supported by the virglrenderer library.

(lightbulb)           REQ-4.1.1: The implementation SHALL NOT report the value of '0' as it is treated as absence of 3D support. 

        Device Operation.

(lightbulb) REQ-5:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 5.7.6 in [VIRTIO-GPU].

(lightbulb)      REQ-5.1: The implementation MUST support scatter-gather operations to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU].

(lightbulb)      REQ-5.2: The implementation MUST support the extended command set as described in chapter 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].

(lightbulb)      REQ-5.3: The implementation MUST support the 3D command set as described in chapter 'VIRTIO_GPU_CMD_SUBMIT_3D' in [VIRTIO-VIRGL].

(lightbulb)      REQ-5.4: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET_INFO command set as described in [??? only kernel sources as a reference so far].

(lightbulb)      REQ-5.5: The implementation MUST support the VIRTIO_GPU_CMD_GET_CAPSET command set as described in [??? only kernel sources as a reference so far].

(lightbulb)      REQ-5.6: The implementation MUST be capable of performing DMA operations to and from the client's attached resources to fulfil the requirement in chapter 5.7.6.1 in [VIRTIO-GPU] and in 'Virtio-GPU | Virgl3D commands' in [VIRTIO-VIRGL].

        VGA Compatibility.

(lightbulb) REQ-6:   VGA compatibility, as described in chapter 5.7.7 in [VIRTIO-GPU], is optional.

Additional features.

(lightbulb) REQ-7:  In addition to the command set and features, defined in [VIRTIO-GPU] and [VIRTIO-VIRGL], the implementation MAY provide:

    • an additional flag to request more detailed GL error reporting to the client


3.4 IOMMU Device

NOTE: The current specification draft is quite neat, except that it marks many requirements as SHOULD or MAY and leaves them to the implementation. This section provides stricter rules where applicable.

An IOMMU provides virtual address spaces to other devices. Traditionally devices able to do Direct Memory Access (DMA masters) would use bus addresses, allowing them to access most of the system memory. An IOMMU limits their scope, enforcing address ranges and permissions of DMA transactions. The virtio-iommu device manages Direct Memory Access (DMA) from one or more physical or virtual devices assigned to a guest.

Potential use cases are:

  • Limit guest devices' scope to access system memory during DMA (e.g. for a pass-through device).

  • Enable scatter-gather accesses due to remapping (DMA buffers do not need to be physically-contiguous).

  • 2-stage IOMMU support for systems that don't have relevant hardware.


        Device ID.

(lightbulb) REQ-1:   The device ID MUST be set according to the requirement in chapter 2.1 in [VIRTIO-IOMMU].

Virtqueues.

(lightbulb) REQ-2:   The virtqueues MUST be set up according to the requirement in 2.2 in [VIRTIO-IOMMU].

Feature bits.

(lightbulb) REQ-3:   The valid feature bits set is described in chapter 2.3 in [VIRTIO-IOMMU] and is dependent on the particular implementation.

Device configuration layout.

(lightbulb) REQ-4:   The implementation MUST use the device configuration layout according to chapter 2.4 in [VIRTIO-IOMMU].

Device initialization.

(lightbulb) REQ-5:   The implementation MUST follow the initialisation guideline according to chapter 2.5 in [VIRTIO-IOMMU].

(lightbulb)      REQ-5.1: When implementing device initialization requirements from chapter 2.5.2, a stricter requirement takes place:

                     a. If the driver does not accept the VIRTIO_IOMMU_F_BYPASS feature, the device SHALL NOT let endpoints access the guest-physical address space.

Device Operation.

(lightbulb) REQ-6:   The implementation MUST support the device operation concept (the command set and the operation flow) according to chapter 2.6 in [VIRTIO-IOMMU].

(lightbulb)      REQ-6.1: When implementing support for device operation requirements from chapter 2.6.2, stricter requirements take place:

             a. The device SHALL NOT set status to VIRTIO_IOMMU_S_OK if a request didn’t succeed.

             b. If a request type is not recognized, the device MUST return the buffers on the used ring and set the len field of the used element to zero.

             c. If the VIRTIO_IOMMU_F_INPUT_RANGE feature is offered and the range described by fields virt_start and virt_end doesn’t fit in the range described by input_range, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

             d. If the VIRTIO_IOMMU_F_DOMAIN_BITS is offered and bits above domain_bits are set in field domain, the device MUST set status to VIRTIO_IOMMU_S_RANGE and ignore the request.

        (lightbulb)      REQ-6.2: When implementing a handler for the ATTACH request as described in chapter 2.6.3.2, stricter requirements take place:

                     a. If the reserved field of an ATTACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL and SHALL NOT attach the endpoint to the domain.

                     b. If the endpoint identified by endpoint doesn’t exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

                     c. If another endpoint is already attached to the domain identified by domain, then the device MUST attempt to attach the endpoint identified by endpoint to the domain. If it cannot do so, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

                     d. If the endpoint identified by endpoint is already attached to another domain, then the device MUST first detach it from that domain and attach it to the one identified by domain. In that case the device behaves as if the driver issued a DETACH request with this endpoint, followed by the ATTACH request. If the device cannot do so, it MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

                     e. If properties of the endpoint (obtained with a PROBE request) are incompatible with properties of other endpoints already attached to the requested domain, the device SHALL NOT attach the endpoint and MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.

        (lightbulb)      REQ-6.3: When implementing a handler for the DETACH request as described in chapter 2.6.4.2, stricter requirements take place:

                     a. If the reserved field of a DETACH request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the DETACH operation.
                     b. If the endpoint identified by endpoint doesn’t exist, then the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.
                     c. If the domain identified by domain doesn’t exist, or if the endpoint identified by endpoint isn’t attached to this domain, then the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

        (lightbulb)      REQ-6.4: When implementing a handler for the MAP request as described in chapter 2.6.5.2, stricter requirements take place:

                     a. If virt_start, phys_start or (virt_end + 1) is not aligned on the page granularity, the device MUST set the request status to VIRTIO_IOMMU_S_RANGE and SHALL NOT create the mapping.
                     b. If the device doesn’t recognize a flags bit, it MUST set the request status to VIRTIO_IOMMU_S_INVAL. In this case the device SHALL NOT create the mapping.
                     c. If a flag or combination of flags isn't supported, the device MUST set the request status to VIRTIO_IOMMU_S_UNSUPP.
                     d. The device SHALL NOT allow writes to a range mapped without the VIRTIO_IOMMU_MAP_F_WRITE flag. However, if the underlying architecture does not support write-only mappings, the device MAY allow reads to a range mapped with VIRTIO_IOMMU_MAP_F_WRITE but not VIRTIO_IOMMU_MAP_F_READ.
                     e. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT.

        (lightbulb)      REQ-6.5: When implementing a handler for the UNMAP request as described in chapter 2.6.6.2, stricter requirements take place:

                     a. If the reserved field of an UNMAP request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL, in which case the device SHALL NOT perform the UNMAP operation.

                     b. If domain does not exist, the device MUST set the request status to VIRTIO_IOMMU_S_NOENT. 
                     c. If a mapping affected by the range is not covered in its entirety by the range (the UNMAP request would split the mapping), then the device MUST set the request status to VIRTIO_IOMMU_S_RANGE, and SHALL NOT remove any mapping.
                     d. If part of the range or the full range is not covered by an existing mapping, then the device MUST remove all mappings affected by the range and set the request status to VIRTIO_IOMMU_S_OK.

        (lightbulb)      REQ-6.6: When implementing a handler for the PROBE request as described in chapter 2.6.7.2, stricter requirements take place:

                     a. If the reserved field of a PROBE request is not zero, the device MUST set the request status to VIRTIO_IOMMU_S_INVAL.

                     b. If the endpoint identified by endpoint doesn’t exist, then the device SHOULD set the request status to VIRTIO_IOMMU_S_NOENT.

                     c. If the device does not offer the VIRTIO_IOMMU_F_PROBE feature, and if the driver sends a VIRTIO_IOMMU_T_PROBE request, then the device MUST return the buffers on the used ring and set the len field of the used element to zero.
                     d. The device MUST set bits [15:12] of property type to zero.
                     e. If the properties list is smaller than probe_size, then the device SHALL NOT write any property and MUST set the request status to VIRTIO_IOMMU_S_INVAL.
                     f. If the device doesn’t fill all probe_size bytes with properties, it MUST terminate the list with a property of type NONE and size 0. The device MAY fill the remaining bytes of properties, if any, with zeroes. If there isn’t enough space remaining in properties to terminate the list with a complete NONE property (4 bytes), then the device MUST fill the remaining bytes with zeroes.

        (lightbulb)      REQ-6.7: When implementing support for RESV_MEM property as described in chapter 2.6.8.2.2, stricter requirements take place:

                     a. The device MUST set reserved to zero.
                     b. The device SHALL NOT present more than one VIRTIO_IOMMU_RESV_MEM_T_MSI property per endpoint.
                     c. The device SHALL NOT present RESV_MEM properties that overlap each other for the same endpoint.

        (lightbulb)      REQ-6.8: When implementing support for fault reporting as described in chapter 2.6.9.2, stricter requirements take place:

                     a. The device MUST set reserved and reserved1 to zero.
                     b. The device MUST set undefined flags to zero.
                     c. The device MUST write a valid endpoint ID in endpoint.
                     d. If a buffer is too small to contain the fault report (this would happen, for example, if the device implements a more recent version of this specification than the driver, whose fault report contains additional fields), the device SHALL NOT use multiple buffers to describe it. The device MUST fall back to using an older fault report format that fits in the buffer.


3.5 USB Device

The working group and industry consensus seems to be that it is difficult to give concurrent access to USB hardware from more than one operating system instance.  In other words, to create multiple virtual USB devices that somehow map to a single host role on a single USB port (or perhaps some kind of partitioning of the tree of devices provided when USB hubs are involved).  The host (master) / device (slave) design of the USB protocol makes it challenging to have more than one software stack playing the host role.  Considering how this might be done could be an interesting theoretical exercise, but the value trade-off does not seem to be there, despite some potential ways it might be used if it were possible (see use-case section).

[VIRTIO] does not in its current version mention USB devices.

After deliberation, we have decided in this specification to assume that hypervisors will provide only pass-through access to USB hardware.

The ability for one VM to request dedicated access to a USB device during runtime is a potential improvement and ought to be considered when choosing a hypervisor.   With such a feature, VMs could even alternate their access to the USB port with a simpler acquire/release protocol than a true full virtualization would use (note the Use-case section for caveats).

USB On-The-Go(tm) is left out of scope, since most automotive systems implement the USB host role only, and in the case that a system ever needs to have the device role, it would surely have a dedicated port and a single operating system instance handling it.

(warning) The configuration of pass-through for USB is not yet standardized and is for the moment considered a proprietary API.  This is a potential area for future improvement.

Hardware support for virtualization

It seems likely that trying to implement special support for splitting a single host port between multiple guests is more complicated than just approaching it as multiple ports.  This applies also if the hardware implements not only host controllers but also a USB hub.  In other words, SoCs are likely better off simply providing support for more separate USB ports at the same time than building in special virtualization features.

(question) Is there anything in particular the hardware should do to facilitate pass-through?


Requirements

Configurable pass-through access to USB devices.

(lightbulb) REQ-3.5-1:   The hypervisor shall provide statically configurable pass-through access to all hardware USB devices


Resource API for USB devices

(lightbulb) REQ-3.5-2:   The hypervisor may optionally provide an API/protocol to request USB access from the virtual machine, during normal runtime.


Use Case discussion

There is a case to be made for more than one VM needing access to a single USB device.  For example, a single mass-storage device (USB memory) may be required to provide files to more than one guest.  There are many potential use cases, but just as an example, consider software/data update files that need to be applied to more than one VM/guest, or media files being played by one guest system while navigation data is needed in another.

In the previous chapter, the idea of alternating/reconfiguring pass-through access was raised without defining it further.  It should of course be noted that it raises many considerations about reliability and about one system starving the other of access.  Such a solution would only apply if policies, security and other considerations are met for the particular system.

The most likely remaining solution, considering that virtualized USB access is not a promoted solution, is that one VM is assigned to be the USB master and provide access to the filesystem (or part of it) by means of VM-to-VM communication.  For example, a network file system such as NFS or any equivalent solution could be used.

4. Special Virtual Device categories

4.x Automotive Sensors

Protocol access

Sensors can be handled by a dedicated co-processor or by the hypervisor implementation, which provides the sensor data through a communication protocol.  This essentially offloads the burden of defining a "virtual hardware access" from the VM to the measuring hardware.

The System Control and Management Interface (SCMI) protocol was not originally defined for the virtual-sensor purpose itself, but it describes a flexible and appropriate abstraction for sensors. It is also appropriate for controlling power management and related things.  The actual hardware access implementation is, according to ARM, offloaded to a "System Control Processor", but this is a virtual concept: it could be a dedicated core in some cases, and perhaps not in others.

  • The hypervisor/equivalent shall use SCMI protocol to expose sensor data from a dedicated sensor subsystem to the virtual machines.


Direct Hardware access

For sensor hardware that is to be accessed directly by the operating system, it may be necessary to provide physical or virtual hardware access

  • The hypervisor/equivalent shall provide configurable pass-through access to a VM for sensor hardware ((warning)which category of hardware?)
  • The hypervisor/equivalent shall be 

For digital I/O pins, refer to standard pinmux specification ((warning) need clarification)


4.x Audio

4.x Media codec 

TODO Hardware-assisted codecs



4.x Cryptography

(warning) TBC No particular info in VIRTIO on this?

4.x.x Random Number Generation

Random numbers are typically generated by a combination of true-random and pseudo-random implementations.  A pseudo-random generation algorithm is implemented in software.   "True" random values may be acquired from a hardware-assisted device, or a hardware (noise) device may be used to acquire a random seed which is then further used by a pseudo-random algorithm.

  • The virtual platform SHALL provide VMs with access to a hardware-assisted high quality random number generator through the operating system's preferred interface. 
    (/dev/random device on Linux)
  • (warning) TODO, improve this:
    The virtual platform documentation SHOULD include a security analysis of how any type of side-channel analysis of the random number generation is avoided.



5. Supplemental Virtual Device categories

5.x  Text Console

While they are rarely an appropriate interface for the normal operation of the automotive system, text consoles are expected to be present for development purposes.

It is expected that the virtual interface of the console is adequately defined by [VIRTIO], and this specification requires following that standard insofar as it fits with the chosen guest operating system:

Requirements:

  • To not impede efficient development, text consoles SHALL follow the operating system's normal standards so that they can be connected to any normal development flow.
  • Text consoles are often connected to a shell capable of running commands.  For security reasons, it MUST be possible to shut off text consoles entirely in the configuration of a production system.  This configuration MUST NOT be modifiable from within any guest operating system.
  • It is also RECOMMENDED that countermeasures are introduced and documented during the development phase, ensuring that disabling these consoles in the final production system cannot be forgotten.


5.1 Host-to-vm filesystem sharing

Rationale

The function of providing disk access in the form of a "shared folder" or full disk passthrough seems more used in desktop virtualization than in the embedded systems that this document is for.  In desktop virtualization, for example, the user wants to run Microsoft Windows in combination with a macOS host, to run Linux in a virtual machine on a Windows-based corporate workstation, or to run bespoke Linux systems in KVM/QEMU on a Linux host for development of embedded systems.  Host-to-VM filesystem sharing might also serve some purpose in certain server virtualization setups.  With this background, we consider host-to-VM filesharing an optional feature and cover it only briefly here.

VIRTIO covers (very briefly) the 9P / 9PFS protocol for host-to-vm filesystem sharing.

Discussion:

The working group found little need for this host-to-vm disk sharing in the final product in an automotive system, but we summarize the opportunities here if the need arises for some particular product.

[VIRTIO] describes one network disk protocol for the purpose of hypervisor-to-VM storage sharing: 9pfs.  It is part of a set of protocols defined by the Plan 9 operating system.

Most systems will be able to accommodate any network disk protocol needs by implementing the network protocol in one or several of the VMs. The typical systems we deal with can implement a more standard and capable protocol such as NFS within the normal operating system environment running in the VM, and share storage between VMs over the (virtual) network they have. In other words, for many use cases sharing of disk/filesystem resources need not be implemented in the hypervisor itself.

In [VIRTIO], the 9pfs protocol is mentioned in two ways: a PCI device can indicate that it is going to use the 9P protocol, and the specification also lists 9P as a specific separate device type. There seems to be no strict definition of (or even specific reference to) the protocol itself.  It appears to be assumed to be well known by its name and possible to find online. The specification is thus complemented only by scattered information found on the web regarding the specific implementations (Xen, KVM, QEMU, ...)

9pfs is a minimalistic network file-system protocol that our working group considers appropriate for the task; other network protocols like NFS, SMB/Samba etc. would be too heavy. 9pfs does however feel a bit esoteric, and while "reinventing" is usually unnecessary, there might be an appropriate opportunity to do so here, with a new modern protocol plus a reference open-source implementation. Such an effort ought to look particularly at a flexible and reliable security model, which seems somewhat glossed over in the 9pfs description: it briefly references only "fixed user" or "pass-through" for mapping file ownership between guest and host.

Requirements:

  • Implementation of host-to-VM disk sharing using the 9pfs protocol is OPTIONAL.

Links: Virtio 1.0 spec : {PCI-9P, 9P device type}.
Kernel support: Xen/Linux 4.12+ FE driver Xen implementation details

Some references found include the following. (Links are not provided since we cannot at the moment evaluate their completeness, or whether they should be considered official specifications.)

  • A set of man pages that seem to be the definition of 9P.
  • QEMU instruction how to set up a VirtFS (P9).  
  • Example/info how to natively mount a 9P network filesystem.
  • Source code for 9pfs FUSE driver

6. References

    [RFC 2119] https://www.ietf.org/rfc/rfc2119.txt

    [VIRTIO]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 04, release 03 March 2016.

    [VIRTIO-GPU]  Virtual I/O Device (VIRTIO) Version 1.0, Committee Specification 03-virtio-gpu, release 02 August 2015.

    [VIRTIO-VIRGL]  [AN OASIS STANDARD PROPOSAL OR OWN PAPER IS NEEDED]  https://github.com/Keenuts/virtio-gpu-documentation/blob/master/src/virtio-gpu.md

    [VIRTIO-IOMMU]  VIRTIO-IOMMU DRAFT 0.8




