
...

Minute takers according to this page 


(warning)  Some rough notes from the F2F were captured here.  Cleanup in progress.


F2F September 24 - notes to be copied to the right place...


https://etherpad.net/p/hvws


Motivation: Why use an HV:


WHAT IS THE PURPOSE OF THE WHITE PAPER?

1)  ...We need to explain why virtualization is actually needed.  (It is still not fully accepted as necessary by all)

This is the agreed main topic.

2)  Explaining concepts ->   1 chapter  pros/cons?
   Microkernel, monolithic <- multiple privilege levels of modern CPUs, Type 1, Type 2, paravirtualization, HW emulation, Linux containers
Try not to rehash too much of the existing material on this, but give an introduction for the reader who does not know it and needs the basics to understand the rest of the paper.

3)   How to improve the usage of virtualization to meet the goals previously stated.
    E.g. chapter on needed hardware support  --> At least 1 chapter
           standardization of interfaces               --> At least 1 paragraph
           training, education, explanation...        --> This is basically just a sentence: we need more training.
          Sample implementations -->  ... also just a sentence.

HERE ARE THE REASONS:

 → Certain concrete security/safety issues that can be shown clearly and that HV can counteract
 → System flexibility is another very important point.

Consolidation of functions into a single SoC (multiple ECUs on a single platform); this includes legacy systems and new SWCs.
This calls for a clear interface between the SW stacks of different vendors and shall allow providers of SWCs to make their own choices on the execution environment/OS. Examples of this are different combinations of OS/Adaptive AUTOSAR implementations, or combinations of Classic AUTOSAR modules, e.g., SafetyOS / Com module / RTE provider.
Added to this, certification, be it for security or safety, is lost as soon as components are re-arranged into new setups.
This clearly calls for flexibility, in the sense of allowing SWC providers to integrate their complete SW stack into the platform while still guaranteeing fundamental isolation properties.

Technical:
    
* Single-kernel systems provide isolation, but there are limits.
    --> Although the capability of configuring, for example, the Linux kernel for isolation is ever improving, there are counter-examples (see other chapter).
     In addition, a single Linux kernel solves only some scenarios, since putting all SWCs on top of a single Linux kernel is still limited by the requested OS diversity (see next chapter).
     

Other realities:
* More freedom in choosing the operating system kernel.  (The automotive industry has a reality of using multiple OS choices.)
* Multiple software components from different vendors combined on common hardware.  (The automotive industry has a reality of business setups with multiple vendors, each wishing for a defined and limited responsibility.)
     -->  Virtualization enables the use of common hardware while retaining freedom from interference.

VMs provide a small, clear interface:
The virtual platform is a relatively small and simple interface for separating the different software deliveries from different vendors.
--> Compare the scenario where different vendors deliver programs to be run on a single kernel -> containers -> messy chapter, try to avoid the mess...
--> Other options include single kernel + isolation (the container approach)
     or an advanced runtime/platform definition for software component interoperability (Classic AUTOSAR)
     
*Standardization of this interface will improve this further (reference Virtual Platform definition)
    write a bit about how this improves things
    

Containers vs VMs
------------------------
Are there concrete examples of isolation (resource sharing/interference) scenarios not handled by containers?
Avoid the either-or discussion since the future may be in container + VMs combinations.

Kai: Performance counters required to manage resource interference <--  details needed

HVs that can do resource management on a more detailed level than any kernel (currently) can. 

As an example: a hypervisor implements limits on the rate of cache misses allowed for a particular VM.  This might be implemented to protect another critical part from being cache-starved (e.g. a real-time core that shares the memory cache).  Excessive cache misses would cause the offending VM to be throttled.
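A minimal sketch of how such a policy could look inside a hypervisor's scheduler tick follows below.  All hv_-prefixed functions and the budget value are hypothetical placeholders, not the API of any real hypervisor:

    /* Sketch only: a per-VM cache-miss budget enforced at each scheduling tick.
     * All hv_-prefixed functions are hypothetical hypervisor-internal services. */

    #include <stdint.h>
    #include <stdbool.h>

    #define MISS_BUDGET_PER_TICK  50000   /* example: allowed LLC misses per tick */

    struct vm {
        int      id;
        uint64_t last_miss_count;   /* counter value at the previous tick */
        bool     throttled;
    };

    /* Hypothetical hooks into the hypervisor / PMU driver. */
    uint64_t hv_read_llc_miss_counter(int vm_id);   /* per-VM performance counter */
    void     hv_throttle_vcpus(int vm_id);          /* reduce scheduling quota     */
    void     hv_unthrottle_vcpus(int vm_id);        /* restore normal quota        */

    /* Called from the hypervisor's periodic scheduler tick for each VM. */
    void enforce_cache_miss_budget(struct vm *vm)
    {
        uint64_t now   = hv_read_llc_miss_counter(vm->id);
        uint64_t delta = now - vm->last_miss_count;

        vm->last_miss_count = now;

        if (delta > MISS_BUDGET_PER_TICK) {
            /* The VM exceeded its budget: throttle it so it cannot starve the
             * shared last-level cache used by a critical real-time VM. */
            if (!vm->throttled) {
                hv_throttle_vcpus(vm->id);
                vm->throttled = true;
            }
        } else if (vm->throttled) {
            hv_unthrottle_vcpus(vm->id);
            vm->throttled = false;
        }
    }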
  
Are such requirements possible or likely to be accepted (into Linux mainline kernel)?

Contention on implicitly shared resources (memory buses, caches, interconnects...)

Opinions on the high-level purpose of the paper:
 → Certain concrete security/safety issues that can be shown clearly and that HV can counteract
 → System flexibility is another very important point.

Fundamental system components can be replaceable and pluggable.

E.g. it is possible to insert (delivered by a separate vendor) a virtual network switch with advanced features such as traffic shaping, scheduling and optimization among VMs sharing a network.   While it would be technically possible to add this to the Linux kernel, it is less likely to be accepted as a separately delivered, independent software part.  (Of course this is indirectly a consequence of the purely technical aspect that Linux is a monolithic kernel, since a microkernel + user-space implementation of system features would yield a similar effect.)   Along the same lines, license requirements come into effect.   Adding code to the Linux kernel requires it to be GPLv2-licensed, whereas independent parts (as VMs or in user space) do not have this requirement.   It is also easier to assign responsibility to the vendor of such a component if it is isolated from the rest of the kernel.

Interaction between general-purpose and dedicated cores is poorly understood.
Can we explain and give examples or is it enough to state this?


3.2 Network Device

Standard networks

Standard networks include those that are not automotive-specific, but instead frequently used in the general computing world.  In other words, these are typically IP-based networks, but some of them simulate this level through other means (e.g. vsock, which does not use IP addressing).  The physical layer is normally some variation of the Ethernet/WiFi standard(s) (the 802.* standards) or another transport that transparently exposes a similar network socket interface.

virtio-net  = Layer 2  (Ethernet / MAC addresses)
virtio-vsock = Layer 4.  Has its own socket type.  Optimized by stripping away the IP stack.  Possibility to address VMs without using IP addresses. Primary function is Host (HV) to VM communication.

Discussion:
Vsock: Each VM has a logical ID, but the VM normally does not know about it.  Example usage: running a particular agent in the VM that does something on behalf of the HV.  There is also the possibility to use this for VM-to-VM communication, but since this is a special socket type, it would involve writing code that is custom for the virtualization case, as opposed to native code.
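For illustration, a guest-side agent on Linux would open such a socket through the standard sockets API using the AF_VSOCK address family; the port number below is an arbitrary example value:

    /* Minimal guest-side vsock client: connects to a service on the host
     * (CID 2 = VMADDR_CID_HOST).  Port 9999 is only an example value. */

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        struct sockaddr_vm addr = {
            .svm_family = AF_VSOCK,
            .svm_cid    = VMADDR_CID_HOST,  /* address the host/hypervisor side */
            .svm_port   = 9999,             /* example service port */
        };

        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket(AF_VSOCK)");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        /* From here on, read()/write() work as on any stream socket. */
        const char msg[] = "hello from guest agent";
        if (write(fd, msg, sizeof(msg)) < 0)
            perror("write");

        close(fd);
        return 0;
    }

Note how the only virtualization-specific parts are the address family and the CID/port addressing; the rest is ordinary socket code.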

vsock is the application API.  Multiple differently named transport variations exist in different hypervisors, which means the driver implementation differs depending on the chosen hypervisor.  virtio-vsock, however, locks this down to one chosen method.

Requirements:

  • If the platform implements virtual networking, it shall also use the required virtio-net interface between drivers and the hypervisor.
  • If the platform implements vsock, it shall also use the required virtio-vsock API between drivers and the hypervisor.
  • Virtual network interfaces shall be exposed as the operating system's standard network interface concept, i.e. they should show up as normal network devices.
  • The hypervisor/equivalent shall provide the ability to dedicate and expose any hardware network interface to one virtual machine.
  • The hypervisor/equivalent shall(?) be able to configure virtual inter-VM networking interfaces.
  • Implementations of virtio-net shall support the    


Discussion:
    Virtual network interfaces ought to be exposed to user-space code in the guest OS as standard network interfaces.  This minimizes the custom code that appears because of the use of virtualization.
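As a small illustration of this point, the following guest user-space sketch lists network interfaces with the standard getifaddrs() call; a virtio-net device shows up here like any physical NIC, so the code contains nothing virtualization-specific:

    /* Enumerate network interfaces from guest user space with the standard
     * getifaddrs() API; virtio-net interfaces appear like any other device. */

    #include <stdio.h>
    #include <ifaddrs.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifaddrs *ifaddr, *ifa;

        if (getifaddrs(&ifaddr) < 0) {
            perror("getifaddrs");
            return 1;
        }

        for (ifa = ifaddr; ifa != NULL; ifa = ifa->ifa_next) {
            /* One entry is printed per interface/address-family pair. */
            printf("%s (%s)\n", ifa->ifa_name,
                   (ifa->ifa_flags & IFF_UP) ? "up" : "down");
        }

        freeifaddrs(ifaddr);
        return 0;
    }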

The MTU may differ on the actual network being used.  There is a feature flag through which a network device can state its maximum (advised) MTU, and guest application code might make use of this to avoid segmented messages.
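As an illustrative sketch, guest application code can read the effective MTU of an interface through the standard SIOCGIFMTU ioctl (the interface name eth0 is just an example) and size its messages accordingly:

    /* Query the MTU of a (virtual) network interface from guest user space
     * using the standard SIOCGIFMTU ioctl.  "eth0" is only an example name. */

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

        if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {
            perror("SIOCGIFMTU");
            close(fd);
            return 1;
        }

        printf("MTU of %s: %d\n", ifr.ifr_name, ifr.ifr_mtu);
        close(fd);
        return 0;
    }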

-------------
Parked questions
- What could a common test suite look like?
- Google virtual ethernet driver - merged into mainline?

1. Introduction


Automotive requirements lead to particular choices and needs from the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of the progress comes from IT/server consolidation and, in the Linux world, some comes from virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitutes the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together they make up the definition of a virtual platform.

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular) don't need to be maintained uniquely for different hypervisors


  • Simplify moving hypervisor guests between different hypervisor environments


  • Support guest and host interoperability:
      • Programming against a specification simplifies developing local/custom variants of systems (e.g. infotainment systems tailored for a certain geographical market)
      • Simplify moving legacy systems to a virtual environment


  • Some potential for shared implementation across guest operating systems


  • Some potential for shared implementation across hypervisors with different license models


  • Industry shared requirements and potential for shared test-suites


  • A common vocabulary and understanding to reduce complexity of virtualization. 


  • Similarly, guest VMs can be engineered to match the specification.


In comparison, the OCI initiative for containers serves a similar purpose.  There are many compatible container runtimes, and the opportunity enabled by a virtual platform definition is to have standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

The specification shall enable all of the above while still allowing different implementations to differentiate themselves, add features, optimize the implementation, and focus on different topics.


2. Architecture





...


September 17, 2019

Participants:

...