
...

https://etherpad.net/p/hvws




Motivation: Why use a hypervisor (HV):


WHAT IS THE PURPOSE OF THE WHITE PAPER?

1)  ...We need to explain why virtualization is actually needed.  (It is still not fully accepted as necessary by all)

This is the agreed main topic.

2)  Explaining concepts ->   1 chapter, pros/cons?
   Microkernel, monolithic <- multiple privilege levels of modern CPUs, Type 1, Type 2, paravirtualization, HW emulation, Linux containers
Try not to rehash too much of the existing material on this, but give an introduction for the reader who does not know it and needs the basics to understand the rest of the paper.

3)   How to improve the usage of virtualization to meet the goals previously stated.
    E.g. chapter on needed hardware support   --> At least 1 chapter
           standardization of interfaces            --> At least 1 paragraph
           training, education, explanation...      --> This is basically just a sentence: "We need more training."
           Sample implementations                   --> ... also just a sentence.

HERE ARE THE REASONS:

 → Certain concrete security/safety issues that can be shown clearly and that HV can counteract
 → System flexibility is another very important point.

Consolidation of functions into a single SoC (multiple ECUs on a single platform); this includes legacy systems and new SWCs.
This calls for a clear interface between the SW stacks of different vendors and shall allow SWC providers to make their own choices of execution environment/OS. Examples are different combinations of OS/Adaptive AUTOSAR implementations, or combinations of Classic AUTOSAR modules, e.g., SafetyOS / COM module / RTE provider.
In addition, certification, be it for security or for safety, is lost as soon as components are re-arranged into new setups.
This clearly calls for flexibility, in the sense of allowing an SWC provider to integrate their complete SW stack into the platform while still guaranteeing fundamental isolation properties.

Technical:
    
* Single-kernel systems provide isolation, but there are limits.
    --> Although the capability of configuring, for example, the Linux kernel for isolation is ever improving, there are counter-examples (see other chapter).
     In addition, a single Linux kernel solves only some scenarios, since putting all SWCs on top of a single Linux kernel is still limited by the requested OS diversity (see next chapter).
     

Other realities:
* More freedom in choosing operating system kernel.  (The automotive industry has a reality of using multiple OS choices)
* Multiple software component parts coming from different vendors combined on common hardware.  (The automotive industry has a reality of business setups with multiple vendors, each wishing for a defined and limited responsibility)
     -->  Virtualization enables the use of common hardware while preserving freedom from interference

VMs provide a small, clear interface:
The virtual platform is a relatively small and simple interface for separating different software deliveries from different vendors.
--> Compare with the scenario where different vendors deliver programs to be run on a single kernel -> containers -> messy chapter, try to avoid the mess...
--> Other options include single kernel + isolation (container approach)
     or advanced runtime/platform definition for software component interoperability (Classic AUTOSAR)
     
* Standardization of this interface will improve this further (reference Virtual Platform definition)
    write a bit about how this improves things
    

Containers vs VMs
------------------------
Are there concrete examples of isolation (resource sharing/interference) scenarios not handled by containers?
Avoid the either-or discussion since the future may be in container + VMs combinations.

Kai: Performance counters required to manage resource interference <--  details needed

HVs can do resource management at a more detailed level than any kernel (currently) can.

As an example: a hypervisor implements limits on the rate of cache misses allowed for a particular VM. This might be implemented to protect another critical part from being cache-starved (e.g. a real-time core which shares a memory cache). Excessive cache misses would cause the offending VM to be throttled.
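The throttling idea above can be sketched as follows. This is a hypothetical illustration only, not an existing hypervisor API: the function name, the units, and the "deschedule" action are all assumptions, and a real HV would obtain the miss count from a per-vCPU PMU event (e.g. last-level-cache misses) rather than receive it as a parameter.

```python
# Illustrative sketch of a per-VM cache-miss budget check, as a hypervisor
# scheduler tick might perform it. All names and units are invented for
# illustration; a real implementation would read a hardware PMU counter.

THROTTLE_NONE = 0        # VM stays runnable
THROTTLE_DESCHEDULE = 1  # park the offending vCPU until the next window

def throttle_decision(misses_in_window: int, window_ms: int,
                      budget_misses_per_ms: int) -> int:
    """Decide whether a VM exceeded its cache-miss budget for this window."""
    allowed = budget_misses_per_ms * window_ms
    if misses_in_window > allowed:
        return THROTTLE_DESCHEDULE
    return THROTTLE_NONE
```

For instance, with a budget of 10,000 misses/ms over a 10 ms window, a VM that caused 120,000 misses would be descheduled, while one that caused 50,000 would not.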
  
Are such requirements possible, or likely to be accepted (into the mainline Linux kernel)?

Contention on implicitly shared resources (memory buses, caches, interconnects...)

Opinions on the high level purpose of the paper.

Fundamental system components can be replaceable and pluggable.

E.g. it is possible to insert a virtual network switch (delivered by a separate vendor) with advanced features such as traffic shaping, scheduling and optimization among VMs sharing a network.   While it would be technically possible to add this to the Linux kernel, it is less likely to be accepted as a separately delivered, independent software part.  (Of course this is indirectly a consequence of the purely technical aspect that Linux is a monolithic kernel, since a microkernel + user-space implementation of system features would yield a similar effect.)   Along the same lines, there are license requirements in effect: adding code to the Linux kernel requires it to be GPLv2-licensed, whereas independent parts (as VMs or in user space) do not.   It is also easier to assign responsibility to the vendor of such a component if it is isolated from the rest of the kernel.
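To make "traffic shaping among VMs" concrete, here is a minimal token-bucket shaper of the kind a virtual switch might apply per VM port. This is an illustrative toy under assumed parameters, not code from any real vSwitch; the class name and rates are invented.

```python
# Toy per-port token-bucket shaper, as a virtual network switch might apply
# to each VM's egress port. Rates, names and the drop policy are assumptions
# for illustration only.

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s    # sustained rate for this VM port
        self.capacity = burst_bytes     # maximum burst the port may emit
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0                 # timestamp of the last refill

    def allow(self, frame_len: int, now: float) -> bool:
        """Admit a frame if enough tokens accumulated; else drop/queue it."""
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True
        return False
```

With a 1,000 B/s rate and a 1,500 B burst, a full-size frame passes immediately, a second frame at the same instant is held back, and after one second enough tokens have accumulated to admit 1,000 more bytes.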

The interaction between general-purpose and dedicated cores is poorly understood.
Can we explain and give examples or is it enough to state this?


3.2 Network Device

Standard networks

...