
...

Minute takers according to this page 


July 12, 2021

Participants:

  • Stephen (Renesas)
  • Adam (Kernkonzept)
  • Matti (Opensynergy)
  • Gunnar (GENIVI)
  • Oleksandr (EPAM)
  • Kai (EB)

Minutes

  • Gunnar: I met with Arm (Bernhard) and Francois Ozog (active in Linaro).  We discussed the virtual platform specification and whether there is additional automotive-specific hardware that needs VIRTIO support (e.g. do we have a proposal for an FM tuner?  No, not currently).

Discussion on the usage of virtual platforms as a hardware-portability solution:

  • Gunnar: We see this trend.  I'd like to analyze this idea further, compared to, for example, just making stable user-space APIs and creating new kernel drivers for new hardware.
    • => Is this not just shifting portability work "somewhere else"?
    • Let's analyze how "virtual platform API is stable" is different from "user-space API is stable" (i.e. just port drivers to new hardware).
  • Adam: BSPs that add a lot of patches to Linux get stuck on older versions.  Not using mainline.  Updating to new kernel drivers difficult (N.B. officially no APIs are stable inside Linux kernel)
  • Matti: Linux kernel in particular, is an operating system kernel and a hardware-abstraction at the same time, which is challenging.
    • Future challenges: Value-add for SoCs could be difficult if the [hardware abstraction] API is limiting.
  • The virtual platform / VIRTIO / Trout and others.  It seems the "new" distinction is a clear focus on a hardware-abstraction layer?
    • (partly agreed; discussed the existence of firmware as one abstraction already in use, and that on the PC side the hardware itself was standardized, etc.  Overall, a complex reality)
  • Summary:
    • Kernel updates are driven by fast development, security issues, etc. 
    • Fast changes, no internal API stability guarantee etc. = updating drivers is a major issue.
    • It seems this situation can't be avoided, and the industry is looking for a solution to the challenge.  
    • The theory seems to be that keeping the API stable while efficiently porting to new hardware could be more feasible to do in the virtual platform.
  • Discussion turned to evaluating the effort (that the industry is also considering) to create new kernel(s) instead, as a Linux alternative (for safety, and development concerns as shown above). 
    E.g. if they have a POSIX API, is that "enough"?  => No... it could still be a large effort to port existing functionality.
    • Gunnar: As an example, consider systemd, which early on used Linux-only APIs.  And then a lot of functionality was built on systemd.  Such examples make porting software a challenge.
  • Also discussed: how large a part of Linux's functionality is actually used in practice in the industry?
  • Matti:  (Explicit example that shows the current situation)
    • io_uring was recently added to Linux.  It accelerates applications a lot.  A hardware vendor that is able to use these latest features could have an advantage (the hardware itself is not faster, but the software runs faster).

    • => With a virtual platform abstraction, updating to a new kernel with the new features should be feasible.


June 2021

  • Regular meetings held every week.  No specific meeting minutes written.  Results in AVPS and topic planning (link below)
  • AVPS v2.0 release.
  • Working on deep-dive topic planning

December 14 2020 → May 2021

December 7

  • Participants: Stephen, Gunnar, Dmitry, Adam, Peter
  • GPIO - clarified a misunderstanding that referenced SCMI even though SCMI does not cover GPIO.   Closed open issues and finalized requirements.  Chapter will be done after final polishing.
  • Initial discussion on the structure of final GPU chapter (bringing together all input from previous hardware studies)
    • We might need a plausible product design scenario to support the inclusion of VIRTIO-GPU / VirGL based requirements.  Also consider how to handle a non-Linux OS, if that is included in the scenario.
      Most other requirements will lean on hardware virtualization support, but the group did not want to lose the idea of support simpler hardware too.
  • Dmitry has done some updates on Camera/Media – postpone discussion to next time.


November 30

  • FOCUS: GPU virtualization – features of the PowerVR/Imagination GPUs (used in Renesas R-Car SoCs).
    • Presentation from Thomas Bruss, Renesas.
    • Discussed the opportunity for a common "model" of modern GPUs that can be the basis for shared requirements in the virtual platform standard.


November 16 - 23

Discussions captured in JIRA tickets and in the document.


November 9

Minutes pending



...

Participation planning:

  • October 19 - GPU focus?  Discussion of new Mali features?
    • (tick) Bernhard + Arm colleagues, wants to present Mali GPU
    • (tick) Adam 
    • (error) Dmitry still vacation            
    • (question) Oleksandr - vacation?
    • (question) Daniel Stone?  Emailed last week.  Send calendar invite also.       
    • (question) Eugen - Emailed
    • (green star) Stephen likely interested
    • (tick) Ozgur - Emailed              
    • (tick) Peter
       
  • October 26 No meeting (AMM week)
  • November 2  
    • (tick) Adam
    • (tick) Oleksandr
    • (question) Dmitry
    • (question) Peter ...
  • November 9   (tick) Adam  (tick) Oleksandr (question) Dmitry (question) Peter ...
  • November 16 (tick) Adam  (tick) Oleksandr (question) Dmitry (question) Peter ...
  • November 23 (tick) Adam  (tick) Oleksandr (question) Dmitry (question) Peter ...


October 12

  • (error) Dmitry vacation, (error) Adam has a conflict this week only
  • (tick) Oleksandr
  • (tick) Stephen join if possible (ad-hoc)
  • (tick) Peter

October 5

Participants:
(tick) Adam (tick) Stephen (error) Dmitry busy (error) Peter busy (error) Oleksandr busy (warning) Bernhard tried joining (missed by Gunnar)

...


August 31 - September 28

  • Another 1-2 discussions on Graphics
  • Continued discussion and work on the specification – results are in working document and in JIRA.

Participants:

  • Primarily Adam, Dmitry, Oleksandr and Kai.  Occasionally Bernhard.  For graphics also Eugen.  Peter had vacation.


August 24

Participants:

  • Peter (BMW)
  • Daniel Stone (Collabora)
  • Adam (Kernkonzept)
  • Gunnar (GENIVI)
  • Dmitry (OpenSynergy)


Apologies:

  • Adam noted he has a conflicting meeting this week
  • Oleksandr / EPAM colleagues – there is a public holiday


Minutes

  • Producing the generic model of future GPUs.  Adam started writing inside the AVPS as a place to hold the information for now.
    Daniel can view and contribute.
  • Gunnar: That's fine for now.  We should work on cleaning up the AVPS for release in the months to come.
  • Bernhard:  Next generation is fairly fixed but a wish list is useful.
  • Peter: It is very valuable to know how Arm engineers think about this.  We can discuss common parts of future designs 
    Gunnar: ... even if Silicon Vendors have their own unique details and/or use some other GPU cores.

Switching to discuss AVPS planning and open JIRA tickets.   

Gunnar: At least the "Evaluate Status" column is getting fairly empty.  This is good.  There are some big ones there still, e.g. DSP/TSP.

  • We discussed DSPs primarily - see notes in JIRA ticket HV-17.
  • Gunnar: There is a lot to sort out here
    • 1. Technical organization (like defining the potential software/hardware stack, similar to the GPU discussion) 
    • 2. ...and as far as the technology is still unclear - understand current project organization (how are projects solving this today, what conversations need to be had between product owner (e.g. OEM), Hypervisor implementor, and the silicon vendor?) 
      The idea, as for any hardware chapter in AVPS, is at minimum to try to have a few of those conversations proactively, so that there is a better basis to start from when going into a production project, and as far as possible to create some generic solutions/standards that remove obstacles.


August 17

Apologies:

  • Dmitry

Participants:

  • Peter, Adam, Oleksandr, Gunnar


Graphics discussion not fully successful today due to missing key participants, but we collected the thoughts of those who were here.


Graphics

Let's recap our thoughts and conclusions from last week's discussion

Adam: I have always thought you need hardware support to do this properly.  But of course, if you are already in another situation, you might have to do something different.
But [the hardware-assisted solutions] involve NDAs and binary blobs (which are more difficult to analyze generally).

Peter: We discussed some similarities and a working model last week - the idea of the command queues, which seems to be similar across future solutions.  There is some special support for video and others.  I would like a model that lets us make progress without being too specific on one solution.  It may also avoid NDA issues.  Find these common abstractions to work on.  Multiple hardware implementations should be able to be mapped into this.

Peter: We need to think about composing images also - the display output (2D).  When this is combined with other video feeds, there are safety concerns that need solutions.

Gunnar: There are some patterns for handling safety (e.g. fallback to a simpler HMI if the advanced one fails (recognizing failure in various ways)), and these patterns may drive our standardization approach.

Peter: These patterns are useful.  Also considering that there are different needs - not all platforms require ASIL-B, for example.

Oleksandr: I'm not as focused on the graphics subsystem, so it's hard to say much, and what we are doing concretely is covered by NDAs.  We have experience with hardware that has hardware-assisted virtualization support and we are implementing such things in Xen.

Some hardware implements more than one GPU and can use simple pass-through of each of these to, for example, 2 VMs.

Gunnar:  Discussion last week with Daniel and others led to the conclusion we should start this "general model" discussion with the implementers of future graphics hardware.  I feel like we need to produce a description that we can bring with us to start that conversation.

Adam:  I would like to give it a try.  (This week)

Gunnar: Great, and then Peter may want to add to this.


Power Management

Oleksandr:  I have made some proposed edits.  SCPI should be deprecated because it is now covered in SCMI and we should require the 2.0 version of SCMI.


IOMMU

Oleksandr shares some experience from Xen implementation. 
Further discussion how to bring the chapter to a close, in particular the 2-stage IOMMU question that is still not brought to full conclusion.



...

August 10 – Graphics/GPU focus

Participants

  • Daniel Stone, Ozgur Bulkan, Adam, Dmitry, Peter, Gunnar, Oleksandr, ... ? (I missed recording the full list)

Gunnar introduced the AVPS goals a bit and mentioned that in some hardware areas (and almost surely for GPU), the AVPS specification can/should not reach the point where it requires only one way of doing things.  It can still be useful to dig deeper into a few things, and then describe a few different choices through optional or conditional requirements.  It is still much more useful to have that analysis done when a production project starts than to have a blank page.

Daniel: Requiring Vulkan would make sense but it will not be supported everywhere, so unfortunately it cannot be the only choice.   The GL API is stable.

Peter: [We need to discuss]  technology roadmap for next generation GPUs (HW support for virtualization)

Daniel:
ARM, Renesas, Qualcomm, others would need to discuss this to get the information, but everything might not be shared.

...HV vendors implement this when the technology is ready (NDAs)

Every VM has its own virtualized GPU.  Dedicated command queue per VM.  Buffer handling can be challenging (security/safety, etc.)   It gives you "direct access" to the hardware.  Normal drivers and software can be used, because it looks like a GPU.
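The per-VM command-queue model described here can be pictured with a toy sketch.  This is purely illustrative: the class and method names (`VirtualizedGPU`, `attach_vm`, `submit`, `dispatch`) are invented, not from any real hypervisor or VIRTIO API.

```python
from collections import deque

class VirtualizedGPU:
    """Toy model: each VM gets a dedicated command queue, so the device
    looks like a normal GPU to each guest while their command streams
    stay isolated from each other."""

    def __init__(self):
        self._queues = {}            # vm_id -> dedicated command queue

    def attach_vm(self, vm_id):
        self._queues[vm_id] = deque()

    def submit(self, vm_id, cmd):
        # Guests see "direct access": they simply enqueue commands.
        self._queues[vm_id].append(cmd)

    def dispatch(self, vm_id):
        # Drain one VM's queue; only that VM's own commands are executed.
        executed = []
        queue = self._queues[vm_id]
        while queue:
            executed.append(queue.popleft())
        return executed

gpu = VirtualizedGPU()
gpu.attach_vm("cluster")
gpu.attach_vm("ivi")
gpu.submit("cluster", "draw_speedo")
gpu.submit("ivi", "draw_map")
```

The point of the sketch is the isolation property: dispatching one VM's queue can never execute another VM's commands.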

Peter:  Can a standard model for GPU be defined which can help development (of specification and code/solutions)?  Command queue is common for all.

Daniel:  Concentrating on the API definition which can be generic, and over time hardware support could make the implementation of it easier.  This is the VIRTIO queue API level.

Ozgur: What about the original VIRTIO-GPU 3D specification?

Daniel: It is similar to VirGL but was not quite the same.

Discussion would be needed with the real graphics engineers (within ARM, Imagination, Qualcomm…) to see if they are open to discussing standard APIs at one level.

Ozgur: Vulkan path is yet another choice?

Daniel: Conceptually similar.  Host/HV would still need to manage resources, track all the Vulkan objects, buffers, textures, targets.  This is the same for virtualization of Vulkan and OpenGL.

Ozgur: SoC providers do not want to rewrite their implementation (e.g. to support VIRTIO queues [or other standard virtualization API]).  Basically they provide everything from user-space APIs to the hardware…
...

Daniel: Yes, I agree.  Hardware independent and hardware dependent APIs will likely be different APIs.  We can (only) expect a few basic things about HW dependent (using channels/queues and buffer handling).

Ozgur: VIRTIO is only concerned with multi-client rendering?
Daniel: Yes, it does not handle multiple leases and other more complex situations.

Ozgur: Can we start thinking about things like nested compositors?
Daniel: DRM/KMS provides some capability for HW compositors but not the constraints you need (to match the (virtual) hardware capabilities).

Ozgur:  Will it need kernel/driver changes?
Daniel: It’s primarily the API side for virtualization.

Daniel: Way forward.  Split the problem.
1. Write down how we think a fully hardware-specific path will work (a generic model for how we think HW-assisted virtualization is done).
2. The fully generic VirGL / Vulkan path.  We can start encoding more specific requirements.


Ozgur: What can be implemented to test this now?
Daniel: VirGL already works on top of any OpenGL (GLES) implementation.  Mesa, and also others.

Way forward

Daniel: I’m happy to encode the requirements that we have already today in VirGL.  Over time I can try to do the same for the Vulkan path.   For the HW-specific approach, can we reach out to the platform vendors?  Or HV-implementing companies that are likely implementing some of these things now [with the latest hardware].

Gunnar asks Dmitry to see if there is a way to have a (private) conversation with potentially ongoing commercial implementation projects, to draw the boundaries between what could drive the specification content (and what could not).

Peter: The model approach is very good because we would then be less dependent on the hardware vendor proprietary information while this is being built out.

Gunnar mentions that ongoing work should be complementary and can together improve the situation, from specification side, and from trying out implementations on AGL and other projects.  The AVPS work provides the API specification, ensures multi-OS aspect is not lost, and aims to influence AUTOSAR and other industry initiatives to align on a common way forward [based on the open development of AVPS].

Daniel: It would be useful to get Panasonic’s input since they drive a lot of implementation in AGL for virtualization.

Gunnar:  Yes.  Both the implementation and AVPS is VIRTIO focused which should help compatibility.






July 20

Participants

  • Dmitry
  • Adam
  • Oleksandr
  • Kai
  • Gunnar

Introduction to Oleksandr (EPAM), working on VIRTIO support in Xen.  He has lately worked specifically on IOMMU implementation on Renesas platforms.

Discussed again how to rewrite Camera + IOMMU from scratch with all the new information we have gathered
(It is currently disorganized, but has a lot of good knowledge). 

Actions for Dmitry and Adam to do this rewrite.

Gunnar on vacation next week.  Do you want Philippe to open the call?
Adam: I will work on the chapter, I don't need the phone call for that.

=> Next week will be skipped.


Absence planning
See July 6
and 

+ Gunnar, away (this week) and next, and probably again end of August (1 week)
+ Kai - 17 August → 6 September
+ Oleksandr - possibly starting last week of August → 2 weeks

Kai found an interesting paper from Strategy Analytics.  Plan discussion about it later on.



July 13

Additional work on Camera and IOMMU with Dmitry and Adam - results in document.
AI: (Media and Cameras) Gunnar to work on combining original and new text into a more logical flow.
AI: (Media and Cameras) Dmitry to also review red text in the same chapter

Info: Let's plan for Graphics oriented meeting some time in the future, inviting Daniel Stone.
Be aware of presentation that Daniel also posted to the GPU related Wiki page.

Info: Gunnar likely to have vacation some time after next week's meeting.


July 6

In-depth review and improvement of Cameras chapter → results are in the working doc.
EPAM colleagues missing, will follow up those topics next week.

Vacation planning

Peter - No vacation until Mid September
Dmitry - same for me, probably October at the earliest.
Adam - similar, nothing planned at the moment...
Gunnar - end of July / beginning August.  Will specify better soon.


June 29

Apologies from Artem + colleagues (public holiday), Peter (unexpected mgmt escalation), Adam, etc.
No info from Dmitry or Matti about attendance today.  Also none from Kai.
Other participants (Matt, Bernhard) attend more on-demand / occasionally and were not expected today.

=> Meeting is skipped.


June 22

AVPS v2 work:

Continue discussing with Artem and Adam on IOMMU including programmability from guest, and requirements for coprocessors.  We are getting there.  Adam will try to rewrite once more.

Artem: I am sending over details that might be interesting, about the implementation for co-processors on Xen:

Went through comments and minor changes in the whole document and accepted all minor changes.  Open points kept for discussion with a few others.

Need time to discuss with Dmitry on changes in Media chapter and more.

Artem: I intend to involve 2 new colleagues into this work.


April 27 - June 15

Meetings held as usual, except for some bank holidays.

Minutes have not been recorded separately.  Instead, discussions and updates on the Automotive Virtual Platform Specification have been recorded directly in the working document as text and comments.  The JIRA Tracker is also used to track updates and assign tasks.

April 20

Participants

  • Bernhard Rill
  • Adam Lackorzynski
  • Dmitry Sepp
  • Matti Möll
  • Gunnar
  • Alex Agizim (EPAM)
  • Kai Lampka

Apologies

  • Peter (Zoom was blocked?)

Minutes

Went through tickets.

Discussed development support ticket → DLT discussed at some length.  Various "known" discussion points (ID uniqueness, on-disk storage format, common timestamp references) but they become more relevant in a Hypervisor setup.

Kai reviewed Booting chapter and found a few issues/unclear direction.  Moved chapter to "Improve" column.



April 13 – no meeting (Easter Monday)

April 6

Minutes pending


March 30

Participants

  • Bernhard Rill
  • Peter
  • Adam Lackorzynski
  • Dmitry Sepp
  • Matti Möll
  • Gunnar

Apologies

  • Artem (conflicting meeting)


Minutes

  • Peter:  Defining the platform better.  The Arm specification on secure partitioning can also be useful to look at.
    • This also includes use-cases, to better define...
  • Bernhard: Yes, and a few other specifications might be useful also.  Others are not so applicable.  I can look at this.
  • Peter: Starting with VM scheduling, for example.
    • Matti: We can only define the interface...  what other "surfaces" might be described?
    • Peter: Just thinking that going from the top we realise VMs need to be scheduled....   Example - Android + RTOS, how is switching done and what does it lead to?
      • Gunnar: This goes into a kind of system design.  If you are speaking about cores etc.
  • Bernhard: Two prominent examples of interfaces...
  • Matti: Define what is a platform in general, and is it focused on components or interfaces?
    • A platform allows things to be built on top of it.  It cannot be too hard to implement, because then it will not be implemented to the degree that it becomes really usable.
    • Think about the right level.  Not too complicated.  Not narrowing the solution space too much. And not too broad so that portability is not achieved.
  • Peter: We are quite well defined here.  Starting from defining a... hypervisor based platform
  • Matti: If it is an automatic (preemptive) scheduler, it cannot be defined using interfaces.
  • Gunnar: You can set priorities.  Not from the VM, but when configuring the platform.  (Therefore the AVPS could set requirements on it).
  • Peter: We can still write requirements for this.  The behavior can be defined.
  • Peter: Variants of the platform advanced/basic could be an example.
  • Matti: Currently covered in AVPS by "If the platform includes feature X, then it must..."
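The point above, that VM priorities are set when configuring the platform rather than from inside a VM, can be sketched roughly as follows.  All names here (`PlatformConfig`, `pick_next`, the VM names) are hypothetical, invented only to illustrate the discussion; real hypervisors have far richer scheduling controls.

```python
class PlatformConfig:
    """Toy sketch: VM scheduling priority is fixed in the platform
    configuration.  No API is exposed to guests to change it, which is
    why requirements can still be written on the configurable behavior."""

    def __init__(self, priorities):
        self._priorities = dict(priorities)   # vm name -> static priority

    def pick_next(self, runnable):
        # Simple static-priority policy: highest configured priority wins.
        return max(runnable, key=lambda vm: self._priorities[vm])

# Priorities are decided once, at integration time (hypothetical values).
cfg = PlatformConfig({"cluster": 10, "ivi": 5, "rtos": 20})
```

For example, when only "ivi" and "cluster" are runnable, "cluster" is picked; as soon as "rtos" becomes runnable it preempts both.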

Short review (on overtime) of the AVPS v2 ticket priorities.


March 24

Whitepaper work and AVPS prioritization

Minutes pending


March 16

Whitepaper work and AVPS prioritization

Minutes pending


March 9

Participants:

  • Dmitry
  • Kai
  • Adam
  • Peter
  • Michael
  • Gunnar
  • Philippe


Minutes


Gunnar presented the Webinar-Report draft.  Purpose and content.  Recommended reading for all, but especially those who did not attend.


AMM plans:   GENIVI All-Member Meeting planned for May 12-14 in Leipzig, Germany

AI (all):  Block 12-14 May (+ possible travel time) in your calendars.

  • Philippe:  Make sure to separate content preparation from any concerns about virus outbreak. 
  • The conference is planned to be F2F as of now.  In any case we do not stop our preparation since content is reusable.  Evaluation of the situation of course continues in parallel.
    Preliminary attendance possibility:
    • Kai - Not travelling – family plans
    • Adam - OK.  Leipzig is near.
    • Dmitry - OK.
    • Peter - OK, most likely, company will clarify attendance and likely to come, also bringing other colleagues.

Discussing the comments given in whitepaper editable document, and many general discussions about the direction and content.

  • Possibly split into more than one document:  "Why Virtualization" in general vs. a deep dive into technical descriptions.
  • ...or split one document into two sections.
  • Need more structure: more headings, more sub-chapters, more summary, leading the reader from intro to details.

AI (all):  Get your (Google) account connected to the document, with edit capability AND everyone expected to give some input until next time.


March 2

Participants:

  • Artem
  • Peter
  • Gunnar
  • Philippe

Minutes

  • Xen project status & current state from Artem
  • Walkthrough AVPS 2.0 prioritization tickets
  • Lots of discussions on plans / purpose / and various input to the Hypervisor Project overall – initiated by Peter, filled in by Artem.


February 17, 21, 28

Minutes pending


February 10

Participants:

  • Kai
  • Dmitry
  • Adam
  • Gunnar


Whitepaper:

  • Main subject as agreed: "Why is Virtualization needed?"
  • But should we also include challenges that are added as a result of choosing virtualization?
    For example shared-memory as discussed in the comment by Michael on Whitepaper - first private draft text


Discussing shared memory challenges:

Locking implies timing interference.  Lock-free implementations are complex but important.  Can hardware support be provided to improve shared-memory implementations?

Kai: You would sacrifice isolation if different criticality functions access the same memory.

Adam: For video the approach may be fine, as long as data corruption is less of a concern.  Depends on the use cases, of course.

For sharing, it is necessary to consider hardware support for making some accesses read-only, to guarantee that non-critical software cannot change the memory values of critical software.

Gunnar:  What about if there is only one writer (critical task) and multiple readers.  Can non-critical readers negatively affect critical software by starving the memory bandwidth or similar?

Adam: Generally starving is not possible due to hardware setup.  A worst case analysis would be needed.

Adam: Hardware counters usually exist that can enforce budgets on memory usage.  Usually only two counters or so, so the number of counters might not be enough.

Kai: How do the lock-free implementations perform in worst-case situations?

Dmitry, Kai:  Locking (handling multiple writers or handling that writing is complete before reading starts) is orthogonal to the performance issue.
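The one-writer / multiple-readers pattern discussed above is often handled with a seqlock-style scheme: the writer never blocks, and readers retry if a write overlapped their read.  A minimal single-process Python sketch, with invented names; a real shared-memory implementation needs atomic operations and memory barriers, which plain Python does not provide.

```python
class SeqLockBuffer:
    """Single-writer, multiple-reader buffer guarded by a sequence
    counter.  Even counter = stable data; odd = write in progress.
    Readers detect an overlapping write and retry, so the (critical)
    writer is never delayed by (non-critical) readers."""

    def __init__(self, size):
        self._seq = 0
        self._data = bytearray(size)

    def write(self, payload):
        # Only one writer is allowed.  Bump to odd, copy, bump to even.
        self._seq += 1
        self._data[:len(payload)] = payload
        self._seq += 1

    def read(self):
        while True:
            start = self._seq
            if start % 2:           # write in progress - retry
                continue
            snapshot = bytes(self._data)
            if self._seq == start:  # no write overlapped the copy
                return snapshot

buf = SeqLockBuffer(16)
buf.write(b"sensor-frame-001")
```

This matches the point that locking is orthogonal to performance: correctness here comes from the retry, not from mutual exclusion.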

Gunnar: One detail, don't forget buffer sealing (as Linux calls it).  It is a security feature (needed for some use cases) to set a buffer read-only as it is handed over.  This makes it possible to guarantee that a writer does not modify the buffer again after it has been handed over to the reader(s).  Such late modification could be used by a malicious writer to exploit bugs in the reader implementation.


Conclusion:  Yes, it might be useful to include in the white paper some challenge topics and how to solve them (e.g. shared memory).

Gunnar to send out links to webinar.  Participants invite their colleagues.


Weekly meeting, February 3, 2020

Minutes 

  • We went through the slides for upcoming webinar.  Feedback on content.


Weekly meeting, February 27, 2020

No minutes


Weekly meeting, January 20, 2020

Restart after new-year, CES, etc.
Spec has been released.  Dissemination/information given to various automotive industry individuals during CES.

Matti is away from project for a while.

New JIRA items, one per "chapter" to track which parts need update.

Some additional edits will be done by OpenSynergy tech-writer (Susan)

Action to all:  Each participant finds at least one colleague/friend to show specification to for feedback.

Webinar is being planned beginning of February.

Artem:  For Xen we are quite busy atm, working on safety certification, some ELISA project participation, and of course virtio implementation...

No progress on whitepaper - need leadership to assign tasks/chapters in practice.



Weekly meeting, December 9, 2019

Rough minutes: Went through spec from the last page in reverse.  Discussed open points. 
Actions taken by Adam to update 


Weekly meeting, week of December 2, 2019

Participants

  • Gunnar Andersson
  • Eugen Friedrich, ADIT
  • Michael Doering,  Bosch
  • Artem Mygaiev (until 10:45)
  • Kai Lampka
  • Adam Lackorzynski
  • Matti Moell
  • Philippe Robin (part)

Apologies/Absent

  • Michele
  • Dmitry


Agenda

  • Memory/buffer sharing standards for graphics applications (with Eugen Friedrich)
  • AVPS completion work


Discussing graphics/memory buffer sharing.

libvirglrenderer implements the API, including a reference implementation of the Vulkan API.
In practice there will be only one implementation.  It might make sense to simply require using libvirglrenderer.

The driver is part of the Mesa project.
Non-virtual operating systems?   QNX apparently ported both the driver & renderer parts.
Integrity?    The Mesa driver is a user-space library that creates a command stream - it should be easily portable (paravirtualization assumed).

Eugen: There was a proprietary API for memory sharing proposed in...
...Other APIs just give you a handle, and you don’t really control what happens below.
...Should there be a defined way how memory can be represented in a generic way?
Matti: VIRTIO basically is this, in fact.   The scatter-gather lists provide this.  Managing the lifecycle of the buffers is the challenge.    Gerd Hoffmann / Red Hat has proposed a standard - see the virtio list. (https://lists.oasis-open.org/archives/virtio-dev/201911/msg00149.html)

Need to build in handling of the particular characteristics of the hardware, e.g. special alignment or size restrictions.  Usually VIRTIO has per-device handling today (GPU, block device, …) because of this.

Intel created a proposal to require the host to allocate memory and give it to the guest.  Memory accounting troubles follow.  This was not accepted in the community.

A recent proposal: VIRTIO-GPU resource attachment.  Patches to QEMU to get a DMA buffer from a guest target buffer, using udmabuf (in QEMU).  This could be applied to embedded HVs.

memfd_create gives a file descriptor; the udmabuf driver provides ioctls to control it.   See DMA-buf in the Linux source: drivers/dma-buf/udmabuf.c.  Note that memfds are not part of the POSIX standard.

Gunnar: These are Linux specific APIs then. Any different consideration for non-Linux OS (in a virtualization environment).

Matti: It should even be easier to implement in simpler OSes (since user-space code may be more privileged to access the details).

Matti: One answer is that the GPU 2D specification already allows sharing buffers.  You have to keep into account some details about the memory model of the HV.  
VMs can tell the HV environment about buffers they would like to show, including giving this buffer to another VM.

Some kind of global compositor is given the buffers; that compositor might be within one VM, but it could also be a hardware display device...

How to handle the lifecycle of the buffers?

Matti: VIRTIO specifies a low-level way to communicate between VMs with virtqueues.  The other part is the description of the device implementation.  How to handle the scatter-gather lists of buffers.
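The scatter-gather lists mentioned here can be pictured as VIRTIO descriptor chains: each descriptor names one buffer segment, and a NEXT flag links segments into one logical transfer.  A simplified Python model; the fields follow the spirit of the split-virtqueue descriptor table (the real spec field is `len`, renamed here to avoid the Python builtin), but this is a sketch, not the on-the-wire format.

```python
from dataclasses import dataclass

VIRTQ_DESC_F_NEXT = 1  # VIRTIO flag: another descriptor continues the chain

@dataclass
class Desc:
    addr: int     # guest-physical address of this segment
    length: int   # segment length in bytes
    flags: int
    next: int     # index of the next descriptor, valid when F_NEXT is set

def gather(table, head, memory):
    """Walk a descriptor chain and concatenate the segments it covers."""
    out = bytearray()
    i = head
    while True:
        d = table[i]
        out += memory[d.addr : d.addr + d.length]
        if not d.flags & VIRTQ_DESC_F_NEXT:
            return bytes(out)
        i = d.next

# Two scattered 4-byte segments presented as one 8-byte buffer.
memory = bytes(range(32))
table = [Desc(addr=0, length=4, flags=VIRTQ_DESC_F_NEXT, next=1),
         Desc(addr=10, length=4, flags=0, next=0)]
payload = gather(table, 0, memory)
```

The lifecycle question from the discussion is exactly what this sketch leaves out: who owns `memory`, and when each segment may be reused.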

Gunnar: The discussion involves different standards, in addition to standard Linux/Wayland, there is for example the Android graphics stack, potentially others...

Wayland can display a DMA-buf, so if you can get a handle to one, that's one way.


AVPS completion work

  • Went through more comments.  Some parts have been proposed as resolved after rewriting the section (Gunnar)
  • Matti: Technical writer input - can the document be freely changed?
  • Agreed to welcome technical-writer changes, but with some track-changes capability so we can see if any part was changed in a way that makes it misleading.
  • Gunnar:  ...it sometimes happens, when unclear explanations are clarified, that they end up saying the wrong thing instead.


...

Weekly meeting, week of November 18, 2019

  • Stephen Lawrence, Renesas
  • Gaurav Sinha (Micron)
  • Michael Doering,  Bosch
  • Adam Lackorzynski
  • Dmitry Sepp
  • Philippe Robin (part)
  • Gunnar Andersson


Minutes

Intro and discussion with Gaurav.   AVPS work - documented in the google doc itself.


...


Weekly meeting, week of November 18, 2019

Participants

  • Artem Mygaiev (part)
  • Gunnar Andersson
  • Michael Doering,  Bosch
  • Kai Lampka

Apologies/Absent

  • Michele
  • OpenSynergy x 2
  • Adam

Minutes

  • Introduction Michael Doering, Bosch
    • Proficiency in networking especially TSN, and other things
  • Introduction of work, purpose, results for new participant Michael
  • Haven't heard from OpenSynergy this week.  GA will follow up.
  • Artem:  Note recent feature list for Android R – VIRTIO mentioned.  Follow-up with Google.
  • Gunnar:  Yes, following up on HV standardization is a good idea when we discuss Android-SIG activities with Google
  • Showing comments in draft document from Michele
  • Power management text under Hardware considerations sparked discussion on whether we have concluded the situation for Power Management feature requirements.  The actual chapter is not yet written?
  • Michael will read through whole document and get familiar with the work in general
    • Possibility to pick any chapter and improve it.
    • Gunnar: We appreciate input on TSN but it is not critical for first draft so focus might be elsewhere first 
  • Kai has limited time next week/weeks due to teaching obligations, as well as planned personal training


Weekly meeting, week of November 11, 2019

Cancelled due to Tech Summit.  Not everyone was aware though.


Weekly meeting, week of November 4, 2019

Participants

  • Artem Mygaiev
  • Michele Paolino
  • Philippe Robin
  • Dmitry Sepp
  • Gunnar Andersson

Apologies

  • Matti Möll (office guests)
  • Kai Lampka (other obligations)
  • Adam Lackorzynski (other obligations)

Minutes:

  • Working on draft spec.  Most discussion was on IOMMU with requested input from Artem 
  • Michele to look over web pages, project introduction, and Google Doc (editing, commenting...).  Consider hardware specifics and ensuring t
  • Artem to look over draft spec, perhaps mostly IOMMU.  Focus on narrowing spec into a draft release.

Weekly meeting, week of October 27, 2019


Weekly meeting, week of October 20, 2019


Weekly meeting, week of October 14, 2019

New meeting time used today, Monday 10:00 CEST

Participants

  • Adam Lackorzynski
  • Artem Mygaiev
  • Matti Möll
  • Jens Uwe Schaefer
  • Kai Lampka
  • Michele Paolino
  • Gunnar Andersson

Minutes

  • Introduction and welcome to new participants
  • Review current work topics and purpose with new participants
  • We went through the status of the Virtual Platform definition work, updating the table and making a few changes in the spec text


Weekly meeting, week of October 9, 2019

Meeting missed/skipped due to ongoing rescheduling


Weekly meeting, October 1, 2019

Participants

  • Artem
  • Matti
  • Dmitry
  • Gunnar
  • Kai

Apologies

  • Adam

Minutes

Tech Summit, November 12-13

  • Matti: Someone from OpenSynergy based in USA (but not Detroit) might be interested to join, but not familiar with the GENIVI work so far.
  • Artem: Checking my November planning - I might have a trip to the USA
  • Kai:  I could check with EB subsidiaries in USA.  There are people with HV expertise.

New meeting time

  • Kai free all Tuesday but not 10.00 → around 11.
  • Matti busy Tuesday 11-11.30
  • Adam wants to change day.  (Kai says lunch around 1200)
  • Monday 10.00 is an option
  • Afternoons:  Matti more likely to be busy (also at short notice).  3PM maybe...(any day)
  • Kai: around 3PM should be OK.  Most options sound OK.
  • Most options seem to work for Artem.
  • (AI) Gunnar: Set up a doodle poll with these constraints.

Virtual Platform Status

  • Short discussion: What is "networking"?  Does vsock apply in that concept, or elsewhere?
  • Artem: Look at ULSNet for another view on what fits into OSI model. (joke) (smile)


Other

  • Artem: I would like to connect the safety-critical Linux community with the work we are doing here.  There are a lot of discussions of when hypervisors are needed and when safety-critical Linux can suffice.
  • Gunnar: Sounds good, let's make the connections.  The whitepaper subject ("Why Hypervisors are needed") seems related.

Adjourned (late) at ~11.35



...

F2F meeting, September 24-25 2019

(warning)  Some rough notes from F2F were captured here.  Cleanup in progress


F2F September 24 - notes to be copied to the right place...


https://etherpad.net/p/hvws


Motivation: Why to use HV:


WHAT IS THE PURPOSE OF THE WHITE PAPER?

1)  ...We need to explain why virtualization is actually needed.  (It is still not fully accepted as necessary by all)

This is the agreed main topic.

2)  Explaining concepts ->   1 chapter  pro/cons?
   Microkernel, Monolithic <- multiple privilege level of modern CPUs , Type 1, Type 2, Paravirt, HW emulation, Linux Container
try not to rehash too much of existing data on this, but make introduction to the reader who does not know it and needs the basics to understand the rest of the paper.

3)   How to improve the usage of virtualization to meet the goals previously stated.
    E.g. chapter on needed hardware support  --> At least 1 chapter
           standardization of interfaces               --> At least 1 paragraph
           training, education, explanation...        --> This is basically just a sentence: "We need more training."
          Sample implementations -->  ... also just a sentence.

HERE ARE THE REASONS:

 → Certain concrete security/safety issues that can be shown clearly and that HV can counteract
 → System flexibility is another very important point.

Consolidation of functions into a single SoC (multiple ECUs on a single platform); this includes legacy systems and new SWCs.
This calls for a clear interface between SW stacks of different vendors and shall allow providers of SWCs to make their own choices on the execution environment/OS. Examples of this are different combinations of OS/Adaptive AUTOSAR implementations, or combinations of Classic AUTOSAR modules, e.g., SafetyOS / Com Module / RTE provider.
To this end, certification, be it for security or safety, is lost as soon as components are re-arranged into new setups.
This clearly calls for flexibility in the sense of allowing SWC providers to integrate their complete SW stack into the platform and still guarantee fundamental isolation properties.

Technical:
    
* Single-kernel systems provide isolation but there are limits.
    --> Although the capability of configuring, for example, the Linux kernel for isolation is ever improving, there are counterexamples (see other chapter).
     In addition, a single Linux kernel solves only some scenarios, since putting all SWCs on top of a single Linux kernel is still limited by the requested OS diversity (see next chapter).
     

Other realities:
* More freedom in choosing operating system kernel.  (The automotive industry has a reality of using multiple OS choices)
* Multiple software component parts coming from different vendors combined on common hardware.  (The automotive industry has a reality of business setups with multiple vendors, each wishing for a defined and limited responsibility)
     -->  Virtualization enables the use of common hardware while preserving freedom from interference

VMs provide a small, clear interface:
The virtual platform is a comparatively small and simple interface to separate different software deliveries from different vendors.  
--> Compare the scenario where different vendors deliver programs to run on a single kernel -> containers -> messy chapter, try to avoid mess...
--> Other options include single kernel + isolation (container approach)
     or advanced runtime/platform definition for software component interoperability (Classic AUTOSAR)
     
*Standardization of this interface will improve this further (reference Virtual Platform definition)
    write a bit about how this improves things
    

Containers vs VMs
------------------------
Are there concrete examples of isolation (resource sharing/interference) scenarios not handled by containers?
Avoid the either-or discussion since the future may be in container + VMs combinations.

Kai: Performance counters required to manage resource interference <--  details needed

HVs that can do resource management on a more detailed level than any kernel (currently) can. 

As an example:  A hypervisor implements limits on the rate of cache misses allowed for a particular VM.  This might be implemented to protect another critical part from being cache-starved (e.g. a real-time core which shares a memory cache).  Excessive cache misses would cause the offending VM to be throttled.
  
Are such requirements possible or likely to be accepted (into Linux mainline kernel)?

Contention on implicitly shared resources (memory buses, caches, interconnects...)
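The cache-miss throttling example above can be sketched as a pure policy function.  No hypervisor exposes exactly this API, so the tick budget, function name, and decision representation below are all invented for illustration; a real implementation would act on hardware performance-counter reads:

```python
# Pure-policy sketch of the cache-miss throttling example above.  The tick
# budget and the function name are invented; a real hypervisor would act
# on hardware performance-counter reads.
MISS_BUDGET_PER_TICK = 10_000   # allowed cache misses per scheduling tick

def throttle_decisions(samples, budget=MISS_BUDGET_PER_TICK):
    """Given per-tick cache-miss counts for one VM (as sampled from a
    performance counter), return True for each tick the VM would be
    throttled to protect cache-sharing neighbours."""
    decisions = []
    for misses in samples:
        # A real implementation would shrink the VM's CPU time slice here;
        # we only record whether throttling would kick in this tick.
        decisions.append(misses > budget)
    return decisions

# A VM bursting above budget in ticks 3 and 4 gets throttled there.
print(throttle_decisions([4_000, 9_500, 15_000, 22_000, 6_000]))
# → [False, False, True, True, False]
```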

Opinions on the high level purpose of the paper.
 → Certain concrete security/safety issues that can be shown clearly and that HV can counteract
 → System flexibility is another very important point.

Fundamental system components can be replaceable and pluggable.  

E.g. it is possible to insert (delivered by a separate vendor) a virtual network switch with advanced features such as traffic shaping, scheduling and optimization among VMs sharing a network.   While it would be technically possible to add this to the Linux kernel, it is less likely to be accepted as a separately delivered independent software part.  (Of course this is indirectly a consequence of the purely technical aspect that Linux is a monolithic kernel, since a microkernel + user-space implementation of system features would yield a similar effect.)   Along the same lines, there are license requirements in effect: adding code to the Linux kernel requires it to be GPLv2 licensed, whereas independent parts (as VMs or in user-space) do not.   It is also easier to assign responsibility to the vendor of this component if it is isolated from the rest of the kernel.

Interaction between general-purpose and dedicated cores is poorly understood.
Can we explain and give examples or is it enough to state this?


3.2 Network Device

Standard networks

Standard networks include those that are not automotive-specific, but instead frequently used in the general computing world.  In other words these are typically IP-based networks, but some of them simulate this level through other means (e.g. vsock, which does not use IP addressing).  The physical layer is normally some variation of the Ethernet/WiFi standards (802.*) or another transport that transparently exposes a similar network socket interface.

virtio-net  = Layer 2  (Ethernet / MAC addresses)
virtio-vsock = Layer 4.  Has its own socket type.  Optimized by stripping away the IP stack.  Possibility to address VMs without using IP addresses. Primary function is Host (HV) to VM communication.

Discussion:
Vsock: Each VM has logical ID but the VM normally does not know about it.  Example usage: Running a particular agent in the VM that does something on behalf of the HV.  There is also the possibility to use this for VM-to-VM communication, but since this is a special socket type it would involve writing code that is custom for the virtualization case, as opposed to native.

vsock is the application API.   Multiple differently named transport variations exist in different hypervisors, which means the driver implementation differs depending on the chosen hypervisor.  Virtio-vsock, however, locks this down to one chosen method.
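On Linux, that application API is the `AF_VSOCK` socket family, where endpoints are addressed by a (CID, port) pair instead of an IP address, and CID 2 is the well-known host address.  A minimal guest-side sketch; the port number and the `connect_to_host` helper are made up for the example, and actually running it requires a kernel with vsock support:

```python
import socket

# vsock endpoints are addressed by (CID, port) rather than (IP, port).
# CID 2 is the well-known address of the host; each guest has its own CID.
HOST_CID = getattr(socket, "VMADDR_CID_HOST", 2)
EXAMPLE_PORT = 9999   # arbitrary example port, not from any spec

def connect_to_host(port=EXAMPLE_PORT):
    """Open a stream connection from a guest to the host over vsock.
    Needs a Linux kernel with vsock support (e.g. a virtio-vsock guest)."""
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((HOST_CID, port))
    return s

# Once connected, the socket behaves like any other stream socket:
# send()/recv(), select()/poll() etc. all work as usual.
```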

Requirements:

  • If the platform implements virtual networking, it shall also use the virtio-net required interface between drivers and Hypervisor.
  • If the platform implements vsock, it shall also use the virtio-vsock required API between drivers and Hypervisor.
  • Virtual network interfaces shall be exposed as the operating system's standard network interface concept, i.e. they should show up as a normal network device.
  • The hypervisor/equivalent shall provide the ability to dedicate and expose any hardware network interface to one virtual machine.
  • The hypervisor/equivalent shall(?) be able to configure virtual inter-vm networking interfaces.
  • Implementations of virtio-net shall support the    


Discussion:
    Virtual network interfaces ought to be exposed to user-space code in the guest OS as standard network interfaces.   This minimizes the custom code that appears because of the usage of virtualization.  

MTU may differ on the actual network being used.   There is a feature flag by which a network device can state its maximum (advised) MTU, and the guest application code might make use of this to avoid segmented messages.
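As a sketch of how guest application code might use the interface MTU to avoid segmented messages: read the MTU and cap the datagram payload accordingly.  The interface name and the IPv4+UDP header overhead are example values, and the sysfs path is Linux-specific:

```python
# Sketch: guest application code sizing datagrams to the interface MTU to
# avoid segmented messages.  The interface name "lo" and the IPv4+UDP
# header overhead are example values; /sys/class/net is Linux-specific.
IP_UDP_HEADER_OVERHEAD = 28   # IPv4 header (20 bytes) + UDP header (8 bytes)

def max_udp_payload(ifname="lo"):
    """Largest UDP payload that fits in one frame on the given interface."""
    with open(f"/sys/class/net/{ifname}/mtu") as f:
        mtu = int(f.read())
    return mtu - IP_UDP_HEADER_OVERHEAD

payload = max_udp_payload("lo")
print(payload)    # e.g. 65508 with the usual loopback MTU of 65536
```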

-------------
Parked questions
- What could a common test suite look like?
- Google virtual ethernet driver - merged into mainline?

1. Introduction


Automotive requirements lead to particular choices and needs from the underlying software stack.  Existing standards for device drivers in virtualization need to be augmented because they are often not focused on automotive, or even embedded, systems.  Much of the progression comes from IT/server consolidation and, in the Linux world, some comes from virtualization of workstation/desktop systems.

A collection of virtual device driver APIs constitute the defined interface between virtual machines and the virtualization layer, i.e. the hypervisor or virtualization "host system".  Together they make up a definition of a virtual platform.  

This has a number of advantages:

  • Device drivers (for paravirtualization) for the kernel (Linux in particular), don't need to be maintained uniquely for different hypervisors


  • Simplify moving hypervisor guests between different hypervisor environments


  • Support guest and host interoperability:


  • Programming against a specification simplifies developing local/custom variants of systems (e.g. Infotainment systems tailored for a certain geographical market)


  • Simplify moving legacy systems to a virtual environment


  • Some potential for shared implementation across guest operating systems


  • Some potential for shared implementation across hypervisors with different license models


  • Industry shared requirements and potential for shared test-suites


  • A common vocabulary and understanding to reduce complexity of virtualization. 


  • Similarly, guest VMs can be engineered to match the specification.


In comparison, the OCI initiative for containers serves a similar purpose.  There are many compatible container runtimes, and the opportunity enabled by a virtual platform definition is to have standardized "hypervisor runtime environments" that allow a standards-compliant virtual (guest) machine to run with less integration effort.

The specification shall enable all of the above while still allowing different implementations to differentiate, add additional features, optimize the implementation and put focus on different topics.


2. Architecture





...


September 17, 2019

Participants:

  • Adam
  • Kai
  • Gunnar


Minutes

Short meeting because of few attendees. We expect some are preparing for next week's F2F ;-)

Sync up on whitepaper work and review with Kai after absence. 

Discussing F2F logistics and whitepaper planning.


September 10, 2019

Participants:

  • Adam
  • Bernhard
  • Gunnar
  • Philippe
  • Matti
  • Artem

Apologies:

  • Kai (vacation / out of office)

Minutes:

Matti: I think the draft covers a good outline fairly well

Gunnar: Conclusion chapter

point out what is out of scope

Gunnar: Reasonable point not to include everything

Bernhard: Several ARM features supporting this, we have provided

Currently, includes when to use paravirt, hardware support, isolation generally.

Artem: Useful to include some examples of what different HW vendors include (e.g. for inter-core communication, as discussed last week)

Artem: Power management is also important

Bernhard:  Also S-MMU is such a related feature.

Matti: Travelling next week, 1AM time slot.  Will try to join.

Discussion on F2F agenda.

Action (Gunnar): It is time to send out invitation/information about the F2F to mailing list.

More discussion on whitepaper content and plans for who will write which chapters.


September 03, 2019


Participants:

  • Dmitry
  • Adam
  • Bernhard
  • Gunnar
  • Philippe
  • Matti
  • Artem

Apologies:

  • Kai (vacation)
  • Bruno (conflict)


Minutes

  • Discussed the Whitepaper draft content and the result was added using green text.
  • Outline/content of the inter-core communication was added with Matti's input
  • The document purpose was further discussed and ideas were noted in the beginning of the document


August 27, 2019

Participants:

  • Dmitry
  • Adam
  • Bernhard
  • Gunnar
  • Philippe

Apologies:

  • Kai (vacation)
  • Matti (busy)


  Minutes


Discussing the "outline" (general content and order of content) of the first draft in the page Review of content/outline for the early draft of HV whitepaper
Most notes taken on that page directly...

Action: Dmitry to try to find time to draft some I/O challenges leading to need for IOMMU and eventually need of Virtual IOMMU... (as needed by whitepaper)
... but this week is very busy....

Action: All to look over/adjust F2F agenda

Bernhard:  Still considering what we can contribute to chapter 2 in terms of upcoming hardware features.



August 20, 2019

  Minutes pending


August 13, 2019

Apologies

  • Matti
  • Dmitry

Participants:

  • Bernhard
  • Gunnar
  • Philippe
  • Kai
  • Bruno

Minutes

  • Bernhard/Gunnar:  ARM is still waiting for input on what ARM can bring to the F2F.
  • Bernhard: On the whitepaper side, chapter 2 seems like something we can help with.  But it is very hardware-dependent.  There should be some info on how hardware features evolve.  ARM can contribute to this.
  • Gunnar/Bernhard: Open question for team/writers to answer:  HW needs could be reflected differently in the architecture - more generalized, or specific.  Likely there might be a few specific examples from ARM.  Sometimes examples are even more specific than what is described in generic-ARM (in other words a specific SoC might have added a unique feature).
  • Bernhard: Example SCMI presented before, we can extend on that and/or TrustZone
  • Gunnar: Start Attendance list.  I will add it on F2F agenda. 
  • Bruno: I cannot join the F2F in September - travelling for 3 months.
  • Gunnar: Please consider if another engineer near you can support networking chapter finalization in Virtual Platform specification
  • AI (all):  Review F2F agenda and give feedback on the content and allotted time for each topic.


August 6, 2019

Participants

  • Dmitry
  • Bernhard
  • Kai
  • Adam
  • Philippe

Apologies

  • Matti
  • Gunnar

Agenda:

  • whitepaper projectization
  • Discuss F2F agenda

Minutes

  • Whitepaper projectization in Jira
    • Philippe: points to the proposed document outline available at the bottom of [HV] Next Generation Multi-OS System Design Whitepaper

    • sections are assigned as follows
      • section 1: introduction - Kai has already produced a 2-page intro and will add it to the wiki
      • section 2: What does the HW need to fulfill to support unmodified execution of complete SW stacks,
        • what are the problems? what is in the pipe? might reuse elements of Adam's talk at the AMM; assigned to Bernhard
      • section 3: isolation & partitioning - Bernhard can provide high-level definitions and an "academic view", which will have to be cross-checked with reality later; Adam & Kai can provide a text on isolation & timing
      • section 4:  Inter-core communication - assigned to Dmitry and Matti
      • section 5:  summary - assigned to Kai then Gunnar
    • upcoming F2F is scheduled on 23-24 September; it would be good to have a version 1 of the paper at the beginning of September, i.e. in 4-5 weeks' time
    • Philippe: will create Jira tickets to track the work, with a 4-5 week sprint timeline
  • Virtual platform: projectization in Jira - assignment of sections to contributors, doc ownership
    • Matti has populated Jira with a set of tickets corresponding to the sections of the specs
    • discussion on the assignment of those tickets shifted to next week's call
  • OEM feedback on HV work
    • Philippe shared Renault software expert feedback on HV work in GENIVI
      • background information: slidedeck on HV work prepared by Gunnar
      • Multi-OS System design whitepaper: "I guess everybody has already produced similar reports internally, but the landscape moves fast, and generating something @Genivi is certainly a good idea. I remain convinced that there is no serious use case for hypervisors proper until AD (Autonomous Driving), but study of alternative paths for multicore Socs, especially asymmetric ones, is certainly useful."
      • Automotive virtual platform standardization "This one looks much more ambitious, expensive to achieve and also very useful if it works, assuming convincing use cases show up"
      • the interesting point is that we should be able to ask Renault to review the docs in the future
  • F2F agenda preparation
    • Kai: IMHO 60-70% time should be assigned to virtio => to be checked with Matti (Philippe will send a note to Matti)
      • 1 hour each in each section
      • how ambitious can the project be ?
    • 30-40 % time should be devoted to the white paper, go chapter by chapter
    • Philippe: will create a draft agenda in the wiki
    • Philippe: logistics details need to be added to the project front page
    • /TODO/ Dmitry add logistics details to the wiki (meeting location, directions to OpenSynergy, hotel recommendation and deal if any)
  • Next week agenda
    • review and assignment of backlog and sprint tickets
    • F2F agenda finalization


...

July 30, 2019

Apologies

Participants

  • Matti
  • Bernhard
  • Bruno
  • Dmitry
  • Gunnar
  • Philippe
  • Kai (last 5 minutes)

Minutes

  • JIRA project work
  • Matti convert the virtual platform "table" to tickets
  • Kai still interested to write but ran out of time.  New attempt for next week.
  • Break down the work by writing some JIRA tickets (Kai).
  • Next week: also Discuss F2F agenda


...

July 23, 2019

Apologies

  • Matti (vacation)

Participants

  • Dmitry
  • Bernhard
  • Vasco
  • Philippe
  • Gunnar

Minutes

  • Link to whitepaper planning (scroll down to see proposed document outline) [HV] Next Generation Multi-OS System Design Whitepaper
  • Vasco offered to start writing a chapter, or a much more detailed outline of a chapter, i.e. what the actual content is.
  • Proposal to switch meeting time came up again


...

July 16th, 2019

Participants

  • Dmitry
  • Adam
  • Gunnar
  • (Kai Lampka - see end of minutes)

Apologies

  • (Kai Lampka)
  • Bernhard

Minutes

Video virtualization standards update (Dmitry):

V4L2 is stable, but Android is the main challenge because of a lot of changes and new development.  Changing from OMX (the older API) to Codec2 - a new HAL/API operating on top of OMX (backward compatible), V4L2 and others.
Some video accelerator technology is coming in from ChromeOS / Chromium.

Dmitry:  We haven't approached VIRTIO with any proposals yet.  Need a bit more stability first.

Gunnar: In your opinion how is Android regarded in the upstream VIRTIO work?   Or are most of the participants primarily focused on Linux / QEMU ?
... seems primarily Linux / QEMU is driving still.

Dmitry/Gunnar: CrosVM is a virtual machine monitor in ChromeOS.  It uses KVM as the control interface to the kernel.

Discussion about project organization.  Should we get our ongoing work up on JIRA tickets for more clarity?
Gunnar: We started this a bit more ad-hoc and driving forward with tables and minutes.  [...but we have not so good delivery accuracy...]
Most other GENIVI collaboration projects use a lightweight SCRUM with some defined sprint content, depending on people availability.
JIRA is successful in several other projects (Android SIG, Connected Services, GPRO/Franca-ARA:COM and others) (NOTE: some links might require you to log into JIRA)

Discussing F2F agenda
Gunnar: Originally, we said to focus on the virtual platform specification.
Dmitry: Agreed, primarily specification.  But what do the others think?

Gunnar: Note that participation from Kai will not be every week - Adam is supporting on similar topics.

Adam: My main area of knowledge around VIRTIO is also block device and storage, like discussed with Kai before.
Gunnar: Please consider if you can help out to write the Block device chapter into the specification.

AI (Gunnar):  Contact Artem again.


Meeting adjourned 10:40.
...then Kai joined. (smile)

[We had a quick sync up between Kai & Gunnar]

Kai: I have a conflicting meeting every week but sometimes I can join a bit late like today.
Gunnar: OK, let's try to work with that somehow.

... mentioned JIRA idea as above
Kai: ... OK, maybe useful for whitepaper delivery.

AI (Gunnar): Look at setting up JIRA project for HVWS project


F2F agenda
Gunnar: What do you think should be the main agenda for the F2F?  Virtual Platform Specification?
Kai: Yes, primary focus on specification, some whitepaper sync up (assuming draft is done before September)

Kai: I think everyone can write a chapter in the whitepaper, we could get a draft ready by September.  Let's agree on the
Gunnar: Currently the primary outline is based on your combined proposal.   I only left the other chapters at the end as a kind of history, possible to select some ideas from.
It also includes our brainstorm at the top, so the page is a bit messy now.  Let's move the agreed outline to a separate page/document for clarity.

Kai: I will work on this.

Kai: Next week I expect to be able to join also, perhaps a little earlier.

Future absences:
Kai:
Last week August, first of September


...


July 9, 2019

Participants

  • Bernhard
  • Dmitry
  • Franz
  • Gunnar

Apologies

  • Kai Lampka
  • Matti (vacation)
  • Philippe

Minutes

  • Recap of previous minutes = Informing Bernhard and Franz about previous meetings.
  • Informing about last week's discussion on Whitepaper scope and outline.  Useful, but not perfect since we could not get Kai to repeat his thoughts directly; instead we only discussed what the rest of us remembered.  The participants looked forward to hearing from Kai more directly at a later time.
  • Dmitry:  I was looking forward to continue/complete that discussion...
  • Further discussion about whitepaper outline and content.
  • Gunnar:  The agenda for F2F needs to be set.  As far as I know we decided a F2F was needed primarily to finalize the virtual platform specification.
  • Bernhard: Interested in the hardware requirements part of course (from whitepaper & virtual platform discussion)
  • Gunnar: When can we get an update on audio interface standardization? (from OpenSynergy)
  • Dmitry: Final approval is stuck on a few details.  Should be ready soon.
  • Planning absences and summer-time meeting schedule.
  • Gunnar: We might need to check availability and see if some slot should be cancelled.


Known absences as of today:

Franz - last week of August and also first week.
Bernhard -  much later (November)
Dmitry - no plans so far
Gunnar - TBC. Likely last week July and beginning/mid of August


July 2, 2019

Apologies

  • Bernhard (conference)


F2F workshop decided : Sept 24-25.

Kai presenting starting point for Outline

  • Some discussions about MCAL.  Vendors deliver MCAL with a quality statement and applicability for specific safety requirements.
  • On Linux the drivers are not given with such quality statements.
  • Costs of qualifying final systems not always considered.

Matti:  Some parts of the system such as clock control need to be isolated from...  E.g. clock controller for Ethernet network needs to be under the control of an equally safe part of the system.  Use a safety island or a VM responsible for this.   Some tension between hardware vendors providing such features and the proponents of hypervisors.

Adam:  You can also mix this stuff.  Lay it out as you need it.

Gunnar: This is what I mean about design guidance.   Present choices, present consequences of choosing, and then

Lots of discussion on scope and possibility to include the Design Guidance (mostly between Gunnar and Kai)

Kai wrote some additional points down during discussion and will send them over.  Most likely these will be integrated into the whitepaper guideline.




June 25th, 2019

Participants

  • Philippe Robin
  • Dmitry Morozov
  • Gunnar Andersson
  • Deventra T
  • Kai

Apologies

  • Matti (vacation)
  • Vasco (travel)
  • Bernhard (conference, also next week)

Minutes

Dmitry: We follow VIRTIO block device standard.

Adam: Other than the trim/discard stuff we have noted as missing, we have no issue with the VIRTIO block device standard.  It is fairly small after all.  There's a patch for Linux; it should be merged now.  Eventually it should show up in VIRTIO.

Adam: The trim/discard has been added to VIRTIO 1.1

Gunnar: Let's complete the spec - write a few requirements into the block device chapter.

Kai: We can do that

Dmitry: We don't really use it but VIRTIO should cover this.  It is quite mature.

Gunnar: Let's review the crypto support chapter in VIRTIO

Dmitry: 2D is fine.

Dmitry: 3D is still changing.  Android will require Vulkan.  New versions should be based on Vulkan.  Someone needs to introduce Vulkan support in the EGL renderer, or everything moves to Vulkan.

Gunnar: Not moving fast enough to get to a stable point yet, then?

Gunnar: Vulkan support on the driver side?

Dmitry: Android Emulator should need it.  Google might be working on it?

Vulkan support exists on bare-metal hardware (GPU vendors provide it) but not yet for virtualization.  This is a kind of showstopper for Android in virtualization going forward.

Gunnar: Is there a minimal set of requirements to write down today?

Dmitry: 3D part is still a big question.  It's hard to decide on the requirement set.

Input

Dmitry: We have some implementation of this spec.

Adam: vsock can be used for VM to VM communication.

Adam: The user level APIs are normally standard socket APIs so that is convenient

Gunnar: But can you assume all features work?  Let's say I select/poll on the vsock together with other file descriptors - will it work?
Adam: Implemented by the kernel...

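Since a vsock socket is an ordinary file descriptor on Linux, select()/poll() can mix it freely with other descriptors.  A stand-in demonstration, using an AF_UNIX socketpair in place of a real AF_VSOCK socket so it runs without vsock kernel support:

```python
import select, socket

# Stand-in demonstration: on Linux a vsock socket is an ordinary file
# descriptor, so it can be mixed with any other fds in a single
# select()/poll() call.  An AF_UNIX socketpair substitutes here for a
# real AF_VSOCK socket so the example runs anywhere.
a, b = socket.socketpair()
b.sendall(b"ping")

# Wait (up to 1 s) until 'a' is readable, exactly as with any socket.
readable, _, _ = select.select([a], [], [], 1.0)
assert a in readable

data = a.recv(4)
print(data)        # → b'ping'

a.close(); b.close()
```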
...Alternative: VIRTIO console/character device.  But that's a different interface.

Dmitry: One of the stty settings needs to change to RAW data transfer, and then it's available.
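The raw-mode change mentioned here is the `stty raw` equivalent: disable canonical mode, echo and newline translation so bytes pass through the console device unmodified.  A sketch using Python's termios/tty modules, demonstrated on a pty pair; a real virtio console would typically appear as `/dev/hvc0` in a Linux guest, but that device name is an assumption about the target system:

```python
import os, pty, termios, tty

# Equivalent of "stty raw" on a console device: disable line editing, echo
# and CR/NL translation so arbitrary bytes pass through unmodified.
# Demonstrated on a pty pair so the example runs without a real console.
master, slave = pty.openpty()

before = termios.tcgetattr(slave)
tty.setraw(slave)             # same effect as "stty raw" on the device
after = termios.tcgetattr(slave)

assert before != after        # mode flags actually changed
# Raw mode cleared, among other flags, canonical mode and echo:
assert not (after[3] & termios.ICANON)
assert not (after[3] & termios.ECHO)

os.close(master); os.close(slave)
```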

Gunnar:

9pfs

IOMMU -

Dmitry: Lots of updates still, upstreaming.  When that is done I will look at the final specification and update chapter, should be about 1 month or so.

Gunnar: Can you provide links to upstream / blog etc.?
Dmitry: Already in Wiki see IOMMU Summary


Dmitry: Matti will be in office next week, then away for 3 weeks.  Planning to upstream more patches.

Adam:

Sensors

Dmitry:  From the mailing list: OASIS doesn't want to accept any sensors.  In the end it's just a byte transfer.

...

June 18th, 2019

Participants

  • Philippe Robin
  • Adam Lackorzynski
  • Dmitry Morozov
  • Matti Möll
  • Stephen Lawrence
  • Vasco Fachin
  • Bernhard Rill
  • Gunnar Andersson

Apologies

  • Kai Lampka

Minutes

Whitepaper scope, followup

  • Whitepaper focus:  Explain what can be done with current hardware, vs. wish list for the future
  • Adam will discuss with Kai when he is back


F2F

  • 20-22 Sept. is All Systems Go Conference in Berlin,  Fri-Sun
  • HV Conference – see Doodle.

Doodle for F2F:

https://dudle.inf.tu-dresden.de/Genivi_HVWS_F2F_Workshop_September_2019/

Number of days?  2 or 3, let's create another poll

https://dudle.inf.tu-dresden.de/v5fxnz31/


AI(all): Fill in both of the 2 Doodles


MCU Hypervisors

Bernhard showing 2 slides (taken from a 162 slide presentation - there is more info of course)

Cores: the R7 is well known; the R52 is brand new.

Can control who has access to physical memory
RTOS1 & 2 in the picture access physical memory directly.  Note: still NO address translation.

Multiple RTOS, multiple classic AUTOSAR stacks, for example.

EL-2 MPU is the new one.

You could integrate a rich OS without letting it know it runs on the HV.
Applicable for the Cortex-A profile only.

A hypervisor could be used, but trapping accesses would be costly.
Better to have operating systems that are fully aware/designed for this.

Separation Kernel might be a more apt name for this simple partitioning (academic discussion)

Changing the timing of the RTE on a Classic AUTOSAR stack may need recertification (for critical functions).
With this add-on, a safety-critical (ASIL B) function can be isolated and guaranteed its resources – by running more than one complete AUTOSAR stack, in partitions.

Another case: software updatability. Some parts are updatable through SOTA while others must not be affected.

Note: double or triple memory requirements (because of multiple AUTOSAR stacks), but it might be worth it.

Another use case:  Heterogeneous designs.  Safety Islands (often implemented in R7).

Known/publicly available info about licensees:   NXP, ST and DENSO. 
(i.e. R52 silicon is available now)


Adam:  On the term "hypervisor": some call it virtualization even when there are only very simple hardware separation features built in.
Note that other MCU vendors with even less capability built into hardware are using the term "virtualization support".

From Matti Möll to Everyone: 01:48 AM
http://www.projekt-aramis.de/
From Bernhard Rill to Everyone: 01:49 AM
https://www.aramis2.org/

Matti: Related info from OpenSynergy:

https://www.opensynergy.com/wp-content/uploads/2018/06/Hypervisor-for-latest-NXP-microcontroller.pdf

Stephen:  Renesas-related info: TrustZone security extensions; a similar concept was applied in the R7.  See the Lifecycle documentation for the SoC.


...


June 11th, 2019

Participants

  • Gunnar
  • Adam
  • Alex
  • Artem
  • Bernhard
  • Dmitry
  • Stephen
  • Vasco
  • Phillippe
  • Franz

Apologies

  • Kai Lampka

Whitepaper Discussion

  • Recap from the previous week
    • See outline here: HV
    • Multi OS system design on heterogeneous multi-cores as a general topic
    • System wide QoS
    • Protocols
    • The need for hardware device sharing
    • Hardware wish list for virtualization friendly SoCs
      • BSP drivers in general 
      • firmware wishes
      • Hardware wishlists could become outdated at some point
    • We should avoid scope creep
    • Finding/Using the right terms
    • Target audiences...
  • Hardware requirements
    • System design is unique and can be used in so many different ways
    • Hard to find a wish list of features
    • Make sure that mandatory and optional hardware features are clearly distinguished
    • How hard can the requirements be, what is the guiding function behind a certain hardware feature
    • Hardware requirement scope?
      • Trustzone/firmware interfaces
      • Architecture coverage (arm, x86...)
      • hardware virtualization support
    • There is a huge need for a virtualization hardware wish list; maybe it also makes sense to start a separate wish list already?
    • If hardware doesn't behave nicely, software needs to do more work
    • Expand the target audience to IP/HW vendors
    • Stephen: We might have two different topics here
    • Gunnar: The wish list is probably going to be spread out in the whole document
    • The whitepaper should convey the thinking behind the wish list 
    • AI: Create a wish list, not a content provider
    • Bernhard can provide a description of heterogeneous system design
  • Gunnar to check where the Doodle for the working session is


...


June 4, 2019

Minutes by Kai Lampka


a)   White paper: discussion on potential sections. Everybody is asked to look again at https://www.automotivelinux.org/wp-content/uploads/sites/4/2018/06/GoogleDrive_The-AGL-software-defined-connected-car-architecture.pdf

For inspiration.

(i)                  Why are we doing this: motivation, also addressing heterogeneous multi-cores

(ii)                Use-case of SoC partitioning into safety- and security islands.

(iii)               Clarification of terminologies: para-virtualization, TCB, microkernel-based approaches, monolithic HV, type-1 and type-2, and embedded HV

    1. what is needed in HW to achieve this. Detail on a “wish-list” for HW-vendors to support SoC virtualization
    2. Differentiation from containers and their drawbacks; a critical view of container do's and don'ts (the same holds for HVs).

(iv)               What HW can do for isolation resp. platform partitioning

    1. Spatial isolation
    2. Timing isolation: contention on implicitly shared infrastructure and on explicitly shared devices.
    3. Coming to future HW-based solutions, e.g., MPAM

Please consult https://static.docs.arm.com/ddi0598/a/DDI0598_MPAM_supp_armv8a.pdf
for inspiration.

(v)                VirtIO as a means of

    1. interaction of VM to VM, HV to VM and HV-off partitions to HV/VMs
    2. Sharing of devices in the above setup.    Define also different capabilities of devices, vfunctions and “virtualization-ignorant” devices

 

b) Meeting on VirtIO in Berlin (AI: Kai sends Doodle link to Gunnar):

    1. Planned meeting in CW 38 for addressing:
      1. VirtIO spec contribution
      2. White paper as discussed above

c) Status of technical discussion (needs attendance of Artem); we defer this.

 


...


May 28, 2019

Participants

  • Gunnar
  • Phillipe
  • Dmitry
  • Vasco
  • Matti

Agenda

  • Discussion on white paper
    • Recap: Idea to write a white paper about the system design implications of having a hypervisor
      • Could the white paper also better explain what VirtIO is?
      • How do containers play into system design?
      • We need Kai's input on this
    • Containers in the cloud usually use virtualization to enhance the separation
    • Hypervisors and containers and how they interact with system design for ECUs
    • Communication in asymmetric multi-core designs
    • Distributed systems and hypervisors
    • Software defined system architectures
      • Similar to what classic AUTOSAR was meant to achieve
      • Possible future outlook
      • possible goal for all the hypervisor standardization issues
    • How to run with and without hypervisors?
    • We should start writing a simple outline and see where it takes us
  • Feedback from AMM
    • Workshop was very interesting
    • Maybe a little bit rushed through the topics → created a lot of food for thought
  • Possible Workshop for working on the spec & whitepaper
    • A good date would be week 38, just before the "All Systems Go!" conference in Berlin, or week 37
    • Next step, send out a poll for date options (Gunnar)
  • Meeting next week
    • Vasco will be there, Matti hopes, Dmitry is off


May 21, 2019

Participants

...

  • Lots of topics (5 presentations), lots of discussion
  • Little time for Q&A on each - basically every topic needs deeper investigation and discussions
  • See presentations


May 14 - cancelled due to AMM

...