
This page contains meeting minutes. If a meeting is cancelled, that is noted here. Some dates are missing minutes, but ideas are always captured on the main project pages here.

Minute takers according to this page


February 26th, 2019

Participants:

  • Matti (OpenSynergy)
  • Guru (Bosch)
  • Kai (Elektrobit)
  • Matt (ARM)
  • Dmitry (OpenSynergy)
  • Philippe
  • Gunnar
  • Albert (Continental), joined for the last minutes of the meeting

Apologies:

  • Artem (EPAM)
  • Anup (this time slot is always a problem)

Minutes:



VIRTIO for "non-virtual" interfaces


This idea concerns communication between operating systems, especially when they run on different cores in an SoC, with or without a hypervisor.

Kai: For example, a classic AUTOSAR RTOS owning an Ethernet connection, running on a smaller dedicated CPU core. The Ethernet connection can be exposed to the multimedia cores using VIRTIO network queues.
...Could the idea be extended? Block devices? What else?
...It needs an interface that exposes addresses, so that it knows where to fetch the Ethernet frames, i.e. shared memory with physical addresses.

Matti: Are you using OpenAMP for this?
...you could also use the VIRTIO protocol.

Matti: (previous work) Xilinx + Mentor published OpenAMP for this purpose? I think TI showed something like it on Jacinto 6.
...VIRTIO network device exposed to multimedia processor.
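
A minimal sketch of the shared-memory arrangement described above, assuming the split-virtqueue descriptor layout from the VIRTIO specification; the carveout address and ring size are hypothetical values chosen only for illustration:

    /* Split-virtqueue descriptor as defined by the VIRTIO spec, placed in a
     * shared-memory carveout that both the AUTOSAR core and the multimedia
     * cores can map.  SHMEM_BASE and QUEUE_SIZE are made-up example values. */
    #include <stdint.h>

    #define SHMEM_BASE  0x80000000UL   /* hypothetical carveout base address */
    #define QUEUE_SIZE  256            /* hypothetical number of descriptors in the ring */

    struct virtq_desc {
        uint64_t addr;   /* physical address of the buffer, e.g. an Ethernet frame */
        uint32_t len;    /* buffer length in bytes */
        uint16_t flags;  /* VIRTQ_DESC_F_NEXT, VIRTQ_DESC_F_WRITE, ... */
        uint16_t next;   /* index of the chained descriptor, if F_NEXT is set */
    };

    /* Because both sides agree on physical addresses, the RTOS owning the NIC
     * knows where to fetch the frames that the multimedia side publishes here. */
    static volatile struct virtq_desc *const desc_ring =
        (volatile struct virtq_desc *)(uintptr_t)SHMEM_BASE;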

Kai: It ties into the Multi-OS integration focus of the AMM.


Audio in virtualization

Gunnar: I'd like to get to a conclusion on this topic, based on previous discussions but hopefully new info
...Let's start with, Hypervisor exposes an interface to VMs. What does it look like?

Matti: Let's start with how hardware works. Audio is a continuous bit stream. Hardware usually has multiple channels. Different hardware knows different buffer layouts. The HW walks through the buffers in a timed manner, takes out the data, converts it to analog audio and plays it.
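
A minimal sketch of the timed buffer walk described above, assuming a simple ring of fixed-size periods; the period count, period size and callback shape are illustrative, not tied to any particular controller:

    #include <stdint.h>
    #include <string.h>

    #define PERIODS       4      /* illustrative: periods in the DMA ring */
    #define PERIOD_BYTES  4096   /* illustrative: bytes consumed per interrupt */

    static int16_t ring[PERIODS][PERIOD_BYTES / sizeof(int16_t)];
    static unsigned next_fill;   /* period software will refill next */

    /* Called from the "period elapsed" interrupt: the hardware has just
     * consumed one period at the pace of its sample clock and moved on,
     * so software refills it.  Falling behind here is an underrun. */
    void on_period_elapsed(const int16_t *next_pcm)
    {
        memcpy(ring[next_fill], next_pcm, PERIOD_BYTES);
        next_fill = (next_fill + 1) % PERIODS;
    }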

Gunnar: We got pointers before, quite a while ago, from Artem on the implementation support in Xen - including some parts that are in the kernel tree. 
...(But we need to bring this up a level, to standards)

Matti: Xen, as far as I understood, sets up buffers and allows you to access PCM streams. Real hardware typically works with plain PCM format. Virtual hardware usually accepts PCM too.

The most common model in virtualization:
Emulating an Intel sound card (HD97 audio controller). Specify PCM, bit depth, length, sample rate. Then dump the audio data in there.

...It's similar for USB audio. You register a couple of buffers. You send data over. (To summarize: metadata + content, much like any other device, to be honest.) Not that complicated. The only tricky thing is that it is timing sensitive.

Gunnar: Yes, let's talk about the real-time demands, and virtual machines?

Matti: Hardware signals overflow or underflow. Sends interrupts, so that software starts filling buffers.
...But there is often a lot of slack in audio. Fast microcontrollers can nowadays keep up with what the human ear allows for.

Gunnar/Matti: ... Human ear allows for up to several tens of milliseconds, at least.

(Brief sidebar on audio/video sync, a.k.a. lip-sync)

Matti:  On things like latency, note for example Android compatibility document CDD, has strict audio (timing) requirements. 
...(The latencies) are hard to measure and evaluate.

Gunnar: And this would apply also to "virtual hardware"?

Matti: Yes, if you're running Android as a guest, the requirements are there - they are more about pipeline latency all the way to the end user. They are there to guarantee everything,
...you may have an external amplifier on a network like MOST.  The whole chain must fulfil the requirements.
... Once all the network delays are considered, the remaining time for virtualization delays is very small.

Gunnar: (OK... standards)

Matti: So, we built a PoC for VIRTIO. We want to send this upstream. Talked to the "guardians" of the specification.  We are still finalizing details.  Expect to send out first RFCs (to VIRTIO mailing list) in the coming months or so.
...Prepared this with Michael Z. and Gert H. from RedHat.  Relatively positive.

Matti: The PoC has good enough timing for video calls.. We have not measured exact delay.

It is a protocol specification for VIRTIO Audio.
...(From the VIRTIO mailing list) Note that the Google emulator guys are also looking into audio virtualization. Used for the Android emulator (development environment). They want to add support to mainline QEMU, to reduce their own maintenance.

(more details, what's in it?)

Matti: Buffer exchange. Define which audio streams can be supported? Discoverable: number of channels, direction, layout, etc. Some hardware prefers certain buffer layouts, certain sample rates, etc. The application should send the right formats.
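
A minimal sketch of the kind of discoverable per-stream record outlined above; since the RFC had not yet been posted, the field names and encodings below are hypothetical, not the actual protocol:

    #include <stdint.h>

    enum stream_direction { STREAM_PLAYBACK = 0, STREAM_CAPTURE = 1 };

    /* Hypothetical capability record a guest could read per stream during
     * discovery, then negotiate a format within the advertised limits
     * before queuing PCM buffers. */
    struct virt_audio_stream_caps {
        uint8_t  direction;      /* playback or capture */
        uint8_t  channels_min;   /* e.g. 1 (mono) */
        uint8_t  channels_max;   /* e.g. 8 (7.1 layout) */
        uint32_t formats;        /* bitmask of supported sample formats (S16_LE, ...) */
        uint32_t rates;          /* bitmask of supported sample rates (44.1 kHz, 48 kHz, ...) */
    };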

Gunnar: ... If the hardware cannot process a certain format, it must get what it needs. It is better to do this conversion close to the application then? The hypervisor layer should not do processing.

Matti: Upsampling is easy - so that could be done.  Sample rate conversion with good quality takes more. 
...That is often offloaded to DSP or even more specialized hardware.
... So (that hardware) could be used by a Hypervisor to implement some such features.
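
A minimal sketch of the "easy" case mentioned above: naive 1:2 upsampling of mono 16-bit PCM by linear interpolation. Good-quality arbitrary-rate conversion needs proper filtering, which is what tends to be offloaded to a DSP:

    #include <stddef.h>
    #include <stdint.h>

    /* out must hold 2 * n samples; returns the number of output samples. */
    size_t upsample_2x(const int16_t *in, size_t n, int16_t *out)
    {
        for (size_t i = 0; i < n; i++) {
            int16_t next = (i + 1 < n) ? in[i + 1] : in[i];
            out[2 * i]     = in[i];
            out[2 * i + 1] = (int16_t)(((int32_t)in[i] + next) / 2);  /* midpoint */
        }
        return 2 * n;
    }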

Gunnar: So we should leave it open how advanced the hypervisor's processing of audio is. This could be a unique feature per implementation?
... The (virtual) device can report its capabilities and it will either be rudimentary or advanced; the guest has to adapt...

Matti: (I'd say so). 
...Better hardware support means fewer interrupts and less waking up of the CPU. So hardware could support a deep pipeline, e.g. take MP3 input instead of PCM. But it is complicated and doesn't even save that much battery any longer, so it has mostly been abandoned. It also doesn't work well with content protection (DRM).

Matti/Gunnar: Summarizing: DRM/security is too complex in the hardware pipeline, and it gets out-of-date.

Matt S (ARM): ... let's talk a bit more about DRM. How is it covered?

Matti: Yeah, I think those questions are out of scope. ...It's a pure buffer (PCM)
... (and the other details) Precedence, priority, loudness, mixing...

Matti: (Note that) Mixing doesn't fit into the device protocol.

Matt S: Any focus on functional safety?

Matti: Yes, but ....

Gunnar: But DRM needs to be in scope, right? At least to protect the unencrypted buffers?

Matti: Audio DRM systems seem to go (protect) until they reach a PCM buffer, because on most systems it is possible to record this final stream in some way anyway.

Gunnar: Guess most protection today focuses on video content.

Matt S:  I'd think that Premium Dolby sound is still considered important (to protect)
... I'd like to reach out to Dolby to get their input on the whole discussion.

Matti: That would surely be very appreciated input.

Matt: (for such protection) Key exchange - does it need to be part of the protocol?
...If we have a means to talk to an endpoint - we need to make sure the keys are in place. Dolby will probably have a special codec, probably an encrypted stream. When does key exchange happen?

Gunnar: (Agree, at least the occurrence of this exchange should be in the protocol, or it should be explained when it happens.)

Gunnar: The method itself, I guess, uses standard methods, Diffie-Hellman, etc.
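
A toy sketch of the Diffie-Hellman exchange referred to above, with deliberately tiny, insecure parameters purely to show the shape of the handshake that the protocol would have to carry or reference:

    #include <stdint.h>
    #include <stdio.h>

    /* modular exponentiation: (b^e) mod m, square-and-multiply */
    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
    {
        uint64_t r = 1;
        b %= m;
        while (e) {
            if (e & 1) r = (r * b) % m;
            b = (b * b) % m;
            e >>= 1;
        }
        return r;
    }

    int main(void)
    {
        const uint64_t p = 4294967291u, g = 5;   /* toy prime and generator */
        uint64_t a = 123456789, b = 987654321;   /* each side's private value */
        uint64_t A = powmod(g, a, p), B = powmod(g, b, p);  /* exchanged in the clear */
        /* both sides derive the same shared secret without ever sending it */
        printf("%llu %llu\n",
               (unsigned long long)powmod(B, a, p),
               (unsigned long long)powmod(A, b, p));
        return 0;
    }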

...

Gunnar: OK, how do we define how we share or pass through DSPs and custom hardware (like codec support and sample rate conversion)?

Matti: It is also very hardware specific, what can be done. Some (hardware) allows pass-through of these decoders; some make it very difficult to do so. Some discourage it because they don't have an IOMMU.
...DSP is more likely to be used for post-processing (than for MP3 decoding). It's a power saving feature and not a performance thing.

Gunnar: OK. We discussed in an earlier meeting whether (at least) the method of configuration for pass-through devices can be standardized (across HVs).
... Although it's clear and open in Xen (standard Linux format device trees, for example).

Matti: I agree/confirm that how that device tree is created is proprietary. Until it reaches the guest, things are highly design/implementation specific.
...I think most OSes use a device-tree approach, e.g. some RTOSes are using device trees too.

Gunnar: But other OSes have different formats (of DT)?

Matti: Yes. But I think basic structures are fairly similar at least, but I'm not too familiar with other OSes.

Gunnar: So a type of device tree is the specification for pass-through devices?  And the creation of it is proprietary. 
(consensus)
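
A minimal sketch of how a guest could read such a pass-through node from the device tree handed over by the hypervisor, using libfdt; the compatible string, node layout and single address/size cell assumption are hypothetical examples:

    #include <libfdt.h>
    #include <stdio.h>

    void find_passthrough_dsp(const void *fdt)
    {
        /* look up a node by its (hypothetical) compatible string */
        int node = fdt_node_offset_by_compatible(fdt, -1, "vendor,audio-dsp");
        if (node < 0) {
            printf("no pass-through DSP node in this guest's device tree\n");
            return;
        }

        int len = 0;
        const fdt32_t *reg = fdt_getprop(fdt, node, "reg", &len);
        if (reg && len >= 8) {
            /* first (address, size) pair of the MMIO window, assuming
             * #address-cells = #size-cells = 1 */
            printf("DSP MMIO at 0x%x, size 0x%x\n",
                   fdt32_to_cpu(reg[0]), fdt32_to_cpu(reg[1]));
        }
    }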

Wrap-up.


...


February 19th, 2019

Participants:

  • Bernhard Rill
  • Gunnar Andersson
  • Dimitri Morozov
  • Vasco Fachin
  • Kai Lampka
  • Sang-Bum Suh
  • Artem Mygaiev

...

AI (Artem): Provide references to GlobalPlatform

Planning SBSA timing

12 March: CET 11-12 is OK?
Sang-Bum: Yes, 7PM for me, it is OK
Artem: I might be travelling on 12th... I will know in a few days.
Gunnar: I think this time was generally OK with Anup. I will check.

...

Gunnar: How is it done in OpenSynergy, if you want to share that detail...

Dmitry: I think some XML-based configuration files, and a special compiler to capture this input. I agree this is implementation specific.

...