## Introduction

Philippe Robin welcomed everyone and handed over to Wassim

60 attendees

Wassim describes the agenda (slide #2)

The slide deck for this workshop is here

## Project overview & proof-of-concept demo

Project overview presented by Wassim (slides #4 to #6)

  • Two strategies for system-level audio
  • Combining both of them
  • Gathering participants' opinions on which topics are currently relevant

Topics relevant for all strategies and prerequisites for the proof-of-concept, for further discussion today (slides #7 to #9)

Proof-of-concept presented by Piotr (slides #13 to #15)

  • Presentation about the PoC and what has been done
  • Presentation of the demo, followed by the demo running on the emulator; see demo.md
  • Lessons learned

Q&A on Proof-of-concept

  • Q: What about low-latency streams?
    • (not captured)
  • Q: CPU offloading, what do you mean exactly by this? Is it considered?
    • For now the PoC is not considering CPU offloading; we know it's feasible, but it's not yet considered in this PoC
    • CPU offloading is when the DSP can take care of some work instead of the CPU
    • Android has such a feature that allows the DSP to directly decode some streams
    • Android aims to isolate 3rd-party developers from such decisions; the platform informs them that such streams can be offloaded.
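As an illustrative sketch of the offload decision discussed above (a toy model, not the Android or PoC API; `DSP_CODECS` and `route_decode` are hypothetical names):

```python
# Toy model of a CPU-offload decision: if the DSP supports the stream's
# codec, decode on the DSP instead of the CPU. All names are hypothetical.
DSP_CODECS = {"mp3", "aac"}  # codecs this (imaginary) DSP can decode

def route_decode(codec: str, offload_enabled: bool = True) -> str:
    """Return 'dsp' when the stream can be offloaded, else 'cpu'."""
    if offload_enabled and codec.lower() in DSP_CODECS:
        return "dsp"
    return "cpu"

print(route_decode("mp3"))         # offloadable codec -> dsp
print(route_decode("opus"))        # unsupported codec -> cpu
print(route_decode("aac", False))  # offloading disabled -> cpu
```

The point the answer makes is that this decision should stay inside the platform: the app never calls such a function itself, it only learns whether its stream was offloaded.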
  • Q: If audio is played by an Android application but the external mixer, for some reason, does not give it priority due to other sources, how does the app layer get notified about the error?
    • The external mixer can be used for chimes and warnings; usually they are mixed on top (they play along with the music)
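A minimal sketch of "mixing on top": the chime is summed with the music rather than interrupting it, optionally ducking the music, with clipping to the 16-bit sample range. This is an illustration of the concept, not the PoC mixer:

```python
def mix_on_top(music, chime, duck=1.0):
    """Sum a chime over music sample-by-sample, optionally ducking the
    music by a gain factor, and clip the result to the int16 range."""
    out = []
    for i, m in enumerate(music):
        c = chime[i] if i < len(chime) else 0
        s = int(m * duck) + c
        out.append(max(-32768, min(32767, s)))  # clip to int16
    return out

music = [1000, 2000, 3000, 4000]
chime = [500, 500]
print(mix_on_top(music, chime))            # [1500, 2500, 3000, 4000]
print(mix_on_top(music, chime, duck=0.5))  # [1000, 1500, 1500, 2000]
```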
  • Q: Can you please explain the diagram on the right?
    • (not captured)
  • Q: What are the socket-like ports shown in the lower block of slide #13?
    • One of the first findings we made is that the network sockets cannot be used directly by the Audio HAL.
    • So the sockets are deprecated (they are realized by a third service and audio relays)
  • Q: How does this proof-of-concept work in a fail-safe scenario? Can you explain more about the System Proxy part, considering the start-up and shutdown use cases?
    • The fail-safe scenario solution is as follows: during start-up, audio can already be available because ALSA is already available
## Topics Relevant for All Strategies

### Audio Management

GENIVI has an audio manager in the Linux environment. It is already used in multiple production systems. It supports multiple OSes, has early-startup features, etc. It is only a framework that allows vendors to write and link their own logic in the form of a plugin; the plugin can run anywhere a GENIVI AudioManager runs. Therefore we have two audio managers:

  • Android Audio Management
  • GENIVI AudioManager and related App.

GENIVI AudioManager will take over when the Android audio management cannot handle certain properties of external streams.

The GENIVI Android app would just be an assistant able to interact directly with external audio subsystems (and this can be the GENIVI AudioManager).

Q&A on Audio Management

  • Q: Will there be a conflict between having two Audio HALs (Android and GENIVI)?
    • No, the app should be able to decide
  • Q: What use cases are being considered for this? Are there use cases that Android cannot fulfill?
    • Android 11 will have some ways to control global effects, but for now, global effects and equalization, for example, are not available; they can be handled by the GENIVI AudioManager
    • This idea can also be used for pre-development models, where we have a reliable framework that can support us during pre-development until the feature is supported by Android. This can be treated as a transition phase.
  • Q: Concern: the question now is: do we need to make any change outside of Android?
    • The fact that we need to have a framework outside of Android is understood; there are many safety features that need to be handled outside of Android.
    • There is no validated use case yet for making the GENIVI AudioManager a proxy (or app)
    • So only the apps that can "see" the GENIVI app can use it. Is that correct?
  • Q: Concern: if we are making something outside of Android, we need to be really sure we need it
    • The reason why we are starting the PoC is to answer this question: do we need this or not?
    • By the end of the PoC we will have the following:
      • An answer to the question of whether we need such an outside system
      • Proof points for the Google Android team that we need an outside system
      • A learning experience as well as knowledge of how to deal with such cases
      • A learning experience about the numbers behind low latency
      • A more detailed understanding of the use cases
    • The PoC is a kind of placeholder where we can collect all the features that Android cannot support.
    • The PoC is a good point. We can understand the audio system and its shortcomings
  • Q: Low latency question
    • Door-open sounds need low latency; the app needs low-latency sound
    • Yes, there are two ways to look at low latency, but we should first check the numbers and verify whether the GENIVI Audio HAL actually solves the low-latency problem before we can say it's a solution
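"Checking the numbers" can start from standard audio arithmetic (not PoC measurements): each buffer in the path contributes latency proportional to its size divided by the sample rate, so every extra hop (app, mixer, HAL) adds up quickly:

```python
def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    """Latency contributed by one buffer of `frames` audio frames."""
    return frames / sample_rate_hz * 1000.0

# A 256-frame buffer at 48 kHz adds ~5.33 ms per buffer in the path.
print(round(buffer_latency_ms(256, 48000), 2))
# Three such buffers (e.g. app -> mixer -> HAL) already exceed 15 ms.
print(round(3 * buffer_latency_ms(256, 48000), 2))
```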
  • Q: Is there a plan to sync the various audio streams, if we are relying on an external device for audio processing?
    • Yes, that was one of the topics that we wanted to consider during the PoC
  • Q: How is latency info passed to other decoders, e.g. video, in order to sync video and audio? Will this be handled by an external audio mixer?
  • Q: BT telephony: we have a problem here: we need to be part of the audio management, but we need as low a latency as we can get.
    • This can be realized on the DSP, so we cannot include this in the general architecture
    • But maybe the GENIVI AudioManager or the PoC can be a placeholder for this
    • The problem now is with more than one phone: the DSP handles it, but we still need to be connected to the audio management
    • There is also a problem with speech capturing
### Extracting Raw Streams

Extracting raw streams is a prerequisite for the PoC

  • Q: Do we need some sort of a GUI as well, to control the different external raw streams?
    • Maybe for testing
    • And for learning how it works
  • We have the same problem with the GENIVI AudioManager: it's a complicated state machine and maybe we should visualize it.
  • It would be good to have some sanity check for such complex logic
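As a sketch of what a "sanity check for complex logic" could look like, a transition table can be checked mechanically, e.g. for states that can never be reached. The states below are illustrative only, not the actual AudioManager state machine:

```python
from collections import deque

def unreachable_states(transitions, start):
    """Return the states that can never be reached from `start`.
    `transitions` maps state -> iterable of successor states."""
    seen, queue = {start}, deque([start])
    while queue:  # breadth-first walk over the transition graph
        for nxt in transitions.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(transitions) - seen

# Hypothetical routing states; "suspended" has no incoming transition,
# which a sanity check like this would flag.
fsm = {
    "idle": ["connecting"],
    "connecting": ["routed", "idle"],
    "routed": ["idle"],
    "suspended": ["idle"],
}
print(unreachable_states(fsm, "idle"))  # {'suspended'}
```

The same table could also feed a visualization tool, which addresses the earlier point about visualizing the state machine.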
## Next steps
  • We haven't talked about hardware support, so this can be a point of discussion
    • Well, maybe, if there is a particular platform in mind among the participants
    • For now we will go with the platform that is most widely used
  • The audio configuration is a problem: many streams should be treated differently, and it's not clear whether Android supports such logic [recapture this question by Pawel]
    • Such cases can be handled by the global effects
    • But for example mp3/mp4 would be handled outside of Android
  • Slide of the strategies
    • Fail-early approach: try, fail and learn
    • Not all hardware can be taken into consideration
  • Milestones:
    • PoC on test hardware (only for external mixer)
    • Full audio path in real hardware
    • Goal of the demo: partitioning of features
  • What hardware do you have in mind?
    • Raspberry Pi? Not used anymore
    • How about the HiKey960? One participant said they stopped using HiKey boards
    • Renesas or Intel NUC 6 & 7 are also options
    • Renesas R-Car H3 Starter Kit with a Kingfisher board fitted, for AOSP development
    • Qualcomm 8195 and 820 can also be used
  • We are free to take the code and port it to whatever board is available
## Review of the List of Prioritized Topics for the Audio HAL
  • Introduction: this list enumerates the activities that the Audio HAL group will try to tackle
  • For the next milestones, we investigate these topics using the proof-of-concept, see whether the proof-of-concept can help solve them, and determine which strategy is better
  • Point 13: Bluetooth
    • Telephony, because of latency, is handled in the DSP (not even going to the OS, just maybe control messages)
    • How can we consider it in the audio management? This can be done with routing
  • Maybe it would be good to state which topics are hardware-dependent; then we list the constraints for the solutions, and in the end we can have a solution that is hardware-dependent
  • Do we actually need a full-software solution?
    • We need to balance performance/latency against power consumption
    • Higher power requirements and higher CPU load mean fewer processes; there is a functionality trade-off if a codec has a very heavy CPU load; hardware support is needed for safety cases, QoS and functional partitioning
    • Hardware dependency is added and now everyone knows what it means
  • List updated online
    • 1, 2, 3, 4, 7, 8, 9, 11, 13 - High
    • 5, 6, 12, 14, 15, 16 - Medium
    • Rest - Low


