

## Introduction

Philippe Robin welcomed everyone and handed over to Wassim.

60 attendees

Wassim describes the agenda (slide #2)

The slide deck for this workshop is available here.


## Project overview & proof-of-concept demo


Project overview presented by Wassim (slides #4 to #6)

The Google reference design only covers the interface to an external system but does not define it. By creating a proof of concept, our goal is to demonstrate the feasibility of splitting an audio system into parts internal and external to Android, and to learn which limitations the Android interface may have and how it could be extended in the future to ease integration with external automotive systems.

Topics relevant for all strategies (internal, external, mixed) and pre-requisites for the proof-of-concept are detailed in slides #7 to #9

We would like to gather participants' opinions on which topics are relevant for further discussion today.


Proof-of-concept presented by Piotr (slides #13 to #15)

  • Presentation about the PoC and what has been done
  • Presentation of the demo, followed by the demo running on the emulator (see demo.md)
  • Lessons learned (see read.md in the GitHub demo repo; Piotr Krawczyk, can you add the link to the demo repo on GitHub, please?)

Q&A on Proof-of-concept

  • Q: What about low-latency streams? Are we confident enough that low-latency streams can be handled in the external system?
    • In the proof-of-concept overview, on the right-hand diagram (slide #11), GENIVI audio management can provide the APIs to serve low-latency audio
    • So there are 3 possible approaches for low-latency audio: use the external audio system, use the GENIVI (custom) audio management, or use AA audio management
    • Yes, these 3 options exist for third-party developers; however, there is a need to sync audio and calculate audio latencies, but this can be done
  • Q: What exactly do you mean by CPU offloading? Is it considered?
    • CPU offloading is when the DSP can take care of some action instead of the CPU
    • Some audio streams can be decoded using the DSP; AA has a feature to feed the DSP with encoded data to play
    • Android would like to prevent 3rd-party developers from taking such decisions themselves, and rather inform them that such streams can be offloaded
    • For now the proof-of-concept is not considering CPU offloading; we know it is feasible, but it is not yet covered by this proof-of-concept
  • Q: If audio is played by an Android application but the external mixer, for some reason, does not give it priority due to other sources, how does the app layer get notified about the error?
    • Ideally the external mixer would be used for things that must go smoothly in the audio system, like chimes and warnings; usually warnings are overlays that are mixed "on top", meaning they would not stop the entertainment music completely but would be played along with it; in that case the application would be notified that it is an error
    • I am not sure there is a case where the external system completely overrides the source; if this happens, I admit there must be a notification mechanism toward the app layer
    • There are some internal mechanisms that cover this; this feedback mechanism is to be checked in the proof-of-concept
  • Q: Can you please explain the diagram on the right (slide #13)?
    • The PoC is just starting; this is not a finalized design yet, it is an iterative process
    • Our objective is not to modify Android (the audio management box in the middle of the diagram), so as to follow Treble and use it as it is, with the logic it provides
    • System-level audio management is the name we gave to any system that provides additional logic Android might not be aware of, like safety signals or other signals
    • The system-level audio management proxy (on the top right of slide #13) is an entity that supports any logic not available in Android, or any low-latency audio not supported well enough by Android; this entity will complement the Android Audio HAL with an additional HAL that the application could use
    • the way Android can communicate with the system level audio management is explained by the following diagram
      • Piotr Krawczyk can you add the diagram you showed at this step in the web conference to the workshop slide deck please ?
  • Q: What are the socket-like ports shown in the lower block of slide #13?
    • One of the first findings from the proof-of-concept is that network sockets cannot be used directly by the Audio HAL (so the proof-of-concept played its role of experimenting with a solution that turned out unsuccessful); the network sockets have therefore been deprecated in the proof-of-concept, and we currently use a different connection mechanism
    • The sockets are now realized by the audio relay service, which gets data from a local service and forwards it to external TCP sockets (see slide #15); the code for this audio relay is available but quite trivial
  • Q: How does this proof-of-concept work in a fail-safe scenario? Can you explain more about the system proxy part, considering the start-up and shutdown use cases?
    • This question cannot be answered in one or two sentences; having an external proxy actually enables us to have early audio before Android is available; we will look into this with our proof-of-concept, but we have no detailed answer yet
    • Actually, in the proof-of-concept the signal (PCM data) is redirected to 2 devices, i.e. to the audio relay that later sends the audio data to the PCM writer (slide #15), but also to the original audio device connected to the emulator; during start-up, when the network is not available, this gives us the possibility to play audio signals because ALSA is already there; this is how we might implement a fail-safe scenario
      • It is also worth mentioning that in a production system there will be 2 types of HALs: one available at start-up with minimum functionality, which will be replaced by a full-fledged HAL when start-up is completed
    • we need to validate this approach with the proof-of-concept
## Next Step for the Proof-of-Concept

### Audio Manager Integration

GENIVI has an audio manager framework used in Linux environments. It is already used in multiple production products. It handles both the operating-system audio and the system-level audio, and it fulfills the reliability and safety requirements of automotive systems, such as early start-up features. It is only a framework: it allows each vendor to write and link their own logic in the form of a plugin and to differentiate with respect to use-case support. The plugin can run anywhere a GENIVI Audio Manager framework is present.
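The plugin idea can be illustrated with a small sketch. This is not the actual GENIVI AudioManager API (which is a C++ framework with its own plugin interfaces); all names here are hypothetical, just to show the pattern of vendor-specific logic linked into a generic framework:

```python
class RoutingPlugin:
    """Base class a vendor implements to contribute its own routing logic."""
    def connect(self, source, sink):
        raise NotImplementedError  # return a route, or None to decline


class AudioManagerFramework:
    """Generic framework: all use-case logic lives in vendor plugins."""
    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)

    def request_connection(self, source, sink):
        # delegate the decision to the first plugin that accepts the request
        for plugin in self._plugins:
            route = plugin.connect(source, sink)
            if route is not None:
                return route
        return None  # no vendor logic handled this use case
```

A vendor differentiates through what its plugin's connect logic decides, while the framework itself stays unchanged across products.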

In our work we are using Android, but we would still like to benefit from both Android and the GENIVI Audio Manager; this is why slide #13 shows the following two audio managers:

  • Android Audio Management
  • GENIVI Audio Manager Framework and related App in the system level audio management proxy.

How to integrate these two audio managers is still a topic of investigation using the proof-of-concept; we would be interested in getting feedback from the participants.

At first glance, the GENIVI Audio Manager would take over when Android audio management cannot handle certain properties of external streams; the GENIVI Android app would just be an assistant able to interact directly with external audio subsystems, and it would use the GENIVI Audio Manager framework.
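As a rough illustration of this split, the hand-over decision might look like the sketch below. The stream attributes and the 20 ms threshold are invented for the example; the real criteria are exactly what the proof-of-concept is meant to establish:

```python
LOW_LATENCY_MS = 20  # assumed threshold, for illustration only

def choose_manager(stream):
    """Route safety-relevant or very low-latency streams to the external
    GENIVI Audio Manager; everything else stays with Android."""
    if stream.get("safety_relevant"):
        return "genivi_audio_manager"
    if stream.get("target_latency_ms", 1000) < LOW_LATENCY_MS:
        return "genivi_audio_manager"
    return "android_audio_management"
```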


Q&A on Audio Management

  • Q: Will there be a conflict between having two Audio HALs (Android and GENIVI) ?
    • No, the app should be able to decide which one to use
  • Q: What use cases are being considered for this GENIVI audio framework? Are there use cases that Android cannot fulfill?
    • Android 11 should have some ways to control the global effects, but for now, for example, global effects and equalization are not available system-wide; they can be handled by the GENIVI Audio Manager Framework
    • It is valuable to have the GENIVI Audio Manager as an existing solution and reliable framework that can be used for pre-development models until the feature is supported by Android. This can be useful in a transition phase during system development
    • This contradicts to some extent what we said before about the need to have features implemented outside of Android
  • Q: The question now is: do we need to make any change outside of Android ?
    • The fact that we need to have an external audio management is understood, there are safety features that need to be handled outside of Android.
    • The question now is: do we need an additional interface to control any custom function that is not provided by the standard Android interface, i.e. do we need this proxy box (see slide #13)?
    • The question of the blue box (external audio system) is understood
    • The point is that no valuable use case has been established yet for the GENIVI Audio Manager proxy (or app); this is the concern
  • Q: Concern: if we are making something outside of Android, we need to really make sure we need it; we need to rationalize our proposal
    • The reason why we are starting the proof-of-concept is to answer this question: do we need this or not?
    • By the end of proof-of-concept we should have the following :
      • An answer to the question whether we need such an outside system
      • Proof points for Google Android team that we need an outside system
      • A learning experience as well as a knowledge on how to deal with such cases
      • A learning experience about the numbers behind the low latency
      • A more detailed understanding of the use cases
    • The proof-of-concept is a kind of placeholder where we can collect all the features that Android cannot support
    • The PoC is a good point; we can understand the audio system and its shortcomings
  • Q: Low-latency question
    • Example: when a door is open, the warning sounds need low latency; the app needs low-latency sound
    • Yes, there are two ways to look at low latency, but we should first check the numbers and verify whether the GENIVI Audio HAL actually solves the low-latency problem before we can say that it is a solution
  • Q: Is there a plan to sync various audio streams, if we are relying on external device for audio processing ?
    • Yes that was one of the topics that we wanted to consider during PoC
  • Q: How latency info is passed to other decoders like video ? In order to sync video and audio ? Will this be handled by an external audio mixer ?
  • Q: BT telephony: we have a problem here: we need to be part of the audio management, but we need latency as low as possible
    • This can be realized on the DSP, so we cannot include this in the general architecture
    • But maybe the GENIVI AudioManager or the PoC can be a placeholder for this
    • Because now there is the problem with more than 1 phone: the DSP handles it, but we still need to be connected to the audio management
    • There is also a problem with speech capturing
### Extracting Raw Streams

Extracting raw streams is a pre-requisite for the PoC.

  • Q: Do we need some sort of a GUI as well to control different external raw streams ?
    • Maybe for testing
    • And learning how it works
  • We have the same problem with the GENIVI AudioManager: it is a complicated state machine, and maybe we should visualize it
  • It would be good to have some sanity check for such complex logic
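One possible form of such a sanity check, sketched here as illustrative Python (the state names and the check itself are hypothetical): verify that every transition references defined states and that every state is reachable from the initial one.

```python
def check_state_machine(states, initial, transitions):
    """Return a list of problems found in a (states, transitions) definition:
    transitions that mention undefined states, and states that are
    unreachable from the initial state."""
    problems = []
    for src, dst in transitions:
        if src not in states or dst not in states:
            problems.append(f"undefined state in transition {src}->{dst}")
    # walk the graph to find everything reachable from `initial`
    reachable, frontier = {initial}, [initial]
    while frontier:
        current = frontier.pop()
        for src, dst in transitions:
            if src == current and dst not in reachable:
                reachable.add(dst)
                frontier.append(dst)
    problems.extend(f"unreachable state: {s}" for s in sorted(states - reachable))
    return problems
```

A check like this could run in CI against the audio-routing state machine before any visualization work is done.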
## Next steps
  • We haven't talked about hardware support, so this can be a point of discussion
    • Well, maybe, if there is a particular platform in mind from the participants
    • For now we will go with the platform that is most commonly used
  • The audio configuration is a problem: many streams should be treated differently, and it is not clear whether Android supports such logic [recapture this question by Pawel]
    • Such cases can be handled by the global effects
    • But, for example, mp3/mp4 would be handled outside of Android
  • Slide of the strategies
    • Fail-early approach: try, fail and learn
    • Not all hardware can be taken into consideration
  • Milestones:
    • PoC on test hardware (only for external mixer)
    • Full audio path in real hardware
    • Goal of the demo: partitioning of features
  • What hardware do you have in mind?
    • Raspberry Pi? Not used anymore
    • How about the HiKey960? One participant said they had stopped using HiKey boards
    • Renesas or Intel NUC 6 & 7 are also options
    • Renesas R-Car H3 Starter Kit with Kingfisher board, fitted for AOSP development
    • Qualcomm 8195 and 820 can also be used
  • We are free to take the code and port it to whatever board is available
## Review of the list of prioritized topics for the Audio HAL
  • Introduction: this list enumerates the activities that the Audio HAL group will try to tackle
  • For the next milestones, we will investigate these topics using the proof-of-concept and see whether the proof-of-concept can help solve them and which strategy is better
  • Point 13: Bluetooth
    • Telephony, because of latency, is handled in the DSP (not even going through the OS, just maybe control messages)
    • How can we consider it in the audio management? This can be done with routing
  • Maybe it would be good to state which topics are hardware-dependent; then we list the constraints for the solutions, and we can end up with a solution that is hardware-dependent
  • Do we actually need a full software solution?
    • We need to balance performance/latency against power consumption
    • A software solution means higher power requirements and higher CPU load, therefore fewer processes; there is a functionality trade-off if a codec has a very heavy CPU load; hardware support is needed for safety cases, QoS and functional partitioning
    • Hardware dependency has been added to the list, and now everyone knows what it means
  • List updated online
    • 1, 2, 3, 4, 7, 8, 9, 11, 13 - High
    • 5, 6, 12, 14, 15, 16 - Medium
    • Rest - Low


