
Configuring complex audio use cases

Posted: 03 Jul 2014

Keywords: smartphone, beamforming, Applications Processor, Wolfson, WISCE

Do you often wonder about the complexity of your smartphone? A modern smartphone is a marvel of engineering packed into a very small package.

Even if you only consider audio, the phone has to support many different use cases, each of which requires a different combination of links between components.

Take the most basic use case as an example – the humble phone call. The voice signal comes in from the microphone on the handset, is digitised and sent to the modem, and from there to the mobile phone network. The return voice comes across the network, is decoded, converted back to analogue and sent to the ear speaker, where the user hears it.

But maybe the user has a headset plugged in, in which case the voice signals in both directions need to go through the headphone jack. They could have a Bluetooth headset – now the signals need to be routed through the Bluetooth radio instead.

But modern smartphones are more intelligent than that. The voice signal is cleaned up to remove background noise, making it easier to hear. Processing on the received signal counteracts the local background noise, allowing users to hear the person they are talking to over the street noise.

Both of these need another microphone to monitor the background noise so it can be removed. Dynamic range compressors boost the quiet parts so they can be heard above the background noise. Further filtering removes harmonics that would excite resonances in the case and might otherwise cause unwanted buzzing.

So many possibilities, and we've only looked at a simple one-to-one phone call. Bring in speakerphone and there is further processing, known as beamforming, to detect and focus on the main talker while filtering out background noise. This works more effectively with a third or fourth microphone, bringing in yet more signals. Or what about music playback, where the audio now comes off the phone's SD card, via the Applications Processor, before being decoded and played for the user.

Figure 1: A simple phone call route.

Connecting up the audio
For many years, smartphones have used an Audio Hub to manage all the routing and connections. The different audio signals come into the Audio Hub, which contains a series of mixers and multiplexors (muxes) to connect signals from one place to another. Until recently, there were maybe twenty different routing components, each with only a handful of options, so they could be set up by selecting the appropriate option from a list, or even by looking up the correct value in the datasheet and typing it into the register.
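
As a rough illustration of that older model, the sketch below writes a datasheet value straight into a single routing register. The register name, address, option values and the codec_write() helper are all hypothetical, not taken from any real part's datasheet.

#include <stdint.h>

/* Hypothetical example of old-style routing: one register, a handful of
 * options. Names, addresses and values are illustrative only. */
#define REG_OUTPUT_MIXER_CTRL   0x002Du
#define OUTPUT_SRC_DAC          0x0000u   /* option 0: DAC playback path  */
#define OUTPUT_SRC_LINE_BYPASS  0x0001u   /* option 1: line-input bypass  */
#define OUTPUT_SRC_MIC_BOOST    0x0002u   /* option 2: boosted microphone */

/* Assumed helper that writes a 16-bit value to a codec register. */
extern void codec_write(uint16_t reg, uint16_t value);

static void route_playback_to_speaker(void)
{
    /* With only a few options, the value comes straight from the
     * datasheet table for this register. */
    codec_write(REG_OUTPUT_MIXER_CTRL, OUTPUT_SRC_DAC);
}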

As smartphones have become more complex, the number of routing options has grown dramatically, requiring a complete change in the routing paradigm.

No longer is it sufficient to have a few standard routes and select between them. Now the model is much more like an old-fashioned telephone switchboard, where each block can take its input from almost any signal on the chip. Suddenly the number of options for each register field has jumped by an order of magnitude, from around four to 62, making the routing much harder to work with, much harder to visualise and much easier to get wrong without realising it.
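
To make the switchboard model concrete, here is a minimal sketch in which every destination mux selects from the same chip-wide set of source IDs. The enum values, register layout and codec_write() helper are invented for illustration; the real parts expose on the order of 62 sources per field.

#include <stdint.h>

/* Hypothetical switchboard-style routing: any destination mux can select
 * any one of the chip-wide source IDs. All values are invented. */
enum audio_source {
    SRC_NONE    = 0x00,
    SRC_IN1L    = 0x10, SRC_IN1R    = 0x11,
    SRC_IN2L    = 0x12, SRC_IN2R    = 0x13,
    SRC_IN3L    = 0x14,
    SRC_AIF1RX1 = 0x20, SRC_AIF2RX1 = 0x24,
    SRC_DSP1L   = 0x30, SRC_DSP1R   = 0x31,
    SRC_EQ1     = 0x40, SRC_EQ2     = 0x41
    /* ...and so on, up to roughly 62 sources on a modern Audio Hub */
};

/* Assumed helper that writes a 16-bit value to a codec register. */
extern void codec_write(uint16_t reg, uint16_t value);

/* Point any destination mux at any source on the chip. */
static void route(uint16_t dest_mux_reg, enum audio_source src)
{
    codec_write(dest_mux_reg, (uint16_t)src);
}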

To manage this complexity and make sense of all the options, you need a graphical tool that lets you draw out your routes, such as Wolfson's WISCE interface. For example, figure 1 shows a simple route representative of a phone call.

We have a stereo signal coming in from IN2, plus a noise signal from the microphone attached to IN3L. The signals are processed on the DSP core to remove ambient noise, then sent to the baseband via Audio Interface AIF2. The far-end voice comes in on AIF2 and is shaped by Equaliser blocks before being output to the headphones on OUT4. The local voice signal is also attenuated and mixed into the headphone output as a sidetone.
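
Written out as mux selections, the left channel of this route might reduce to a handful of register writes along the lines of the table below. Every mux register address and source ID here is invented purely to mirror the blocks described above; the real register map will differ.

#include <stdint.h>

/* Invented destination mux registers and source IDs mirroring figure 1. */
#define MUX_DSP1_IN1    0x0680u   /* DSP channel 1 input select           */
#define MUX_DSP1_IN2    0x0681u
#define MUX_DSP1_IN3    0x0682u
#define MUX_AIF2_TX1    0x0700u   /* audio interface 2 transmit select    */
#define MUX_EQ1_IN      0x0780u   /* equaliser 1 input select             */
#define MUX_OUT4L_SRC1  0x0690u   /* headphone output, first mixer input  */
#define MUX_OUT4L_SRC2  0x0691u   /* headphone output, second mixer input */

#define SRC_IN2L        0x0012u
#define SRC_IN2R        0x0013u
#define SRC_IN3L        0x0014u
#define SRC_AIF2RX1     0x0024u
#define SRC_DSP1L       0x0030u
#define SRC_EQ1         0x0040u

struct route_write { uint16_t mux_reg; uint16_t source; };

static const struct route_write phone_call_route_left[] = {
    { MUX_DSP1_IN1,   SRC_IN2L    },  /* local voice, left channel      */
    { MUX_DSP1_IN2,   SRC_IN2R    },  /* local voice, right channel     */
    { MUX_DSP1_IN3,   SRC_IN3L    },  /* noise-reference microphone     */
    { MUX_AIF2_TX1,   SRC_DSP1L   },  /* cleaned voice out to baseband  */
    { MUX_EQ1_IN,     SRC_AIF2RX1 },  /* far-end voice into equaliser   */
    { MUX_OUT4L_SRC1, SRC_EQ1     },  /* shaped voice to the headphones */
    { MUX_OUT4L_SRC2, SRC_DSP1L   },  /* attenuated sidetone mix        */
};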

This route involves writes to 11 of the nearly 200 register fields dedicated to signal routing, selecting the appropriate one of the 62 options in each. WISCE keeps a history of the writes that have been made, which can be saved to a file for later use – either to load back into WISCE or to set up the driver on the end product.
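
On the end product, that saved history is in effect a list of register writes that the driver replays at start-up. A minimal sketch of such a replay, assuming a table exported from the saved file and a generic codec_write() helper, might look like this:

#include <stddef.h>
#include <stdint.h>

struct reg_write { uint16_t reg; uint16_t value; };

/* In practice this table would be generated from the file saved out of the
 * routing tool; the entries here are placeholders. */
static const struct reg_write phone_call_setup[] = {
    { 0x0680u, 0x0012u },
    { 0x0681u, 0x0013u },
    { 0x0700u, 0x0030u },
};

/* Assumed helper that writes a 16-bit value to a codec register. */
extern void codec_write(uint16_t reg, uint16_t value);

/* Replay the saved write sequence, e.g. from the driver's init path. */
static void apply_route(const struct reg_write *seq, size_t count)
{
    size_t i;

    for (i = 0; i < count; i++)
        codec_write(seq[i].reg, seq[i].value);
}

The driver would then call apply_route(phone_call_setup, sizeof phone_call_setup / sizeof phone_call_setup[0]) once when the phone-call use case is selected.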
