Web Audio API


Introduction:



In the formative years of the web, audio was unthinkable without Flash or a browser plugin. The Web Audio API, introduced alongside HTML5's audio capabilities, has changed that: it processes and plays audio natively in the browser, without any of those helper technologies, and is capable of supporting sophisticated games and interactive applications.

Web Audio API

The Web Audio API is a high-level JavaScript API for processing and synthesizing audio in web apps.

The two different types: In general there have been two audio APIs, the Audio Data API and the Web Audio API. Creating, processing and controlling audio within a web app is much simpler with the Web Audio API than with the low-level access to raw audio data that the Audio Data API offered.

Amazing Feature: The most attractive feature is that it does not require a helper application such as a plugin or Flash. The goal of the API is to include capabilities found in modern game audio engines, as well as the mixing, processing and filtering tasks found in modern desktop audio production applications.

The Working:

The AudioContext instance: This is the central object used for managing and playing all sounds. First an audio context has to be created, which acts like a container. Inside the context you create sources, connect them to processing nodes, and finally connect them to the destination, forming a chain.


var context = new AudioContext();
// Audio container created.


Since the Web Audio API started out as a WebKit technology and is supported on iOS 6, the 'webkit' prefix has to be used where the unprefixed constructor is not available:


var context = new webkitAudioContext();


Once the audio context container is created, a source node can be created inside it. A source node can be an oscillator, which generates sound from scratch, or a buffer source that plays back loaded audio files such as WAV or MP3; a sketch of the file-playback case is shown below.
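
As a minimal sketch (the URL 'sound.mp3' is just a placeholder), an audio file can be fetched, decoded and played through an AudioBufferSourceNode:

// Minimal sketch: fetch an audio file, decode it, and play it back.
// 'sound.mp3' is a placeholder URL.
var request = new XMLHttpRequest();
request.open('GET', 'sound.mp3', true);
request.responseType = 'arraybuffer';
request.onload = function () {
  context.decodeAudioData(request.response, function (buffer) {
    var source = context.createBufferSource(); // an AudioBufferSourceNode
    source.buffer = buffer;                    // assign the decoded data
    source.connect(context.destination);       // route to the speakers
    source.start(0);                           // play immediately
  });
};
request.send();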

To create an oscillator


var oscillator = context.createOscillator();


The oscillator is then connected to the destination, which is typically the speakers.


oscillator.connect(context.destination);


To start generating sound immediately (older WebKit builds used noteOn; the standard method is start):


oscillator.start(0); // noteOn(0) in older WebKit builds


Between the source and the destination, the chain can contain further objects that implement the AudioNode interface. These act as intermediate modules used for processing.

There are several kinds of audio nodes, such as:

1. GainNode: This controls the volume. It can be created in the following way:


var volumeNode = context.createGain(); // createGainNode() in older WebKit builds


One can set the volume using


volumeNode.gain.value = 0.3;


Now this can be inserted into the chain between the source and the destination:


oscillator.connect(volumeNode);
volumeNode.connect(context.destination);


2. The BiquadFilterNode: A biquad filter is used to add common filtering effects such as low-pass, high-pass and band-pass to your sound. It is a low-order filter.

3. The DelayNode: It delays the incoming audio signal.

4. The AudioBufferSourceNode: It plays back in-memory audio data stored in an AudioBuffer.

5. PannerNode: It positions an incoming audio stream in 3D space, describing its position in right-hand Cartesian coordinates.

6. ConvolverNode: Given an impulse response, it applies a linear convolution effect, commonly used for reverb.

7. DynamicsCompressorNode: When multiple sounds are played together, it helps prevent clipping and distortion by lowering the volume of the loudest parts of the signal.

8. WaveShaperNode: It applies a non-linear waveshaping curve to the signal, typically used to add a warm distortion effect.

Combining the code above gives a small working demo, sketched below.
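
A minimal sketch, assuming a browser that exposes AudioContext (or the webkit-prefixed version): an oscillator routed through a gain node to the speakers.

// Minimal sketch: oscillator -> gain -> speakers.
var context = new (window.AudioContext || window.webkitAudioContext)();

var oscillator = context.createOscillator();
var volumeNode = context.createGain();       // createGainNode() in older builds
volumeNode.gain.value = 0.3;                 // set the volume

oscillator.connect(volumeNode);              // source -> gain
volumeNode.connect(context.destination);     // gain -> speakers

oscillator.start(0);                         // start immediately
oscillator.stop(context.currentTime + 2);    // stop after two seconds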

Exploring the Web Audio API:


AudioParam: It represents an individual parameter of an AudioNode, such as a GainNode's gain. It can be set to a specific value immediately or scheduled to change over time; a small scheduling sketch follows.
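
A minimal sketch of scheduling, assuming the volumeNode created earlier: instead of assigning gain.value directly, the change is scheduled on the AudioParam.

// Schedule the gain rather than setting it directly.
var now = context.currentTime;
volumeNode.gain.setValueAtTime(0.0, now);               // start silent
volumeNode.gain.linearRampToValueAtTime(0.3, now + 1);  // fade in over one second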

ended: An event that is fired when playback has reached the end of the media.

The media elements:


MediaElementAudioSourceNode: An audio source node created from an HTML5 <audio> or <video> element (see the sketch after this list).

MediaStreamAudioSourceNode: An audio source node created from a MediaStream, such as a webcam or a microphone.

MediaStreamAudioDestinationNode: An audio destination node whose output is a MediaStream, which can then be recorded or sent elsewhere.
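
A minimal sketch (the element id 'player' is just a placeholder): an existing <audio> element is routed through the audio graph.

// Minimal sketch: route an existing <audio> element through the audio graph.
// 'player' is a placeholder element id.
var mediaElement = document.getElementById('player');
var mediaSource = context.createMediaElementSource(mediaElement);
mediaSource.connect(context.destination);   // the element now plays through the graph
mediaElement.play();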

How to extract time and frequency data?

For this, check out the AnalyserNode.
Assume that a source 'S' is connected to a destination 'D':


var analyser = context.createAnalyser();
S.connect(analyser);
analyser.connect(D);
var freqDomain = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(freqDomain);
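
Time-domain (waveform) data can be read from the same analyser; a minimal sketch:

// Time-domain (waveform) data from the same analyser.
var timeDomain = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(timeDomain);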


A channel Splitter and a Merger?
A channel splitter separates the different channels of an audio source into a set of mono outputs. It is created using the createChannelSplitter method.

A channel merger recombines mono inputs into a single multi-channel output. It is created using the createChannelMerger method, as sketched below.
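
A minimal sketch, assuming a stereo source node such as the mediaSource created earlier: the two channels are split apart and then merged back together.

// Split a stereo source into left/right, then merge back to stereo.
var splitter = context.createChannelSplitter(2);
var merger = context.createChannelMerger(2);

mediaSource.connect(splitter);         // any stereo source node
splitter.connect(merger, 0, 0);        // left channel  -> merger input 0
splitter.connect(merger, 1, 1);        // right channel -> merger input 1
merger.connect(context.destination);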

AudioListener: It gives the position and orientation of a person listening to the audio.

It is not created with a constructor; in JavaScript it is obtained from the audio context:


var listener = context.listener;


ScriptProcessorNode: This allows audio to be generated, processed or analysed directly in JavaScript (see the sketch after the next two entries).

audioprocess (event): Fired when a ScriptProcessorNode is ready to process a new block of audio.

AudioProcessingEvent: The event object passed to the audioprocess handler; it carries the input and output buffers to be processed.
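
A minimal pass-through sketch, assuming the mediaSource node from earlier as input: the input samples are simply copied to the output.

// Pass-through ScriptProcessorNode: copy input samples to the output.
var processor = context.createScriptProcessor(4096, 1, 1);  // buffer size, inputs, outputs
processor.onaudioprocess = function (event) {               // an AudioProcessingEvent
  var input = event.inputBuffer.getChannelData(0);
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < input.length; i++) {
    output[i] = input[i];
  }
};
mediaSource.connect(processor);
processor.connect(context.destination);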

OfflineAudioContext: Instead of rendering audio to the hardware in real time, it renders it as fast as possible into an AudioBuffer.

complete: An event fired when rendering of an OfflineAudioContext has finished.

OfflineAudioCompletionEvent: The interface implemented by the complete event; it carries the rendered AudioBuffer.
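
A minimal sketch (the 44100 Hz sample rate and two-second length are assumptions): two seconds of a 440 Hz tone are rendered offline.

// Render two seconds of audio offline into an AudioBuffer.
// 2 channels, 2 seconds at an assumed 44100 Hz sample rate.
var offline = new OfflineAudioContext(2, 44100 * 2, 44100);

var osc = offline.createOscillator();
osc.frequency.value = 440;
osc.connect(offline.destination);
osc.start(0);

offline.oncomplete = function (event) {        // an OfflineAudioCompletionEvent
  var rendered = event.renderedBuffer;         // the finished AudioBuffer
  console.log('Rendered ' + rendered.duration + ' seconds of audio.');
};
offline.startRendering();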
