JavaScript Heatmap Spectrogram Chart - Editor

This example shows a simple use case for a Heatmap used as a spectrogram.

A spectrogram is a visual representation of the spectrum of frequencies in a signal. Spectrograms can be used to visualize any waveform, but they are most often used to display audio signals.

This example loads an audio file and shows a spectrogram for each channel of the audio file.

The spectrogram shows frequency on one axis (Y Axis) and time on the other (X Axis). The color of the heatmap at any point represents the amplitude of that frequency at that point in time.

Getting the data

First, the audio file that will be shown is loaded with fetch and converted into an ArrayBuffer.

const response = await fetch('url')
const buffer = await response.arrayBuffer()

This example uses the Web Audio API to retrieve the frequency data to display in the heatmap. The API makes it easy to work with and manipulate audio files. For spectrogram use, the AnalyserNode is the most useful part of the API, as it provides the getByteFrequencyData method, which returns frequency data computed with a Fast Fourier Transform (FFT).
The AudioContext provides the decodeAudioData method to convert an ArrayBuffer into an AudioBuffer.

const audioBuffer = await audioContext.decodeAudioData(buffer)
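
The audioContext used above is a regular AudioContext instance that is assumed to have been created beforehand, for example:

// Create an AudioContext for decoding the fetched audio data
const audioContext = new AudioContext()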

Now that the audio file is converted into an AudioBuffer, it's possible to start extracting data from it.

To process the full audio buffer as fast as possible, an OfflineAudioContext is used. The OfflineAudioContext doesn't output the data to an audio device; instead, it goes through the audio as fast as possible and outputs an AudioBuffer with the processed data. In this example the processed audio buffer is not used, but the processing is used to calculate the FFT data needed to display the intensity of each frequency in the spectrogram. The audio buffer created earlier is used as a buffer source for the OfflineAudioContext.
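
A minimal sketch of this setup, assuming the decoded audioBuffer from the previous step, could look like this:

// Offline context matching the decoded audio's channel count, length and sample rate
const offlineCtx = new OfflineAudioContext(audioBuffer.numberOfChannels, audioBuffer.length, audioBuffer.sampleRate)
// Use the decoded audio as the source of the offline processing graph
const source = offlineCtx.createBufferSource()
source.buffer = audioBuffer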

The buffer source only has a single output, but we want to be able to process each channel separately. To do this, a ChannelSplitterNode is used with the output count matching the source channel count.

const splitter = offlineCtx.createChannelSplitter(source.channelCount)

This makes it possible to process each channel separately: an AnalyserNode can be created for each channel, with only a single channel piped to each analyser.
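
As a rough sketch (the fftSize value and the analysers variable are assumptions, not taken from the example code):

// Feed the buffer source into the splitter so each channel gets its own output
source.connect(splitter)
// Create one AnalyserNode per channel and pipe a single splitter output into it
const analysers = []
for (let ch = 0; ch < source.channelCount; ch += 1) {
    const analyser = offlineCtx.createAnalyser()
    analyser.fftSize = 2048
    splitter.connect(analyser, ch)
    analysers.push(analyser)
}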

A [ScriptProcessorNode][createscriptprocessor] is used to go through the audio buffer in chunks. For each chunk, the FFT data is calculated for each channel and stored in buffers large enough to fit the full data.
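
A simplified sketch of this step is shown below; the buffer size, the wiring of the analysers into the processor and the fftData structure are assumptions:

// Walk through the audio in chunks; each onaudioprocess call is one time step
const processor = offlineCtx.createScriptProcessor(2048, source.channelCount, source.channelCount)
const fftData = analysers.map(() => [])
processor.onaudioprocess = () => {
    analysers.forEach((analyser, ch) => {
        // Read the current FFT data for this channel and store it
        const bins = new Uint8Array(analyser.frequencyBinCount)
        analyser.getByteFrequencyData(bins)
        fftData[ch].push(bins)
    })
}
// The processor only runs while it is part of a graph ending at the destination
analysers.forEach((analyser) => analyser.connect(processor))
processor.connect(offlineCtx.destination)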

Finally, the startRendering() method is called to render the audio buffer. This is when all of the FFT calculation is done.
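
Assuming the graph above, rendering is started like this; the FFT data is collected in the onaudioprocess callbacks while the rendering runs:

// Start the source at time 0 and process the whole buffer as fast as possible
source.start(0)
await offlineCtx.startRendering()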

Showing the data

Each channel of the audio file is shown in its own chart inside a single dashboard. When the data has been calculated, a dashboard is created. This dashboard is then passed to functions that set up the charts inside the dashboard and create the heatmap series based on the script processor buffer size and the FFT resolution.
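
As a rough sketch of how this can look with LightningChart JS (the option names, addHeatmapGridSeries and the channelIntensityMatrices variable here are assumptions based on the lcjs v3+ API, not the example's exact code):

// One dashboard row per audio channel
const dashboard = lightningChart().Dashboard({ numberOfColumns: 1, numberOfRows: channelCount })
for (let ch = 0; ch < channelCount; ch += 1) {
    const chart = dashboard.createChartXY({ columnIndex: 0, rowIndex: ch }).setTitle(`Channel ${ch + 1}`)
    // Heatmap grid sized by the number of time steps (columns) and FFT bins (rows)
    const series = chart.addHeatmapGridSeries({ columns: tickCount, rows: strideSize })
    // Fill the grid with the remapped intensity matrix for this channel
    series.invalidateIntensityValues(channelIntensityMatrices[ch])
}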

The data from the audio APIs is in the wrong format to display in the heatmap as-is. The heatmap data has to be mapped from the one-dimensional array it was generated in to a two-dimensional array. In this example the mapping is done with the remapDataToTwoDimensionalMatrix function, which maps the data into columns (a column-wise sketch is shown after the code below). If the heatmap were displayed vertically, the mapping would be easier, as each stride of data could simply be placed as a row.

// Data mapping for a vertical spectrogram: each stride becomes one row
const output = Array.from(Array(tickCount)).map(() => Array.from(Array(strideSize)))
for (let row = 0; row < tickCount; row += 1) {
    output[row] = arr.slice(row * strideSize, row * strideSize + strideSize)
}
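
For the horizontal layout used in this example, the same data has to be transposed so that each stride becomes a column instead of a row. A sketch of such a column-wise mapping (not the exact implementation of remapDataToTwoDimensionalMatrix) could look like this:

// Data mapping for a horizontal spectrogram: each stride becomes one column
// output[row][column] = intensity of frequency bin `row` at time step `column`
const output = Array.from(Array(strideSize)).map(() => Array.from(Array(tickCount)))
for (let column = 0; column < tickCount; column += 1) {
    for (let row = 0; row < strideSize; row += 1) {
        output[row][column] = arr[column * strideSize + row]
    }
}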