The sound data is returned as a ByteArray object containing
512 four-byte sets of data, each of which represents a floating
point value between -1 and 1. These values represent the amplitude
of the points in the sound waveform being played. The values are
delivered in two groups of 256, the first group for the left stereo
channel and the second group for the right stereo channel.
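For example, the returned bytes might be read back with ByteArray.readFloat(), which consumes four bytes at a time. The following is a minimal sketch that assumes a sound is already playing through the global mixer; the variable names bytes, left, and right are illustrative only and not part of the API.

import flash.media.SoundMixer;
import flash.utils.ByteArray;

var bytes:ByteArray = new ByteArray();
var left:Array = [];
var right:Array = [];

// The default arguments request waveform data.
SoundMixer.computeSpectrum(bytes);

// First 256 floats: left-channel amplitudes, each between -1 and 1.
for (var i:int = 0; i < 256; i++)
{
    left.push(bytes.readFloat());
}

// Next 256 floats: right-channel amplitudes.
for (var j:int = 0; j < 256; j++)
{
    right.push(bytes.readFloat());
}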
The SoundMixer.computeSpectrum() method returns frequency spectrum data rather than waveform data if the FFTMode parameter is set to true. The frequency spectrum shows amplitude arranged by sound frequency, from lowest frequency to highest. A Fast Fourier Transform (FFT) is used to convert the waveform data into frequency spectrum data. The resulting frequency spectrum values range from 0 to roughly 1.414 (the square root of 2).
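As a rough illustration, the frequency data for one channel can be scanned for its strongest band. This sketch again assumes a sound is currently playing; peakBand and peakValue are illustrative names, not part of the API.

import flash.media.SoundMixer;
import flash.utils.ByteArray;

var spectrum:ByteArray = new ByteArray();

// Passing true for the FFTMode parameter requests frequency spectrum data.
SoundMixer.computeSpectrum(spectrum, true);

// The first 256 floats describe the left channel, ordered from the
// lowest frequency band to the highest; each value lies between 0 and
// roughly 1.414.
var peakBand:int = 0;
var peakValue:Number = 0;
for (var i:int = 0; i < 256; i++)
{
    var magnitude:Number = spectrum.readFloat();
    if (magnitude > peakValue)
    {
        peakValue = magnitude;
        peakBand = i;
    }
}
trace("Strongest left-channel band: " + peakBand + " (value " + peakValue + ")");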
The following diagram compares the data returned from the computeSpectrum() method when the FFTMode parameter is set to true and when it is set to false. The sound used for this diagram contains a loud bass sound in the left channel and a drum hit sound in the right channel.
The computeSpectrum() method can also return data that has been resampled at a lower sample rate. Generally, this results in smoother waveform data or frequency data at the expense of detail. The stretchFactor parameter controls the rate at which the sound data is sampled by the computeSpectrum() method. When the stretchFactor parameter is set to 0, the default, the sound data is sampled at a rate of 44.1 kHz. The rate is halved at each successive stretchFactor value, so a value of 1 specifies a rate of 22.05 kHz, a value of 2 specifies a rate of 11.025 kHz, and so on. The computeSpectrum() method still returns 256 floating point values per stereo channel when a higher stretchFactor value is used.
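For instance, a stretchFactor of 1 samples the playing sound at 22.05 kHz while leaving the shape of the returned data unchanged. This is a minimal sketch under the same assumption that a sound is currently playing.

import flash.media.SoundMixer;
import flash.utils.ByteArray;

var bytes:ByteArray = new ByteArray();

// A stretchFactor of 1 halves the sampling rate from 44.1 kHz to 22.05 kHz,
// trading detail for smoother data.
SoundMixer.computeSpectrum(bytes, false, 1);

// The layout of the result does not change: 256 left-channel floats
// followed by 256 right-channel floats, 2048 bytes in total.
trace("Bytes returned: " + bytes.length); // 2048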