Now that we have code to actually render onto the cube, let’s animate the cube to the sound of music.
Here’s what the end result will look like (make sure to enable audio!). Apologies for the crappy webcam quality, but that’s all I had available:
To do that, we will need a ton of new resources:
type AudioTimer = stm32f4xx_hal::timer::Timer<stm32f4xx_hal::stm32::TIM2>;
type AudioAdc = stm32f4xx_hal::adc::Adc<stm32f4xx_hal::stm32::ADC1>;
type AudioLineInPin = gpio::gpioa::PA1<gpio::Analog>;
type AudioMicPin = gpio::gpioa::PA2<gpio::Analog>;
type AudioSampler = audio::sampler::Sampler;
type AudioFft = audio::spectrum::Fft;
type AudioSpectrum = audio::spectrum::Spectrum;
type AudioSpectrumCube = audio::spectrum::SpectrumCube;
#[rtfm::app(device = stm32f4xx_hal::stm32)]
const APP: () = {
struct Resources {
// ...
// Audio sampling related resources
audio_timer: AudioTimer,
audio_adc: AudioAdc,
audio_line_in_pin: AudioLineInPin,
audio_mic_pin: AudioMicPin,
audio_sampler: AudioSampler,
audio_fft: AudioFft,
audio_spectrum: AudioSpectrum,
audio_spectrum_cube: AudioSpectrumCube,
}
// ...
}
The timer will control how often we collect audio samples. It will run extremely fast, at 44100 Hz (try doing that reliably on a non-realtime system!). Because we are operating at such a low level, we are actually able to sample sound at that frequency, which is enough to capture the entire audible range (the Nyquist limit at this rate is 22050 Hz).
When the timer ticks, we need to use an ADC (analog-to-digital converter) to convert the analog signal into a digital one. This will sample either the Line-In signal or the Mic signal (since my board has both an audio jack and a simple microphone on it).
Each time we read a value from the ADC, it gets processed by a sampler, which decides how to “store” the sample for processing.
We then perform a fast Fourier transform (FFT) on the sampled data every so often. When we do this, an audio spectrum is updated with new frequency information: the more intense a frequency, the higher its value in the spectrum.
Finally, we want to map the spectrum onto a cube. We will use a simpler model than the fully colored cube to describe that information. This spectrum cube is then used to render onto the real cube as you will see later.
Sampling
Let’s start by creating the sampler. For now, it will not do anything fancy and just use an array as a ring buffer to store audio samples:
pub struct Sampler {
pub samples: [f32; config::AUDIO_SAMPLE_BUFFER_SIZE],
index: usize,
count: usize,
}
pub enum Status {
NeedsMore,
Complete,
}
impl Sampler {
// ...
pub fn add_sample(&mut self, sample: f32) -> Status {
self.samples[self.index] = sample;
self.index += 1;
self.index %= config::AUDIO_SAMPLE_BUFFER_SIZE;
self.count += 1;
if self.count == config::AUDIO_SPECTRUM_INTERVAL_SAMPLES {
self.count = 0;
Status::Complete
} else {
Status::NeedsMore
}
}
}
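The new() constructor elided above has nothing interesting in it; a minimal sketch (my reconstruction, not the post's exact code) would be:
impl Sampler {
    // Start with a zeroed buffer; index and count track the ring buffer state.
    pub fn new() -> Self {
        Self {
            samples: [0.0; config::AUDIO_SAMPLE_BUFFER_SIZE],
            index: 0,
            count: 0,
        }
    }
}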
There are two values in a config module that I'm able to tweak: how many samples we want to keep, and at which interval we consider the sampler "complete" so that we are ready to compute its spectrum. This approach is a bit prototype-y, but it will be good enough for now.
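For concreteness, here is roughly what that config module could look like. The 44100 Hz sample rate comes from this post; the other values are placeholders I picked, not the author's:
pub mod config {
    // Timer rate for audio sampling (from the post).
    pub const AUDIO_SAMPLE_HZ: usize = 44_100;
    // How many samples the ring buffer keeps (placeholder).
    pub const AUDIO_SAMPLE_BUFFER_SIZE: usize = 1024;
    // How many new samples until the sampler reports Complete (placeholder).
    pub const AUDIO_SPECTRUM_INTERVAL_SAMPLES: usize = 512;
    // Used later: spectrum updates between layer shifts (placeholder).
    pub const AUDIO_SPECTRUM_CYCLES: usize = 4;
}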
We can already start collecting samples into the sampler. In our init method, let's create the audio_timer, audio_sampler and audio_adc to be able to do that:
let mut audio_timer = stm32f4xx_hal::timer::Timer::tim2(
device.TIM2,
(config::AUDIO_SAMPLE_HZ as u32).hz(),
clocks,
);
audio_timer.listen(stm32f4xx_hal::timer::Event::TimeOut);
let audio_line_in_pin = gpioa.pa1.into_analog();
let audio_mic_pin = gpioa.pa2.into_analog();
let mut audio_adc = stm32f4xx_hal::adc::Adc::adc1(
device.ADC1,
false,
stm32f4xx_hal::adc::config::AdcConfig::default(),
);
audio_adc.calibrate();
audio_adc.set_end_of_conversion_interrupt(stm32f4xx_hal::adc::config::Eoc::Sequence);
let audio_sampler = audio::sampler::Sampler::new();
This will run the timer at the configured rate (44100 Hz). At each tick, it generates an interrupt. We bind a task to that interrupt that starts the ADC sampling process:
// in our task definitions
#[task(binds = TIM2, priority = 3, resources = [audio_adc, audio_timer, audio_line_in_pin, audio_mic_pin])]
fn sample_audio_start(cx: sample_audio_start::Context) {
sample_audio_start_impl(
cx.resources.audio_timer,
cx.resources.audio_adc,
cx.resources.audio_mic_pin,
cx.resources.audio_line_in_pin,
)
}
// ... and further below
fn sample_audio_start_impl(
audio_timer: &mut AudioTimer,
audio_adc: &mut AudioAdc,
audio_mic_pin: &AudioMicPin,
audio_line_in_pin: &AudioLineInPin,
) {
audio_timer.clear_interrupt(stm32f4xx_hal::timer::Event::TimeOut);
audio_adc.enable();
audio_adc.configure_channel(
audio_line_in_pin,
stm32f4xx_hal::adc::config::Sequence::One,
stm32f4xx_hal::adc::config::SampleTime::Cycles_28,
);
audio_adc.start_conversion();
}
For now, let’s hard-code whether the line-in or mic pin is being used.
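If you'd rather flip between the two inputs without editing the call, both pins implement the ADC channel trait, so a small toggle inside sample_audio_start_impl() would do; this is my own convenience sketch, not code from the post:
// Hypothetical toggle around the configure_channel() call above.
const USE_MIC: bool = false;
if USE_MIC {
    audio_adc.configure_channel(
        audio_mic_pin,
        stm32f4xx_hal::adc::config::Sequence::One,
        stm32f4xx_hal::adc::config::SampleTime::Cycles_28,
    );
} else {
    audio_adc.configure_channel(
        audio_line_in_pin,
        stm32f4xx_hal::adc::config::Sequence::One,
        stm32f4xx_hal::adc::config::SampleTime::Cycles_28,
    );
}
When the conversion is done, another interrupt is triggered, which lets us read out the sampled value: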
// in our task definitions
#[task(binds = ADC, priority = 3, resources = [audio_adc, audio_sampler])]
fn sample_audio_done(cx: sample_audio_done::Context) {
sample_audio_done_impl(cx.resources.audio_adc, cx.resources.audio_sampler)
}
// ... and further below
fn sample_audio_done_impl(
audio_adc: &mut AudioAdc,
audio_sampler: &mut AudioSampler,
) {
audio_adc.clear_end_of_conversion_flag();
let value = audio_adc.current_sample();
// add_sample() takes an f32; current_sample() returns an integer reading
audio_sampler.add_sample(value as f32);
}
Okay, so now we have a bunch of audio samples in an array, collected through all that interrupt busywork. How do we use them?
Analysis
Let’s start processing the samples. One of the more straightforward approaches is to use a real-to-complex fast Fourier transform (FFT). I patched the chfft crate to work in an embedded environment. However, it requires a heap.
Adding a heap
What do you mean, we don’t have a heap?! Yes, so far we haven’t used any heap allocation whatsoever. However, this library requires heap allocation and I can’t be bothered to reimplement FFT without heap allocations right now.
In Rust, in a no-std environment, we need to depend on the alloc crate to introduce dynamic memory allocation. Unfortunately, this means we can no longer use stable Rust and need to switch to one of the unstable release channels, like nightly:
$ echo nightly > rust-toolchain
Now we can add the alloc crate to main.rs:
extern crate alloc;
This crate pulls in some compiler magic, and now our build will start failing:
error: no global memory allocator found but one is required; link to std or add `#[global_allocator]` to a static item that implements the GlobalAlloc trait.
Luckily, there is a library that manages a heap allocator specifically for Cortex-M devices:
$ cargo add alloc-cortex-m
This lets us register a global allocator at the top of main.rs:
#[global_allocator]
static ALLOCATOR: alloc_cortex_m::CortexMHeap = alloc_cortex_m::CortexMHeap::empty();
By default, the heap is empty and every allocation attempt will give an out-of-memory error.
We can initialize the heap with some actual memory by putting this code at the beginning of the init_impl() function:
let start = cortex_m_rt::heap_start() as usize;
let size = 65536; // in bytes
unsafe { ALLOCATOR.init(start, size) }
The heap_start() function reads a magic variable that is set by our linker script. The number of bytes to reserve is pretty arbitrary; here I created a 64 KiB heap.
The compiler still complains, because we haven’t told it what to do on out-of-memory errors.
error: `#[alloc_error_handler]` function required, but not found
This requires us to activate an unstable compiler feature (hence why we switched to the nightly compiler) and register a function:
#![feature(alloc_error_handler)]
// ... further down in main.rs
#[alloc_error_handler]
fn handle_alloc_error(_: core::alloc::Layout) -> ! {
loop {}
}
So, when we run out of memory, we loop forever :)
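With that in place, heap types from the alloc crate just work in our no-std binary. A quick smoke test (mine, not from the post) right after the heap initialization:
// If these lines run without entering handle_alloc_error, the heap is alive.
let mut xs = alloc::vec::Vec::new();
xs.push(1u32);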
Running FFT
Now we can compute spectrums! We will add an audio/spectrum.rs file:
#[derive(Debug)]
pub struct Spectrum {
pub data: alloc::vec::Vec<num_complex::Complex32>,
}
#[derive(Debug)]
pub struct Fft {
fft: chfft::RFft1D<f32>,
}
There is a Spectrum struct that will contain the spectrum data in heap-allocated memory (a Vec). The spectrum is produced by the Fft struct, which keeps the state needed by the FFT algorithm.
This allows us to implement an actual FFT algorithm:
impl Fft {
pub fn new() -> Self {
let fft = chfft::RFft1D::new(config::AUDIO_SAMPLE_BUFFER_SIZE);
Self { fft }
}
pub fn compute_spectrum(
&mut self,
samples: &[f32; config::AUDIO_SAMPLE_BUFFER_SIZE],
) -> Spectrum {
let data = self.fft.forward(&samples[..]);
Spectrum { data }
}
}
We simply run the FFT forward algorithm tuned for our buffer size.
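As a quick sanity check on the output shape (my numbers, assuming the 1024-sample buffer from earlier): a real-to-complex FFT of N real samples yields N/2 + 1 complex bins, where bin k corresponds to the frequency k × 44100 / N, so each bin covers roughly 43 Hz. This is also why the visualization code later only looks at the first half of the output. To actually run this code, we define a new task: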
#[task(priority = 1, resources = [audio_sampler, audio_fft, audio_spectrum])]
fn compute_audio_spectrum(cx: compute_audio_spectrum::Context) {
compute_audio_spectrum_impl(
cx.resources.audio_sampler,
cx.resources.audio_fft,
cx.resources.audio_spectrum,
)
}
You may notice that this #[task()] declaration doesn't contain a binds attribute, so it is not bound to any interrupt. Instead, we have to spawn it manually from another task. It also has a pretty low priority (1), which means it can run in the background during the idle time when no other interrupts are active, but it will be interrupted regularly by other tasks. Hence, we need to use an rtfm::Mutex for the resources that might be contended by higher-priority tasks. In this case, only the sampler is contended (since it's used by the 44100 Hz task).
fn compute_audio_spectrum_impl(
mut audio_sampler: impl rtfm::Mutex<T = AudioSampler>,
audio_fft: &mut AudioFft,
audio_spectrum: &mut AudioSpectrum,
) {
// Create a local copy of samples
let samples = audio_sampler.lock(|s| s.samples);
*audio_spectrum = audio_fft.compute_spectrum(&samples);
}
The way these mutexes work is that we block any higher-priority tasks while holding the lock, so we could cause "stutter" if we hold it for too long. The idea is to create a local copy of the samples array while holding the lock, which should be pretty efficient: the compiler will generate an efficient memcpy() here. Then we can afford to run the expensive FFT algorithm without holding any locks, which means other tasks are free to interrupt us the whole time.
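For a sense of scale, under my assumed 1024-sample buffer: the copy inside the lock moves only 4 KiB of f32s, while the FFT afterwards performs on the order of N·log(N) operations, so keeping just the copy inside the critical section is the right trade.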
Now we should spawn this task whenever we have collected enough samples for a new spectrum!
// Note the spawn = [...] which was added
#[task(binds = ADC, priority = 3, resources = [audio_adc, audio_sampler], spawn = [compute_audio_spectrum])]
fn sample_audio_done(cx: sample_audio_done::Context) {
let spawn = cx.spawn;
sample_audio_done_impl(cx.resources.audio_adc, cx.resources.audio_sampler, || {
// Ignore if already scheduled
let _ = spawn.compute_audio_spectrum();
})
}
// ...
fn sample_audio_done_impl(
audio_adc: &mut AudioAdc,
audio_sampler: &mut AudioSampler,
spawn_compute_audio_spectrum: impl FnOnce(),
) {
audio_adc.clear_end_of_conversion_flag();
let value = audio_adc.current_sample();
if let audio::sampler::Status::Complete = audio_sampler.add_sample(value as f32) {
spawn_compute_audio_spectrum();
}
}
Visualizing
Now we have an array of complex numbers, but how do we actually visualize it onto the cube?
Let’s create a new struct which contains a simplified version of what we want to plot onto the cube. Instead of full color information, we only store a single float which determines the intensity of each cell. Then we can later use the intensity information to colorize it.
#[derive(Debug, Default)]
pub struct SpectrumCube {
// indexed as [y][x][z] for performance reasons
pub cells: [[[f32; matrix::CUBE_SIZE]; matrix::CUBE_SIZE]; matrix::CUBE_SIZE],
pub cycle: usize,
}
What this code should do is keep one plane of the cube (the "rightmost" one) filled with the latest computed spectrum, and over time shift copies of that layer to the left. The algorithm for that is pretty straightforward:
impl SpectrumCube {
pub fn new() -> Self {
Default::default()
}
pub fn update(&mut self, spectrum: &Spectrum) {
use num_traits::float::Float;
let spectrum = &spectrum.data;
let spectrum_len = spectrum.len() / 2;
// We haven't computed FFT yet!
if spectrum_len == 0 {
return;
}
self.cycle += 1;
// Time to shift layers to the left!
if self.cycle == config::AUDIO_SPECTRUM_CYCLES {
self.cycle = 0;
for y in (1..matrix::CUBE_SIZE).rev() {
self.cells[y] = self.cells[y - 1];
}
}
// TODO: update the rightmost layer here
}
}
To populate the rightmost layer, we need to map the spectrum (which is a sequence of complex numbers) onto the plane. We let the X axis represent frequency, and then "plot" the amplitude (i.e. the magnitude of the complex number) of each frequency onto the plane. By doing it in a "plotting" manner (marking points on the plane's grid) instead of a "tracing" manner (deciding, for each cell of the plane, whether it should be lit), we get a very "ghostly" looking spectrum.
// Replaces the TODO above
let mut new_y_plane = [[0.0f32; matrix::CUBE_SIZE]; matrix::CUBE_SIZE];
for x in 0..matrix::CUBE_SIZE {
let idx_lo =
(x as f32 * spectrum_len as f32 / matrix::CUBE_SIZE as f32).floor() as usize;
let idx_hi =
((x + 1) as f32 * spectrum_len as f32 / matrix::CUBE_SIZE as f32).floor() as usize;
for idx in idx_lo..idx_hi {
const CLAMP_POINT: f32 = 3.0;
let amp: f32 = spectrum[idx].norm_sqr().sqrt();
let amp = amp.log10();
let amp = if amp > CLAMP_POINT {
amp - CLAMP_POINT
} else {
0.0
};
let amp = amp * 4.0;
let amp_min = (amp.floor() as usize).clamp(0, matrix::CUBE_SIZE - 1);
new_y_plane[x][amp_min] += 1.0 / (idx_hi - idx_lo) as f32;
}
}
self.cells[0] = new_y_plane;
We remove insignificant frequencies by having a "clamp point" below which we discard all data. Then we amplify the remaining data by a constant factor (here, 4.0).
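To make the mapping concrete with made-up numbers: a bin with magnitude 10000 has log10(10000) = 4.0; subtracting the clamp point leaves 1.0, and the ×4.0 amplification maps it to cell 4 along the Z axis. Any bin with magnitude 1000 or below is flattened down to cell 0.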
To finally draw this onto the cube, we can use the gradient code from before but use the spectrum cube cells as weights for each cell:
fn generate_next_led_frame_impl(
mut led_next_matrix: impl rtfm::Mutex<T=LedNextMatrix>,
audio_spectrum: &AudioSpectrum,
audio_spectrum_cube: &mut AudioSpectrumCube,
) {
audio_spectrum_cube.update(audio_spectrum);
let mut new_matrix = led::matrix::LedMatrix::new();
for z in 0..led::matrix::CUBE_SIZE {
for y in 0..led::matrix::CUBE_SIZE {
for x in 0..led::matrix::CUBE_SIZE {
let intensity = audio_spectrum_cube.cells[y][x][z];
*new_matrix.xyz_mut(x, y, z) = led::matrix::Color::rgb(
(intensity * x as f32 * 255.0 / led::matrix::CUBE_SIZE as f32) as u8,
(intensity * y as f32 * 255.0 / led::matrix::CUBE_SIZE as f32) as u8,
(intensity * z as f32 * 255.0 / led::matrix::CUBE_SIZE as f32) as u8,
);
}
}
}
led_next_matrix.lock(|m| *m = Some(new_matrix));
}
I won’t show the scheduling code for this task, but it simply runs in the background to generate the next frame while the current frame is being rendered.
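If you're curious, the wiring could mirror the spectrum task: a low-priority software task that gets spawned around frame presentation. A sketch of my own, not the post's actual code:
#[task(priority = 1, resources = [led_next_matrix, audio_spectrum, audio_spectrum_cube])]
fn generate_next_led_frame(cx: generate_next_led_frame::Context) {
    generate_next_led_frame_impl(
        cx.resources.led_next_matrix,
        cx.resources.audio_spectrum,
        cx.resources.audio_spectrum_cube,
    )
}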
Hence we get this final result! \o/
Next time, we can expand this to some more interesting patterns.