Welcome to this demonstration of our recently published algorithm for informed signal extraction
based on independent vector analysis.
This demo is an outcome of our contributions to the DFG research group Acoustic Sensor
Networks.
My name is Andreas Brendel; I am a PhD student at Friedrich-Alexander-Universität Erlangen-Nürnberg,
working in the group of Walter Kellermann.
This demonstration has been created to a large extent by our master's students Ege Gasnepolou
and Alexander Karatayev.
Observed acoustic signals are typically a mixture of different sources, some of which may be
desired, for example the partner in a conversation, while others are undesired, such as
interferers or background noise.
While our human auditory system separates the desired sources from the observed mixture
effortlessly in daily life, source separation and extraction remain important, long-standing,
and challenging tasks in audio signal processing.
Considering the large variability in the physical properties of acoustic scenes, unsupervised
machine learning techniques that rely only on statistical properties and the spatial diversity
of the observed signals, such as independent vector analysis (IVA), represent a promising
approach to this problem: they are able to learn spatial filters that extract the desired
sources and to adapt them to unseen situations.
The signal emitted by an acoustic source is recorded by a set of spatially separated microphones
resulting in slightly different microphone recordings.
If multiple sources are simultaneously active, the contributions of the individual sources
superimpose at the microphones.
Hence, only a mixture of the sources can be observed.
In order to demix the observed mixture, a demixing system has to be adapted that outputs
estimates of the original source signals.
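In the short-time Fourier transform (STFT) domain, this mixing and demixing process is often modeled per frequency bin. The following NumPy sketch merely illustrates this signal model with made-up dimensions and an oracle demixer; it is not the adaptation performed by IVA or by the demo code.

```python
import numpy as np

# Illustrative dimensions (not taken from the demo):
# Q sources, M microphones, F frequency bins, N time frames.
Q, M, F, N = 2, 4, 257, 100
rng = np.random.default_rng(0)

# STFT-domain source signals s[f] (Q x N) and mixing matrices H[f] (M x Q).
s = rng.standard_normal((F, Q, N)) + 1j * rng.standard_normal((F, Q, N))
H = rng.standard_normal((F, M, Q)) + 1j * rng.standard_normal((F, M, Q))

# Microphone observations: x[f] = H[f] @ s[f]  (instantaneous mixture per bin).
x = np.einsum('fmq,fqn->fmn', H, s)

# A demixing matrix W[f] (Q x M) is adapted such that y[f] = W[f] @ x[f]
# yields estimates of the source signals. Here we cheat and use the
# pseudo-inverse of H[f] as an "oracle" demixer purely to illustrate the
# model; IVA adapts W blindly from the observations x alone.
W = np.stack([np.linalg.pinv(H[f]) for f in range(F)])
y = np.einsum('fqm,fmn->fqn', W, x)

print(np.allclose(y, s, atol=1e-8))  # the oracle demixer recovers the sources
```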
A widely used criterion for the adaptation of the demixing system is the minimization of the
mutual information between the output channels.
However, the ordering of the output signals remains arbitrary, as permutation of the outputs
does not change statistical dependencies of the output signals.
This is known as the outer permutation problem.
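In the common maximum-likelihood formulation of IVA, the adaptation criterion mentioned above can be written roughly as follows; this is a generic textbook form with a contrast function G, not necessarily the exact objective of our published algorithm.

```latex
% Standard frequency-domain IVA objective (generic textbook form):
% y_q(n) stacks the q-th output over all F frequency bins of frame n,
% and W_f is the demixing matrix of frequency bin f.
\mathcal{J}\big(\{\mathbf{W}_f\}_{f=1}^{F}\big)
  = \frac{1}{N}\sum_{n=1}^{N}\sum_{q=1}^{Q} G\big(\mathbf{y}_q(n)\big)
  - \sum_{f=1}^{F} \log\big|\det \mathbf{W}_f\big|^{2},
\qquad
\mathbf{y}(f,n) = \mathbf{W}_f\,\mathbf{x}(f,n).
```

Because this objective is unchanged when the rows of all W_f are permuted in the same way, the ordering of the output channels is not determined by the criterion itself, which is exactly the outer permutation ambiguity described above.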
In order to address this issue, additional prior knowledge, such as the direction of arrival
(DOA) of the target source, has to be incorporated, resulting in informed source separation
and extraction methods.
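One generic way to exploit a known target DOA is to compare the columns of the estimated mixing system against a free-field steering vector pointing towards the target and to pick the best-matching output channel. The sketch below illustrates this idea for a linear array under far-field assumptions; the function names, array geometry, and selection rule are illustrative and not the informed IVA scheme of our paper.

```python
import numpy as np

def steering_vector(doa_deg, freq_hz, mic_pos, c=343.0):
    """Far-field steering vector of a linear array for a given DOA (free-field assumption)."""
    delays = mic_pos * np.cos(np.deg2rad(doa_deg)) / c   # per-microphone delays in seconds
    return np.exp(-2j * np.pi * freq_hz * delays)        # shape: (M,)

def select_target_output(W, freqs, mic_pos, target_doa_deg):
    """Pick the output channel whose estimated mixing column best matches the target DOA.

    W: demixing matrices of shape (F, Q, M); the columns of A_f = pinv(W_f)
    estimate the relative transfer functions of the separated sources.
    """
    F, Q, M = W.shape
    scores = np.zeros(Q)
    for f in range(F):
        A = np.linalg.pinv(W[f])                          # estimated mixing matrix, (M, Q)
        d = steering_vector(target_doa_deg, freqs[f], mic_pos)
        for q in range(Q):
            a = A[:, q]
            # normalized correlation between estimated column and target steering vector
            scores[q] += np.abs(np.vdot(d, a)) / (np.linalg.norm(d) * np.linalg.norm(a) + 1e-12)
    return int(np.argmax(scores))

# Illustrative usage with made-up numbers:
rng = np.random.default_rng(1)
F, Q, M = 129, 2, 4
W = rng.standard_normal((F, Q, M)) + 1j * rng.standard_normal((F, Q, M))
freqs = np.linspace(0, 8000, F)
mic_pos = np.arange(M) * 0.032                            # hypothetical 3.2 cm spacing
print(select_target_output(W, freqs, mic_pos, target_doa_deg=60.0))
```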
Hi, I'm Ege Gasnepolou.
I was among the students who undertook the development of this demo application.
To demonstrate an online version of our informed IVA algorithm, we have strived for a low-cost
hardware platform, confirming the computational efficiency of the algorithm.
Here, we chose a Raspberry Pi 4 stacked with a ReSpeaker 4-Mic Array.
The backend of the demo consists of a NumPy-based modular overlap-add pipeline, which is
interfaced with OS-recognized audio devices via the Python sounddevice library.
Separate installation procedures for x86 machines and Raspberry Pi ensure that hardware
capabilities are fully exploited during computations.
The source code of the demo will be made publicly available.
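As a rough impression of what such a NumPy-based overlap-add processing loop built on sounddevice can look like, here is a minimal sketch; the block sizes, windows, and the pass-through "separation" callback are placeholders and not the actual demo implementation.

```python
import numpy as np
import sounddevice as sd

BLOCK = 512        # hop size in samples (illustrative)
NFFT = 1024        # FFT length, 50% overlap (illustrative)
FS = 16000         # sampling rate in Hz
CHANNELS = 4       # e.g. a 4-microphone array

# Square-root Hann analysis/synthesis windows (approximately perfect
# reconstruction at 50% overlap).
window = np.sqrt(np.hanning(NFFT))
in_buf = np.zeros((NFFT, CHANNELS))
out_buf = np.zeros(NFFT)

def separate(X):
    """Placeholder for the (informed IVA) separation step.

    X: STFT frame of shape (NFFT // 2 + 1, CHANNELS); returns one extracted
    channel. Here we simply pass the first microphone through.
    """
    return X[:, 0]

def callback(indata, outdata, frames, time, status):
    global in_buf, out_buf
    if status:
        print(status)
    # Shift in the new hop of multichannel samples.
    in_buf = np.roll(in_buf, -BLOCK, axis=0)
    in_buf[-BLOCK:, :] = indata
    # Analysis: windowed FFT of each channel.
    X = np.fft.rfft(window[:, None] * in_buf, axis=0)
    # Processing: extract the target source (placeholder).
    Y = separate(X)
    # Synthesis: inverse FFT, windowing, and overlap-add.
    y = np.fft.irfft(Y, n=NFFT) * window
    out_buf = np.roll(out_buf, -BLOCK)
    out_buf[-BLOCK:] = 0.0
    out_buf += y
    outdata[:, 0] = out_buf[:BLOCK]

with sd.Stream(samplerate=FS, blocksize=BLOCK, channels=(CHANNELS, 1),
               dtype='float32', callback=callback):
    sd.sleep(10_000)   # run for 10 seconds
```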
A GUI provides an intuitive way to adjust settings, start operation, and tune the parameters
on the fly.
An overview of the features of the GUI will be given now.
This is what our user interface looks like.
Please follow these steps to try it out.
Here are the hardware-related settings.